Some users of Elon Musk’s X are turning to its artificial intelligence Grok for fact-checking, raising concerns among fact-checkers that it could help spread misinformation.
Earlier this month, X began allowing users to ask questions of Grok, xAI's chatbot, on various topics. The move mirrored Perplexity, which runs an automated account on X to offer a similar experience.
Shortly after xAI launched Grok's automated account on X, users began experimenting with asking it questions. In some markets, including India, people have started asking Grok to fact-check comments and questions tied to specific political beliefs.
Fact-checkers are concerned about using Grok — or any other similar AI assistant — in this way, because bots can frame their answers to sound convincing even if they’re not factually correct. Grok has been seen spreading fake news and misinformation in the past.
Last August, five secretaries of state called on Musk to make critical changes to Grok after misleading information generated by the assistant appeared on social media ahead of the US election.
Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen generating false information about last year’s election. Additionally, in 2023, disinformation researchers found that AI chatbots, including ChatGPT, can easily be used to create persuasive texts with misleading narratives.
“AI assistants like Grok are very good at using natural language and giving answers that sound like they were spoken by a human. So AI products are claiming to be natural and authentic-sounding, even when they are potentially very wrong. That’s the danger,” Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, told TechCrunch.
Unlike AI assistants, fact-checkers use multiple trusted sources to verify information. They also take full accountability for their conclusions, attaching their names and organizations to their findings to ensure credibility.
Pratik Sinha, co-founder of the Indian non-profit fact-checking website Alt News, said that while Grok’s answers may seem convincing at first, they are only as convincing as the data it’s fed.
“Who will decide what data to give [it], and that’s where the issue of government interference and so on comes in,” he said.
“There’s no transparency. Anything that’s not transparent is going to hurt because anything that’s not transparent can be manipulated in any way.”