As millions adopt Grok to fact-check, misinformation abounds

On June 9, soon after United States President Donald Trump dispatched US National Guard troops to Los Angeles to quell protests over immigration raids, California Governor Gavin Newsom posted two photographs on X. The images showed dozens of National Guard troops sleeping on the floor of a cramped space, with a caption decrying Trump for disrespecting the troops.

X users immediately turned to Grok, Elon Musk’s AI chatbot, which is integrated directly into X, to check whether the images were authentic. To do so, they tagged @grok in a reply to the post in question, triggering an automatic response from the AI.

“You’re sharing fake photos,” one user posted, citing a screenshot of Grok’s response, which claimed a reverse image search could not find the exact source. In another instance, Grok said the images were recycled from 2021, when former US President Joe Biden, a Democrat, withdrew troops from Afghanistan. Melissa O’Connor, a conspiracy-minded influencer, cited a ChatGPT analysis that also said the images were from the Afghanistan evacuation.

However, the non-partisan fact-checking organisation PolitiFact found that both AI-generated claims were incorrect. The images shared by Newsom were real and had been published in the San Francisco Chronicle.

The erroneous, bot-sourced fact checks fuelled hours of cacophonous debate on X before Grok corrected itself.

Unlike OpenAI’s standalone app ChatGPT, Grok’s integration into X gives users immediate access to real-time AI answers without leaving the app, a feature that has been reshaping user behaviour since its March launch. But the chatbot, increasingly the first stop for fact checks during breaking news and on other viral posts, often provides convincing but inaccurate answers.

“I think in some ways, it helps, and in some ways, it doesn’t,” said Theodora Skeadas, an AI policy expert formerly at Twitter. “People have more access to tools that can serve a fact-checking function, which is a good thing. However, it is harder to know when the information isn’t accurate.”

There’s no denying that chatbots could help users stay informed and gain context on events unfolding in real time. But for now, their tendency to make things up outstrips their usefulness.

Chatbots, including ChatGPT and Google’s Gemini, are built on large language models (LLMs), which learn to predict the next word in a sequence by analysing enormous troves of text from the internet. Their outputs reflect the patterns and biases in the data they are trained on, which makes them prone to factual errors and to generating misleading information, failures known as “hallucinations”.
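To see the core idea in miniature, consider this toy Python sketch of next-word prediction. It is purely illustrative and is not how Grok, ChatGPT or any production LLM is implemented: real models use neural networks trained on billions of documents, not word-pair counts over a ten-word corpus.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always emit the most frequently observed successor.
# Illustrative only; real LLMs learn far richer statistics, but the
# predict-the-next-token objective is the same in spirit.
corpus = "the troops slept on the floor the troops were deployed".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    # Return the most common follower seen in training, if any.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # 'troops' (seen twice after 'the')
print(predict_next("troops"))  # 'slept' (first of a tied pair)
```

The sketch also hints at why hallucinations happen: the model emits whatever continuation its training data makes most likely, with no notion of whether that continuation is true.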

For Grok, these inherent challenges are further complicated by Musk’s instructions that the chatbot should not adhere to political correctness and should be suspicious of mainstream sources. Where other AI models have guidelines around politically sensitive queries, Grok has none. That lack of guardrails has resulted in Grok praising Hitler and consistently parroting anti-Semitic views, sometimes in response to unrelated user questions.
