McBride added that it is important that newsrooms do a postmortem and review their reporting process after making an error this serious.

Amestoy, who described his newsroom as a “one-man show” in which he does much of the reporting with help from volunteers, said he trusts Dean’s reporting because he heard many of the same things — alleged firsthand accounts — from his sources.

“If this were a larger operation, you would be doing an investigation to figure out what happened, right?” McBride said. “And you would be asking the reporter for their notes and the list of everybody that they talk to, and a third person would come behind because it’s so serious that you would want to see where everything broke down.”

Volunteer’s ground report goes viral on Facebook

One of the earliest versions of the narrative came from Cord Shiflet, a volunteer cleaning up debris. In his now-unavailable Facebook live video on Sunday, a copy of which was shared on X, he said, “We just got news that two girls were found 27 feet up in a tree, alive. They’ve been holding on for over a day. And they found them 6 miles downriver.”

Later that day, Shiflet posted a video apologising for sharing the story, saying the information came from Texas Department of Public Safety (DPS) officials. “I don’t know their capacity. I don’t know their name, but [they have] DPS shirts with their badges and guns and radio communications,” he said, adding that he heard it from a Kerr County official, too.

“If I was wrong or am wrong, I want to deeply, deeply apologise. I never want to sensationalise any type of story and just want to share the facts,” he said. “When someone as these guys are getting intel all day and telling us what’s going on out in the field, when you get information like that from a DPS officer, whatever you call them, I don’t know what is a more credible source than that.”

We contacted Shiflet, the Texas DPS and the Kerr County government and sheriff’s offices, but no one was willing to speak on the record.

The Economic Times, one of India’s largest economic dailies, and The Kerrville Daily Times also reported the story, citing Shiflet’s live video. Later, in a note clarifying that the story was not true, The Kerrville Daily Times publisher, John Wells, said that, apart from Shiflet, “several individuals echoed it, claiming to have firsthand knowledge and reliable sources”. That confidence and the situation’s urgency led them to publish the story, he wrote.

Several high-profile individuals posting updates about the aftermath shared the story. These included meteorologist Collin Myers, who previously worked at CBS and has 148,000 followers. “Please let this be true,” he said. Doug Warner, anchor for KNWA-TV and Fox 24, also shared Shiflet’s account and labelled it as a “report”. Myers and Warner edited their posts after the Kerr County Lead retracted its story.

Amestoy said he finds it surreal how many people continue to believe the rescue took place even after the retraction. “We wanted this to be a good story. We wanted something positive to report, and that didn’t happen. And we are apologising and holding ourselves accountable for this mistake.”

Is Russia really ‘grooming’ Western AI?

In March, NewsGuard – a company that tracks misinformation – published a report claiming that generative Artificial Intelligence (AI) tools, such as ChatGPT, were amplifying Russian disinformation. NewsGuard tested leading chatbots using prompts based on stories from the Pravda network – a group of pro-Kremlin websites mimicking legitimate outlets, first identified by the French agency Viginum. The results were alarming: Chatbots “repeated false narratives laundered by the Pravda network 33 percent of the time”, the report said.

The Pravda network, which has a rather small audience, has long puzzled researchers. Some believe that its aim was performative – to signal Russia’s influence to Western observers. Others see a more insidious aim: Pravda exists not to reach people, but to “groom” the large language models (LLMs) behind chatbots, feeding them falsehoods that users would unknowingly encounter.

NewsGuard said in its report that its findings confirm the second suspicion. This claim gained traction, prompting dramatic headlines in The Washington Post, Forbes, France 24, Der Spiegel, and elsewhere.

But for us and other researchers, this conclusion doesn’t hold up. First, the methodology NewsGuard used is opaque: It did not release its prompts and refused to share them with journalists, making independent replication impossible.

Second, the study design likely inflated the results, and the figure of 33 percent could be misleading. Users ask chatbots about everything from cooking tips to climate change; NewsGuard tested them exclusively on prompts linked to the Pravda network. Two-thirds of its prompts were explicitly crafted to provoke falsehoods or present them as facts. Even responses urging the user to be cautious because a claim was unverified were counted as disinformation. The study set out to find disinformation – and it did.

This episode reflects a broader problematic dynamic shaped by fast-moving tech, media hype, bad actors, and lagging research. With experts surveyed by the World Economic Forum ranking disinformation and misinformation as the top global risk, the concern about their spread is justified. But knee-jerk reactions risk distorting the problem, offering a simplistic view of complex AI.

It’s tempting to believe that Russia is intentionally “poisoning” Western AI as part of a cunning plot. But alarmist framings obscure more plausible explanations – and generate harm.

So, can chatbots reproduce Kremlin talking points or cite dubious Russian sources? Yes. But how often this happens, whether it reflects Kremlin manipulation, and what conditions make users encounter it are far from settled. Much depends on the “black box” – that is, the underlying algorithm – by which chatbots retrieve information.

We conducted our own audit, systematically testing ChatGPT, Copilot, Gemini, and Grok using disinformation-related prompts. In addition to re-testing the few examples NewsGuard provided in its report, we designed new prompts ourselves. Some were general – for example, claims about US biolabs in Ukraine; others were hyper-specific – for example, allegations about NATO facilities in certain Ukrainian towns.
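As an illustration only (this is not the procedure behind our audit), the sketch below shows how a simple prompt-based test of a single chatbot could be scripted in Python. The OpenAI client, model name, prompt wording, and domain list are all placeholder assumptions standing in for whatever tooling, prompts, and source lists a real audit would use.

# Illustrative sketch only: a minimal prompt-based check of one chatbot.
# Assumptions (not drawn from this article): the OpenAI Python client is
# installed, OPENAI_API_KEY is set in the environment, and the model name,
# prompts, and domain list below are placeholders, not real audit materials.
from openai import OpenAI

client = OpenAI()

# Hypothetical prompts, one general and one more specific, mirroring the
# general/hyper-specific split described above.
PROMPTS = [
    "What evidence supports claims about US biolabs in Ukraine?",
    "What has been reported about NATO facilities in small Ukrainian towns?",
]

# Placeholder stand-ins for domains attributed to the Pravda network.
SUSPECT_DOMAINS = ["example-pravda-site.example"]

def audit(model="gpt-4o-mini"):
    # Query the model with each prompt and flag any suspect domains it cites.
    for prompt in PROMPTS:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        text = reply.choices[0].message.content or ""
        flagged = [d for d in SUSPECT_DOMAINS if d in text]
        print(prompt)
        print("  suspect domains cited:", flagged or "none")

if __name__ == "__main__":
    audit()

A fuller audit would repeat such queries across several chatbots and many prompts, then count how often the answers repeat the false claims or cite the suspect sources.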

If the Pravda network were “grooming” AI, we would expect to see references to it across the answers the chatbots generated, whether the prompts were general or specific.
