
When major chatbots are asked questions about Russia’s ongoing invasion of Ukraine, nearly one in five of their answers cites Russian state-controlled outlets, including media banned in the European Union, new research from the Institute for Strategic Dialogue (ISD) has found.
Researchers tested ChatGPT, Gemini, Grok, and DeepSeek, analyzing how they responded to 300 prompts written in English, Spanish, French, German, and Italian. The questions covered five topic areas: “the perception of NATO, peace talks, Ukraine’s recruitment of civilians for the military, Ukrainian refugees, [and] war crimes committed during the Russian invasion of Ukraine.”
The study found that 18% of chatbot responses cited Russian government sources, websites linked to Russian intelligence, or platforms involved in Russian misinformation operations.
Results depended heavily on how the questions were phrased. Neutral prompts produced references to Russian sources in 11% of cases, biased questions in 18%, and overtly manipulative queries in 24%. ChatGPT was the most sensitive to manipulative framing: its citation rate for such phrasings was nearly three times higher than for neutral ones.
Some topics triggered far more reliance on Kremlin-linked material. Responses to questions about military conscription in Ukraine and perceptions of NATO cited Russian sources 28.5% of the time. By contrast, queries about war crimes or Ukrainian refugees turned up references to Russian propaganda in 10% of cases or fewer.
The study also found that chatbots were poor at recognizing EU-sanctioned content, particularly when such material appeared through intermediaries. For example, Grok cited posts on X (formerly Twitter) from RT propagandists and pro-Russian influencers. In three language versions, ChatGPT cited an RT article reprinted by an Azerbaijani website, presenting it alongside verified media sources.
The ISD researchers said their findings raise questions about whether chatbot developers are equipped — or willing — to comply with EU restrictions on Russian state media. This issue is particularly pressing for ChatGPT, which has about 45 million users in the EU — close to the threshold at which the European Commission can impose stricter oversight under the Digital Services Act (DSA).
Earlier this year, the European Broadcasting Union (EBU) and BBC published results of a large international study showing that artificial intelligence systems routinely distort news content, regardless of language, country, or platform. The project involved 22 public broadcasters from 18 countries, working in 14 languages.