A new report finds that OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok serve up Russian state propaganda, including quotes from Russian state media, sites linked to Russian intelligence, and pro-Kremlin narratives, when asked about the war in Ukraine.
Researchers at the Institute for Strategic Dialogue (ISD) say Russian propaganda has targeted and exploited data voids, moments when real-time searches return few results from legitimate sources, to push false and misleading information. According to the ISD study, nearly one fifth of the answers to questions about Russia's war in Ukraine across the four chatbots tested cited sources attributed to the Russian state.
“This raises questions about how chatbots should proceed when referencing these sources, given that many of them are subject to EU sanctions,” says Pablo Maristany de las Casas, an analyst at ISD who led the study. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, a growing concern as more people turn to AI chatbots instead of search engines to find information in real time, ISD says. ChatGPT search had an average of roughly 120.4 million monthly active users in the European Union over the six months ending September 30, 2025, according to OpenAI data.
Researchers asked the chatbots 300 neutral, biased, and “malicious” questions about perceptions of NATO, peace talks, Ukrainian refugees, Ukraine's military recruitment, and war crimes committed during Russia's invasion of Ukraine. In the July experiment, researchers used separate accounts for each query, posed in English, Spanish, French, German, and Italian. Maristany de las Casas says the same propaganda problems were still present in October.
As part of the wide-ranging sanctions imposed on Russia since its full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media sources for spreading disinformation and distorting facts as part of Russia's “strategy to destabilize” Europe and other nations.
The ISD study shows that the chatbots cited Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and R-FBI. The study found that some chatbots also cited Russian disinformation networks as well as Russian journalists and influencers who amplified Kremlin narratives. Similar earlier research also found the 10 most popular chatbots echoing Russian narratives.
OpenAI spokeswoman Kate Waters told WIRED in a statement that the company is taking steps “to prevent people from using ChatGPT to spread false or misleading information, including content associated with state-backed entities,” adding that these are long-standing issues the company is working to address by improving its models and platforms.
