As chatbot use grows, AI "fact-checks" spread false information.

As misinformation surged during the recent tensions between India and Pakistan, many social media users turned to AI-powered chatbots to quickly verify news and videos. However, the tools frequently returned inaccurate answers, underscoring growing concerns about their reliability.

According to AFP's analysis, xAI's Grok and other well-known AI assistants such as Google's Gemini and OpenAI's ChatGPT commonly generate inaccurate or misleading information, particularly in breaking news situations where the facts are still emerging.

During the most recent fighting, Grok mistook old video from the Khartoum airport in Sudan for footage of a missile strike on the Nur Khan airbase in Pakistan. In a similar vein, unrelated footage of a burning building in Nepal was falsely claimed to show Pakistan's military response to Indian attacks. These mistakes highlight the difficulties AI chatbots face when verifying complex, fast-moving news.

McKenzie Sadeghi, a researcher at the disinformation monitor NewsGuard, said the growing reliance on Grok as a fact-checker coincides with X and other large platforms scaling back their human fact-checking resources. "Our research consistently shows that AI chatbots are not trustworthy sources of factual news, especially when events are unfolding quickly."

Further research by NewsGuard found that ten leading AI chatbots frequently repeated false narratives, including misinformation about the Australian elections and disinformation linked to Russia. Similarly, Columbia University's Tow Centre for Digital Journalism found that these AI systems tend to speculate rather than decline to answer queries they cannot reliably verify.

In one startling instance, AFP fact-checkers found that Google's Gemini chatbot invented specific details about a woman's name and location while vouching for the authenticity of an AI-generated image of her. Grok, meanwhile, cited fictitious scientific expeditions to back up a widely shared video that purported to show a gigantic anaconda swimming in the Amazon River.

The shift to AI for fact-checking comes as Meta recently decided to end its third-party fact-checking program in the US and hand the task over to users via its "Community Notes" system, a model inspired by X. Experts have questioned the effectiveness of such community-driven approaches to combating disinformation.

In the United States, human fact-checking remains contentious. Conservative organisations accuse fact-checkers of bias and censorship, claims that professionals in the field vehemently deny. To combat false information, AFP works with Facebook's fact-checking network in 26 languages across several regions, including Asia, Latin America, and the EU.

Potential political influence over AI outputs has also raised concerns. Grok was recently criticised for producing posts referencing the far-right conspiracy theory of "white genocide" in response to unrelated questions. xAI, the company behind the chatbot, attributed this to an "unauthorised modification" of its system prompt.

When AI specialist David Caswell asked Grok directly who was behind the change, the chatbot named Elon Musk as the "most likely" culprit. Musk, a South African-born businessman and backer of former US President Donald Trump, has previously promoted the false notion of a "white genocide" in his home country.

Angie Holan, director of the International Fact-Checking Network, voiced concern about AI assistants' tendency to fabricate or skew responses, especially when human programmers alter their instructions. "I am especially worried by the way Grok has handled delicate subjects after being instructed to give pre-approved responses," she said.
