This factsheet examines how three generative AI chatbots (ChatGPT-4o, Google Gemini, and Perplexity) responded to questions and fact-checks about the 2024 UK General Election. Analysing 300 responses to 100 election-related questions, we find that Perplexity and ChatGPT provided answers in nearly all cases, while Google Gemini mostly refrained from answering. Perplexity outperformed ChatGPT on accuracy (83% vs. 78%) and was more consistent in citing specific sources. However, both chatbots made errors, with some responses partially or fully incorrect. Where they cited sources, both predominantly linked to news outlets, including well-known and trusted news organisations, authorities, and fact-checkers. Despite the many correct responses, concerns persist about the reliability and potential risks of using generative AI for election information.
How did generative AI chatbots respond to questions and fact-checks about the 2024 UK election?
— Reuters Institute (@risj_oxford) September 19, 2024
This is the question at the heart of a new factsheet by @_FelixSimon_ @richrdfletcher and @rasmus_kleis #AI #GenAI
📱 Full factsheet https://t.co/DoVG8Mxf16
🧵 6 findings in thread pic.twitter.com/sYj515GxI5