How Generative AI Chatbots Responded to Questions and Fact-checks about the 2024 UK General Election

Abstract

This factsheet examines the performance of three generative AI chatbots (ChatGPT-4o, Google Gemini, and Perplexity) in responding to questions and fact-checks related to the 2024 UK General Election. Analysing 300 responses to 100 election-related questions, we find that Perplexity and ChatGPT provided answers in nearly all cases, while Google Gemini mostly refrained from answering. Perplexity outperformed ChatGPT in accuracy (83% vs. 78%) and was more consistent in citing specific sources. However, both chatbots made errors, with some responses partially or fully incorrect. Where they cited sources, ChatGPT and Perplexity frequently drew on well-known and trusted news organisations, authorities, and fact-checkers, and both predominantly linked to news sources. Despite many correct responses, concerns persist about the reliability and potential risks of using generative AI for election information.

Publication
Reuters Institute for the Study of Journalism
