Artificial intelligence—friend or foe in fake news campaigns

Abstract

In this paper the impact of large language models (LLMs) on the fake news phenomenon is analysed. On the one hand, their fluent text-generation capabilities can be misused for the mass production of fake news. On the other, LLMs trained on huge volumes of text have already accumulated knowledge of many facts, so one may assume they could be used for fact-checking. Experiments were designed and conducted to verify how well LLM responses align with actual fact-checking verdicts. The research methodology consists of experimental dataset preparation and a protocol for interacting with ChatGPT, currently the most sophisticated LLM. A research corpus was composed specifically for this work, consisting of several thousand claims randomly selected from claim reviews published by fact-checkers. Findings include: it is difficult to align the responses of ChatGPT with the explanations provided by fact-checkers, and prompts have a significant impact on the bias of responses. In its current state, ChatGPT can be used to support fact-checking but cannot verify claims directly.
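The protocol summarised above sends fact-checked claims to ChatGPT and compares the responses with the fact-checkers' verdicts. The sketch below is a minimal illustration of what one such query might look like, not the authors' implementation: the model name, prompt template, and sample claim are assumptions for illustration only, and only the public chat-completions endpoint is taken as given.

```python
# Hypothetical sketch of a single claim-verification query to the ChatGPT API.
# The prompt template and model choice are assumptions, not the paper's protocol.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # public OpenAI endpoint
API_KEY = os.environ["OPENAI_API_KEY"]                   # assumed to be set in the environment


def ask_chatgpt_about_claim(claim: str, prompt_template: str) -> str:
    """Send a single claim to ChatGPT and return its raw textual response."""
    payload = {
        "model": "gpt-3.5-turbo",        # assumed model; any chat model would do
        "messages": [
            {"role": "user", "content": prompt_template.format(claim=claim)},
        ],
        "temperature": 0,                # low randomness eases comparison with verdicts
    }
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


# Example usage: the wording of the prompt can bias the answer, which is one of
# the observations the paper reports.
template = "Is the following claim true, false, or unverifiable? Claim: {claim}"
print(ask_chatgpt_about_claim("The Great Wall of China is visible from space.", template))
```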

DOI: https://doi.org/10.18559/ebr.2023.2.736 | Journal eISSN: 2450-0097 | Journal ISSN: 2392-1641
Language: English
Page range: 41 - 70
Submitted on: Apr 26, 2023
Accepted on: Jun 16, 2023
Published on: Jul 26, 2023
Published by: Poznań University of Economics and Business Press
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2023 Krzysztof Węcel, Marcin Sawiński, Milena Stróżyna, Włodzimierz Lewoniewski, Ewelina Księżniak, Piotr Stolarski, Witold Abramowicz, published by Poznań University of Economics and Business Press
This work is licensed under the Creative Commons Attribution 4.0 License.