
How Artificial Intelligence Can Influence Elections: Analyzing the Large Language Models (LLMs) Political Bias

Open Access | July 2024

References

  1. Acemoglu, D. (2021). Harms of AI [Working Paper]. National Bureau of Economic Research.
  2. Van Bulck, L., & Moons, P. (2023). What if your patient switches from Dr. Google to Dr. ChatGPT? A vignette-based survey of the trustworthiness, value and danger of ChatGPT-generated responses to health questions. European Journal of Cardiovascular Nursing, 95–98.
  3. Hosseini, A. (2023, December 3). The rise of Large Language Models. Retrieved from PwC: https://www.pwc.com/m1/en/media-centre/articles/the-rise-of-large-language-models.html
  4. Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). Co-Writing with Opinionated Language Models Affects Users' Views. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), Article 111, 1–15. Association for Computing Machinery, New York, NY, USA.
  5. Rutinowski, J., Franke, S., et al. (2024). The Self-Perception and Political Biases of ChatGPT. Human Behavior and Emerging Technologies, 2024.
  6. Kotek, H., Dockum, R., & Sun, D. (2023). Gender bias and stereotypes in Large Language Models. In Proceedings of The ACM Collective Intelligence Conference (CI '23), 12–24.
  7. Lancaster, A. (2023, March 20). Beyond Chatbots: The Rise Of Large Language Models. Retrieved from Forbes: https://www.forbes.com/sites/forbestechcouncil/2023/03/20/beyond-chatbots-the-rise-of-large-language-models/?sh=97ac54a2319b
  8. Liang, P. P., Wu, C., Morency, L.-P., & Salakhutdinov, R. (2021). Towards Understanding and Mitigating Social Biases in Language Models. Proceedings of the 38th International Conference on Machine Learning, PMLR, 6565–6576.
  9. Liu, R., Jia, C., Wei, J., Xu, G., & Vosoughi, S. (2022). Quantifying and alleviating political bias in language models. Artificial Intelligence, 304.
  10. Majid, A. (2024, February 25). Top 50 news websites in the US: Strong growth at UK newsbrand The Independent in January. Retrieved from Press Gazette: https://pressgazette.co.uk/media-audience-and-business-data/media_metrics/most-popular-websites-news-us-monthly-3/
  11. Metze, K., Morandin-Reis, R. C., Lorand-Metze, I., & Florindo, J. B. (2024). Bibliographic Research with ChatGPT may be Misleading: The Problem of Hallucination. Journal of Pediatric Surgery, 59(1), 158.
  12. Motoki, F., Pinho Neto, V., & Rodrigues, V. (2024). More human than human: measuring ChatGPT political bias. Public Choice, 198, 3–23.
  13. Ramadan, I. (2023). The Main and Basic Differences between the Google. International Journal of Scientific and Research Publications, 446–447.
  14. Rozado, D. (2023). The Political Biases of ChatGPT. Social Sciences, 12(3), 148.
  15. van Dis, E. A., et al. (2023). ChatGPT: five priorities for research. Nature, 614, 224–226.
Language: English
Page range: 1882–1891
Published on: Jul 3, 2024
Published by: The Bucharest University of Economic Studies
In partnership with: Paradigm Publishing Services
Publication frequency: once per year

© 2024 George-Cristinel Rotaru, Sorin Anagnoste, Vasile-Marian Oancea, published by The Bucharest University of Economic Studies
This work is licensed under the Creative Commons Attribution 4.0 License.