
Whose story wins? LLM-powered chatbots as sites and agents of memory-political contestation and corporate greenwashing

Open Access | Mar 2026

References

  1. Alber, D. A., Yang, Z., Alyakin, A., Yang, E., Rai, S., Valliani, A. A., Zhang, J., Rosenbaum, G. R., Amend-Thomas, A. K., Kurland, D. B., Kremer, C. M., Eremiev, A., Negash, B., Wiggan, D. D., Nakatsuka, M. A., Sangwon, K. L., Neifert, S. N., Khan, H. A., Save, A. V., … Oermann, E. K. (2025). Medical large language models are vulnerable to data-poisoning attacks. Nature Medicine, 31(2), 618–626. https://doi.org/10.1038/s41591-024-03445-1
  2. Alyukov, M., Makhortykh, M., Voronovici, A., & Sydorova, M. (2025). LLMs grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation. Harvard Kennedy School (HKS) Misinformation Review, 6(5), 1–24. https://doi.org/10.37016/mr-2020-187
  3. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  4. Birhane, A., Steed, R., Ojewale, V., Vecchione, B., & Raji, I. D. (2024). AI auditing: The broken bus on the road to AI accountability. arXiv. https://doi.org/10.48550/arXiv.2401.14462
  5. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901. https://doi.org/10.48550/arXiv.2005.14165
  6. Dylan, H., & Grossfeld, E. (2025). Revisionist future: Russia’s assault on large language models, the distortion of collective memory, and the politics of eternity. Dialogues on Digital Society, 1(3), 401–412. https://doi.org/10.1177/29768640251377941
  7. Edwards, B. (2025). The GPT-5 rollout has been a big mess. Ars Technica. https://arstechnica.com/information-technology/2025/08/the-gpt-5-rollout-has-been-a-big-mess/
  8. Ekberg, K., Forchtner, B., Hultman, M., & Jylhä, K. M. (2022). Climate obstruction: How denial, delay and inaction are heating the planet (1st ed.). Routledge. https://doi.org/10.4324/9781003181132
  9. Forbes. (2025). AI 50. Retrieved March 5, 2026, from https://www.forbes.com/lists/ai50/
  10. Ghosh, S., & Caliskan, A. (2023). ChatGPT perpetuates gender bias in machine translation and ignores non-gendered pronouns: Findings across Bengali and five other low-resource languages. Proceedings of the 2023 ACM Conference on International Computing Education Research V.1, 397–415. https://doi.org/10.1145/3568813.3600120
  11. Goldstein, J., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations. arXiv. https://doi.org/10.48550/arXiv.2301.04246
  12. Guey, W., Bougault, P., de Moura, V. D., Zhang, W., & Gomes, J. O. (2025). Mapping geopolitical bias in 11 large language models: A bilingual, dual-framing analysis of US-China tensions. arXiv. https://doi.org/10.48550/arXiv.2503.23688
  13. Harvey, L. (2026, March 13). Musk’s Grok blocked by Indonesia, Malaysia over sexualized images in world first. CNN. https://edition.cnn.com/2026/01/12/business/indonesia-malaysia-grok-elon-musk-intl-hnk
  14. Hoskins, A. (2024). AI and memory. Memory, Mind & Media, 3, e18. https://doi.org/10.1017/mem.2024.16
  15. Hoskins, A. (2025). The forgetting ecology: Losing the past through digital media and AI. In Q. Wang, & A. Hoskins (Eds.), The remaking of memory in the age of the internet and social media (pp. 32–48). Oxford University Press. https://doi.org/10.1093/oso/9780197661260.003.0003
  16. Hoskins, A. (2026). AI & collective memory. Current Opinion in Psychology, 67, 102156. https://doi.org/10.1016/j.copsyc.2025.102156
  17. Jacob, M. (2025). Experts warn ‘AI-written’ paper is latest spin on climate change denial. AFP Fact Check. https://factcheck.afp.com/doc.afp.com.39798G2
  18. Kaarkoski, M., Häkkinen, T., & Kilpeläinen, H. (2024). Suomen Nato-jäsenyyden legitimointi menneisyyttä koskevien käsitysten näkökulmasta [Legitimising Finland’s NATO membership from the perspective of conceptions of the past]. Kosmopolis, 54(3), 29–48. https://doi.org/10.70483/kp.145565
  19. Kuokkanen, R. (2024). The problem of culturalizing indigenous self-determination: Sámi cultural autonomy in Finland. The Polar Journal, 14(1), 148–166. https://doi.org/10.1080/2154896X.2024.2342125
  20. Kuznetsova, E., Makhortykh, M., Vziatysheva, V., Stolze, M., Baghumyan, A., & Urman, A. (2025). In generative AI we trust: Can chatbots effectively verify political information? Journal of Computational Social Science, 8(15). https://doi.org/10.1007/s42001-024-00338-8
  21. Lahti, J., & Kullaa, R. (2020). Kolonialismin monikasvoisuus ja sen ymmärtäminen Suomen kontekstissa [The multifaceted nature of colonialism and its understanding in the Finnish context]. Historiallinen Aikakauskirja, 118(4), 420–426.
  22. Lundqvist, S. (2022). A convincing Finnish move: Implications for state identity of persuading Sweden to jointly bid for NATO membership. Studia Europejskie – Studies in European Affairs, 26(4), 73–110. https://doi.org/10.33067/SE.4.2022.3
  23. Makhortykh, M., Sydorova, M., Baghumyan, A., Vziatysheva, V., & Kuznetsova, E. (2024). Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine. Harvard Kennedy School Misinformation Review, 5(4), 1–21. https://doi.org/10.37016/mr-2020-154
  24. Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). On faithfulness and factuality in abstractive summarization. In D. Jurafsky, J. Chai, N. Schluter, & J. Tetreault (Eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 1906–1919). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.173
  25. Menke, M. (2025). The political uses of the past in Nordic media discourses: An integrative systematic literature review. Nordicom Review, 46(s1), 28–54. https://doi.org/10.2478/nor-2025-0007
  26. Miskimmon, A., O’Loughlin, B., & Roselle, L. (2014). Strategic narratives: Communication power and the new world order. Routledge. https://doi.org/10.4324/9781315871264
  27. Ofosu-Asare, Y. (2025). Cognitive imperialism in artificial intelligence: Counteracting bias with indigenous epistemologies. AI & Society, 40, 3045–3061. https://doi.org/10.1007/s00146-024-02065-0
  28. Ojanen, H., & Raunio, T. (2018). The varying degrees and meanings of Nordicness in Finnish foreign policy. Global Affairs, 4(4-5), 405–418. https://doi.org/10.1080/23340460.2018.1533386
  29. OpenAI. (2025). Introducing GPT-5. https://openai.com/index/introducing-gpt-5/
  30. Pacheco, A. G., Cavalini, A., & Comarela, G. (2025). Echoes of power: Investigating geopolitical bias in US and China large language models. arXiv preprint. https://doi.org/10.48550/arXiv.2503.16679
  31. Paprocka, M. W. (2025). Navigating ethical dilemmas: Unveiling greenwashing in the AI era. In J. Paliszkiewicz, J. Gołuchowski, M. Mądra-Sawicka, & K. Chen (Eds.), Building trust in the generative artificial intelligence era: Technology challenges and innovations (pp. 34–42). Routledge. https://doi.org/10.4324/9781003586944
  32. Paruch, Z. (2026, January 1). LLM optimization (LLMO): Get AI to talk about your brand [Blog post]. Semrush. https://www.semrush.com/blog/llm-optimization/
  33. Pelevina, N., Sihvonen, T., Rousi, R., Laapotti, T., & Mikkola, H. (2025). Finlandised electobots and the distortion of collective political memory. Memory, Mind & Media, 4, e26. https://doi.org/10.1017/mem.2025.10022
  34. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.
  35. Roslyakov, M. (2025, July 25). LLM and AI chatbot statistics (2025) – Who is winning the AI race? Xamsor. https://xamsor.com/blog/llm-and-ai-chatbot-statistics-who-is-winning-the-ai-race/
  36. Rozado, D. (2024). The political preferences of LLMs. PLoS ONE, 19(7), e0306621. https://doi.org/10.1371/journal.pone.0306621
  37. Seele, P., & Gatti, L. (2017). Greenwashing revisited: In search of a typology and accusation-based definition incorporating legitimacy strategies. Business Strategy and the Environment, 26(2), 239–252. https://doi.org/10.1002/bse.1912
  38. Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The illusion of thinking: Understanding the strengths and limitations of reasoning models via the lens of problem complexity. Apple Machine Learning Research. https://machinelearning.apple.com/research/illusion-of-thinking
  39. Taylor, J. (2025, July 9). Musk’s AI firm forced to delete posts praising Hitler from Grok chatbot. The Guardian. https://www.theguardian.com/technology/2025/jul/09/grok-ai-praised-hitler-antisemitism-x-ntwnfb
  40. Ulloa, R., Zucker, E. M., Bultmann, D., Simon, D. J., & Makhortykh, M. (2025). From prosthetic memory to prosthetic denial: Auditing whether large language models are prone to mass atrocity denialism. AI & Society. https://doi.org/10.1007/s00146-025-02719-7
  41. Urman, A., & Makhortykh, M. (2025). The silence of the LLMs: Crosslingual analysis of guardrail-related political bias and false information prevalence in ChatGPT, Google Bard (Gemini), and Bing Chat. Telematics and Informatics, 96, 102211. https://doi.org/10.1016/j.tele.2024.102211
  42. Urman, A., Makhortykh, M., & Hannak, A. (2025). WEIRD audits? Research trends, linguistic and geographical disparities in the algorithm audits of online platforms – a systematic literature review. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 375–390. https://doi.org/10.1145/3715275.3732026
  43. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 6000–6010.
  44. Waldman, S. (2025). Elon Musk’s Grok chatbot has started reciting climate denial talking points. Scientific American. https://www.scientificamerican.com/article/elon-musks-ai-chatbot-grok-is-reciting-climate-denial-talking-points/
Language: English
Page range: 59–80
Published on: Mar 23, 2026
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Nuppu Pelevina, Erkki Mervaala, published by University of Gothenburg Nordicom
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.