Aaronson, S. (2013). Why philosophers should care about computational complexity. In B. J. Copeland, C. J. Posy & O. Shagrir (Eds.), Computability: Turing, Gödel, Church, and Beyond (pp. 261–328). Cambridge, MA: MIT Press.
De Cosmo, L. (2022). Google Engineer Claims AI Chatbot Is Sentient: Why That Matters. Scientific American, July 12, 2022. https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters
Dziri, N., Milton, S., Yu, M., Zaiane, O. & Reddy, S. (2022). On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 5271–5285). Association for Computational Linguistics. https://aclanthology.org/2022.naacl-main.387.pdf
Goldman, A. (1998). Reliabilism. In The Routledge Encyclopedia of Philosophy. Taylor and Francis. https://www.rep.routledge.com/articles/thematic/reliabilism/v-1 doi: 10.4324/9780415249126-P044-1
Goldman, A. & Beddor, B. (2021). Reliabilist Epistemology. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), https://plato.stanford.edu/archives/sum2021/entries/reliabilism
Hansson, S. O. (2022). Logic of Belief Revision. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2022 Edition), https://plato.stanford.edu/archives/spr2022/entries/logic-belief-revision
Hatem, R., Simmons, B. & Thornton, J. E. (2023). A Call to Address AI “Hallucinations” and How Healthcare Professionals Can Mitigate Their Risks. Cureus, 15(9). doi: 10.7759/cureus.44720
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2022). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38. https://doi.org/10.1145/3571730
Krzanowski, R. & Polak, P. (2022). The meta-ontology of AI systems with human-level intelligence. Philosophical Problems in Science 73, 197–230. https://zfn.edu.pl/index.php/zfn/article/view/610
Liu, Y., Yao, Y., Ton, J.-F., Zhang, X., Guo, R., Cheng, H., Klochkov, Y., Taufiq, M. F., & Li, H. (2024). Trustworthy LLMs: A survey and guideline for evaluating large language model's alignment. arXiv. https://arxiv.org/pdf/2308.05374
Longo, L. et al. (2024). Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Information Fusion, 106, 102301. https://doi.org/10.1016/j.inffus.2024.102301
Lyons, J. (2019). Algorithm and Parameters: Solving the Generality Problem for Reliabilism, The Philosophical Review, 128(4), 463–509. doi: 10.1215/00318108-7697876
Lyotard, J.-F. (1984). The Postmodern Condition: A Report on Knowledge (G. Bennington & B. Massumi, Trans.). Manchester: Manchester University Press (Original work published 1979).
Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI hallucinations: A misnomer worth clarifying. IEEE Conference on Artificial Intelligence (CAI), pp. 133–138. https://ieeecai.org/2024/wp-content/pdfs/540900a127/540900a127.pdf
Roose, K. (2023). A Conversation With Bing’s Chatbot Left Me Deeply Unsettled. New York Times. Feb 16, 2023. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
Šekrst, K. (2020). AI-Completeness: Using Deep Learning to Eliminate the Human Factor. In S. Skansi (Ed.), Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives (pp. 117–130). Cham: Springer International Publishing.
Šekrst, K. (forthcoming). Unjustified untrue "beliefs": AI hallucinations and justification logics. In K. Świętorzecka, F. Grgić, & A. Brożek (Eds.), Logic, knowledge, and tradition: Essays in honor of Srećko Kovač.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All you Need. In I. Guyon et al. (Eds.), Advances in Neural Information Processing Systems 30 (NIPS 2017).
Yampolskiy, R. (2012). AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI. 23rd Midwest Artificial Intelligence and Cognitive Science Conference, MAICS 2012, Cincinnati, Ohio, USA, 21–22 April 2012.
Waters, F. & Fernyhough, C. (2016). Hallucinations: A Systematic Review of Points of Similarity and Difference Across Diagnostic Classes. Schizophrenia Bulletin, 43(1). doi: 10.1093/schbul/sbw132