References
- Aaronson, S. (2013). Why philosophers should care about computational complexity. In B. J. Copeland, C. J. Posy & O. Shagrir (Eds.), Computability: Turing, Gödel, Church, and Beyond (pp. 261–328). Cambridge, MA: MIT Press.
- Arkoudas, K. (2023). GPT-4 Can’t Reason. arXiv. https://arxiv.org/pdf/2308.03762
- Cantwell Smith, B. (2019). The Promise of Artificial Intelligence. MIT Press.
- De Cosmo, L. (2022). Google Engineer Claims AI Chatbot Is Sentient: Why That Matters. Scientific American, July 12, 2022. https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters
- Dziri, N., Milton, S., Yu, M., Zaiane, O., & Reddy, S. (2022). On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pp. 5271–5285. https://aclanthology.org/2022.naacl-main.387.pdf
- Gardner, H. (2011). Frames of Mind. New York: Basic Books.
- Gettier, E. L. (1963). Is Justified True Belief Knowledge? Analysis, 23(6), 121–123. doi: 10.1093/analys/23.6.121
- Goldman, A. (1998). Reliabilism. In The Routledge Encyclopedia of Philosophy. Taylor and Francis. https://www.rep.routledge.com/articles/thematic/reliabilism/v-1 doi: 10.4324/9780415249126-P044-1
- Goldman, A. & Beddor, B. (2021). Reliabilist Epistemology. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2021 Edition). https://plato.stanford.edu/archives/sum2021/entries/reliabilism
- Goldstein, S., & Levinstein, B. (2024). Does ChatGPT have a mind? arXiv. https://arxiv.org/pdf/2407.11015
- Hansson, S. O. (2022). Logic of Belief Revision. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2022 Edition). https://plato.stanford.edu/archives/spr2022/entries/logic-belief-revision
- Hatem, R., Simmons, B. & Thornton, J. E. (2023). A Call to Address AI “Hallucinations” and How Healthcare Professionals Can Mitigate Their Risks. Cureus, 15(9). doi: 10.7759/cureus.44720
- Hintze, A. (2023). ChatGPT believes it is conscious. arXiv. https://arxiv.org/abs/2304.12898
- Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2022). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38. https://doi.org/10.1145/3571730
- Krzanowski, R. & Polak, P. (2022). The meta-ontology of AI systems with human-level intelligence. Philosophical Problems in Science, 73, 197–230. https://zfn.edu.pl/index.php/zfn/article/view/610
- Liu, Y., Yao, Y., Ton, J.-F., Zhang, X., Guo, R., Cheng, H., Klochkov, Y., Taufiq, M. F., & Li, H. (2024). Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment. arXiv. https://arxiv.org/pdf/2308.05374
- Longo, L., et al. (2024). Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Information Fusion, 106, 102301. https://doi.org/10.1016/j.inffus.2024.102301
- Lyons, J. (2019). Algorithm and Parameters: Solving the Generality Problem for Reliabilism, The Philosophical Review, 128(4), 463–509. doi: 10.1215/00318108-7697876
- Lyotard, J.-F. (1984). The Postmodern Condition: A Report on Knowledge (G. Bennington & B. Massumi, Trans.). Manchester: Manchester University Press. (Original work published 1979)
- Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI hallucinations: A misnomer worth clarifying. IEEE Conference on Artificial Intelligence (CAI), pp. 133–138. https://ieeecai.org/2024/wp-content/pdfs/540900a127/540900a127.pdf
- OpenAI (2023). GPT-4 System Card. OpenAI.com. Mar 23, 2023. https://cdn.openai.com/papers/gpt-4-system-card.pdf
- Piedrahita, O. A., & Carter, J. A. (2024). Can AI believe? Philosophy & Technology, 37(89). https://doi.org/10.1007/s13347-024-00780-6
- Poston, T. (2024). Internalism and Externalism in Epistemology. In Internet Encyclopedia of Philosophy. https://iep.utm.edu/int-ext
- Roose, K. (2023). A Conversation With Bing’s Chatbot Left Me Deeply Unsettled. New York Times. Feb 16, 2023. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
- Russell, S. & Norvig, P. (2003). Artificial Intelligence: A Modern Approach. New Jersey: Prentice Hall.
- Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3(3), 417–457. doi: 10.1017/S0140525X00005756
- Šekrst, K. (2020). AI-Completeness: Using Deep Learning to Eliminate the Human Factor. In S. Skansi (Ed.), Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives (pp. 117–130). Cham: Springer International Publishing.
- Šekrst, K. (forthcoming). Unjustified untrue “beliefs”: AI hallucinations and justification logics. In K. Świętorzecka, F. Grgić, & A. Brożek (Eds.), Logic, knowledge, and tradition: Essays in honor of Srećko Kovač.
- Tonmoy, S. M. T. I., et al. (2024). A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models. arXiv. https://arxiv.org/abs/2401.01313
- Turing, A. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460. doi: 10.1093/mind/LIX.236.433
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. In I. Guyon et al. (Eds.), Advances in Neural Information Processing Systems 30 (NIPS 2017).
- Vectara (2024). Hallucination Leaderboard. Retrieved on May 31, 2024. https://github.com/vectara/hallucination-leaderboard
- Waters, F. & Fernyhough, C. (2016). Hallucinations: A Systematic Review of Points of Similarity and Difference Across Diagnostic Classes. Schizophrenia Bulletin, 43(1). doi: 10.1093/schbul/sbw132
- Yampolskiy, R. (2012). AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI. 23rd Midwest Artificial Intelligence and Cognitive Science Conference, MAICS 2012, Cincinnati, Ohio, USA, 21–22 April 2012.