
Artificial Intelligence, Value Alignment and Rationality

Open Access | Jun 2022

Abstract

The problem of value alignment in AI studies is becoming increasingly acute. This article addresses basic questions about the system of human values corresponding to what we would like digital minds to be capable of. It has been suggested that, as long as humans cannot agree on a universal system of values in the positive sense, we might at least be able to agree on what has to be avoided. The article argues that while we may follow this suggestion, we still need to keep the positive approach in focus as well. A holistic solution to the value alignment problem is not in sight, and there may never be a final one. Currently, we face an era of endless adjustment of digital minds to biological ones. The biggest challenge is to keep humans in control of this adjustment; the responsibility lies with humans, even though human minds might not be able to limit the capacity of digital minds. The philosophical analysis shows that the key concept in dealing with this issue is value plurality. It may well be that we have to redefine our understanding of rationality in order to deal successfully with the value alignment problem. The article discusses an option to elaborate on the traditional understanding of rationality in the context of AI studies.

DOI: https://doi.org/10.2478/bjes-2022-0004 | Journal eISSN: 2674-4619 | Journal ISSN: 2674-4600
Language: English
Page range: 79 - 98
Published on: Jun 23, 2022
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2022 Zhumagul Bekenova, Peeter Müürsepp, Gulzhikhan Nurysheva, Laura Turarbekova, published by Tallinn University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.