Abstract
A faster and smarter healthcare system using artificial intelligence (AI) methods could reliably detect critical diseases, shortening patients’ waiting times for diagnosis and treatment. When such tools are deployed, however, ethical concerns related to privacy protection are constantly raised. Fair AI models should avoid any possible source of bias in the datasets under study, while also eliminating discrimination against individuals based on their race, sex, religion, or other protected categories. To this end, the European Commission and the World Health Organization have issued a body of guidelines asking AI stakeholders to address these challenges in the medical field. Transparency about data acquisition procedures and equipment, detailed information on how a model has been trained and evaluated, and sustained efforts to explain how the model’s decisions are generated are all critical if patients are to accept these technologies. This paper discusses some of the specific challenges of adopting AI tools in the medical field, focusing on both the opportunities and the associated limitations and risks.