Abstract
This article applies Peter Singer’s ethical framework to evaluate the impact of predictive AI systems in end-of-life care. Singer’s work is particularly fitting for this analysis because it addresses the normative variables that AI technologies influence: the moral significance of suffering, the formation of reflective preferences, and the assignment of responsibility for foreseeable outcomes. His account of personhood, his emphasis on autonomy as rational self-determination, and his principle of equal consideration of interests provide a solid foundation for assessing how algorithmic models affect clinical timing, deliberation, and risk allocation. Through an examination of two predictive tools used in end-of-life care, the article demonstrates how Singer’s framework clarifies the ways in which AI can enhance procedural rationality in end-of-life decisions. It also identifies the accompanying risks: the erosion of autonomy, the diffusion of responsibility, and the reproduction of structural inequalities. Ultimately, Singer’s framework offers a conceptual means of distinguishing algorithmic interventions that support ethical decision-making from those that undermine the non-negotiable ethical conditions he defends.