
Sentencing algorithms and equal consideration of interests

Open Access | Dec 2025

References

  1. ANGWIN, J., LARSON, J., MATTU, S. & KIRCHNER, L. (2016): Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. In: ProPublica, [online] [Retrieved August 25, 2025] Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  2. BAGARIC, M. & HUNTER, D. (2022): Enhancing the integrity of the sentencing process through the use of artificial intelligence. In: J. Ryberg & J. V. Roberts (eds.): Sentencing and Artificial Intelligence. New York: Oxford University Press, pp. 122–144.
  3. CHATZIATHANASIOU, K. (2022): Beware the lure of narratives: “Hungry judges” should not motivate the use of “artificial intelligence” in law. In: German Law Journal, 23(4), pp. 452–464.
  4. CHIAO, V. (2024): Algorithmic decision-making, statistical evidence and the rule of law. In: Episteme, 21(4), pp. 1241–1265.
  5. DANZIGER, S., LEVAV, J. & AVNAIM-PESSO, L. (2011): Extraneous factors in judicial decisions. In: Proceedings of the National Academy of Sciences of the United States of America, 108(17), pp. 6889–6892.
  6. DAVIES, B. & DOUGLAS, T. (2022): Learning to discriminate: The perfect proxy problem in artificially intelligent sentencing. In: J. Ryberg & J. V. Roberts (eds.): Sentencing and Artificial Intelligence. New York: Oxford University Press, pp. 97–121.
  7. FLORES, A. W., LOWENKAMP, C. T. & BECHTEL, K. (2017): False positives, false negatives, and false analyses: A rejoinder to “Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks”. In: Federal Sentencing Reporter, 30(1), pp. 27–32.
  8. GHOSE, S., TSE, Y. F., RASAEE, K., SEBO, J. & SINGER, P. (2024): The case for animal-friendly AI. arXiv preprint arXiv:2403.01199.
  9. GHOSE, S., HÄYRY, M. & SINGER, P. (2025): Sentience and beyond – A representative interview with Peter Singer AI. In: Cambridge Quarterly of Healthcare Ethics, First View, pp. 1–9.
  10. GLÖCKNER, A. (2016): The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated. In: Judgment and Decision Making, 11(6), pp. 601–610.
  11. HAGENDORFF, T., BOSSERT, L. N., TSE, Y. F. & SINGER, P. (2023): Speciesist bias in AI: How AI applications perpetuate discrimination and unfair outcomes against animals. In: AI and Ethics, 3, pp. 717–734.
  12. HARE, R. M. (1981): Moral Thinking: Its Levels, Method, and Point. Oxford: Oxford University Press.
  13. HIMMELREICH, C. (2009): Despite DNA evidence, twins charged in heist go free. In: Time, 23 March, [online] [Retrieved August 25, 2025] Available at: https://time.com/archive/6946089/despite-dna-evidence-twins-charged-in-heist-go-free
  14. LIPPERT-RASMUSSEN, K. (2011): “We are all different”: Statistical discrimination and the right to be treated as an individual. In: The Journal of Ethics, 15(1), pp. 47–59.
  15. LIPPERT-RASMUSSEN, K. (2022): Algorithm-based sentencing and discrimination. In: J. Ryberg & J. V. Roberts (eds.): Sentencing and Artificial Intelligence. New York: Oxford University Press, pp. 74–96.
  16. MARR, B. & WARD, M. (2019): Artificial Intelligence in Practice: How 50 Successful Companies Used AI and Machine Learning to Solve Problems. Hoboken, NJ: Wiley.
  17. REICH, C. L. & VIJAYKUMAR, S. (2021): A possibility in algorithmic fairness: Can calibration and equal error rates be reconciled? In: 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), 192, pp. 4:1–4:21.
  18. SCHAUER, F. (2003): Profiles, Probabilities and Stereotypes. Cambridge, MA & London: Belknap Press.
  19. SCHWARZE, M. & ROBERTS, J. V. (2022): Reconciling artificial and human intelligence: Supplementing not supplanting the sentencing judge. In: J. Ryberg & J. V. Roberts (eds.): Sentencing and Artificial Intelligence. New York: Oxford University Press, pp. 206–225.
  20. SIEGEL, E. (2018): Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die. Hoboken, NJ: Wiley.
  21. SINGER, P. (1981): The Expanding Circle: Ethics, Evolution, and Moral Progress. Princeton & Oxford: Princeton University Press.
  22. SINGER, P. (1999): A Darwinian Left: Politics, Evolution, and Cooperation. London: Weidenfeld & Nicolson.
  23. SINGER, P. (2011): Practical Ethics. New York: Cambridge University Press.
  24. SINGER, P. (2023): Ethics in the Real World: 90 Essays on Things That Matter. Princeton & Oxford: Princeton University Press.
  25. SINGER, P. & TSE, Y. F. (2023): AI ethics: the case for including animals. In: AI and Ethics, 3, pp. 539–551.
  26. WEINSHALL-MARGEL, K. & SHAPARD, J. (2011): Overlooked factors in the analysis of parole decisions. In: Proceedings of the National Academy of Sciences of the United States of America, 108(42), p. E833.
  27. ZERILLI, J., DANAHER, J., MACLAURIN, J., GAVAGHAN, C., KNOTT, A., LIDDICOAT, J. & NOORMAN, M. (2021): A Citizen's Guide to Artificial Intelligence. Cambridge, MA: The MIT Press.
  28. ZERILLI, J. (2022): Algorithmic sentencing: Drawing lessons from human factors research. In: J. Ryberg & J. V. Roberts (eds.): Sentencing and Artificial Intelligence. New York: Oxford University Press, pp. 165–183.
DOI: https://doi.org/10.2478/ebce-2025-0015 | Journal eISSN: 2453-7829 | Journal ISSN: 1338-5615
Language: English
Page range: 246–258
Published on: Dec 31, 2025
Published by: University of Prešov
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2025 Tomislav Bracanović, published by University of Prešov
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.