
Towards fair AI in Estonia’s public service: discussing and disseminating bias prevention in automated decision-making

By: Kristi Joamets  
Open Access | Dec 2025

DOI: https://doi.org/10.2478/bjes-2025-0037 | Journal eISSN: 2674-4619 | Journal ISSN: 2674-4600
Language: English
Page range: 201 - 224
Published on: Dec 12, 2025
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2025 Kristi Joamets, published by Tallinn University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.