Abstract
The article examines the growing tension between the use of Artificial Intelligence (AI) in criminal investigations and the protection of fundamental rights. While AI technologies such as facial recognition, predictive policing, and digital forensics promise greater efficiency in law enforcement, they simultaneously raise serious concerns about privacy, equality, non-discrimination, and the right to a fair trial. The analysis demonstrates that the current legal and doctrinal framework remains insufficiently developed to address these challenges, creating a risk of fragmented practice and undermining legal certainty. Drawing on European and international standards, as well as recent doctrinal debates, the article highlights the main risks: algorithmic opacity, indirect discrimination, automation bias, and the lack of consolidated jurisprudence on AI-generated evidence. The article contributes to closing this gap by identifying doctrinal lacunae and proposing directions for research and regulation. These include clear procedural standards, mandatory algorithmic audits, minimum safeguards for digital evidence, strict limits on the use of predictive technologies, and investment in digital literacy for justice professionals. AI should not be rejected as a threat but integrated responsibly into the legal system, in a way that ensures both security and respect for the rule of law.