Abstract
The emergence of artificial intelligence challenges existing legal frameworks, notably in civil liability, cross-border regulation, and the protection of fundamental rights. The European Union has developed the AI Regulation and the AI Liability Directive to address these issues, emphasizing transparency, accountability, and consumer protection while promoting innovation. This regulatory framework categorizes AI systems by risk level and mandates strict compliance requirements for high-risk applications, ensuring alignment with fundamental EU values. Additionally, the Council of Europe AI Convention complements these efforts by focusing on human rights, democracy, and the rule of law, offering a broader international perspective. The two frameworks present complementary yet distinct approaches to AI governance, with the EU focusing on market harmonization and innovation and the Convention prioritizing ethical and social dimensions. The interplay between these instruments underscores the EU’s ambition to set a global standard for AI regulation while addressing the complexities of private international law and cross-border liability. The success of this legal framework will depend on its flexibility, its coherence, and its ability to adapt to rapid technological developments.