The Application of Large Language Models in Enforcing Prohibitions against Hate Speech in Lithuanian
Abstract
Large language models (LLMs) are increasingly used to detect and explain hate speech, yet their potential role within evidentiary processes remains underexplored. This article examines whether and how LLMs can support the enforcement of legal prohibitions against hate speech in Lithuania, focusing on the post-detection phase of legal analysis and case preparation. It first outlines the mechanics and evolutionary trends of LLMs, the specific risks associated with their deployment in a low-resource language such as Lithuanian, and the applicable substantive and procedural standards governing hate speech. Building on this framework, the article proposes and experimentally validates a conceptual “moot court” model in which distinct LLM agents assume the roles of plaintiff, defendant, and judge to generate structured legal arguments and reasoned decisions in Lithuanian hate speech cases. The findings indicate that, under carefully engineered constraints, LLMs can reliably distinguish criminal hate speech from lawful expression, reduce the cognitive burden on legal actors, and triangulate human and automated assessments. However, persistent risks of hallucination and opacity preclude their use as stand-alone evidence and instead support a complementary, assistive role in judicial practice.
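To make the “moot court” arrangement described above concrete, the following is a minimal illustrative sketch of how three role-constrained LLM agents could be orchestrated in code. It is not the authors' implementation: the function call_llm, the MootCourtCase structure, and all prompts are hypothetical placeholders standing in for whatever chat-completion client and case representation an actual system would use.

```python
from dataclasses import dataclass


# Hypothetical stand-in for any chat-completion LLM client; replace with a real API call.
def call_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("Plug in an actual LLM client here.")


@dataclass
class MootCourtCase:
    facts: str           # description of the allegedly hateful statement and its context
    applicable_law: str  # e.g. the relevant provision of the Lithuanian Criminal Code


def run_moot_court(case: MootCourtCase) -> dict:
    """Three role-constrained agents argue and decide a single hate speech case."""
    plaintiff = call_llm(
        "You are the plaintiff's counsel. Argue in Lithuanian why the statement "
        "constitutes criminal hate speech under the cited provision.",
        f"Facts: {case.facts}\nLaw: {case.applicable_law}",
    )
    defendant = call_llm(
        "You are the defence counsel. Argue in Lithuanian why the statement "
        "is lawful expression protected by freedom of speech.",
        f"Facts: {case.facts}\nLaw: {case.applicable_law}",
    )
    judgment = call_llm(
        "You are the judge. Weigh both arguments and issue a reasoned, "
        "structured decision in Lithuanian, citing the applicable standard.",
        f"Plaintiff's argument:\n{plaintiff}\n\nDefendant's argument:\n{defendant}",
    )
    return {"plaintiff": plaintiff, "defendant": defendant, "judgment": judgment}
```

The design point is that each agent sees only its own role constraint plus the case material, while the "judge" agent sees both parties' arguments, mirroring the adversarial structure the article uses to triangulate automated assessments.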
© 2026 Milita Songailaitė, Aušrinė Pasvenskienė, Paulius Astromskis, published by the Faculty of Political Science and Diplomacy and the Faculty of Law of Vytautas Magnus University (Lithuania)
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.