The legal profession is undergoing a profound transformation driven by advancements in artificial intelligence (AI), with generative AI at the forefront of this evolution. Unlike earlier generations of legal technology, generative AI—particularly large language models (LLMs) such as OpenAI’s GPT-4—enables the autonomous generation of human-like text, legal documents, summaries, translations, and legal analyses. This technological breakthrough introduces both unprecedented opportunities and significant risks for legal practitioners, especially in jurisdictions like Spain, where stringent legal and ethical standards govern the conduct of legal professionals.
In the context of Spanish law, the use of generative AI by law firms and corporate legal departments must be carefully evaluated against the backdrop of multiple regulatory and ethical obligations. Spanish legal professionals are subject to the binding provisions of the Estatuto General de la Abogacía Española and the Código Deontológico de la Abogacía Española, which set out fundamental duties regarding professional secrecy, diligence, independence, loyalty, and the protection of clients’ rights. These duties are directly implicated by the adoption of AI tools that may process sensitive client data, perform legal reasoning, or generate legal outputs that bear upon the quality and integrity of legal advice.
Moreover, Spain, as a member of the European Union, enforces the General Data Protection Regulation (Regulation (EU) 2016/679, “GDPR”), a comprehensive data protection framework that imposes strict requirements on the processing of personal data, including legal bases for processing, purpose limitation, data minimisation, and security safeguards. The GDPR is complemented in Spain by the Organic Law 3/2018 on the Protection of Personal Data and Guarantee of Digital Rights (Ley Orgánica 3/2018, de Protección de Datos Personales y garantía de los derechos digitales, “LOPDGDD”), which introduces additional national provisions relevant to the deployment of AI systems that may access or process personal data within legal workflows.
Beyond privacy and professional responsibility, the emerging legal landscape surrounding artificial intelligence at the European level—particularly the EU AI Act—further complicates the regulatory matrix. The AI Act classifies certain legal applications of AI, such as legal advice generation or automated legal decision support, as either “limited risk” or “high-risk” systems, with each category carrying distinct compliance obligations. Spanish law firms and in-house legal teams must begin preparing for these obligations by implementing risk-based AI governance frameworks that ensure legal, ethical and technical accountability.
At the same time, the use of generative AI presents clear benefits: automating routine drafting tasks, accelerating legal research, supporting compliance monitoring, and enhancing client communication. However, these advantages can only be realised within a robust framework that preserves the core principles of legal practice, including confidentiality, independence, accuracy and accountability. Without such a framework, legal practitioners risk breaching ethical duties, exposing confidential information, relying on inaccurate outputs or delegating critical legal judgments to systems incapable of legal reasoning.
This article addresses the challenges and opportunities associated with generative AI technologies in a systematic and rigorous manner. It aims to provide legal professionals in Spain with a comprehensive, practical and legally sound guide to understanding, assessing and implementing generative AI technologies in compliance with applicable regulatory, ethical and technical standards. By examining the intersection of AI technology with the normative and institutional structures of Spanish legal practice, this article aspires to make a valuable scholarly contribution and provide a practical reference for law firms, in-house counsel and legal technologists operating within Spanish law.
The implementation of generative AI in legal services within the Spanish jurisdiction is subject to a complex and multilayered legal framework. This framework encompasses ethical obligations derived from the professional statutes governing lawyers, comprehensive data protection laws applicable at both the national and European levels, and emerging regulatory instruments addressing the unique risks associated with artificial intelligence. Legal professionals operating in Spain must therefore navigate a challenging terrain of intersecting legal standards to ensure that the deployment of AI technologies does not contravene their professional obligations or infringe upon the rights of clients and data subjects.
The Council of Bars and Law Societies of Europe (CCBE) has also addressed the regulation of artificial intelligence in the practice of law in the document entitled ‘CCBE considerations on the legal aspects of artificial intelligence’ and, among other considerations, advocates for effective human oversight in the use of AI tools in the field of justice as a precondition for a rule of law justice system (1).
The Law Society of England and Wales issued new guidance to the profession in November 2023 (2) on the use of generative artificial intelligence tools, stating that lawyers are responsible for work products generated using ‘technology-based solutions’ and urging lawyers to ‘carefully review content and ensure its accuracy’.
A central pillar of the legal framework is the Código Deontológico de la Abogacía Española (3), which sets out the ethical and professional duties of Spanish lawyers. Article 5 of the Code establishes the inviolability of professional secrecy, obliging lawyers to maintain absolute confidentiality concerning all facts and information that come to their knowledge as a result of their professional activity, regardless of their origin. This duty is further reinforced in the Estatuto General de la Abogacía Española (Royal Decree 135/2021) (4), which codifies the obligation to uphold professional secrecy as a foundational element of the lawyer-client relationship. The use of generative AI, particularly in cloud-based environments or through third-party platforms, poses inherent risks to the preservation of confidentiality and must be subjected to a rigorous risk assessment and control protocol to ensure compliance with these core duties.
Closely related to confidentiality is the duty of competence (diligencia profesional), which requires lawyers to possess the necessary knowledge and skills to provide effective legal representation. The integration of AI tools into legal practice implicates this duty in two significant respects. First, lawyers must acquire a reasonable understanding of how AI technologies operate, including their limitations, accuracy levels, and potential failure modes, in order to use them responsibly. Second, they must ensure that reliance on AI does not substitute human judgment in areas requiring legal interpretation, strategic decision-making, or the exercise of discretion. In accordance with these obligations, the adoption of generative AI must be accompanied by internal policies and training protocols that ensure all personnel—legal and non-legal—are competent to engage with these systems appropriately.
From a data protection perspective, Spanish lawyers must comply with the General Data Protection Regulation (GDPR), which has direct effect across the European Union, and the Spanish Organic Law 3/2018 on the Protection of Personal Data and Guarantee of Digital Rights (LOPDGDD) (5). These instruments impose strict requirements on the processing of personal data, particularly when such data are processed using automated tools, including generative AI systems. Under Article 5 of the GDPR, personal data must be processed lawfully, fairly, and transparently, and only for specified, explicit, and legitimate purposes. The principles of data minimisation, accuracy, storage limitation and integrity and confidentiality are also of paramount importance.
The deployment of generative AI tools in the legal context frequently involves the processing of special categories of data—such as data relating to legal proceedings, criminal convictions or sensitive client matters—which are subject to additional safeguards under Articles 9 and 10 of the GDPR and the corresponding provisions of the LOPDGDD. Legal professionals must conduct Data Protection Impact Assessments (DPIAs) where AI systems are likely to result in high risks to the rights and freedoms of individuals, particularly in cases involving automated decision-making or profiling. The absence of adequate safeguards may not only result in regulatory sanctions, but also infringe upon the fundamental rights of data subjects under the Spanish Constitution and the EU Charter of Fundamental Rights.
Moreover, Spain and the European Union are developing AI-specific regulatory frameworks that will directly impact legal practice. The most notable of these is the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), which introduces a risk-based classification system for AI applications. Legal tools that perform document analysis, legal drafting or decision support functions may be classified as “limited risk” or “high-risk” systems, depending on their specific capabilities and the context of use. High-risk systems will be subject to stringent requirements, including risk management systems, data governance measures, technical documentation, record-keeping obligations and post-market monitoring. Although the AI Act’s obligations will not become fully applicable until 2027, Spanish legal professionals must begin preparing for them now by establishing compliance mechanisms aligned with its structure.
Finally, the improper or negligent use of generative AI tools may expose legal professionals to civil or professional liability under Spanish law. If a lawyer uses AI-generated content that contains erroneous legal reasoning or misinformation, and such content is relied upon by a client or court to their detriment, the lawyer may face disciplinary action or malpractice claims. Therefore, robust oversight mechanisms must be instituted to ensure that all AI-assisted legal outputs are subject to human review and legal validation prior to their dissemination.
In sum, the legal framework governing generative AI in Spanish legal practice is expansive, stringent and evolving. It demands a high level of diligence, transparency, and ethical foresight on the part of legal professionals. The convergence of data protection rules, professional conduct obligations, and emerging AI regulation necessitates the development of internal governance models that are both legally compliant and technologically informed (6). Only through such integrative approaches can Spanish law firms and legal departments harness the potential of generative AI while safeguarding the integrity of legal practice and the rights of their clients.
The effective and compliant implementation of generative AI technologies within legal environments requires a sound understanding of their technical foundations. Legal professionals responsible for governance, risk and compliance must ensure that AI systems are deployed in ways that align with both the letter and the spirit of the law, as well as the broader principles of information security, technological resilience and professional responsibility. This section outlines the key technical considerations that should inform the integration of generative AI into legal practice in Spain.
At the core of generative AI are large language models (LLMs) built on transformer architectures. These models, such as OpenAI’s GPT series or Meta’s LLaMA, are trained on vast corpora of text data, enabling them to generate coherent, contextually relevant language outputs based on user prompts. While these capabilities can support legal drafting, research, summarisation and even clause extraction, they also raise significant concerns regarding control, transparency, and reliability. It is imperative that legal institutions understand the provenance of the models they employ, including the nature of the training data, the risk of embedded bias and the presence or absence of guardrails such as output filters and human-in-the-loop mechanisms.
Deployment architecture constitutes a fundamental decision point with direct implications for data security and compliance with confidentiality obligations under the Código Deontológico de la Abogacía Española and data protection laws such as the GDPR and the LOPDGDD. Law firms and corporate legal departments must choose between different deployment modalities: public cloud services (such as OpenAI’s API via Azure), private cloud environments or fully on-premises installations. While public cloud deployments offer scalability and access to state-of-the-art models, they may introduce risks related to data transmission and third-party processing, which could compromise client confidentiality or trigger cross-border data transfer obligations under Chapter V of the GDPR. Conversely, on-premises or self-hosted solutions provide greater control over data flows and model access, but require significant investment in infrastructure, technical staff and cybersecurity capabilities.
Regardless of the deployment model selected, robust data security protocols must be implemented. These include the application of zero-trust security principles, end-to-end encryption of data in transit and at rest, granular access controls, identity and access management (IAM) systems and continuous monitoring of user activities and model outputs. Additionally, audit logging is essential for ensuring traceability and accountability, particularly in contexts where AI-generated content is incorporated into client-facing deliverables or forms part of internal legal reasoning processes.
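To make the audit-logging requirement concrete, the following sketch shows one way an append-only, hash-chained record of AI interactions could be kept so that any later alteration is detectable. It is a minimal illustration in Python with hypothetical field names, not a production logging system; real deployments would persist entries to tamper-resistant storage and integrate with the organisation's IAM and monitoring stack.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log of AI interactions; each entry embeds the hash of
    the previous one, so tampering with any earlier entry breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, user: str, tool: str, prompt_summary: str, output_summary: str) -> dict:
        """Append one entry describing who used which tool, and summaries of
        the prompt and output (full texts would live in secure storage)."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "prompt_summary": prompt_summary,
            "output_summary": output_summary,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; False means some entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A chained log of this kind supports the traceability goal described above: a reviewer can establish not only what was generated and by whom, but also that the record itself has not been edited after the fact.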
From a systems integration perspective, generative AI tools are increasingly interfaced with legal technology platforms such as contract lifecycle management (CLM) systems, document management systems (DMS), legal research engines and enterprise knowledge bases. These integrations are typically facilitated through Application Programming Interfaces (APIs), which must be governed by strict usage policies and compliance controls. Legal organisations must ensure that API endpoints are protected against unauthorised access, subject to usage monitoring and compliant with applicable data handling standards. Moreover, the integration process must be documented and auditable, with clear delineation of responsibilities between internal IT teams, legal staff and external vendors.
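The gatekeeping role of such API usage policies can be sketched as a simple pre-flight authorisation check: before a request is forwarded to an external AI endpoint, the caller's role and the classification of the data involved are tested against an internal policy table. The roles, endpoint names and data classes below are hypothetical, and a real gateway would also enforce authentication, rate limits and logging.

```python
# Hypothetical policy table: which roles may call which AI endpoints,
# and which data classifications may be processed externally at all.
ROLE_PERMISSIONS = {
    "associate": {"drafting-assistant", "research-assistant"},
    "partner": {"drafting-assistant", "research-assistant", "contract-review"},
    "paralegal": {"research-assistant"},
}
EXTERNAL_ALLOWED = {
    "public": True,
    "internal": True,
    "client-confidential": False,  # must never leave the firm's environment
}

def authorise_api_call(role: str, endpoint: str, data_class: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed call to an external AI API."""
    if endpoint not in ROLE_PERMISSIONS.get(role, set()):
        return False, f"role '{role}' is not authorised for endpoint '{endpoint}'"
    if not EXTERNAL_ALLOWED.get(data_class, False):
        return False, f"data classified '{data_class}' may not be sent to an external service"
    return True, "authorised"
```

Keeping the policy in declarative tables rather than scattered conditionals makes the rules themselves reviewable and auditable, which aligns with the documentation duties discussed above.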
Another critical technical aspect is the ability to validate and verify AI outputs. Generative AI models are prone to a phenomenon commonly referred to as “hallucination,” where the system fabricates information that appears plausible but is factually incorrect or legally unsound (7). To address this risk, legal practitioners must implement structured review protocols whereby all AI-generated outputs are reviewed by qualified legal professionals prior to any reliance or disclosure. Where possible, firms should employ retrieval-augmented generation (RAG) systems that integrate trusted legal sources into the model’s response process, thereby enhancing accuracy and contextual relevance.
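The retrieval step of a RAG pipeline can be illustrated with a deliberately simplified sketch: passages from a trusted corpus are ranked by naive keyword overlap with the query (a stand-in for the vector search a production system would use), and the top matches are assembled into a prompt that confines the model to those sources. The corpus labels and prompt wording are illustrative assumptions.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Rank corpus passages by keyword overlap with the query and return
    the top k (source, text) pairs that share at least one term."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(text.lower().split())), source, text)
        for source, text in corpus.items()
    ]
    scored.sort(reverse=True)
    return [(source, text) for score, source, text in scored[:k] if score > 0]

def build_grounded_prompt(query: str, passages: list[tuple[str, str]]) -> str:
    """Assemble a prompt that instructs the model to answer only from the
    retrieved sources and to cite them, reducing the room for fabrication."""
    context = "\n".join(f"[{src}] {txt}" for src, txt in passages)
    return (
        "Answer strictly on the basis of the sources below. Cite the "
        "bracketed source for every proposition, and reply 'no source found' "
        "if the sources do not cover the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
```

Grounding does not eliminate the need for human review, but it converts an open-ended generation task into one anchored to verifiable authorities, which makes the subsequent lawyer review materially easier.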
Finally, the technical implementation of AI within legal practice must support transparency, explainability, and auditability—principles that are increasingly recognised as essential under both the GDPR (Recitals 60 and 71) and the EU AI Act. Systems should be capable of providing clear documentation on how outputs were generated, including prompt logs, reference sources (if applicable), and confidence metrics. These technical attributes are not only valuable for internal quality assurance but may also be necessary for demonstrating compliance in the event of regulatory scrutiny, client inquiries or litigation.
Article 57 of Royal Decree-Law 6/2023 of 19 December, which enacts urgent measures for the implementation of the Recovery, Transformation and Resilience Plan in the areas of public justice service, civil service, local government and patronage (8), constitutes the first legislative provision in Spain that explicitly regulates the use of generative artificial intelligence in judicial practice. The statute introduces the notion of “assisted actions”, thereby providing a legal framework for the drafting of documents—or drafts of documents—produced with the aid of AI systems. The provision defines an assisted action as a situation in which the information system of the Administration of Justice generates a total or partial draft of a complex document, based on available data and potentially produced by algorithms, which may serve either as the basis for or as supporting material to a judicial or procedural decision. Importantly, the legislator circumscribes the role of AI strictly to an auxiliary function. It neither replaces judicial decision-making nor aspires to do so. At this point, the rule mandates direct human intervention, making it unequivocally clear that “[t]he draft document generated in this way shall not constitute a judicial or procedural decision in itself, unless validated by the competent authority. The justice administration systems shall ensure that the draft document is only generated at the user’s discretion and can be freely and fully modified by them”.
This provision is highly significant. It reflects the first positive step towards the juridification of AI in the Spanish judicial system, providing both recognition and limitation. On the one hand, it legitimises the use of generative AI to assist in the preparation of judicial documentation, acknowledging its efficiency in handling complex data. On the other, it sets a clear boundary of competence, reaffirming that the authority to decide and validate always rests with the judicial officer. Thus, the rule embodies the principle of “human-in-the-loop” governance, ensuring that AI serves exclusively as an auxiliary instrument, never as a substitute for judicial reasoning or decision-making.
Article 8 of Organic Law 5/2024 on the Right of Defence, read in conjunction with Article 4.1 of the same statute, establishes that the “right to receive adequate legal assistance” must be interpreted as complemented by the requirement of “quality of service”. This mandate is further reinforced by Article 19, which sets out the duties of legal professionals, requiring them to act in accordance with the Spanish Constitution, statutory law, procedural good faith and the ethical obligations of loyalty and honesty, with particular regard to the rules and guidelines of the relevant bar associations and professional councils.
From this systematic interpretation it follows that legal practice in Spain is governed not only by technical competence but also by binding ethical and qualitative standards. Although there is no explicit provision requiring lawyers to inform clients of the use of artificial intelligence tools, the lawyer’s responsibility lies in the supervision and adaptation of outputs to the client’s specific circumstances. A failure of oversight, rather than the mere use of AI, would amount to negligent conduct and could give rise to disciplinary or civil liability.
In conclusion, the deployment of generative AI in legal contexts cannot be approached as a purely technological initiative. It must be grounded in a comprehensive technical framework that ensures data integrity, system security, legal accountability and operational reliability. Spanish law firms and corporate legal departments bear the dual burden of maintaining the highest standards of professional ethics and implementing cutting-edge technological systems. Success in this domain requires a proactive, well-resourced and collaborative approach that bridges legal expertise with technical acumen.
The successful and compliant integration of generative AI into the legal function requires more than technical deployment—it necessitates the creation of a comprehensive governance model rooted in legal ethics, regulatory obligations and operational prudence. For Spanish law firms and in-house legal departments, best practices must be informed by the Código Deontológico de la Abogacía Española, the GDPR, the LOPDGDD and emerging AI standards, including the EU AI Act. This section outlines key strategic and operational best practices to guide legal professionals through the responsible implementation and ongoing oversight of generative AI technologies.
A foundational best practice is the adoption of a formal AI governance and risk management framework. Legal organisations should establish an internal AI policy that defines the permissible uses of generative AI, delineates accountability structures and sets out the criteria for selecting, testing and approving AI tools. This policy should include a risk classification system—aligned with the EU AI Act—that identifies whether an application is minimal, limited or high-risk and stipulates the corresponding procedural safeguards. For example, the use of AI in legal research may be considered low risk, while applications involving client advice, contract negotiation or regulatory submissions may trigger high-risk classifications, requiring enhanced oversight.
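One possible shape for such a risk classification is a declarative mapping from internal use cases to tiers, each tier carrying its own procedural safeguards. The use-case names and safeguards in this sketch are hypothetical and loosely mirror the examples above; they are not the AI Act's formal definitions, and each organisation would calibrate its own tiers.

```python
# Illustrative mapping of internal AI use cases to risk tiers.
RISK_TIERS = {
    "legal-research": "minimal",
    "internal-summarisation": "minimal",
    "contract-drafting": "limited",
    "client-advice": "high",
    "regulatory-submission": "high",
}

# Procedural safeguards triggered by each tier.
SAFEGUARDS = {
    "minimal": ["spot-check outputs"],
    "limited": ["mandatory lawyer review", "usage logging"],
    "high": ["mandatory lawyer review", "usage logging",
             "DPIA on file", "partner sign-off before release"],
}

def required_safeguards(use_case: str) -> list[str]:
    """Return the safeguards owed for a use case; anything not expressly
    classified defaults conservatively to the strictest tier."""
    return SAFEGUARDS[RISK_TIERS.get(use_case, "high")]
```

The conservative default for unclassified use cases reflects the precautionary posture the governance framework should adopt: a new application must be assessed before it can be treated as anything other than high-risk.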
Selecting the right vendor and tools is a critical juncture in the implementation process. Law firms and legal departments must conduct due diligence to ensure that any generative AI tools acquired, whether off-the-shelf or bespoke, comply with Spanish and EU data protection requirements, support ethical usage and provide adequate technical transparency. Vendors should be required to provide documentation on model training data, hosting architecture (including data residency and access controls), strategies for mitigating bias and audit mechanisms. Contracts with vendors must include data processing agreements (DPAs) in accordance with Article 28 of the GDPR, clearly assigning liability for data breaches, unauthorised processing and model errors.
Another pillar of effective AI integration is human competence. Legal professionals must be adequately trained in both the functionality of generative AI tools and the ethical and legal implications of their use. Training programmes should include modules on prompt engineering, the critical assessment of AI outputs, data privacy principles, professional secrecy obligations and the role of human oversight in mitigating errors. AI-generated content cannot replace legal judgment, and reliance on such content without review may breach professional duties and give rise to civil liability under Spanish law.

Internal operational protocols must also be established to regulate AI usage across the legal function. These protocols should define when and how generative AI tools may be used, under what supervision and for what types of tasks. For instance, junior associates may use AI to generate draft memos or legal outlines, but all such drafts must be reviewed and validated by supervising attorneys prior to client delivery. Similarly, policies must specify that client-specific data must not be entered into external AI systems unless those systems are operated under strict confidentiality safeguards and contractual assurances.
Another best practice is clear and proactive communication with clients about the use of generative AI. While consent from clients may not be legally required for every application, transparency fosters trust and ensures ethical compliance. Clients should be informed, preferably through engagement letters or terms of service, when generative AI is used to support service delivery. This disclosure should outline the purposes, benefits, limitations and risk mitigation measures in place. In high-stakes matters, clients should be given the option to opt out of AI-assisted services altogether.
Quality control and monitoring must form part of an ongoing assurance process. Law firms and legal departments should conduct regular audits of AI usage to ensure the accuracy, consistency and legal reliability of outputs. These audits should include both automated tools and manual review procedures, ideally overseen by a multidisciplinary AI governance committee. Any anomalies, inaccuracies or ethical concerns must trigger a root cause analysis, and where appropriate, the AI system’s usage must be suspended or revised.
Lastly, legal organisations must be prepared to adapt their AI strategies in light of regulatory developments. The implementation of the EU AI Act introduces new documentation, transparency and registration obligations for certain legal AI systems. Spanish law firms must be agile and ensure that their internal practices evolve in line with external legal requirements, especially with regard to conformity assessments, technical standards and post-market monitoring duties.
In conclusion, integrating generative AI into legal services requires a deliberate, structured and ethically grounded approach. By implementing these best practices, Spanish legal professionals can achieve the dual goals of technological innovation and compliance with the fundamental values of the legal profession: confidentiality, diligence, independence and respect for the rights of clients and third parties.
The implementation of generative AI in legal practice must account not only for regulatory compliance and ethical obligations, but also for the specific demands, workflows and risk profiles associated with different areas of legal work. In the Spanish context, generative AI can be harnessed to support a wide range of legal functions—both in law firms and corporate legal departments—provided that its use is consistent with the standards of diligence, professional secrecy and legal accountability imposed by the Estatuto General de la Abogacía Española and the Código Deontológico. This section explores concrete applications across key practice areas, illustrating the diverse value propositions and the distinct legal and operational considerations that apply to each.
In litigation, generative AI can play a significant role in automating time-intensive tasks while maintaining human control over substantive legal decisions. Tools powered by large language models are increasingly used to generate draft pleadings, prepare litigation summaries, identify relevant case law and even propose argument structures (9). In the Spanish judicial context, where procedural formalism and evidentiary rigour are paramount, these tools must be integrated cautiously and under close supervision. Legal professionals must ensure that all AI-generated content is meticulously reviewed to avoid procedural defects or inaccuracies that may affect the validity of submissions. Moreover, AI can be applied to improve e-discovery processes by rapidly classifying, clustering, and summarising large volumes of documentary evidence, particularly in complex civil or commercial disputes in the courts (10).
In transactional law, generative AI is being deployed to assist in contract review, clause harmonisation, and risk flagging. Spanish law firms engaged in M&A, real estate, or financial transactions can leverage AI to analyse and standardise clauses across multi-document negotiations, identify deviations from institutional standards, and generate alternative clause formulations based on predefined risk matrices. In-house counsel operating under Spanish company law (Ley de Sociedades de Capital) benefit from AI tools that accelerate the review of supplier contracts, NDAs, licensing agreements, and compliance documentation, ensuring consistency with internal policies and external regulatory requirements. Nonetheless, caution must be exercised when deploying AI in cross-border transactions governed by foreign laws or mixed jurisdictions, where the system’s training data and legal assumptions may not align with the applicable legal framework.
In the realm of regulatory compliance, generative AI can assist in monitoring evolving obligations under Spanish and EU law, particularly in heavily regulated sectors such as finance, energy, health, and data protection. AI tools can scan official bulletins, case law databases, and regulatory updates (e.g., Boletín Oficial del Estado, CNMV circulars, or AEPD guidelines), distil relevant developments, and map them to internal compliance checklists. For example, compliance officers may use generative AI to track developments related to the Digital Services Act, whistleblower protection laws or new anti-money laundering rules. However, it is imperative that such tools do not replace professional analysis and that conclusions drawn from AI-generated summaries are verified against the official legal sources.
In corporate legal operations, generative AI enables the automation of legal intake processes, knowledge management and policy drafting. Many in-house legal departments in Spain now utilise AI-enabled platforms to triage incoming legal queries, route them to appropriate team members and generate initial responses based on historical data and predefined templates. These tools can reduce turnaround time and enhance consistency, particularly in high-volume areas such as employment law, procurement or GDPR compliance. Additionally, generative AI can be used to maintain and update internal policy documentation (e.g., data protection policies, code of conduct, internal investigations procedures), ensuring alignment with the latest regulatory requirements and organisational practices.
Beyond the above, specialised legal domains also offer fertile ground for AI-enhanced tools. In labour and employment law, for instance, AI can assist in generating model contracts, disciplinary notices or legal opinions based on the Estatuto de los Trabajadores and relevant jurisprudence from the Tribunales Superiores de Justicia. In tax law, generative AI can support the drafting of client memos regarding VAT treatment or corporate restructuring, provided that the outputs are validated against rulings from the Dirección General de Tributos or applicable EU directives. Even in criminal law, AI may support the preparation of procedural documents or case summaries under the Ley de Enjuiciamiento Criminal, although its use must be strictly circumscribed due to the heightened ethical and procedural sensitivities in this field.
In all these applications, the overarching principle remains the same: generative AI is a powerful assistive tool, but not a substitute for professional legal judgment. Spanish legal professionals must retain ultimate responsibility for the legal content and outcomes of their work. The appropriateness of AI use must be determined on a case-by-case basis, balancing efficiency gains with the duty to act with independence, diligence, and in strict observance of the law.
The integration of generative AI into legal workflows, while potentially transformative, introduces a series of risks that must be proactively identified, evaluated and mitigated to ensure compliance with both professional standards and regulatory obligations. These risks span technical, ethical, legal and operational domains and may expose Spanish law firms and corporate legal departments to civil liability, reputational damage and professional disciplinary action if not adequately addressed. This section outlines the principal risk categories associated with generative AI in legal practice and sets forth corresponding mitigation strategies, with a particular emphasis on compliance with the Código Deontológico de la Abogacía Española, the Estatuto General de la Abogacía Española and applicable Spanish and EU regulations.
One of the most critical risks is unauthorised access to confidential information, which may occur if sensitive client data is transmitted to generative AI systems operated by external vendors without adequate data protection safeguards. In cloud-based deployments, especially those involving public APIs, data may transit through servers located outside the European Economic Area, potentially triggering a violation of Chapter V of the GDPR concerning international data transfers. To mitigate this risk, legal organisations must ensure that any generative AI platform used is either hosted on-premises or under contractual arrangements that provide GDPR-compliant transfer mechanisms, such as Standard Contractual Clauses (SCCs) or Binding Corporate Rules (BCRs). Additionally, internal policies must prohibit the input of confidential or privileged client information into external AI systems unless such systems are subject to a data processing agreement and prior client consent, if required.
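The internal prohibition on entering confidential data into external systems can be partially automated with a pre-submission screen. The sketch below checks a prompt against a few illustrative patterns (Spanish DNI/NIE identifier formats and email addresses); a production data-loss-prevention filter would cover far more (names, case references, account details) and would complement, not replace, the contractual and policy safeguards described above.

```python
import re

# Illustrative detection patterns only -- deliberately non-exhaustive.
PATTERNS = {
    "Spanish DNI/NIF": re.compile(r"\b\d{8}[A-HJ-NP-TV-Z]\b", re.IGNORECASE),
    "Spanish NIE": re.compile(r"\b[XYZ]\d{7}[A-Z]\b", re.IGNORECASE),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the identifier types detected in the prompt; an empty list
    means the text passed this (limited) automated screen."""
    return [label for label, rx in PATTERNS.items() if rx.search(prompt)]

def gate_submission(prompt: str) -> bool:
    """Block submission to an external AI service if identifiers are found."""
    return not screen_prompt(prompt)
```

Such a gate catches only obvious leakage and will miss free-text confidences, so it should be treated as a last-line tripwire behind training and policy, never as proof that a prompt is safe to send.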
A related risk involves data breaches and cybersecurity failures, which could arise from inadequate encryption, insufficient access controls or vulnerabilities in AI system integration. Breaches not only entail significant liability under the GDPR and the LOPDGDD but may also violate the absolute duty of professional secrecy imposed by Article 5 of the Código Deontológico. To address this, legal organisations must adopt a zero-trust security model, implement strong identity and access management (IAM) protocols, encrypt all data in transit and at rest, and conduct regular penetration tests and security audits. Incident response plans should be in place to enable rapid containment and notification in the event of a breach.
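The deny-by-default posture that a zero-trust model implies can be sketched in a few lines. The role names, data categories and policy table below are hypothetical; a production system would delegate authentication to an identity provider and authorisation to a dedicated policy engine rather than an in-process dictionary.

```python
from dataclasses import dataclass

# Illustrative role model; real firms would integrate an external identity provider.
@dataclass(frozen=True)
class User:
    user_id: str
    roles: frozenset[str]

# Least-privilege policy: which roles may expose a data category to the AI assistant.
# Categories and roles are hypothetical examples.
AI_ACCESS_POLICY = {
    "public_legislation": {"associate", "partner", "paralegal"},
    "client_matter": {"partner"},          # deliberately narrow by default
    "privileged_correspondence": set(),    # never processed by the assistant
}

def may_use_ai_on(user: User, category: str) -> bool:
    """Deny by default: access requires an explicit grant for the data category."""
    allowed_roles = AI_ACCESS_POLICY.get(category, set())
    return bool(allowed_roles & user.roles)
```

The key design choice is that an unknown category yields an empty grant set, so anything not expressly permitted is refused.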
Another significant concern is the phenomenon of model hallucinations, wherein generative AI systems produce content that is factually inaccurate, legally unsound or entirely fabricated. This issue is particularly acute in legal contexts where reliability and precision are paramount. An AI-generated memorandum that cites non-existent case law, misinterprets statutes or proposes flawed legal strategies may lead to malpractice claims or disciplinary action if relied upon without proper review (11). To mitigate this risk, legal professionals must implement a human-in-the-loop validation process for all AI-generated content, particularly in deliverables that involve legal analysis, client advice or court submissions. Retrieval-augmented generation (RAG) models, which integrate authoritative legal sources into the output process, should be prioritised wherever possible to reduce the likelihood of hallucinations.
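The human-in-the-loop gate described above can be partly automated: before a draft reaches a reviewing lawyer, every citation it contains can be checked against an authoritative index, with anything unmatched flagged for verification. The citation set and regular expression below are hypothetical; a real system would query a database such as CENDOJ rather than a hard-coded whitelist.

```python
import re

# Hypothetical index of verified citations; in practice this would be a query
# against an authoritative case-law database, not a hard-coded set.
KNOWN_CITATIONS = {"STC 31/2010", "STS 1916/2023", "STC 76/2019"}

# Matches Spanish judgment references such as "STC 31/2010" or "STS 1916/2023".
CITATION_RE = re.compile(r"\bST[CS]\s\d+/\d{4}\b")

def unverified_citations(draft: str) -> set[str]:
    """Return citations in an AI draft that cannot be matched to the index."""
    return set(CITATION_RE.findall(draft)) - KNOWN_CITATIONS

def release_gate(draft: str) -> bool:
    """Block release until every citation checks out; a human reviews the rest."""
    return not unverified_citations(draft)
```

An automated check of this kind would have flagged the fabricated citations at issue in the Constitutional Court incident discussed below, but it verifies existence only; assessing whether a real citation actually supports the argument remains a human task.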
The risk of bias and discrimination in AI outputs also merits attention. Generative AI models may inadvertently reproduce biases present in their training data, resulting in outputs that are prejudicial or ethically problematic. In legal applications, this could manifest in discriminatory language in employment documents, inconsistent treatment of similar legal cases, or the reinforcement of stereotypes in client communications. Spanish legal institutions must therefore implement regular audits of AI-generated outputs, focusing on bias detection, fairness and inclusivity. Additionally, firms should require vendors to disclose the steps taken during model development to mitigate bias and ensure that diversity and equity principles are embedded in both the design and deployment phases.
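As a deliberately trivial illustration of one component of such an audit, the sketch below counts occurrences of watch-listed terms across a batch of AI outputs. The term list is purely hypothetical; meaningful bias audits combine statistical fairness metrics and representative sampling with expert human review, and are never reducible to keyword matching.

```python
import re
from collections import Counter

# Hypothetical watch-list for periodic output audits; illustrative only.
WATCH_TERMS = {"spinster", "mankind", "manpower"}

def audit_outputs(outputs: list[str]) -> Counter:
    """Count occurrences of watch-listed terms across a batch of AI outputs."""
    counts: Counter = Counter()
    for text in outputs:
        # Tokenise on letters (including Spanish accented characters).
        for word in re.findall(r"[a-záéíóúüñ]+", text.lower()):
            if word in WATCH_TERMS:
                counts[word] += 1
    return counts
```

Aggregated counts of this kind are useful mainly as a trigger: a non-zero result routes the batch to a human auditor rather than producing any automated judgement of bias.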
Non-compliance with applicable regulations and professional rules is a further area of concern. The evolving regulatory landscape, particularly the EU AI Act, will impose new obligations on legal professionals using certain categories of AI systems. These may include documentation requirements, transparency disclosures and post-market monitoring duties. Failure to comply may lead to administrative sanctions or render the AI system unlawful. Legal departments must therefore adopt a regulatory watch function, ensuring that they remain abreast of legal developments and adjust their AI governance frameworks accordingly. Internal audits, training refreshers and documentation reviews should be scheduled at regular intervals to maintain compliance readiness.
Lastly, the use of generative AI may give rise to professional liability if clients suffer harm as a result of flawed AI-assisted legal services. Under Spanish civil liability principles and the disciplinary regime of the Consejo General de la Abogacía Española, lawyers remain fully responsible for the advice they provide, regardless of the tools used. To mitigate this liability exposure, legal organisations must explicitly delineate the role of AI in their workflows, ensure that all outputs are reviewed and approved by qualified practitioners and document such reviews in a defensible manner. Furthermore, professional indemnity insurance policies should be reviewed to confirm coverage for technology-assisted services and to identify any exclusions related to AI usage.
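One way to document reviews in a defensible manner is a tamper-evident log in which each entry incorporates the hash of its predecessor, so that any later alteration breaks the chain. The sketch below is illustrative only, under the assumption of a simple in-memory list; a production system would add trusted timestamps, digital signatures and durable storage.

```python
import hashlib
import json

def append_review(log: list[dict], matter: str, reviewer: str, approved: bool) -> None:
    """Append a review record whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"matter": matter, "reviewer": reviewer, "approved": approved, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def chain_intact(log: list[dict]) -> bool:
    """Recompute every hash; any edited entry invalidates the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Because each record binds the one before it, retroactively changing an approval decision is detectable, which is precisely the property a firm would want when demonstrating diligent review to an insurer or a disciplinary body.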
In summary, while generative AI presents substantial benefits to legal practice in Spain, it also introduces multidimensional risks that must be proactively managed through a combination of technical safeguards, procedural controls and ethical oversight. A culture of accountability, continuous learning and regulatory alignment will be essential for the safe, lawful and effective use of AI in legal services.
The integration of generative AI into the legal sector represents both a remarkable technological advancement and a profound regulatory challenge. For law firms and corporate legal departments operating within the Spanish jurisdiction, the path forward requires a careful balance between innovation and adherence to the foundational principles of the legal profession. The transformative potential of AI—enhancing efficiency, accelerating legal analysis and expanding access to knowledge—must be harnessed within a robust framework of legal, ethical and technical safeguards.
Spanish legal professionals operate under some of the most demanding ethical codes and regulatory regimes in Europe, with professional secrecy, diligence and client protection enshrined in the Código Deontológico de la Abogacía Española, the Estatuto General de la Abogacía Española and reinforced through the application of the GDPR and LOPDGDD. These standards are neither optional nor symbolic—they are binding duties that persist irrespective of the technologies deployed. As such, the use of generative AI in legal practice must never serve to dilute or displace the role of human legal reasoning, but rather to augment it under conditions of clear accountability.
This paper has sought to articulate a comprehensive approach for the lawful and responsible implementation of generative AI in legal practice in Spain. By analysing the intersecting legal frameworks, identifying technical prerequisites, and outlining actionable best practices, it offers a pragmatic roadmap for legal professionals committed to innovation without compromise. Among the essential measures identified are the adoption of AI governance policies, the establishment of secure and compliant technical infrastructure, the implementation of human-in-the-loop review mechanisms and the development of client transparency protocols. The operationalisation of these measures must be supported by continuous training, internal audits and a forward-looking posture in response to regulatory developments such as the forthcoming EU AI Act.
Ultimately, the successful deployment of generative AI in legal practice is not merely a question of technical capability, but of institutional responsibility. Spanish law firms and legal departments must position themselves as stewards of both technological integrity and legal ethics. By doing so, they can reap the benefits of generative AI while upholding the honour of the profession and the rights of those they serve. In a domain where trust, precision and duty are paramount, no less will suffice.
CCBE. Considerations on the Legal Aspects of Artificial Intelligence, 2020. https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/IT_LAW/ITL_Guides_recommendations/EN_ITL_20200220_CCBE-considerations-on-the-Legal-Aspects-of-AI.pdf
The Law Society. Guidance on how in-house lawyers can and should use AI and ChatGPT, issued November 2023. https://www.lawsociety.org.uk/topics/in-house/how-in-house-lawyers-can-and-should-use-ai-and-chatgpt
The Spanish General Council of the Legal Profession (CGAE). Código Deontológico de la Abogacía Española, 2019. https://www.abogacia.es/wp-content/uploads/2019/05/Codigo-Deontologico-2019.pdf
Royal Decree 135/2021, of 2 March, approving the Estatuto General de la Abogacía Española. https://www.boe.es/boe/dias/2021/03/24/pdfs/BOE-A-2021-4568.pdf
Organic Law 3/2018, of 5 December, on the Protection of Personal Data and Guarantee of Digital Rights (LOPDGDD). https://www.boe.es/buscar/act.php?id=BOE-A-2018-16673
Brkan, Maja. “Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond.” International Journal of Law and Information Technology, vol. 27, no. 2, 2019, pp. 91–121. https://academic.oup.com/ijlit/article-abstract/27/2/91/5288563
Dahl, Matthew, Varun Magesh, Mirac Suzgun, and Daniel E. Ho. “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models.” Journal of Legal Analysis, vol. 16, no. 1, 2024, pp. 64–93. https://doi.org/10.1093/jla/laae003
GenAI should be expected to provide additional functionalities of significant doctrinal and practical relevance. These encompass the drafting of legal documents and, at more advanced stages, the autonomous creation of documents or reports in response to a legal query; the systematic construction of legal arguments and counterarguments consistent with established principles of reasoning; and the preparation of standardised contractual instruments, subject in all cases to indispensable human validation. Further functionalities include the review of documents through the identification of errors and the proposal of corrections; the elaboration of summaries and automated abstracts; the design of tailored workflows and procedural frameworks; and the provision of advanced search capabilities able not merely to retrieve but also to conduct genuine “research” into legal sources. GenAI may likewise facilitate comparative analyses across jurisdictions and legal systems, thereby reinforcing the methodological scope of comparative law; enhance semantic search capacities, enabling the system to process legal concepts rather than limiting itself to literal terms; integrate with case management and docketing systems; and execute batch processing, allowing for the simultaneous and autonomous completion of multiple tasks. Finally, such systems may engage in continuous learning through iterative user interaction, progressing in more advanced stages towards genuine self-directed autonomous learning.
The integration of generative artificial intelligence into the Spanish legal sector is entering a stage of consolidation, marked by the coexistence of open-source initiatives, editorially driven solutions and international platforms adapted to the local regulatory framework.
Tools such as Aranzadi LA LEY K+ exemplify the combination of large language models with authoritative legal databases curated by experts, thereby ensuring security, accuracy and reliability in legal research and jurisprudential analysis. Open-access projects such as Justicio highlight the democratising potential of AI by offering practitioners, students and citizens structured responses grounded in Spanish, regional and European legislation, thus enhancing transparency and access to law. Finally, global platforms such as Harvey, adopted by top-tier firms like Cuatrecasas and corporate legal departments of multinational companies such as Repsol, demonstrate the practical and corporate dimension of this transformation, enabling contract automation, document review and regulatory risk analysis in highly demanding environments. Taken together, these developments confirm that Spain is not merely a passive recipient of technological innovation but an active participant in shaping a digital legal ecosystem that balances operational efficiency, regulatory rigour and ethical responsibility.
The Spanish legal system has already witnessed instances in which the use of generative artificial intelligence by practising lawyers has led to significant procedural incidents. Two recent cases—one before the High Court of Justice of Navarra (Tribunal Superior de Justicia de Navarra, TSJN) and the other before the Constitutional Court (Tribunal Constitucional)—are illustrative of the risks that arise when AI tools are employed without adequate professional control.
The High Court of Justice of Navarra: references to foreign criminal law. In proceedings before the Civil and Criminal Chamber of the TSJN, a lawyer filed a complaint in which the legal reasoning included citations that erroneously referred to provisions of the Colombian Criminal Code rather than the Spanish one. The lawyer openly acknowledged having relied on an artificial intelligence system—specifically ChatGPT 3—for the drafting of the text, and explained that the inclusion of such references was the result of a “gross and involuntary material error” caused by improper use of the tool. The Court initiated proceedings to determine whether the submission constituted procedural bad faith under Article 247 of the Spanish Civil Procedure Act. However, noting the immediacy of the rectification, the apologies offered and the absence of effective prejudice to the proceedings, the Court resolved not to impose a sanction, while expressly warning that the use of AI systems does not relieve the lawyer from the duty of diligence and thorough review of any document he or she signs and files.
A more serious incident arose in the context of an amparo appeal before the First Chamber of the Constitutional Court. The appeal contained nineteen purported citations from Constitutional Court judgments, all presented in quotation marks as if literal excerpts, which in fact did not exist. These “citations” were generated by an automatic text system and bore no correspondence to real case law. The Court declared the appeal inadmissible and, finding that the conduct demonstrated a lack of truthfulness and respect towards the institution, imposed a formal reprimand (apercibimiento), the lightest sanction available, while also referring the matter to the Barcelona Bar Association for disciplinary review. In its decision, the Court underscored that regardless of the cause of the inclusion of false citations—whether a software error, a database malfunction, or the use of artificial intelligence—the lawyer remains under an inexcusable duty to verify the content of every submission filed before the courts.