Artificial intelligence and its use are already part of our daily life, both at home and in professional activities. Over the last few years, artificial intelligence has evolved and gained visibility, with high levels of autonomy, high availability and a strong presence in the public domain. The use of AI systems such as ChatGPT, deepfake generators, presentation-making platforms, financial AI platforms, etc., has increased dramatically. These tools are used not only by private individuals for their own purposes, but also by employees in various companies, regardless of whether such use is compatible with the company’s position. In other words, employees use publicly available AI platforms to facilitate their work and, in doing so, submit internal, potentially highly sensitive legal, financial or other company information. There are many aspects to consider when it comes to ChatGPT and privacy and business risks, including the legal, privacy and compliance aspects of AI chatbots (Sussman, 2023). One of the most significant privacy and security concerns relates to the large amount of private information held by AI and the potential risk of loss of such information and its subsequent use, whether for financial crime, identity theft or other forms of coercion (Gulen, 2023). For example, Samsung employees using ChatGPT disclosed some of the company’s confidential data in several different sessions (DeGeurin, 2023). OpenAI representatives themselves state that they can use the information to improve the functionality of ChatGPT (Schade, 2023). According to Cyberhaven (Coles, 2023), as of 1 June 2023, 10.8% of employees had used ChatGPT in the workplace and 8.6% had uploaded company data to the AI chatbot. These examples not only show the potential of some AI to develop and act independently in decision-making, formulating answers, making presentations, etc., thus demonstrating its applicability in different spheres, but also highlight areas of particular sensitivity and debate. This raises a legitimate question: can we use AI in mediation and, if so, to what extent, and what are the possible legal, moral, ethical and other risks of such use? Such potential risks relate, inter alia, to the issue of trust between the client and the mediator, the confidentiality of the process itself and of the information held, the effectiveness of the mediation process, and the mediator’s liability (not only moral but also legal). For example, must the mediator be held liable for the use of an inappropriate artificial intelligence tool that disrupts the process, or even leads to the failure of the mediation? What if the use of AI leads to a leak and disclosure of confidential data? Does the mediator have to pay damages in these and similar cases? This article seeks to answer these questions.
The subject of this article is the legal issues related to the possibility of using artificial intelligence in mediation processes, and the mediator’s moral and legal liability where the use of artificial intelligence leads to the failure of the process or to damage to the parties.
The aim of this article is to analyse scientific and legal sources, the problematic aspects related to the use of artificial intelligence in mediation, and the potential risks that artificial intelligence poses.
Objectives:
1) disclosing the concept of artificial intelligence;
2) assessing the theoretical possibility of using artificial intelligence in mediation;
3) assessing the legal and ethical liability of the mediator for the negative consequences caused by artificial intelligence.
The methods used in the preparation of the article are data analysis, comparative, linguistic, systematic, logical and generalisation methods. The data analysis method makes it possible to reveal the content of the concept of artificial intelligence, the principles of operation of different artificial intelligence tools and the legal framework for using artificial intelligence in practical activities. The comparative method is used alongside data analysis to examine the possibility and practice of using different AI tools, the legal regulation relevant to this topic, the opinions of different authors, the content of the mass media, etc. The linguistic method is used to uncover the meaning of concepts and their content, and to determine the general meaning of legal rules and doctrinal texts. The systematic and logical methods are used to reveal and summarise the content of the legal regulation and doctrinal positions, to provide conclusions or insights that help to better understand the subject matter, and to reach answers to the problematic questions. The generalisation method is used to summarise scientific insights and draw conclusions.
Artificial intelligence is not a spontaneous phenomenon. It has its origins in human activity, i.e. it was humans who created artificial intelligence. There is more than one definition of artificial intelligence, and the definition can vary depending on the branch of science that deals with AI.
Artificial intelligence is intelligence demonstrated by machines or software (Pannu, 2015). Paisley and Sussman (2018) described AI as the process of combining large amounts of data with processing systems, thus allowing software to automatically learn from different patterns or features of the data.
AI is also defined as the ability of a machine to perform cognitive functions that we normally associate with the human mind (McKinsey, 2023). As defined in the doctrine, the fundamental goal of AI is to advance further, with the aim of creating a product that can think as humans think (Marr, 2018; Goralski, 2019). Frankenfield (2023) classifies AI into two groups, weak AI and strong AI, based on the functions it can perform, giving as examples of weak AI the AI used in video games such as chess, and personal assistants such as Amazon Alexa and Apple Siri. Strong AI systems, according to Frankenfield (2023), are systems that perform tasks perceived to be similar to human tasks. These are usually complex and sophisticated systems. They are programmed to deal with situations in which they may need to resolve a problem without human intervention. This type of system can be found in self-driving cars or in hospital operating theatres.
McClean (2021) distinguishes between automated and autonomous AI. Autonomous AI does not require human intervention; it can learn and adapt to dynamic environments and evolve as its environment changes. By contrast, automated AI is narrowly task-oriented, based on well-defined criteria, and limited to the specific tasks it can perform. A similar distinction between these groups, based on the mode of action, is made by Jacob Turner (2019). In line with these distinctions, this article focuses more on “strong” and “autonomous” AI. The breadth of AI’s applicability has a profound impact on various areas of life, and AI is widely used to solve complex problems in a wide range of fields such as science, engineering, business, medicine and weather forecasting (Pannu, 2015). The dictionary definitions of artificial intelligence should also be reviewed. The Cambridge Dictionary (2023) states that artificial intelligence is the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognise pictures, solve problems, and learn, or the use of computer programs that have some of those qualities, such as the ability to understand language, recognise pictures, and learn from experience.
Given that the aim of the article is not to analyse exhaustively all potential definitions of AI in order to identify the features they share or the differences between them, a general understanding of what AI is and of its possibilities in a general sense, as well as of its potential practical applicability for mediators, is considered sufficient.
The applications of AI are wide-ranging and not limited to the areas listed above. Artificial intelligence can be applied in a much wider range of fields, including law. According to Thomas (2023), there is virtually no major industry that is not affected by modern AI – more specifically, “narrow AI”, which performs objective functions using data models and often falls into the categories of self-learning or machine learning.
With technology evolving at such a rapid pace, the areas in which AI can be used are only set to grow. Tewari (2023) argues that companies will use AI in all creative phases of their work, especially in marketing and customer service processes, and that AI in healthcare will contribute to drug development and personalised medicine.
Although artificial intelligence is defined differently by different authors, and some definitions even, to some extent, downplay it (the Oxford Dictionary, for example, defines artificial intelligence as, among other things, the ability of computers or other machines to demonstrate or mimic intelligent behaviour), the definitions given (including but not limited to those above) allow us to identify the criteria that characterise artificial intelligence and its attributes. That is, AI: (i) is the activity of machines or software rather than of a living person; (ii) is capable of combining large amounts of data with processing systems and performing cognitive functions; (iii) thereby replicates the behavioural pattern of a living person; and (iv) has the ability to think like a human being, to make decisions, to solve problems, and to learn on its own.
Having disclosed the concept of AI and reviewed its potential, and considering, among other things, its ability to replicate human activity and thinking, to learn and improve, and to make independent decisions, it can be said that the potential for AI to be used in a practical professional context is very broad indeed.
Artificial intelligence can already be, and is likely to be, used in mediation processes at various stages. At present, however, its use is very limited, mainly to the systematisation and retrieval of information. The possible applications of AI are much broader, and the potential ceiling is generally hard to predict given the pace at which AI is developing.
Chloe Chua Kay Ee (2021) grouped artificial intelligence according to its purpose, i.e. what it can be used for, distinguishing the following functions: (i) to organise, (ii) to inform, (iii) to predict, (iv) to recalibrate, (v) to replace. This distinction, in terms of content, should be seen as broadly encompassing the functionalities of AI, which is why the article will assess the usability of AI in mediation on the basis of this distinction.
Artificial intelligence for organisation is useful for agenda setting, information processing, organising, etc. Examples include tools such as Motion (which can help to organise the calendar, prioritise activities, mark deadlines and organise meetings), Calendly AI (which can be used for the automated scheduling of meetings, in terms of both the date and the time suitable for the parties to the mediation), Monday work management (which can help in managing the different mediation processes, allocating the team’s activities and assessing the team’s efficiency) and Slack AI (which can summarise correspondence, even in large volumes, including in thematic groups involving more than one person, and can create reminders and notes, both for the individual mediator and for sharing with others, e.g. the mediation parties).
Organisational functionality also includes the possibility of remote mediation, which facilitates the situation of the parties and allows them to conduct the process without leaving the safety and proximity of their own surroundings, which is particularly important in more sensitive mediation cases (e.g. family disputes). In this context, the MODRON Spaces platform, which can be used from start to finish in any mediation process, should be mentioned. The MODRON Spaces dispute resolution platform is an example of an existing online dispute resolution platform that can be deployed to resolve domestic and international disputes via mediation. The benefits of this platform include secure onboarding, case management and virtual rooms: by default, each case has a space shared with everyone, but as a facilitator you can create spaces and invite whoever you feel is relevant. The platform can also provide advanced video conferencing, guided conversations, information access, secure document sharing, audio and video recordings, templated forms and even invoicing and payments (Obi-Farinde, 2020).
However, while AI could help in day-to-day processes by organising activities in mediation, according to Šliavas (2023) “the results obtained or the data provided will still have to be evaluated by a high-level human expert. Even if the [AI] can analyse large amounts of data and information, the task must be supervised and evaluated by a specialist in the field who has the experience, knowledge and broad contextual understanding to say whether the task has been done well, qualitatively, ethically, etc.”. Organisational functionality also includes AI’s ability to both draft and manage contracts.
The information functionality can be assessed from several different perspectives, e.g. (i) the provision of information to the mediator, (ii) the provision of information to the participants in the mediation process (the clients), and (iii) the provision of information to third parties. AI, with its ability to process very large amounts of data, to learn, to analyse and to draw conclusions, can provide information to the mediator, e.g. on what methods to use in mediation or on how to better approach the clients and provide mediation services. Examples include some of the most popular AI tools for idea generation, such as ChatGPT, Claude or Gemini; tools to assess the emotions of the parties during the mediation process (Cogito AI); or tools that assess the personal characteristics of the parties according to their communication style (Crystal Knows), which can potentially help the mediator to adapt to different communication needs during the mediation process, or at least to assess the need for a change in communication with the parties. Information can be provided to the participants in the proceedings through automated messages via email, SMS or other means. Another useful functionality is the use of chatbots, which, when integrated into the mediator’s website, can answer client queries and provide information to clients before they decide whether or not to enter the mediation process and whether or not to choose the services of the mediator in question. In addition to chatbots, which both organise and inform in the mediator’s work, the virtual assistant tool should be mentioned. The virtual assistant is used to check availability, make calls, write messages, provide reminders, help with scheduling and changes, and more. According to Joshi (2018), both of these functionalities are considered conversational interfaces, but they are very different from each other, so it is important for organisations to understand the differences between chatbots and virtual assistants in order to apply them wisely in their practical work.
As Chloe Chua Kay Ee (2021) points out, an extension of AI’s ability to inform is its ability to predict. AI’s capacity to learn from different situations, as well as its ability to process large amounts of information, systematise it, draw conclusions, etc., in advanced cases also allows it to make assessments and predictions. The evaluation and processing of large amounts of information is of paramount importance in the mediator’s work, which involves, among other things, the evaluation of large volumes of material, including legislation and case law. The AI’s ability to predict should be measured through its ability to forecast one or another outcome of the process on the basis of the information provided to it. As Miller (2017) points out, AI has long gone beyond keyword search, removing duplicate documents, etc., and can now search for context, concepts and more.
This prediction is useful for mediators, not least because the mediator may choose (or the AI may suggest) different paths, methods, forms, etc. of mediation, which would affect the outcome of the mediation process. In this way, the ability to anticipate would not be limited to an initial prediction, but would allow different scenarios to be considered and, on that basis, the optimal outcome of the mediation process to be pursued.
Chloe Chua Kay Ee (2021) describes several different cases in the prediction functionality – one where the AI draws on the experience of existing cases to make its prediction of the mediation process, and another where there is no relevant mediation case practice to inform the AI’s prediction, where the AI can use the most appropriate “from scratch” assessment method. If AI provides actionable insights into patients’ treatment, allowing for the development of more personalised treatment plans (Tewari, 2023), we can reasonably assume that AI can provide the same insights into the mediation process, its potential efficacy, perhaps even predict the likelihood of success, and allow for the most personalised mediation process possible. Such individuality would be linked to the specific factual situation of the parties to the proceedings, perhaps even taking into account the personality criteria of the parties.
Looking at the functionality of artificial intelligence, which is already widely used outside professional activities, it can be assumed that this ability to predict is directly related to the information itself, to its amount and to the manner in which the mediator provides it, as well as to the AI tool used, i.e. to criteria of content, quantity, form and means. The nature of the information is important for the AI to be able to make an assessment in the anticipation/forecasting process. The extent of the information provided is an equally important criterion: not only the content of the information matters, but also its quantity, i.e. that the information provided is as detailed as possible. Insufficient content or quantity of information may lead to inaccurate forecasts. Of course, even abundant information, if provided in a chaotic, unstructured or unclear way when formulating the task for the AI, can affect the result, as can the mediator’s choice of AI tool.
The nature and quantity of the information provided is crucial to the ability to predict and to do so with sufficient accuracy, but it is also linked to the ethical responsibilities of the mediator, which are discussed in more detail in the next section.
Chloe Chua Kay Ee (2021) attributes the recalibration function to AI’s ability to offer the parties a different solution for resolving the dispute in mediation. While some authors (e.g. Miller (2017) and Berland (2023)) highlight as a weakness that AI lacks empathy and the necessary emotional intelligence, and is unable to appreciate non-verbal nuances that can be extremely important when assessing a situation in the context of a dispute, Chloe Chua Kay Ee (2021) sees AI’s ability to recalibrate decisions as a positive characteristic, as AI makes decisions precisely without the emotional “burden” and the frustration that arises when the parties to the mediation dislike one or another of the mediator’s actions in the process. In the context of emotional rapport and empathy, which are also important in the mediation process, both for establishing rapport and for clarifying the parties’ real needs and expectations, and which thus contribute to the final resolution of the situation, the progress of AI is worth mentioning. Agarwal (2023) points out that there is already a strong focus on emotional AI, a machine learning technology that can sense and interact with human emotions. Agarwal (2023) gives the example of an artificial intelligence application developed by Find Solutions AI, which is already being used in some secondary schools in Hong Kong to assess the micro-movements of students’ facial muscles and to identify various negative and positive emotions. At the same time, however, Agarwal (2023) argues that such AI tools, even when learning from extremely large amounts of data, assess emotions in a rather simplistic way, without taking into account the situation or the social and cultural context.
In this context of recalibration, we can already see the authors’ reflections not only on the possibility of using AI in mediation, but also on the possibility of AI replacing the mediators themselves, i.e. of the mediator in the process being artificial intelligence rather than a living person. Such functionality is seen as futuristic and more rhetorical, oriented towards thinking about future perspectives, which is why it is not considered in this article from a legal and practical point of view; it is nevertheless worth mentioning, especially in view of the rapid development of AI and the increasing transfer of functions to automated AI tools.
In summary, all AI functionalities are interconnected and work under an umbrella principle, covering the mediation process from its initial stage to its conclusion. That is to say, AI tools can be useful at the outset of the mediation process, when assessing and processing the information received, when making predictions about the course of the process and its potential success rate, and simply when informing clients, sending them emails or SMS messages, organising the agenda, etc. Of course, each AI tool can also be used in isolation, without necessarily involving AI at every stage of the mediation process or the pre-mediation relationship (e.g. using only a chatbot or a virtual assistant), especially when considering the financial costs of such broad use. Nevertheless, the functionalities discussed in the article and the opportunities created by AI show that AI can be used in a wide range of ways, and that such broad use comes with potential risks and the resulting legal and ethical liability of the mediator.
Artificial intelligence is a genuinely useful tool that can save time, energy and, potentially, financial resources in the professional field, among other benefits. However, it is already evident in practice (some examples are given earlier in the article) that the use of AI can have negative consequences. These consequences concern, inter alia, breaches of the mediator’s ethics, potential discrimination, transparency, confidentiality, data protection, trust and liability.
As professionals in their field, mediators are bound by strict ethical standards that apply to members of the mediation community. The mediator, as an intermediary between the parties, can be more or less active in the process, depending on the nature of the dispute and the specifics of the individual situation. Nevertheless, the mediator interacts with both sides and has to create an environment favourable to the mediation process in order to achieve a positive outcome. To achieve such a result, the mediator has the right, among other things, to proactively suggest settlement options to the parties.
The principle of procedural fairness encompasses the principles of ethical conduct of mediators in the mediation process: the involvement of a competent, independent and impartial mediator who protects confidential information, facilitates the parties to the dispute in reaching a decision and provides them with greater satisfaction (Kaminskienė et al., 2013). Pursuant to Article 28(1) of the Law on Mediation of the Republic of Lithuania (Law on Mediation of the Republic of Lithuania, 2020), persons may submit complaints/notifications to the Commission for Mediator Performance Assessment regarding the performance of mediators who have violated the requirements of this Law, the European Code of Conduct for Mediators or any other legislation governing the provision of mediation services. In general, the ethics of the mediator, as a very broad criterion, is an umbrella for other potential risk factors related to the use of AI, such as transparency and impartiality in the process, ensuring confidentiality, ensuring data protection, etc., as all of them are intrinsically linked to the ethics of the mediator in the general sense.
The relationship between the use of AI and the enforcement of human rights is a critical factor. AI tools make decisions based on what they have been taught or on the practices they have already applied. If decisions are made on the basis of flawed rationales, or of algorithms built around race, ethnicity, gender or other such criteria, this may lead to, or give the appearance of, discriminatory practices. Browne and Sigalos (2023) point out that AI has problems with racial bias, ranging from biometric identification systems that misidentify the faces of black and minority individuals to voice recognition software that is unable to distinguish between voices with different regional accents, which means that there is still a lot of work to be done on AI when it comes to discrimination.
Discriminatory practices involving AI have been reported in recruitment/selection cases. Hsu (2023) points out that “resume scanners that prioritize keywords, “virtual assistants” or “chatbots” that sort candidates based on a set of pre-defined requirements, and programs that evaluate a candidate’s facial expressions and speech patterns in video interviews can perpetuate bias or create discrimination <…>”. Hsu (2023) identifies potential manifestations of discrimination, such as in the case of language impairment; in the case of social network analysis, where AI is used to assess social networks and may discriminate against those who use social networks little or not at all; and in the case of gaps in work experience, where AI systems reject such candidates as less qualified without looking at the reasons for the gaps, which could be related to health problems or parental leave. Hsu (2023) also observes that such discriminatory practices may go unnoticed because it is difficult for candidates to be aware of them. This observation is considered valid, as the AI tools used by a potential employer in the selection process are largely unknown to the candidates, or are only minimally disclosed during the later stages of the selection process. These discriminatory manifestations may also play a role in different mediation processes. Unlike in the case of staff selection, it is considered that in mediation the AI tools used by the mediator should be disclosed to the clients, inter alia for ethical reasons, and that the parties to the mediation process should have the opportunity to make comments or observations from the moment they enter into a contract with the mediator and the mediation process is initiated. In other words, before the mediation process, when making enquiries or looking for a mediator, the parties quite reasonably do not have this option, and the mediator can choose which AI tools to use. Of course, the mediator may also disclose to the parties, in advance of the mediation process, which AI tools will be used during it, and the parties must then be given the right to decide whether or not to participate in the mediation process and to choose this mediator. Summa summarum, in all cases it is presumed that the parties to the process must be informed which AI tools will be used in the mediation, which may lead to three (or more) potential scenarios: (i) if the parties do not agree to the use of any of the AI tools and the mediator agrees not to use them, the mediation proceeds without those tools; (ii) if the parties do not agree to the use of any of the AI tools and the mediator does not agree to forgo them, the parties are at liberty not to choose such a mediator and not to start the process; and (iii) if the parties agree to the use of all the AI tools, the mediation process can start.
The disclosure in question is related to transparency, i.e. to ensuring that the process is transparent and that the parties are provided with the essential information. Of course, disclosure in itself only partially solves the problem. Disclosing the very fact that AI tools are being used fulfils the formal criterion, but the problematic aspect of transparency remains at the level of the content criterion. The content criterion should be assessed in light of the fact that AI tools, especially those capable of self-learning and autonomous decision-making, may make decisions or perform actions in a way that the mediator cannot explain to the parties to the process because he/she is unable to understand it. This lack of transparency and clarity potentially calls into question whether the mediation process is proper and safe. This challenge relates, inter alia, to the aspect of confidentiality and data protection obligations.
One of the key aspects of liability when artificial intelligence is used in mediation appears to be the potential breach of confidentiality and data protection obligations. Breach of these obligations may constitute both a tort, as confidentiality and data protection in mediation are required by law, and a contractual breach, if confidentiality and data protection have been addressed in the contract. As the Mediator’s Handbook (2019) points out, “during the mediation process, a number of emotional experiences of the parties are revealed, as well as details of their personal lives or commercial activities, which the parties would prefer to keep secret. Parties are often afraid to disclose certain information because they fear it could be used against them. In order to ensure the security of information and to encourage the parties to feel free and unrestricted in the mediation process, the mediation process must be confidential”.
As Kaminskienė (2011) points out, “the confidentiality of alternative dispute resolution, which includes mediation, has two aspects – the confidentiality of the mediation procedure itself and the confidentiality of information related to the resolution of the dispute in the mediation procedure. The latter requirement is imposed on the mediator, the parties to the dispute and their representatives, and the persons organising and/or administering the mediation procedure.”
Article 17(1) of the Law on Mediation states that, unless otherwise agreed by the parties to the dispute, the parties to the dispute, mediators and administrators of mediation services shall keep confidential all information relating to the mediation of a civil dispute, except for the information necessary for the purpose of the confirmation or enforcement of a mediated settlement agreement and information the non-disclosure of which would be contrary to the public interest (particularly where it is necessary to protect the interests of a child or to prevent damage to the health or life of a natural person). Paragraph 2 of the same Article also states that the mediator may not disclose confidential information entrusted to him by one party to the dispute to the other party, unless authorised by the party who entrusted the information. These requirements relate to the person and the active role of the mediator, and more specifically to the duty to preserve confidentiality. Viewed through the prism of the usability of AI, this requirement is of particular significance, as it goes beyond the mediator’s role as a natural person who keeps secrets and confidential information. That is to say, if artificial intelligence is used in the mediation process, the mediator loses (or at least limits) control over the protection of such information. The degree of this loss of control over confidential information depends on which AI tools are used and to what extent.
The data protection risk in the mediation process should be considered alongside the risk of loss or disclosure of confidential information. Westcott (2023) points out that data protection is already a concern for many customers, a concern that is only exacerbated by AI, and that the potential for data use offered by AI tools can seem daunting. For example, data submitted for AI processing, especially to public AI tools, are then used for AI learning, further query answering, etc., and may be disclosed to third parties. Such data may be as simple as a name, but may also include particularly sensitive data, such as health data or information about sexual orientation, beliefs, religious views, etc. For this reason, every AI tool that could potentially be used in mediation should be assessed through a data protection impact assessment, thereby highlighting the potential risks, their degree and the acceptability of such risks to the mediators themselves. In this context, Westcott (2023) also suggests that the key to the successful use of AI is the understanding that the customer is the most important and must remain the priority. This is not only about building trust in the company, but also about the ethical use of AI.
The data protection risk factor is also lower in the European Union (EU), as the EU has a strong focus on data protection. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR), states that “the principles and rules for the protection of natural persons with regard to the processing of their personal data should respect the fundamental rights and freedoms of natural persons, in particular their right to the protection of their personal data”. The GDPR also refers to rapid technological development, pointing out that such development allows unprecedented access to personal data, while emphasising that a high level of protection of personal data must be ensured. In the European Union, all legal entities that process the data of natural persons (data subjects) are required to comply with the GDPR and to ensure that its provisions are respected and implemented in practice in their activities. However, while this circumstance, through the principles governing the processing of personal data and the obligation to ensure appropriate technical and organisational measures to protect data subjects’ data, makes it possible to mitigate the data protection risk, it does not completely eliminate it. The final level of implementation and enforcement of the GDPR provisions, and the actions taken in terms of data protection, the choice of technical measures, data processing operations, etc., depend on the individual mediator or mediation firm.
Trust is particularly important in mediation. The parties must trust both the mediator and his/her methods and have faith in the process. Therefore, as mentioned above, the mediator should, first of all, clearly and unambiguously disclose to the parties information about the tools to be used, obtain the parties’ consent or objection and, on that basis, model the further steps to be taken with regard to the use of AI in the mediation and its extent. Even with consent, and even after informing the parties to the dispute about the use of AI in the process, the risk of loss of trust remains and is directly linked to the other risks mentioned in the article, such as the loss of confidential information or a breach of personal data. The loss of confidential information, the disclosure or loss of personal data, or any other adverse event in the mediation process also leads to a loss of trust, without which the mediation process would be difficult to imagine.
The list of potential risk factors for which mediators should be held liable, as discussed in this article, is not exhaustive. It depends on the development of AI, on the degree to which AI is applied and included in mediation, and on the legal regulation establishing both the professional standards and liability of mediators and the guidelines, prohibitions and liability governing the use of AI in professional practice, all of which would imperatively affect the activities of mediators and their ability to use AI in their professional practice. Given that the law is “alive”, it would be difficult to identify all the potential risks associated with the use of AI in mediation, but at the same time there is no need to do so. The article attempts to identify the situations in which mediators, while using highly effective tools that can facilitate and improve the mediation process from beginning to end, also create potentially the greatest risks.
The mediator must be held liable for all the breaches discussed above. The issue of liability in the context of the use of artificial intelligence is also not as straightforward as it might seem at first sight. The problematic nature of liability is linked to AI’s ability to make decisions autonomously, i.e. AI actions (especially in the case of AI that produces letters, responses, etc.) may not only be determined by human behaviour, but may also be taken autonomously by the AI tools themselves, through the evaluation of past actions, learning and development.
General professional liability is characterised by the duty to exercise care and attention and to avoid causing harm to others (Mikelėnas, 1995). Mikelėnas (1995) distinguishes between tort and contractual liability, i.e. the former deriving from the breach of a legal duty and the latter from the breach of contractual obligations.
MEPs, in their 2021 report on guidelines for the use of artificial intelligence, pointed out that “while AI technologies can help to speed up processes and make more rational decisions in the justice sector, final judicial decisions must be made by humans, and decisions made by AI should be subjected to careful review by humans in accordance with established procedures”. The resolution adopted by the European Parliament with recommendations to the Commission on the civil liability regime for the use of artificial intelligence recognises that the type of artificial intelligence system under the control of the operator is a decisive factor in liability, and points out that it is automated artificial intelligence capable of autonomous decision-making that poses the greatest risk. This suggests that the use of fully automated tools should either (i) be limited by law, or (ii) be designed in such a way that their automated decision-making mechanisms still allow the mediator to take the final decision or to adjust the outcome.
In 2023, the European Parliament adopted a negotiating position on the development of rules on artificial intelligence, which “aim to ensure that the development and use of AI in Europe fully respects EU rights and values, that it is subject to human oversight, that privacy is respected, and that transparency, non-discrimination, and social and environmental welfare are ensured”. The European Parliament’s consistent position shows that one of the key focuses of AI regulation is on human activity, i.e. that AI should be used under human supervision and that final decisions should be taken by humans. Miller (2017) likewise argues that the final evaluation of the outcome of an AI activity should be carried out by a human, as required by professional ethics. Murray (2023) shares this view, pointing out that AI systems are not infallible and require professional and responsible supervision.
This aspect is also particularly important in the context of the use of AI by mediators. That is to say, with AI acting as a facilitator in the process, from providing initial information to potential clients, scheduling meetings and managing the calendar to selecting a potential strategy in an ongoing mediation and formulating proposals to the parties, it is still the mediator who has to take the final decisions. AI, while creating great opportunities to facilitate mediators’ work, according to Miller (2017), lacks empathy and decision-making skills, often views decisions only through a black-or-white prism, and lacks the ability to create a personal relationship, which cannot be replaced by computers. Pasquale (2015), speaking about AI decision-making, describes it as a mystery and, using the allegory of the black box, observes that the decision-making process in AI systems is often opaque, without it being known what factors are taken into account and how exactly they affect the outcome.
The statutory requirements for the qualification, performance, etc. of a mediator create mandatory criteria for the mediator’s service, non-compliance with which constitutes a tort. In addition, more detailed obligations of the mediator may be set out in the service agreement with the parties to the mediation. Both types of infringement, although stemming from different sources, give rise to legal consequences and liability for the mediator, including liability in relation to the use of AI in the mediation process. For this reason, the mediator must always evaluate his or her actions, both before and during the use of AI tools, and afterwards, when making use of the result generated. This requirement derives in particular from the requirements of professional ethics and is directly related to the management and minimisation of the risks identified in this article. In the event of unmanaged risks or irresponsible use of AI leading to a breach of privacy, a breach of data protection, etc., the mediator should bear the liability.
Although there is no single definition of artificial intelligence, an analysis of the different definitions suggests that they all share common criteria for what we should consider artificial intelligence: the ability of machines to mimic human actions, thinking, decision-making and learning.
The possibilities of using AI in mediation are wide-ranging, from providing information to potential clients on a website and scheduling meetings to assisting the mediator throughout the mediation process. However, the use of fully automated AI tools in mediation should be viewed critically.
The use of artificial intelligence in mediation can lead to legal and ethical breaches by the mediator. Regardless of the type of AI tools used, even in the case of a fully autonomous AI, the mediator has the obligation to assess the results generated by the AI as well as any automated actions performed, and it is the mediator who must bear full liability, both legal and ethical.