The rapid deployment of solutions based on Artificial Intelligence (AI) across global digital markets has opened a new stage in the evolution of trade and consumer behavior (UNCTAD, 2024, p. 3). Contemporary purchasing decisions are increasingly shaped by digital tools, often promoted as employing “Artificial Intelligence” to optimize the decision-making process from the consumer’s perspective (Paterson, 2022, p. 558).
Following Warszycki (2019, p. 115), AI may be understood as “a field of science encompassing disciplines, methods, tools, and techniques aimed at creating and developing a complete computer program that accurately reflects the model of human functioning and the human mind.” It has become an integral part of the modern consumer market, applied in both front-office processes (interfacing with consumers, clients, and supervisory bodies) and back-office processes (supporting the internal functioning of companies and institutions) (Keller et al., 2024, p. 417).
In consumer-facing applications, AI systems recommend products based on users’ inferred preferences and purchase histories, perform automated credit assessments, and provide customer support via virtual assistants (chatbots), among other functions (Myszakowska-Kaczała, 2024). On the operational side, companies are increasingly using AI-based analytics to understand consumer behavior, optimize pricing strategies, and improve supply chain management (GlobeNewswire, 2025).
Although the use of AI in customer service is often considered a hallmark of modern technological implementation, Artificial Intelligence itself is not a twenty-first-century innovation. Most technology historians trace the origins of the concept to the work of the British mathematician and cryptanalyst Alan Turing, who formulated its theoretical foundations in 1950 (Accenture, 2024, p. 8). Nevertheless, the rapid development of AI did not gain wide recognition until 2011, when global technology companies such as Google, Facebook, Microsoft, and IBM began using it for business purposes (Ness et al., 2024, p. 1064).
From the perspective of the Polish AI landscape, 2023 marked a turning point, with 88% of respondents declaring familiarity with the term sztuczna inteligencja (“artificial intelligence”) – with this figure rising to 96% among individuals aged 18 to 24 (Digital Poland, 2023, p. 57). It is also notable that the jury of the Polish Language Council declared this term the Polish “Word of the Year” in 2023 (Kruszyńska, 2024).
This coincided with the rapid rise of ChatGPT, an AI-based application that achieved unprecedented global recognition. Between late 2022 and early 2023, the platform attracted approximately 100 million users (mp/dap, TVN24.pl, 2023). The scale and pace of its user growth may position ChatGPT as the fastest-growing consumer-facing web application to date (The Guardian, 2023). Its widespread adoption spurred the creation of numerous derivative solutions tailored to the needs of specific industries, including the banking sector (Capgemini, 2024, p. 44).
In 2025, the global AI market was valued at USD 757.58 billion, with forecasts projecting growth to approximately USD 3,680.4 billion by 2034 (Precedence Research, 2025). Within the global banking sector alone, AI is estimated to generate up to USD 1 trillion in additional value annually (Biswas et al., 2020, pp. 2–3).
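For context, the two figures above imply a compound annual growth rate of roughly 19% – a back-of-the-envelope check, assuming the 2025 valuation, the 2034 forecast, and a nine-year horizon:

$$\mathrm{CAGR} = \left(\frac{3680.4}{757.58}\right)^{1/9} - 1 \approx 4.86^{1/9} - 1 \approx 0.192 \approx 19.2\% \text{ per year}$$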
The expanding use of AI in consumer services brings not only financial gains but also a range of other benefits – from mitigating risks associated with human error and improving service accessibility, to process automation that enhances efficiency and speeds up customer service. However, the adoption of AI-based tools by market entities also introduces new risks for consumers. The decision-making processes of AI algorithms may be opaque or difficult for the average client to comprehend (Ahn et al., 2024), which can hinder their ability to assess whether a system is operating correctly.
The opacity of AI systems, combined with their capacity to exploit biases and generate unintended side effects, has intensified debates on the need for responsible governance of AI technologies (Cheong, 2024, p. 2). A key challenge, therefore, lies in guaranteeing the effective protection of consumer rights when decisions affecting individuals are made by algorithms, as well as in determining which parties bear responsibility in cases of algorithmic error or of misuse, whether unintentional or deliberate.
This article seeks to address the following research question: Do Polish and EU legal acts, together with institutional oversight, provide consumers with adequate protection against the negative consequences of decisions made by AI systems, and are there legal gaps in this area? The approach taken is descriptive and analytical, based on selected legal acts (including the Act on Competition and Consumer Protection and the AI Act), relevant academic literature, and selected legal opinions. These sources form the basis for further, more detailed research on the topic.
The choice of a qualitative descriptive analysis stems from its suitability for examining phenomena within their real-world context – in this case, the institutional and regulatory environment. Its purpose is to capture ongoing processes, identify the actors involved, and situate them within their operational conditions. While serving as a starting point for more advanced analyses, this approach itself constitutes a valuable and independent methodological framework (Sandelowski, 2000, p. 339). It involves the following stages (Villamin et al., 2024, pp. 51–91):
defining the research objective (application-oriented),
determining the research method (descriptive analysis),
establishing the theoretical framework (accountability for algorithmic decisions in the context of legal frameworks and institutional oversight),
selecting the research sample (domestic and international literature, legal provisions, and opinions of Polish legal scholars),
collecting data (reviewing available sources),
analyzing data (evaluating sources in light of the research objective), and
presenting the research findings.
The outcomes of this analysis are threefold: (i) a presentation of the current regulatory framework governing responsibility for AI-mediated decisions affecting consumers; (ii) the identification of potential gaps within the existing system of consumer protection; and (iii) the formulation of recommendations aimed at addressing these gaps in the Polish legal system, alongside proposals for new regulatory measures to strengthen consumer safeguards against the adverse consequences of AI-driven decision-making.
Artificial Intelligence is now being applied across nearly all areas of human activity. It is already assisting the work of both teachers and students, including in schools and even in early childhood education (Iron Mountain, 2025). AI can automatically perform tasks such as grading tests and homework assignments or generating reports on student progress (Stecyk, 2025). Higher education institutions are also increasingly utilizing AI algorithms to enhance the efficiency of administrative and academic work. One example is the use of autonomous AI agents that assist in creating professional academic presentations based on an outline (Stecyk, 2025). AI can likewise improve communication processes within universities – for example, through the implementation of “intelligent” dean’s offices or automated student admissions systems. A student wishing to access publicly available university knowledge and documentation in real time needs to meet only one condition: access to the Internet (KALASOFT, n.d.).
It should be emphasized that in the context of higher education, where the student may be regarded as a client or consumer of educational services (Sojkin et al., 2012, pp. 565, 567), the use of Artificial Intelligence entails risks analogous to those observed in other sectors of digital services, particularly regarding data protection, algorithmic transparency, and the right to reliable information. Theoretically, information generated by software based on AI algorithms should be factually accurate. In practice, however, AI systems may rely on unreliable or outdated sources, creating a risk that users receive incorrect or misleading information.
Another risk associated with the use of Artificial Intelligence in higher education concerns the protection of student data collected by institutions employing AI tools, as well as the potential dehumanization of the educational process – where human interaction is diminished and the lecturer’s role shifts away from that of a mentor, becoming instead a mere supervisor of AI-driven systems (Kornaś, 2024).
An argument in favor of limiting the use of Artificial Intelligence in education is that decisions made without human intervention may result in the absence of a clearly identifiable responsible entity, as well as a lack of transparency regarding how such decisions are made (PARP Grupa PFR, 2023, p. 29). Insufficient oversight of these processes may, in turn, result in different types of misuse or abuse, potentially harming the interests of those affected (Iron Mountain, 2025). Table 1 presents examples of AI applications in the consumer market, along with their associated potential benefits and risks.
Table 1. Examples of AI applications in the consumer market and their associated potential benefits and risks.
| Example of AI application in the consumer market | Benefits | Risks |
| --- | --- | --- |
| Personalization of offers and advertising | Analyzing data on potential consumer preferences and behaviors (e.g. their purchase history or records of previously viewed products) to recommend goods or services best suited to individual needs. | Concerns regarding consumer privacy (with respect to completed or planned purchases) and the unauthorized use of personal data. One corporate study found that in Poland, “61% of individuals using digital services such as online store applications fear that the information they provide may contribute to identity theft” (EY, 2024). |
| Customer service chatbots | Enabling consumers to access support or obtain responses to their inquiries at any time, eliminating the need to wait for a human agent to become available. | Consumers are not always able to obtain accurate information from chatbots – Poland’s Office of Competition and Consumer Protection (UOKiK), for instance, has received complaints regarding the improper functioning of such systems. |
| Dynamic pricing and personalized price offers | Increasing market efficiency and the ability to adjust prices to market conditions. | Consumers may pay more for a product or service because the algorithm overestimates the consumer’s ability to pay. Online price differentiation may be perceived as unfair, especially if other customers can pay less for the same product. |
| Automated financial decision-making | Accelerating decision-making in finance, such as creditworthiness assessment, loan approval, or insurance issuance. | Algorithms may rely on biased data that discriminates against certain consumer groups, often lacking transparency in the decision-making process – for example, in cases of allegedly unjustified denial of credit or loans. |
| Analysis of online opinions and reviews | Enabling entities that use AI-supported monitoring of online opinions and reviews of their services or products to respond quickly to any negative feedback. | AI bots may be used to automatically delete negative comments or generate fake positive reviews, thereby misleading consumers. |
Source: Author’s compilation based on Warszycki (2019, p. 119); EY (2024); Jurczak (2023); Bondos (2016, p. 173); Nowakowski, Bank.pl (2021).
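To make the dynamic-pricing row in Table 1 concrete, the sketch below shows one naive way a personalization rule could set a price from an estimated willingness to pay. It is a minimal illustration only – the function, the willingness-to-pay input, and the 20% markup cap are all assumptions invented for the example, not any trader’s actual algorithm:

```python
def personalized_price(base_price: float, estimated_wtp: float,
                       max_markup: float = 0.20) -> float:
    """Naive personalized pricing: charge up to the consumer's
    estimated willingness to pay (WTP), capped at a fixed markup
    over the base price and never below the base price."""
    ceiling = base_price * (1 + max_markup)
    return min(max(estimated_wtp, base_price), ceiling)

# Two consumers, same product: the profile with the higher WTP estimate pays more.
print(personalized_price(100.0, estimated_wtp=95.0))   # 100.0 (floor: base price)
print(personalized_price(100.0, estimated_wtp=130.0))  # 120.0 (capped at +20%)
```

The risk listed in Table 1 follows directly from this logic: if the willingness-to-pay estimate is systematically inflated for certain consumer profiles, those consumers quietly pay more for the same product, with no transparent way to detect it.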
The examples of Artificial Intelligence applications presented in Table 1 illustrate the dual nature of AI’s impact on the consumer market. On the one hand, algorithms can enhance convenience, accessibility, and service efficiency, reduce operating costs, and minimize human error. On the other, AI-related risks include a lack of transparency in decision-making processes, potential discrimination, and incorrect decisions that may result in harm to the consumer. AI-powered tools may not only pose a threat to customer privacy but also increase the risk of consumers falling victim to deceptive or unfair market practices, or even of financial exclusion where an insurer, on the basis of an AI-generated analysis, determines that a given consumer represents too great a risk of potential payout (BEUC, 2021, p. 35).
An example of potential gender-based discrimination by an AI algorithm was the 2019 case in the United States involving the credit limit determination process for the Apple Card, issued jointly by Apple and Goldman Sachs. Customers observed that the algorithm responsible for assigning credit limits granted significantly higher limits to men than to women with comparable financial situations. One applicant reported that his credit limit was 20 times higher than that of his wife, even though they shared joint marital property and, in his view, her credit history was even better than his. Following the publication of this report, other couples also began to confirm such disparities, sharing examples suggesting that the algorithm favored men. The case attracted the attention of the New York Department of Financial Services, which launched an investigation to determine whether anti-discrimination laws had been violated in this instance (The Guardian, 2019), but the investigation ultimately concluded that there had been no discrimination against customers based on gender (Campbell, 2021).
The Apple Card case demonstrated, however, that a lack of algorithmic transparency can lead to public controversy. Customers did not receive a clear explanation as to why the decisions varied so significantly between genders. Unable to understand the automated decision-making process, some users perceived the differences in credit limits as gender discrimination, even though closer scrutiny showed that no such discrimination had actually occurred. A positive takeaway from this example is that regulators are prepared to intervene, treating the use of AI like any other credit procedure subject to the law.
It should be noted, however, that despite incidents raising concerns about the impartiality of AI-based solutions, there is also evidence suggesting that consumers perceive such systems as more objective than human-driven processes. The rationale in this context is the perceived absence of bias and emotions in AI decision-making (Nogueira et al., 2025, p. 2).
Another type of potential incident involving AI tools and consumers is a chatbot incorrectly dismissing a complaint. This might occur, for example, if the chatbot misinterprets an image submitted by the customer and wrongly concludes that a product defect was caused by user error. Another possible example, negative from the customer’s perspective, would involve the misclassification of a complaint into the wrong category. In both cases, one of the possible consequences is the expiration of the statutory 14-day period for responding to a consumer complaint, which, under Polish consumer law, results in the complaint being deemed accepted by default (Polish Consumer Rights Act, Article 7a). A consumer’s lack of awareness of, or failure to invoke, this legal provision could leave them without appropriate support in such a case, due to the algorithm’s improper functioning.
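The mechanism is simple enough to state precisely. The following minimal Python sketch captures only the 14-day default-acceptance rule of Article 7a as described above; the function name and data shapes are hypothetical, invented for illustration:

```python
from datetime import date, timedelta

RESPONSE_DEADLINE = timedelta(days=14)  # Article 7a, Polish Consumer Rights Act

def complaint_deemed_accepted(filed_on: date, responded_on: date | None,
                              today: date) -> bool:
    """True if the trader missed the statutory 14-day window, in which
    case the complaint is deemed accepted by default."""
    deadline = filed_on + RESPONSE_DEADLINE
    if responded_on is not None:
        return responded_on > deadline  # response came, but too late
    return today > deadline             # no response; window has expired

# A complaint filed on 1 March with no response by 20 March is accepted by default.
print(complaint_deemed_accepted(date(2025, 3, 1), None, date(2025, 3, 20)))  # True
```

The point of the sketch is the failure mode: a chatbot that silently dismisses or misfiles a complaint leaves the response date empty while the statutory clock keeps running.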
The next section of this article will examine the extent to which current Polish regulations address the challenges outlined above and what changes may be necessary to ensure that consumer rights are effectively protected in the era of widespread algorithmic use in the consumer market. This is a highly important issue, as the number of incidents involving AI systems is increasing alongside the growing adoption of Artificial Intelligence. Between 2022 and 2023 alone, the number of such incidents rose by approximately 1278% (OECD, n.d.).
Given the risks associated with the practical use of Artificial Intelligence, it is often perceived as a source of threats to individual rights (Contissa et al., 2018, p. 11). The noticeable rise in technological sophistication and the emergence of new risks have led regulatory bodies to recognize the necessity of legislative action in this domain (Lagioia et al., 2022, p. 482). Artificial Intelligence poses new and complex challenges to both consumers and the system of consumer law – challenges that existing regulatory mechanisms are not always capable of addressing effectively (Terryn & Martos Marquez, 2025, p. 210).
Analysis of the current legal framework indicates that there is no comprehensive legal act specifically addressing the use of Artificial Intelligence in the consumer context. Nevertheless, existing legal provisions offer a certain degree of protection to consumers against the negative consequences of decisions made by algorithms. These include data protection regulations and consumer protection laws (Table 2).
Table 2. Selected examples of legal acts protecting consumers from the negative consequences of algorithmic decision-making.
| Legal act | Example of a provision protecting consumers |
| --- | --- |
| Regulation (EU) 2016/679 of the European Parliament and of the Council (GDPR) | Article 22(1): “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” |
| Directive (EU) 2019/2161 of the European Parliament and of the Council (Omnibus) | Recital 45: “Consumers should (…) be clearly informed when the price presented to them is personalized on the basis of automated decision-making, so that they can take into account the potential risks in their purchase decision. Consequently, a specific information requirement should be added to Directive 2011/83/EU to inform the consumer when the price is personalized, on the basis of automated decision-making.” |
| Polish Act of 23 August 2007 on Counteracting Unfair Market Practices | Article 6(4)(7): “In the case of an invitation to purchase a product, the following shall be regarded as material information (…) in particular: (…) information on whether and how the trader ensures that the published reviews originate from consumers who have actually used or purchased the product – in the case of a trader who provides access to consumer reviews of products.” |
| Polish Act of 30 May 2014 on Consumer Rights | Article 12(1): “No later than the time the consumer expresses their intention to be bound by a distance or off-premises contract, the trader shall be obliged to clearly and comprehensibly inform the consumer of: (…) the total price or remuneration for the service, including taxes, (…) any individualized pricing based on automated decision-making, where such pricing is applied by the trader; (…) the existence and content of any guarantees and after-sales services, and the manner in which they can be exercised.” |
| Polish Civil Code | Article 449¹: “Anyone who, in the course of their business activity, manufactures a defective product (the producer) is liable for damage caused to anyone by that product.” |
Source: Author’s compilation based on the GDPR and Omnibus Directives; the Polish Acts of 23 August 2007 on Counteracting Unfair Market Practices and of 30 May 2014 on Consumer Rights; and the Polish Civil Code of 23 April 1964.
Moreover, successive parts of the relatively new EU Artificial Intelligence Regulation (AI Act) are now gradually entering into force. The aim of the regulation is “to improve the functioning of the internal market by laying down a uniform legal framework, in particular for the development, placing on the market, putting into service and use of Artificial Intelligence systems (…) to promote the uptake of human-centric and trustworthy Artificial Intelligence (…) and to support innovation” (Regulation (EU) 2024/1689 of the European Parliament and of the Council – The Artificial Intelligence Act). Although the AI Act will fully apply as of 2 August 2026, the provisions of Chapters I and II are already binding and should be applied now (AI Act, Article 113).
Although the AI Act includes several significant provisions from a consumer protection standpoint, such as the prohibition of social scoring and the right to file a complaint with a market surveillance authority if an AI system is believed to violate the regulation, European consumer advocacy groups have raised concerns about legal gaps that fail to fully address the risks consumers are exposed to in the context of AI deployment. According to these organizations, the AI Act is not capable of fully eliminating the risks associated with the use of AI tools in consumer interactions. In their view, the regulation focuses primarily on high-risk systems, while many widespread applications of AI, such as the use of chatbots, fall outside its scope (BEUC, 2023).
Such a situation may lead to the emergence of national legislative solutions addressing selected risks associated with the use of AI, which in turn could result in the fragmentation of legal provisions and hinder the assurance of a uniform level of protection for European Union citizens with respect to the same technological products and services (Bertolini, 2025, pp. 9–10).
Referring back to the earlier example of potential gender discrimination in the Apple Card credit approval process, it is worth noting that, under EU law, a consumer in a similar situation could rely on Article 22 of the General Data Protection Regulation (GDPR). This provision entitles the data subject, whether a potential or actual client, to request clarification regarding the logic behind the algorithmic decision on their credit limit, and to demand a reassessment of the outcome by a human decision-maker.
Additionally, the European Union has in place anti-discrimination regulations – such as Directive 2004/113/EC of 13 December 2004, implementing the principle of equal treatment between men and women in the access to and supply of goods and services. Moreover, if such an incident were to occur in Poland, an entity that actually employed a discriminatory algorithm could face sanctions from the Office of Competition and Consumer Protection (UOKiK), as its actions may constitute a violation of collective consumer interests (Polish Act on Competition and Consumer Protection, Article 24). The activities of this Office will be discussed in more detail in the following sections of this article.
Similarly, in scenarios involving potentially incorrect decisions issued by an AI-driven complaint resolution system, or where a university student receives inaccurate information from an “intelligent” dean’s office, current legal frameworks would regard such instances as the equivalent of human error. Ultimately, responsibility for the functioning and consequences of AI systems rests with the individual or institution that has introduced and operates them (Paprocki, 2025).
The consumer submitting a complaint would retain the right to exercise their entitlement (e.g., to repair or replacement of the product) (Polish Consumer Rights Act, Article 43d). The consumer could also notify the UOKiK, which would assess whether the company had violated the collective interests of consumers (Polish Competition and Consumer Protection Act, Article 24). In cases where complaint processing is delegated to a malfunctioning algorithm, the UOKiK has begun examining such situations and emphasizes that the use of AI does not relieve businesses of their responsibility to review consumer complaints in a fair and timely manner (Infor.pl, 2023).
However, for a student who received incorrect information via an AI system, pursuing legal remedies in response to the negative consequences of such inadequate support may prove to be a significant challenge. Legal provisions do not always recognize a student as a consumer eligible for protection under all the legal acts listed in Table 2. That said, if a student were to enter into an agreement based on incorrect information provided by a chatbot, the question of liability for being misled by AI could rest on a valid legal basis (Warchoł-Lewucka, 2024). In the case of an incorrect response provided by a “smart” dean’s office – regarding, for instance, the current class schedule – the consequences of a student’s absence from mandatory classes held on a date not indicated by the chatbot would likely be borne solely by the student.
Since the broad application of AI in areas such as the consumer market is a relatively new phenomenon, the institutional structure aimed at protecting consumers from AI-related risks is still evolving. Additionally, the complexity of AI use cases necessitates coordination and cooperation among the various regulatory and supervisory authorities.
In the Polish legal system, the Office of Competition and Consumer Protection (UOKiK), established in 1990, serves as the main institution responsible for safeguarding consumer rights (UOKiK, n.d.). Although no existing legal act explicitly names the UOKiK as the principal supervisory authority overseeing the impact of Artificial Intelligence on the consumer market, the Office actively monitors and engages with developments concerning the application of algorithms in consumer-facing processes. Its current activities include assessments of chatbot functionality in the telecommunications market and in e-commerce services – most notably in food delivery apps and online marketplaces (Infor.pl, 2023).
The UOKiK is also striving to harness AI to enhance consumer protection on the Polish market. An example of this effort is the implementation of the project entitled “Detection and elimination of dark patterns using Artificial Intelligence,” which aims to develop an AI-based tool capable of identifying unfair uses of so-called dark patterns on commercial websites (UOKiK, 2024). These are user-interface designs intentionally created to mislead consumers, hinder the expression of genuine preferences, or manipulate users into taking predetermined actions. Such practices are intended to pressure consumers into making purchases they do not truly desire, or to manipulate them into revealing personal information they would not voluntarily provide in a more transparent context (Luguri & Strahilevitz, 2021, p. 43).
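The internal workings of UOKiK’s tool have not been published. Purely to illustrate the general idea of automated dark-pattern detection, a minimal keyword-heuristic baseline might look like the sketch below – the cue list, categories, and regular expressions are invented for the example and are far simpler than anything a production classifier would use:

```python
import re

# Hypothetical text cues loosely associated with well-known dark patterns.
DARK_PATTERN_CUES = {
    "false urgency": [r"only \d+ left", r"offer ends in", r"\d+ people are viewing"],
    "confirmshaming": [r"no thanks, i", r"i prefer to pay full price"],
    "forced consent": [r"by continuing you agree", r"pre-?selected"],
}

def flag_dark_patterns(page_text: str) -> dict[str, list[str]]:
    """Return cue categories whose patterns appear in the page text.

    A production classifier (such as the tool UOKiK aims to build)
    would analyze page structure and behavior with trained models,
    not a fixed keyword list like this one.
    """
    text = page_text.lower()
    hits: dict[str, list[str]] = {}
    for category, patterns in DARK_PATTERN_CUES.items():
        matched = [p for p in patterns if re.search(p, text)]
        if matched:
            hits[category] = matched
    return hits

print(flag_dark_patterns("Hurry! Only 2 left in stock - offer ends in 10:00."))
# {'false urgency': ['only \\d+ left', 'offer ends in']}
```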
It can be assumed that in the near future, the scope of UOKiK’s activities and responsibilities related to the use of Artificial Intelligence in the consumer market will continue to expand. It is likely that the authority will gradually acquire additional statutory powers aimed at enhancing the effectiveness of its supervisory activities in this area.
An additional authority involved in addressing the use of Artificial Intelligence with respect to personal data protection in Poland is the Personal Data Protection Office (UODO). Its counterpart at the EU level is the European Data Protection Board (EDPB), which coordinates data protection policies across member states.
The President of the UODO is the “competent authority for personal data protection” (Polish Personal Data Protection Act, Article 34(1)), with tasks including monitoring and enforcing the provisions of the GDPR, as well as promoting public awareness and understanding of the risks, rules, safeguards, and rights related to data processing (GDPR, Article 57(1)(a) and (b)).
In the context of Artificial Intelligence, the Personal Data Protection Office (UODO) examines the impact of AI on individuals’ privacy and the protection of their personal data (UODO, n.d.). The UODO is authorized, among other things, to impose administrative fines for violations of the GDPR, including the aforementioned Article 22 (e.g., failure to provide human verification of automated data processing in cases where the decision produces legal effects for the consumer).
Among the responsibilities of the European Data Protection Board (EDPB; known in Polish as EROD) is providing guidance to the European Commission on issues concerning data protection – particularly with regard to proposed amendments to the GDPR and broader legislative initiatives within the EU (EDPB, n.d.). Notably, at its inaugural plenary meeting in 2018, the EDPB adopted guidelines addressing automated decision-making and profiling (EDPB, 2018).
At the EU level, the European Artificial Intelligence Board was established to oversee the proper implementation of the AI Act (European Commission). Moreover, the European Data Protection Supervisor (EDPS) plays a key role in ensuring that all EU institutions and bodies respect citizens’ privacy rights during personal data processing. The EDPS is also responsible for tracking the development of emerging technologies that may impact data protection and for carrying out investigations into relevant matters falling within its jurisdiction (european-union.europa.eu).
Accordingly, it may be concluded that the enforcement of legal standards regarding the protection of Polish consumers’ personal data and the appropriate use of AI-assisted tools involves multiple institutions operating at both the national and European levels. Which body is competent in a specific case depends primarily on the type of suspected violation (Table 3).
Table 3. Comparison of the scope of responsibilities of Polish institutions overseeing the consumer market.
| Compared feature | Polish Office of Competition and Consumer Protection (UOKiK) | Polish Personal Data Protection Office (UODO) |
| --- | --- | --- |
| National legal act regulating the scope of the entity’s responsibility | Polish Act of 16 February 2007 on Competition and Consumer Protection | Polish Act of 10 May 2018 on the Protection of Personal Data |
| Scope of responsibility of the president of the institution | The President of the Office of Competition and Consumer Protection (UOKiK) is the central government administration authority competent in matters of competition and consumer protection (Article 29(1)). | The President of the Personal Data Protection Office (UODO) is the competent authority for personal data protection and the supervisory authority within the meaning of the GDPR (Article 34(1) and (2)). |
| Scope of responsibility of entities supporting the president of the institution | The President of UOKiK performs duties with the assistance of the Office of Competition and Consumer Protection (Article 29(6)). The Office consists of the Central Office in Warsaw, regional branches, and laboratories supervised by the President of the Office (Article 33). | The President of UODO performs their duties with the assistance of the Personal Data Protection Office (Article 45(1)). |
Source: Author’s compilation based on the Act of 16 February 2007 on Competition and Consumer Protection; the Act of 10 May 2018 on the Protection of Personal Data; and information from the websites of the Personal Data Protection Office (UODO) and the Office of Competition and Consumer Protection (UOKiK).
However, due to the fast-paced development of Artificial Intelligence in an ever-growing range of consumer-facing applications, it is highly probable that not all risks stemming from AI usage are adequately addressed in existing legal frameworks, and that responsibility for such risks may not fall solely within the remit of a single regulatory body. A relevant example would be a chatbot’s improper handling of a consumer complaint, accompanied by a breach of personal data protection regulations – particularly involving sensitive data. In such circumstances, the case would require joint consideration by at least two competent authorities, such as the UOKiK and UODO.
Thus, it is crucial to ensure not only the constant oversight of emerging AI-related risks and the ongoing adjustment of relevant legislation and institutional responsibilities, but also effective interdisciplinary collaboration between the entities tasked with safeguarding consumer rights.
When analyzing the risks associated with the use of Artificial Intelligence in consumer services, it is essential to consider the issue of responsibility for erroneous decisions made by algorithms. AI itself does not possess legal personality and therefore cannot be held directly accountable (Bączyk-Rozwadowska, 2022, p. 9). Responsibility may lie solely with a natural or legal person who exercises control over the operation and deployment of AI-driven systems (Kulicki, 2025). As one analyst has put it, “In principle, liability for errors stemming from the system’s architecture or software should rest with the manufacturer, whereas responsibility for misuse of the system lies with the end user” (Trzaska, 2024). However, given that there is currently no specific legal act that directly assigns responsibility for damages caused by Artificial Intelligence, it remains challenging to clearly designate a natural or legal person as directly liable for errors resulting from AI operations (Trzaska, 2024).
The existing academic literature offers a range of proposals concerning the entity that could be considered “responsible” for decisions made by AI systems: from the software developer who implemented faulty algorithms (programistajava.pl, 2025), through the system operator or controller (Kaniewski & Kowacz, 2023), to the end user (Infinity Insurance Brokers, n.d.), who may be responsible for the proper use of Artificial Intelligence systems (Buiten, 2024, pp. 256–257).
Certain authors suggest a model in which responsibility is distributed among various groups of stakeholders (programistajava.pl, 2025). Meanwhile, other sources highlight the possibility that, given the considerable complexity of the AI value chain, it may not always be possible to clearly identify the entity responsible for a specific error (Jelińska-Sabatowska, 2025). In many AI-driven processes involved in the provision of products and services, multiple entities participate (Buiten et al., 2023, p. 11). Legal counsels also point to a new type of risk associated with the use of AI – namely, the risk of a “liability gap” (Nogacki, 2024).
The challenge of assigning liability for the outcomes of Artificial Intelligence stems from factors including the following (Nogacki, 2025):
autonomy – AI systems make decisions without human oversight,
opacity – the AI decision-making process may be difficult to understand,
data dependency – flawed data can lead AI to make erroneous decisions,
value chain complexity – the development and implementation of AI involves multiple entities.
Nevertheless, the most frequently cited example of a party considered responsible for decisions made by AI is the entrepreneur who implements an AI-based process within their organization. As such, they must take into account the possibility of incurring contractual liability in the event that damage is caused by Artificial Intelligence – such as when an error results in the failure to fulfill a contract concluded with a business partner (Tak Prawnik, 2025). They may also face tort liability, for example in the case of an accident caused by an autonomous vehicle (Kaniewski et al., 2023). However, some sources argue that the previously mentioned “opacity” of AI decision-making undermines the application of standard principles of tort liability (Nogacki, 2025).
Apart from the legal challenge of clearly identifying the entity liable for damage caused by Artificial Intelligence, another significant obstacle is the difficulty in proving the “fault” of the AI system itself. To do so, the consumer – or their legal representative – must gain access to and understand how the AI tool functions, which may require insight into complex and often non-transparent decision-making processes. In practice, however, this may prove difficult or even impossible. Among other factors, this is due to the so-called “black box problem” (Taveira da Fonseca et al., 2024, p. 300) – that is, the system’s recommendations may not be explainable within the framework of traditional linear cause-and-effect logic (Kroplewski, 2023, p. 112).
An additional risk for banking customers related to the use of Artificial Intelligence is the potential overdependence on AI systems in decision-making, predictive analytics, and recommendation processes. Even if a human remains the final decision-maker, they may defer too strongly to the suggestions provided by AI – perceiving them as inherently correct or derived from deep and reliable analysis (Szostek et al., 2022, p. 55). In practice, however, there may be uncertainty as to whether the data used by automated models is of adequate quality, which may result, for example, in an inaccurate assessment of a customer’s creditworthiness (Szostek et al., 2022, p. 26). In such circumstances, the harmed consumer may face significant challenges in demonstrating that the unfair treatment resulted from the actions of both the AI system and the bank’s staff.
In the context of seeking redress against an erroneous AI-generated decision, the consumer must first be aware that such an irregularity has occurred. The literature on accountability for AI-driven decisions highlights the so-called “information gap,” whereby an individual may not realize that their adverse situation results from the actions of Artificial Intelligence (Ziosi et al., 2023, p. 9). What is crucial, therefore, is not only the existence of legal provisions designed to prevent the effects of erroneous AI decisions, but also the consumer’s own awareness of the protections available under the relevant legal framework.
While the application of Artificial Intelligence in the consumer market brings various advantages – such as personalized product offerings – it also entails significant risks. These include the reliance of algorithms on outdated or biased data, which may result in the unequal treatment of certain customer groups.
Although existing legislation ensures a certain level of protection for consumers against the risks posed by Artificial Intelligence – such as the right to human oversight and the prohibition of discriminatory practices – there are still notable legal gaps. In particular, the opacity of AI decision-making processes creates challenges in proving errors and seeking redress. The lack of algorithmic explainability may also result in consumers misinterpreting automated decisions, as exemplified by the Apple Card case referenced earlier, in which the credit limit allocation raised concerns about fairness and transparency.
One of the legal gaps identified in the article concerns the question of who should be held accountable for decisions made by Artificial Intelligence. Since AI does not have legal personality, it cannot itself bear responsibility for erroneous algorithmic decisions, and no existing provision in either Polish or EU legislation explicitly designates the entity liable for the malfunction of AI systems. In the scholarly literature, the entity most frequently identified as “responsible” is the provider making the AI-based solution available to consumers. However, responsibility for the harms caused by Artificial Intelligence is also sometimes attributed to the software developers whose algorithms prove faulty, as well as to end users.
In summary, the answer to the research question posed in this article is as follows: Polish and EU legal acts, together with institutional oversight, provide consumers with protection against the negative consequences of decisions made by AI systems. However, this protection does not extend to the full spectrum of potential risks arising from the use of Artificial Intelligence in consumer markets. Legal gaps remain in this area, and the introduction of new legislation that keeps pace with the ongoing development of AI capabilities represents a major regulatory challenge, making the complete elimination of such gaps difficult – if not impossible – in the foreseeable future.
As the use of Artificial Intelligence becomes more widespread, the frequency of incidents involving AI systems continues to rise. Regulatory bodies at both the European and national levels, along with consumer protection authorities, are still building the expertise and acquiring the instruments required to monitor and control AI effectively. This transitional phase contributes to the persistence of certain regulatory blind spots and legal uncertainties. To enhance consumer protection in a market environment where an ever-growing number of processes are supported by Artificial Intelligence – systems that may still be prone to error – it is crucial to implement reforms across legislative, institutional, and educational spheres.
With regard to recommendations, priority should be given to measures designed to address the identified shortcomings in the Polish legal system and to strengthen safeguards for consumers affected by AI-driven decision-making, such as the following:
Clarifying legal liability for individual entities involved in the development, provision, and use of AI – for example, by introducing a provision into the Polish Competition and Consumer Protection Act stating that liability for errors made by Artificial Intelligence rests with the entity that makes the AI-based tool available to consumers, or with another entity explicitly designated by that provider in the applicable terms and conditions.
Introducing a legal provision that facilitates the burden of proof for consumers in disputes concerning the malfunctioning of Artificial Intelligence – given that proving an AI-related error is often difficult or even impossible for the average consumer, a reasonable solution would be to shift the burden of proof to the entity providing the AI-based tool to consumers (or to another entity explicitly designated in the relevant terms and conditions). In the event of a dispute, this entity would be required to demonstrate that the AI system did not make an error; otherwise, the case would be resolved in favor of the consumer.
Requiring algorithmic transparency – consumers should have the right to understand the logic behind decisions made by Artificial Intelligence that affect them personally; for example, by being granted access to terms and conditions that include information about the characteristics or factors the AI takes into account when making specific decisions.
Establishing a statutory definition of the competences of supervisory authorities – for example, a dedicated department could be established within Poland’s Office of Competition and Consumer Protection (UOKiK), staffed with experts in artificial intelligence systems, tasked with analyzing cases in the consumer market suspected of involving faulty operation of AI-based systems.
Promoting consumer education on AI – through initiatives aimed at increasing consumer awareness of the risks associated with artificial intelligence, as well as of the rights they have with regard to protection against such risks.