Introduction
The rapid advancements in artificial intelligence (AI), particularly in the domain of generative AI (GenAI), promise to transform educational landscapes. Given that discourse and language shape how educators, administrators, and policymakers perceive and integrate technology within educational institutions, it is essential to critically examine how these technologies are conceptualized and articulated as they become increasingly embedded in educational practices. Metaphors in particular have the power to influence how teachers and learners interpret the role of technology, its potential, and its limitations, guiding its use and integration into educational settings. By unpacking the metaphorical frameworks that inform education’s engagement with GenAI, researchers can develop a better understanding of their impact on educational practices and perceptions.
Conceptual metaphor theory proposes that metaphors are fundamental to cognition, shaping how individuals think and reason (Gupta et al. 2024; Lakoff & Johnson 2003). By forging connections between abstract concepts and concrete experiences, metaphors serve as powerful linguistic tools that frame complex technological ideas in accessible terms (Bearman, Ryan & Ajjawi 2023; Maas 2023). For instance, in the realm of educational technology, the metaphor of “digital natives” is powerful: despite its lack of empirical support, it persists in oversimplifying generational differences in technology use (Weller 2022). Analyzing the specific metaphors employed in discussions about GenAI in education can yield valuable insights into the underlying assumptions, potential benefits, and risks associated with these technologies.
Because of their power to translate complex ideas into accessible terms, metaphors play a pivotal role in how educational technologies, and GenAI in particular, are understood and integrated into practice. An influential contributor to the GenAI discourse is UNESCO, a global leader in education with 193 member states and a history spanning nearly eight decades. UNESCO’s initiatives have consistently shaped international educational policies and practices, making its perspective on GenAI especially significant. Its global reach, lasting influence, and commitment to fostering inclusive and equitable education are just a few reasons why its views warrant close attention. Among UNESCO’s notable contributions to the GenAI discourse is its document titled Guidance for Generative AI in Education and Research (UNESCO 2023), the first internationally published set of policy recommendations for education (Taylor 2024), and UNESCO’s third most downloaded publication on AI in education (Miao 2024). This publication offers a comprehensive framework to help policymakers, educators, and researchers navigate the ethical, pedagogical, and societal implications of GenAI.
This paper analyzes the discourse on GenAI that UNESCO uses in the aforementioned document, specifically examining the metaphors used to represent GenAI. We analyze the metaphors in this particular UNESCO document for several reasons. First, the document provides valuable insights into how UNESCO conceptualizes and envisions GenAI in the future of education. Second, it unpacks the underlying assumptions and values that inform UNESCO’s recommendations regarding GenAI integration in educational contexts. Third, a critical examination of these metaphors can illuminate potential blind spots and limitations in our current understanding of GenAI’s implications for education. Finally, given UNESCO’s global reach and influence, this document is likely to shape educational policies and practices worldwide, making the language used in the document of particular significance and interest.
The research questions addressed in this study are: What types of metaphors are employed in UNESCO’s document to depict GenAI? How might these metaphors impact perceptions of GenAI within educational settings? To answer these questions, we explore: the dominant metaphorical frameworks used to portray GenAI in the document; how these metaphors may influence the perceptions and actions of various stakeholders, such as policymakers, educators, learners, and technology developers; and the potential implications of these metaphors for educational policy and practice, considering both positive and negative impacts.
Review of Relevant Literature
GenAI refers to a class of AI models capable of generating new content—such as text, images, or code—by identifying and replicating patterns found in large datasets. While GenAI builds on decades of AI research, it marks a significant shift from earlier forms of AI focused primarily on rule-based or predictive tasks (Floridi 2023). Unlike those earlier systems, GenAI models can produce seemingly novel outputs, prompting both enthusiasm and concern across various sectors, including education. In this study, we adopt a working definition of GenAI as AI systems that autonomously generate human-like content through probabilistic pattern recognition, without engaging in or replicating human cognitive processes or understanding (Bommasani et al. 2021; Floridi 2023).
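To make the working definition above concrete, the following is a deliberately minimal and hypothetical sketch (not drawn from UNESCO’s document or any production system) of how content can be generated purely by “identifying and repeating common patterns” in data. A toy bigram model counts which word follows which in a small corpus and then samples new text from those counts alone; no understanding, reasoning, or intent is involved, only observed statistical regularities. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy bigram model: learn which word tends to follow which,
# then sample new text from those frequencies alone.
# No "understanding" is involved -- only counted patterns.

corpus = ("the model predicts the next word the model repeats "
          "patterns found in the training text").split()

# Count word-to-next-word transitions observed in the corpus.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: no observed continuation
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

The output is fluent-looking but derives entirely from transition frequencies, which is the scaled-down analogue of the probabilistic pattern recognition named in the definition above.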
The landscape of GenAI in education is rapidly evolving, with scholars increasingly recognizing its potential as a transformative force across various dimensions of the educational experience. AI is anticipated to impact learning, teaching, assessment, and administration, promising benefits such as personalized learning experiences, enhanced efficiency, and improved accessibility. However, alongside these optimistic projections, significant concerns have emerged, including issues of bias, privacy violations, and the potential deskilling of educators, along with the potential for GenAI to embody worldviews that risk perpetuating existing biases and marginalizing diverse perspectives (Bearman, Ryan & Ajjawi 2023; Blikstein & Blikstein 2021; Bozkurt et al. 2024; Bozkurt & Sharma 2023; Clark 2023; Ferreira, Lemgruber & Cabrera 2023; Gupta et al. 2024; Renz & Vladova 2021).
A recurring theme in the literature is the necessity for a critical approach to the integration of AI and GenAI in educational settings. Scholars express concerns regarding the influence of commercial interests on the development and promotion of AI technologies, warning that such influences may prioritize technological solutions over pedagogical needs (Clark 2023; Ferreira & Lemgruber 2018; Ferreira, Lemgruber & Cabrera 2023). This tendency can lead to the uncritical adoption of these technological tools without a comprehensive understanding of their implications, potentially undermining educational practices. To counter this, scholars advocate for the development of “critical AI literacy”, empowering educators, policymakers, and learners to make informed decisions about AI’s role in education (Almatrafi, Johri & Lee 2024; Gupta et al. 2024). Such literacy is crucial for engaging in nuanced uses and discussions that move beyond uninformed adoption of AI or simplistic narratives of AI as either a utopian savior or a dystopian threat (Bearman, Ryan & Ajjawi 2023; Bozkurt & Sharma 2023; Ferreira, Lemgruber & Cabrera 2023; Gupta et al. 2024).
More specifically considering GenAI, research indicates that inquiries into its roles in education have predominantly focused on themes such as its application, impact, and potential; ethical implications and risks; perspectives and experiences; and institutional and individual adoption. GenAI is frequently conceptualized as a tool for pedagogical enhancement, specialized training and practices, writing assistance and productivity, professional skills and development, and interdisciplinary learning. These conceptualizations underscore the diverse ways in which GenAI is envisioned to contribute to education, highlighting its versatility and adaptability across various contexts. However, this framing often leans toward an optimistic narrative of GenAI’s potential, leaving the underlying discourses and metaphorical frameworks that shape these conceptualizations underexplored (Yusuf et al. 2024). Furthermore, research indicates that students’ uses and relationships with GenAI vary significantly, with some perceiving it as an object to be used, others engaging with it as a subject with which they interact, and some adopting a dual perspective, relating to GenAI as both an object and a subject (e.g., Keuning et al. 2024; Veletsianos, Houlden & Johnson 2024).
As the research surrounding AI in education evolves, so too does the language used to describe it. The literature shows that the discourse on GenAI in education is often shaped by narratives that emphasize its inevitability and transformative potential. GenAI is framed as an unavoidable shift to which educators, institutions, and stakeholders must adapt, reinforcing the idea that responding to GenAI is not optional but necessary. In this context, the language used to describe AI, and the metaphors in particular, plays a critical role: how educational settings respond to GenAI depends not only on its technical capabilities but also on how it is understood and framed within these narratives, underscoring the power of discourse to shape perceptions and practices (Bearman, Ryan & Ajjawi 2023). A recent article in Science, for example, noted that the metaphors we use to describe these technologies “can pivotally affect not only how we interact with these systems and how much we trust them, but also how we view them scientifically, and how we apply laws to and make policy about them [… and we need to be] acutely aware of the often unconscious metaphors that shape our evolving understanding of the nature of their intelligence” (Mitchell 2024, para. 9).
Metaphors serve as powerful framing tools that influence how people perceive and interact with technological systems (Ferreira & Lemgruber 2018; Ferreira, Lemgruber & Cabrera 2023; Gupta et al. 2024; Maas 2023). For instance, personifying GenAI by attributing human-like qualities to it, such as “intelligence” or “consciousness”, can inflate expectations and lead to an overestimation of GenAI’s capabilities, while framing GenAI solely as a neutral “tool” can obscure its potential for unintended consequences and downplay the ethical implications of its design and deployment.
Common metaphors and metaphorical frameworks for AI
Metaphors have long been recognized as powerful tools for framing our understanding of technology, with discussions on their influence spanning more than two decades. An illustrative example is provided by Nardi and O’Day (2000: 27): “People who see technology as a tool see themselves controlling it. People who see technology as a system see themselves caught up inside it.” Such metaphors often go unnoticed, perceived as “common sense”, yet they wield significant power in shaping our concepts of technology and its role in society (Weller 2022). In the field of educational technology, metaphors framing technology as being able to “jumpstart” learning in the classroom reinforce the image of technological artifacts as solutions to a stagnant or ineffective system, portraying them as catalysts for awakening dormant educational practices (Marone & Heinsfeld 2023).
The literature highlights several prominent metaphorical frameworks that shape our discourses, emphasizing that metaphors are more than just linguistic constructs—they play a crucial role in shaping how we understand and interact with the world. In the context of education, these metaphors influence how stakeholders perceive and engage with GenAI. One prominent framework is personification, which attributes human-like qualities to nonhuman entities (Lakoff & Johnson 2003). This type of metaphor can lead to inflated and unrealistic expectations regarding GenAI’s capabilities, fostering a belief that these systems possess intelligence or consciousness akin to that of humans and downplaying the complex dynamics between technology and educational outcomes (Ferreira, Lemgruber & Cabrera 2023; Maas 2023).
Engaging in critical analysis of the metaphors used to describe GenAI is crucial for fostering a more nuanced and responsible approach to GenAI integration in education—one that acknowledges both the opportunities and risks associated with this rapidly advancing technology (Ferreira, Lemgruber & Cabrera 2023; Gupta et al. 2024). Such critical engagement supports the cultivation of “critical AI literacy” and promotes informed decision-making regarding the future role of AI in learning environments.
Methodology
UNESCO’s Guidance for Generative AI in Education and Research document was the first international official statement recommending uses for GenAI in education and has been considered “a useful primer for non-experts to make informed decisions” (Taylor 2024: 16). Building on the discussion of UNESCO’s significant role in shaping educational discourse, analyzing its Guidance for Generative AI in Education and Research becomes especially important. The metaphors employed in this document not only reflect how UNESCO envisions GenAI’s role in education but also reveal underlying assumptions and values that may influence global policies and practices, highlighting the need for a critical and informed engagement with its implications. As GenAI technologies increasingly permeate educational contexts, understanding the implications of UNESCO’s evolving narrative is critical for ensuring that these tools promote equity and transformative learning experiences rather than perpetuating existing inequalities. Therefore, this study aims to investigate the metaphors employed in UNESCO’s discourse on GenAI, specifically addressing two key research questions: First, what types of metaphors are used to depict GenAI? Second, how might these metaphors impact perceptions of GenAI within educational settings?
To explore these questions, this study employed a critical discourse analysis (CDA) methodology of UNESCO’s Guidance for Generative AI in Education and Research document (UNESCO 2023). CDA is a qualitative research approach that analyzes language to understand how power, ideology, and social structures are constructed and maintained within discourse (Fairclough 1992, 2003). This analysis considers the broader sociopolitical context in which the guidance is produced and the potential impact of these metaphors on stakeholder perceptions. By investigating how metaphors frame GenAI in educational contexts, this study aims to explore implicit assumptions and biases embedded within the guidance and to highlight their implications for the future of AI in education.
The analysis is grounded in Fairclough’s three-dimensional framework of CDA, which encompasses text, discourse practice, and sociocultural practice (Fairclough 1992, 2003). First, the textual analysis focuses on the linguistic features of the metaphors present in the document, identifying specific metaphorical expressions and examining their functions within the text. Second, the discourse practice aspect considers how these metaphors are produced and consumed within educational discourse, paying attention to how they might shape or reflect prevailing ideologies about AI. Finally, the sociocultural practice dimension situates the findings within broader societal contexts, exploring how these metaphors influence and are influenced by contemporary discourses on technology and education.
The identification of metaphors within UNESCO’s guidance followed a systematic analytic process informed by conceptual metaphor theory (Lakoff & Johnson 2003). First, we examined the text for figurative or descriptive language that suggested connections between distinct domains. Next, we analyzed these expressions to identify cross-domain mappings, where a familiar source domain was used to structure the target domain of GenAI—for example, when people write that they need to “fine-tune” GenAI, thus applying a mechanical or tool-like source domain onto the target domain of GenAI. Once we identified all cross-domain mappings, we analyzed them for patterns and systematicity in the language, identifying recurring metaphors that reinforced particular conceptualizations of GenAI.
We initially set out to explore a broad range of metaphors and their potential impact on perceptions of GenAI within educational settings (such as tools, enhancers, and partners). However, as the analysis progressed, personification metaphors emerged as particularly prominent: they appeared frequently, ran throughout the text, and spanned distinct subcategories such as human biology, reasoning, and leadership. Given their significant implications for shaping perceptions of GenAI, we chose to narrow the focus of this paper to a detailed examination of these personification metaphors. This decision allowed for a more in-depth analysis of how these metaphors construct human-like representations of GenAI and the potential impacts of such framing on educational policy and practice.
Analysis
Personification metaphors are a rhetorical strategy in which non-human entities are imbued with human characteristics, motivations, or capacities. Derived from ontological metaphors, which allow abstract phenomena to be understood through tangible and familiar concepts (e.g., ideas as objects), personification takes this process further by attributing human-like qualities to these entities (Lakoff & Johnson 2003). A simple example is the statement “Siri isn’t understanding me today”, which grants the digital assistant the human-like capacity to misunderstand rather than treating it as simply malfunctioning. These metaphors serve to make abstract or complex phenomena more accessible and relatable, enabling individuals to engage with them more intuitively. However, such framing can also obscure the actual nature of the entity, leading to misconceptions about its capabilities and limitations.
In the context of UNESCO’s guidelines on GenAI in education and research, personification metaphors are a prominent rhetorical strategy, shaping perceptions of AI as a quasi-human entity. Our analysis indicated that the personification metaphors used in UNESCO’s document fall into three distinct categories, reflecting different aspects of human experience: biology, reasoning, and leadership. Table 1 below presents the categories, illustrative examples from the UNESCO (2023) guidelines, and a description of each category. Each category reflects a distinct way of anthropomorphizing GenAI, contributing to its representation as a human-like presence in educational settings.
Table 1
Categories of Personification Metaphors Used to Describe GenAI.
| METAPHOR CATEGORY | METAPHORS/EXPRESSIONS | DESCRIPTION |
|---|---|---|
| Biology | “Ingests data,” “hallucinates,” “family of models” | These metaphors attribute to GenAI physiological or organic traits, framing it as a living or sentient entity capable of perception or kinship. |
| Reasoning | “Thinks,” “reasons,” “makes reasoning errors” | These metaphors suggest GenAI engages in mental processes, positioning it as capable of human-like thought, logic, and judgment. |
| Leadership | “Coach,” “advisor,” “Socratic opponent,” “partner in learning” | These metaphors assign GenAI relational authority or pedagogical roles, portraying it as a guide, mentor, or intellectual peer in educational contexts. |
Biological metaphors: The human body and its functions
The first category identified is that of biological metaphors, that is, metaphors related to the human body and its functions. Biological metaphors situate GenAI within the domain of human-like physiology. For instance, describing GenAI as “ingesting” data (UNESCO 2023: 8) draws on biological processes to explain the workings of GenAI, framing its computational functions as analogous to human digestion and transforming an abstract and technical process into something relatable and easily visualized. Excerpts below exemplify the biological metaphors category. Italics were applied to direct citations to emphasize key elements of the original text.
[1] It generates its content by statistically analysing the distributions of words, pixels or other elements in the data that it has ingested and identifying and repeating common patterns (UNESCO 2023: 8).
[2] Instead, its responses are based on probabilities of language patterns found in the data (from the internet) that it ingested when its model was trained (UNESCO 2023: 26).
[3] […] is known as artificial neural networks (ANNs), which are inspired by how the human brain works and its synaptic connections between neurons (UNESCO 2023: 8).
The excerpts contribute to the humanization of GenAI by framing its processes in terms of human-like actions and structures. The metaphor of ingestion positions GenAI as an organism that consumes and processes data, evoking empathy by aligning its behavior with familiar biological functions. Similarly, the comparison to the human brain fosters a sense of kinship, presenting GenAI as a reflection or extension of human intelligence. These rhetorical strategies encourage users to view GenAI as a relatable and trustworthy partner, even as they risk oversimplifying its operations and reinforcing anthropomorphic misconceptions.
The ingestion metaphor anthropomorphizes GenAI by implying an active, volitional process akin to eating, which suggests that GenAI performs this task intentionally and systematically, much like a living organism. However, in reality, GenAI does not ingest data in the biological sense; it processes data according to pre-programmed algorithms and statistical models. This oversimplification risks conflating AI’s deterministic operations with organic, purposive activity. The metaphorical phrase “inspired by how the human brain works and its synaptic connections between neurons” (UNESCO 2023: 8) maps the source domain of human neurology onto the target domain of Artificial Neural Networks (ANNs). By comparing ANNs to the structure and functioning of the brain, the text frames GenAI as a technological mirror of human biology. This metaphor emphasizes similarity, suggesting that GenAI systems replicate, to some degree, the processes underlying human cognition. This linguistic framing positions ANNs as extensions or emulations of human intelligence, evoking the image of a living brain and reinforcing the perception that GenAI shares an organic, human-like quality.
Reasoning and cognitive metaphors
Reasoning and cognitive metaphors further anthropomorphize GenAI by attributing to it human-like cognitive processes, creating a perception that the technology is not merely executing programmed algorithms but actively engaging in mental activities akin to human reasoning and thought. This framing often involves describing technical flaws or limitations in GenAI not as purely algorithmic or systemic errors, but instead as relatable “mental errors”, as if the technology were a person making mistakes in judgment or logic. For example, terms like “reasoning” and “thinking” are frequently employed in the document to characterize the functionality and problem-solving capabilities of GenAI. Such language subtly suggests that GenAI possesses an inherent agency and intellectual depth, which may influence how people understand and trust the technology. The excerpts below show a few examples:
[4] as they replicate the higher-order thinking that constitutes the foundation of human learning (UNESCO 2023: 3).
[5] Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors) (UNESCO 2023: 12).
[6] GenAI may be used to challenge and extend human thinking (UNESCO 2023: 24).
[7] […] facilitate higher-order thinking (UNESCO 2023: 29).
The metaphor that GenAI “hallucinates” (UNESCO 2023: 12) in excerpt [5], for instance, evokes an image of a mind constructing false realities, aligning GenAI’s inaccuracies with human fallibility. Similarly, describing GenAI as making “reasoning errors” (2023: 12) suggests that it engages in processes akin to human logic, even if flawed. This language anthropomorphizes the technology, fostering empathy by framing its limitations as comparable to human cognitive struggles, such as misjudgments or lapses in thinking. By aligning GenAI’s errors with human experiences—like hallucinations or logical missteps—the metaphors create a sense of relatability, encouraging readers to perceive the technology as more approachable or understandable, despite its flaws. The metaphors in excerpt [5], while intended to caution about GenAI’s unreliability, paradoxically humanize the technology by mapping its functions onto familiar biological and psychological processes, thus deepening the sense of empathy users might feel toward it.
The word “hallucinates” is particularly striking because it attributes to GenAI a human mental condition associated with perceiving non-existent stimuli. Hallucination is typically understood as a phenomenon tied to consciousness and the workings of the human mind, often linked to illness, stress, or substance use. By applying this term to GenAI, the text anthropomorphizes the technology, suggesting a relatable human failing rather than a technical or algorithmic flaw. Additionally, both verbs (“hallucinates” and “makes”) are dynamic, action-oriented words. This choice of language positions GenAI as an active agent, capable of engaging in processes analogous to human thought. This rhetorical strategy subtly endows the technology with a sense of agency and intentionality, further reinforcing its anthropomorphization. The metaphorical framing of hallucination and reasoning errors draws a parallel between GenAI’s functioning and human cognitive processes, which are inherently fallible. This anthropomorphic narrative makes readers more likely to relate to GenAI as a fellow human—as a partner even—rather than a tool, ascribing to it the same struggles and limitations that define human experience. The implied struggle of a GenAI attempting to “reason” or avoid “hallucinations” evokes a sense of effort and imperfection that softens its critique.
A computer algorithm cannot truly “hallucinate” because it lacks a mind, consciousness, or subjective experience. What is referred to here as hallucination is, in reality, the algorithm generating inaccurate outputs due to limitations in its training data, statistical modeling, or input interpretation. The metaphor simplifies the complex nature of GenAI’s inaccuracies, framing them as an understandable human error rather than the product of a deterministic system. Similarly, the use of “makes reasoning errors” attributes active, intentional cognitive behavior to GenAI, even though its processes are not driven by reasoning but by probabilistic calculations and pre-defined models. These metaphors likely reflect the tension between simplifying complex technological phenomena for lay audiences and the desire to critique GenAI’s limitations. The metaphorical framing makes the critique more accessible by translating technical shortcomings into human-like actions that readers can understand and relate to. However, this framing is also a product of the broader socio-technical discourse that normalizes GenAI’s anthropomorphization. By embedding GenAI within human conceptual frameworks, the discourse invites users to empathize with it, viewing its “errors” as forgivable and familiar rather than mechanical and deterministic.
Leadership and decision-making metaphors
In UNESCO’s guidelines, leadership and decision-making metaphors frame GenAI as an active agent capable of guiding, mentoring, or influencing outcomes. By attributing to these systems roles traditionally associated with human authority and expertise, such metaphors emphasize their utility in decision-making processes while fostering a sense of trust and reliability. Metaphors that cast GenAI in leadership roles—such as “teaching assistant”, “coach”, “Socratic challenger”, “primary advisor”, and even “opponent” (UNESCO 2023: 30–34)—highlight its potential to challenge, guide, and mentor students.
[8] If guided by ethical and pedagogical principles, GenAI tools have the potential to become 1:1 coaches for such self-paced practice (UNESCO 2023: 31).
[9] However, given that GenAI models have been trained based on large-scale data, they have potential for acting as an opponent in Socratic dialogues or as a research assistant in project-based learning (UNESCO 2023: 32).
[10] Potential transformation: 1:1 primary advisor for learners with social or emotional problems or learning difficulties (UNESCO 2023: 34).
The first excerpt employs the metaphor of “1:1 coaches” (UNESCO 2023: 31) to describe the role of GenAI in facilitating self-paced learning. The term coach conveys a sense of personalized guidance and mentorship, attributes traditionally associated with human educators. By framing GenAI as a coach, the text once again anthropomorphizes the technology, attributing to it the relational and adaptive qualities necessary for effective mentoring. The conditional clause “if guided by ethical and pedagogical principles” attempts to temper this claim by situating the success of GenAI’s role within a framework of human oversight. However, the broader metaphor of a coach implies that the technology already possesses the capacity for autonomous and thoughtful engagement, which risks overshadowing the need for ethical safeguards.
Similarly, the metaphors of “opponent in Socratic dialogues” (UNESCO 2023: 32) and “research assistant” (2023: 32) position GenAI as an intellectual and collaborative entity. The term “opponent” in the context of Socratic dialogues evokes an image of a critical thinker engaging in rigorous questioning to stimulate learning and reflection. This metaphor suggests GenAI can challenge learners’ ideas in a meaningful, nuanced manner. Likewise, the metaphor of “research assistant” attributes collaborative, supportive qualities to GenAI, framing it as a reliable partner in academic inquiry.
Furthermore, the metaphor “1:1 primary advisor” (UNESCO 2023: 34) casts GenAI in a deeply human role, suggesting it can provide personalized guidance for learners facing social, emotional, or cognitive challenges. The term primary implies a level of responsibility and centrality typically associated with a human advisor, while 1:1 underscores the idea of individualized interaction, positioning GenAI as a partner tailored to the unique needs of the learner. This framing anthropomorphizes GenAI to a significant degree, implying it has the relational and emotional intelligence needed to address sensitive, complex issues. The phrase “trained based on large-scale data” attempts to anchor these metaphors in technical reality but does little to counterbalance their anthropomorphic implications. The metaphors risk implying that GenAI can mimic not just processes but also the intent and critical depth of human intellectual engagement, which is fundamentally absent. In fact, GenAI lacks the contextual understanding and ethical judgment necessary for such a role. The use of this metaphor risks presenting GenAI as a substitute for human advisors, potentially diminishing the importance of trained professionals in supporting learners.
In addition, the systematic use throughout the document of action verbs describing what GenAI does—such as analyzes, enables, facilitates, fosters, frees, helps, identifies, imposes, improves, and supports, among others—further contributes to the sense of agency. These action verbs imply deliberate and thoughtful behavior, suggesting that GenAI is not merely executing automated instructions but engaging in decision-making processes. This framing aligns GenAI with human-like analytical reasoning, even though its operations are rooted in statistical probability rather than cognitive intent, and it situates GenAI within a broader narrative of technological determinism, portraying it as a dynamic and autonomous agent.
Discussion
In its guidelines for the use of GenAI in education and research, UNESCO offers a nuanced perspective, advocating for a human-centered approach while cautioning against an over-reliance on technology. The document explicitly critiques the “media hyperbole” framing GenAI as a panacea for educational challenges, emphasizing the irreplaceable role of human agency in education. By urging students, researchers, and educators to critically engage with GenAI’s uses and outputs, UNESCO highlights the importance of keeping human expertise at the core of educational practices. Additionally, the guidelines address ethical and regulatory concerns, particularly in the context of the Global South, where there is a risk of GenAI embedding cultural biases and perpetuating forms of digital colonialism. UNESCO’s call for robust regulatory frameworks and corporate accountability underscores the need for ethical governance to mitigate these risks and protect users from exploitation.
However, this critical perspective contradicts the language UNESCO uses to describe GenAI. Despite cautioning against anthropomorphic framings, the document employs highly personified metaphors, characterizing GenAI with human-like attributes such as ingesting information, hallucinating, or acting as a coach. While intended to simplify complex processes for broader audiences, these metaphors risk reinforcing misleading perceptions of GenAI’s capabilities, suggesting it possesses human-like autonomy and intelligence. Terms like learning coach and Socratic opponent imply that GenAI is equipped to replicate deeply human functions, even as the guidelines stress the necessity of rigorous oversight. This inconsistency between critique and metaphorical framing risks fostering unrealistic expectations about GenAI, potentially leading readers to overestimate its role in educational settings.
The biological metaphors, especially the brain-inspired ones, serve to bridge the gap between complex technological concepts and lay audiences. These metaphors use familiar imagery (the human brain) to explain unfamiliar processes (neural networks). This strategy is particularly effective in educational and policy contexts, where simplifying complex ideas is essential for broad engagement. However, while the functioning of artificial neural networks (ANNs) might be somewhat “inspired” by biological systems, their mathematical and computational foundations are far removed from the complexities of actual brain function. By implying an equivalency between biological and artificial processes that does not exist, this metaphor oversimplifies the relationship between the two systems and risks inflating perceptions of GenAI’s cognitive and emotional capacities. As a result, these metaphors might encourage a perception of GenAI as fundamentally human-like, fostering both trust and fascination. Such metaphors can lead stakeholders to view GenAI as an organic, sentient-like entity, overshadowing its mechanical and algorithmic nature, and potentially encouraging trust in its outputs without sufficient critical scrutiny. By suggesting that GenAI performs tasks akin to human consumption and analysis, the discourse in the guidance may contribute to the normalization of GenAI’s role as a cognitive partner. This framing risks downplaying the responsibilities of developers and regulators in shaping AI’s limitations and ethical implications.
Reasoning metaphors encourage readers to view GenAI as a flawed yet relatable actor. These metaphors position GenAI as attempting, but failing, to achieve reliability—an inherently human struggle. This framing elicits empathy, as it mirrors human fallibility. Readers may unconsciously attribute to GenAI the same capacity for learning and improvement that they associate with human beings, softening their critique of its failures and reinforcing the narrative that GenAI is a quasi-human partner rather than an unfeeling tool. Socioculturally, these metaphors contribute to the broader narrative of technological determinism and techno-optimism, even as the UNESCO document appears to critique GenAI. By framing GenAI errors as human-like flaws, the document aligns with ideologies that position GenAI as an inevitable and evolving part of society, capable of improvement over time. This discourse subtly shifts responsibility away from the developers and regulators of GenAI systems, framing the technology as something inherently fallible but ultimately forgivable. The use of hallucination, for example, evokes a sense of vulnerability or imperfection that makes GenAI seem less threatening. This rhetorical strategy mitigates potential fears about GenAI’s capabilities by emphasizing its limitations, but it also diminishes critical awareness of the systemic and structural issues inherent in GenAI design, such as bias, lack of transparency, and uneven power dynamics. While the reasoning metaphors make GenAI’s behavior more comprehensible, they also risk oversimplifying its mechanisms, which are fundamentally statistical and non-cognitive. Such framing may lead to misconceptions, suggesting that GenAI possesses a degree of intentionality and understanding that it does not. By framing its outputs as cognitive phenomena, these metaphors may inadvertently inflate public trust in GenAI’s reasoning capabilities.
Lastly, the leadership and decision-making metaphors frame GenAI as a human-like authority figure capable of interpersonal and pedagogical functions. This language elevates GenAI from a mere tool to an active participant in decision-making processes, implying that it possesses judgment, relational skills, and the ability to tailor advice to individual needs. Such metaphors align with a narrative that GenAI can effectively replace human educators or mentors in specific contexts. While these descriptions may highlight GenAI’s usefulness, they obscure its limitations, such as its inability to understand context, nuance, or the ethical dimensions of decision-making. These metaphors also obscure the significant differences between human coaches and GenAI, risking the creation of unrealistic expectations about the depth and scope of GenAI’s capabilities in leadership roles. While a human coach adapts based on empathy, experience, and real-time context, GenAI operates on probabilistic models without true understanding or relational capacity. These metaphors may thus lead educators and policymakers to overestimate GenAI’s ability to replace or supplement human mentorship.
The tension between critique and advocacy becomes particularly evident in discussions of GenAI as a personalized educational tool. While UNESCO promotes awareness of GenAI’s limitations and biases, the metaphors employed implicitly grant the technology a level of sophistication that might overshadow its flaws. For example, describing GenAI as a 1:1 primary advisor or a generative twin for educators positions it as an equivalent to human educators, undermining the human-centered approach the guidelines aim to promote. These metaphors likely reflect an intention to highlight GenAI’s potential for addressing disparities in under-resourced contexts. However, they also risk normalizing its replacement of human educators in settings where resources are scarce, perpetuating inequities rather than resolving them. By framing GenAI as a versatile tool capable of enhancing learning outcomes, these metaphors create a sense of empowerment for educators and learners. However, this framing minimizes GenAI’s fundamental limitations, such as its inability to genuinely comprehend, critique, or adapt ideas. It perpetuates the perception of GenAI as a quasi-human collaborator, aligning with techno-optimist ideologies that valorize technological innovation while downplaying systemic issues. Moreover, by associating GenAI with human intelligence, the guidelines position it as a natural extension of human ingenuity, fostering unrealistic expectations about its ability to replicate nuanced, adaptable, and creative human thought. This discourse risks undermining the value of human interaction in education, particularly qualities such as empathy, critical thinking, and ethical reasoning, which are inherently human.
The promotion of GenAI as a transformative force also reflects broader sociocultural narratives about scalability and efficiency in education. While these narratives align with aspirations for equitable access to personalized learning, they also prioritize technological solutions over systemic investments in human resources. This framing reinforces techno-determinist ideologies and risks reducing education to a commodified process, where human cognition is treated as something replicable and optimized through technology. To ensure a more equitable future, a more expansive and critical discourse around GenAI is necessary—one that interrogates whose needs are prioritized, whose perspectives are centered, and whose futures are envisioned.
Addressing these concerns requires rethinking how GenAI is communicated to diverse audiences. While the use of personification metaphors in UNESCO’s guidelines might derive from a societal need to navigate GenAI’s abstract and technical nature, it risks reinforcing anthropomorphic misconceptions that may shape public understanding and policy in problematic ways. At a scholarly level, this duality calls for a deeper examination of how discourse shapes public understanding of GenAI’s role in education. At a practical level, this rhetorical tension highlights the importance of aligning language with intended messages, ensuring that the limitations and ethical considerations of GenAI are not overshadowed by powerful and optimistic metaphors. While metaphors play a vital role in making technology accessible, a more deliberate approach to their use could ensure that stakeholders engage with GenAI critically, appreciating both its potential and its limitations within a human-centered educational framework. Instead of metaphors that humanize technology, UNESCO and similar organizations could adopt (a) more consistent alignment between their critique of GenAI’s anthropomorphic framing and their rhetorical strategies, (b) language that accurately represents GenAI’s technical nature while avoiding human-like attributes, and (c) process-oriented explanations or analogies rooted in familiar, non-human systems.
For instance, describing GenAI as a “library that retrieves and recombines information based on patterns” avoids misleading anthropomorphic implications. Rather than saying GenAI “hallucinates facts”, it is more precise to explain that “GenAI generates outputs that may contain inaccuracies due to gaps or biases in its training data or modeling processes”. Similarly, instead of describing GenAI as a “coach” or “advisor”, it would be more accurate and deliberate to frame it as “a system designed to provide automated feedback” or “generate responses based on patterns in its training data”. Visual aids, diagrams, and case studies can also serve as effective tools to bridge the gap between technical complexity and lay understanding without oversimplifying. Developing a glossary of terms that clearly defines GenAI’s processes and limitations could further empower audiences to critically engage with the technology. These and other approaches have numerous benefits. They (a) are precise, focusing explicitly on GenAI’s operational mechanics and inherent limitations, (b) avoid anthropomorphizing the technology and its potential pitfalls, (c) maintain accessibility while fostering a more realistic understanding of GenAI’s capabilities, and (d) limit the risk of undermining human-centered values. Overall, we encourage researchers, developers, educators, and policymakers to adopt more precise and process-oriented language in order to foster a more realistic and critical understanding of GenAI, ensuring that its capabilities are neither overstated nor misunderstood.
Conclusion
This research contributes to the growing body of literature on GenAI in education by focusing on the language used to describe GenAI and its implications for teaching and learning. By identifying and unpacking the personification metaphors employed in UNESCO’s guidelines, this study illuminates how linguistic framing shapes perceptions of GenAI’s capabilities, roles, and limitations, potentially influencing how educators, policymakers, and learners understand and use this technology in educational contexts. Ultimately, this analysis advances a more nuanced and critical understanding of GenAI in education, supporting the development of responsible and equitable GenAI policies and practices.
This study is limited by its focus on personification and anthropomorphic metaphors in the language used to describe GenAI in UNESCO’s guidelines. Future studies could expand this analysis to include other metaphorical categories, such as tool metaphors, which frame GenAI as an extension of human capability or an instrument for achieving specific goals. For example, metaphors describing GenAI as a “toolbox” or “engine” for innovation might emphasize utility and control, contrasting with the relational and autonomous qualities implied by personification. Exploring these additional metaphorical framings could provide a more comprehensive understanding of how language constructs different narratives around GenAI and its potential uses in education. Moreover, research comparing the findings of this analysis with the actual understanding and interpretations by educators, administrators, and learners could provide valuable insights into how these linguistic choices affect perception and decision-making in educational contexts. Additionally, longitudinal research could investigate how these discourses evolve over time and how they align—or conflict—with lived experiences and outcomes in educational contexts.
A key limitation of this research lies in its focus on a single UNESCO document, which raises questions about the broader applicability of its findings. By concentrating on this specific text, we risk missing the larger linguistic patterns and rhetorical strategies that may or may not be consistent across UNESCO’s publications or those of other organizations. While this limitation warrants caution, our experience suggests that the anthropomorphic metaphors identified here are prevalent in discourse about GenAI. Nevertheless, qualitative research emphasizes understanding the world from the perspectives of those within it, acknowledging the existence of multiple perspectives and interpretations. Rather than relying solely on reliability in the traditional sense, this study prioritizes consistency and dependability, ensuring that its findings align closely with the data collected (Merriam 1995). Future research could address this limitation by examining a broader set of documents, both from UNESCO and other stakeholders, to empirically evaluate the consistency and implications of such language. This approach would not only enhance the reliability of these findings but also provide a more comprehensive understanding of how metaphors shape global narratives about GenAI.
Competing Interests
The authors have no competing interests to declare.
