
Reflecting Reality, Amplifying Bias? Using Metaphors to Teach Critical AI Literacy

By: Jasper Roe, Mike Perkins and Leon Furze

Introduction

The rapid emergence and widespread adoption of Generative Artificial Intelligence (GenAI) models across multiple fields has created an imperative for educational institutions to adapt their approaches to teaching and learning. As these technologies become increasingly embedded in academic and professional contexts, educational researchers are grappling with fundamental questions about how to effectively use, teach, and critically engage with GenAI in schools and universities. This shift has catalysed the development of a new field of study focused on AI literacy, with governments and educational bodies worldwide recognising the vital importance of preparing learners at all educational levels for an AI-enabled future (Miao et al. 2024), including at the earliest levels of education (Su, Ng & Chu 2023). AI literacy has already been introduced in various national curricula worldwide (Laupichler et al. 2022; Sperling et al. 2024); however, it remains a new, loosely defined, and inconsistently applied concept with no universally agreed definition (Bozkurt et al. 2023). Nevertheless, AI literacy is of central importance in education, as students will inevitably need to engage with AI in various ways in the future, as one aspect of broader technological multiliteracy (Stolpe & Hallström 2024).

In this paper, we make several key contributions to the literature. Firstly, we identify and define Critical AI Literacy (CAIL), building on conceptions of AI literacy and digital literacy. Secondly, we introduce the use of metaphor as an effective technique for teaching about AI and developing CAIL. Donoghue (2014: 1) defines metaphor as a supposition that an ordinary word could have been used, but another is instead chosen to “drive the statement in an unexpected direction”. Metaphors are distinct from similes, which indicate a degree of ‘likeness’ (Donoghue 2014: 2), although the distinction is subject to debate in the field of language and cognition (Sam & Catrinel 2006).

In proposing a definition for CAIL, we begin from the AI literacy definition proposed by Long and Magerko (2020), who define it as a set of competencies which enable users to critically evaluate AI technologies, use them effectively for collaboration and communication, and do so in multiple contexts, including at home, work, and online. In one of the few publications on AI literacy that specifically distinguishes GenAI, that is, AI applications which can produce multimodal outputs, Bozkurt et al. (2023) argue that GenAI literacy must go beyond basic understanding, requiring a comprehensive approach that integrates theoretical knowledge, practical skills, and deep critical reflection. They present a framework based on three areas: Know What (theoretical knowledge), Know How (practical application), and Know Why (ethical and philosophical understanding), which aligns with areas proposed in other AI competency frameworks (Chiu et al. 2024). Taking these definitions as starting points, we broaden this conceptualisation of AI literacy to encompass the wider social, economic, and epistemic impacts of AI. Thus, we define CAIL as below:

The ability to critically analyse and engage with AI systems by understanding their technical foundations, societal implications, and embedded power structures, while recognising their limitations, potential biases, and broader social, environmental, and economic impacts.

CAIL can be seen as stemming from related fields such as Critical Digital Literacy (CDL) and Critical Literacy. CDL, in its simplest form, relates to the critical consumption of digital media (Pangrazio 2016); however, in practice, it extends to a deeper critique of the architecture and systemic structures of digital ecosystems (Knight, Dooly & Barberà 2020). This approach, drawing from Ávila and Pandya (2012) and Jenkins (2006), empowers individuals as both consumers and creators of digital content. It examines how “digital (inter)actions” (Knight, Dooly & Barberà 2020: 20) influence our engagement with technology, emphasising the need to scrutinise nonhuman agents, such as algorithms and chatbots, which often shape user experiences.

Central to CDL is the interrogation of “platform ecologies” (Garcia & Nichols 2021), which links user behaviours with the underlying design and resource requirements of digital platforms (Van Dijck 2021). This perspective critiques the agency of digital platforms by revealing the power structures embedded within their interfaces and algorithms and positioning CDL as a tool for understanding the influence of design on user interaction. By investigating these architectures, CDL highlights the importance of considering how digital systems reinforce or disrupt existing social and power dynamics, aligning with broader sociocultural critiques (Djonov & van Leeuwen 2018).

Drawing on these approaches, CAIL involves critically analysing AI’s design, biases, and implications, reflecting on how metaphor and discourse shape our understanding and use of AI. This includes questioning the assumptions of AI’s intelligence and examining its role within broader societal and ethical contexts (Bali 2023). By understanding AI through critical frameworks, educators and students can develop a nuanced perspective that transcends instrumental use and encourages a reflective, socially aware approach to AI adoption and use in educational contexts (Gupta et al. 2024).

Further to this, through CAIL we posit that critically evaluating AI technologies is not sufficient; rather, we wish to foster among all learners an understanding of, and deep reflection on, the potentially disastrous consequences of unregulated AI, including the extreme social, environmental, and economic costs of AI development (Driessens & Pischetola 2024) and the cultural biases that typify some GenAI models (Roe 2025), as the ethical integration of AI in education cannot ignore these significant impacts (Perkins, Roe & Furze 2024). This includes facing the complex reality that, while we may still use and benefit from AI tools, we must not turn away from the questionable, alarming, and at times frightening aspects of these new technologies.

Conceptual Metaphor Theory

Teaching CAIL presents unique pedagogical challenges, as educators must help learners grasp complex technological systems and their societal implications. Metaphors may offer a valuable solution, serving as a powerful learning resource for sparking discussion and creative thinking about AI systems. Although the concept of metaphor is familiar to most people through formal schooling, informal conversation, and popular media, its scholarly study is distinct and has evolved into a rich field of enquiry. The study of metaphor is embedded in multiple frameworks that aim to describe topics such as language, thought, and communication (Gibbs Jr. 2008), with multiple handbooks, volumes, and academic journals dedicated to investigating the relationship between metaphors and the social world.

In this paper, we frame our understanding of metaphor through Conceptual Metaphor Theory (CMT), developed in Lakoff and Johnson’s seminal work Metaphors We Live By (Lakoff & Johnson 2003). Following this theoretical framing, we contend that metaphors are not simply turns of phrase or rhetorical flourishes, but are powerful, pervasive conceptual tools for structuring, restructuring, and even creating reality (Batten 2012). In essence, this form of metaphor is operationalised through understanding and experiencing one kind of thing in terms of another (Lakoff & Johnson 2003). Furthermore, metaphors can fulfil multiple social and cognitive functions, ranging from strategic persuasive devices (Ferreira, Lemgruber & Cabrera 2023) to constructs that help us organise our knowledge of the world (Saban 2006) and understand abstract concepts (Niemeier 2017). In explaining the relationship between metaphor and thought, Kövecses (2020) provides the example of the ‘journey of life’. This use of the ‘journey’ metaphor refers not only to how we speak about life, but also to how we think about life, and subsequently to how we may act on the belief that life is a journey. Given that fostering CAIL requires critical thinking and that metaphors can help us restructure our reality (Batten 2012), there is a natural alignment between the two for pedagogical use.

Metaphor in Education

Historically, metaphors have been used in education to expand students’ minds and promote critical thinking (Low 2008). Instruction often requires the description and discussion of complex meanings to progress understanding, and metaphors are appropriate tools for achieving this objective (Carter & Pitcher 2010). As instruction requires moving from the known to the unknown and from the concrete to the abstract, metaphors can use concrete examples to explain abstract principles (Clarken 1997) and can support a variety of learning activities, such as finding memorable labels for complex concepts or helping learners understand challenging learning materials (Low 2008). Metaphors may be employed as a powerful learning mechanism even in early childhood, with studies showing that three- and four-year-olds can use metaphors to make inferences about the functional features of objects (Zhu & Gopnik 2023). Metaphors have also been studied in other disciplines which require close social relations and dialogue, such as psychotherapy, where authors note that the use of appropriate metaphor can lead to positive client outcomes and cognitive engagement (McMullen & Tay 2023). Indeed, there is a rich body of literature describing how metaphors are used to refer to the teaching and learning process itself (Alger 2009; Batten 2012; Clarken 1997; Hager 2008; Saban 2006), with Hager (2008) claiming that humans are unable to think about learning without employing some form of metaphor.

Recent research has empirically validated the value of metaphors in various higher education classroom contexts. For example, Pager-McClymont and Papathanasiou (2023) demonstrated the value of using CMT to teach English for Academic Purposes (EAP), using the ‘A is B’ metaphor structure to teach composition, describing arguments (A) as buildings (B) and writing (A) as cooking, eating, and digesting (B). In a related example, Haidet et al. (2017) explored the use of jazz music as a metaphor for teaching effective communication strategies in patient-doctor interaction, noting positive results in using this metaphor to engage students in medical training. Jin et al. (2025) have explored the metaphors students use when describing their experiences with GenAI, including representative metaphors such as “high heels” for technical support, a “compass” for text development, and a “drug” for the potential threats and addictive nature of overreliance on GenAI applications.

At the same time, it is important to be aware of the limitations of metaphor use in educational environments. Inappropriate or opaque uses, or metaphors lacking a high degree of similarity between the two domains, may create difficulties for learners or impede the educational process. The question of culture also arises when selecting metaphors, especially in foreign language teaching. To illustrate, Low (2008) notes that because metaphors may be culturally specific, there is an argument over whether it is more appropriate to teach the relevant cultural concepts before teaching the metaphor itself. Furthermore, the limits of a metaphor are an important consideration when teachers select one to use. Carter and Pitcher (2010) explain this by drawing attention to the ubiquitous use of metaphors in teaching electricity: when teaching the concept, electricity is commonly described as water, on the reasoning that electrons ‘flow’ in a wire as water flows in a pipe. However, they highlight that extending this metaphor results in a conceptual break, as there are significant differences: electron flow cannot be seen, whereas water flow can, and water continues to flow from a broken pipe, whereas electricity in a broken wire stops (Carter & Pitcher 2010). Consequently, while there is significant evidence that metaphors are appropriate and beneficial in educational processes, their usefulness is limited; metaphors must be appropriate and understandable, and the limits of the comparison should be considered by the educator. We draw on these principles to develop a set of criteria that can be applied when choosing metaphors to illustrate information about AI technologies.

Metaphors and Artificial Intelligence

The field of AI is no stranger to metaphors, and in the current era of developing AI technologies, metaphors are increasingly common, although some have been described as problematic and anthropomorphising (Furze 2024). At the same time, despite the burgeoning research on AI and GenAI, our initial literature search using the Scopus and Web of Science (WoS) databases returned few recent studies on the relationship between AI and metaphor. Although not a highly active area of research, explorations of metaphor and AI have nonetheless been taking place for a significant period. Gozzi (1994), for example, analysed a dataset of press articles from the earliest stages of the field of AI, from 1966 to 1970, and found that a common metaphor for computers was the ‘brain’, and that this extended to describing thought as a computational process, resonating with the common metaphors for AI that we encounter in popular media today.

More recent work on the intersection of AI and metaphor has explored the use of AI in science fiction as a metaphor for other aspects of human existence (Hermann 2023), and studies have been undertaken to ascertain whether Large Language Models (LLMs), such as GPT-4, can interpret literary metaphors (Ichien, Stamenković & Holyoak 2024). Carbonell, Sánchez-Esguevillas and Carro (2016) describe the connections between computational metaphors for the brain and vice versa, arguing that this reinforces the relationship between the brain and computers, and that this subsequently has an effect that shapes society. In the realm of explainable AI, ways of explaining AI chatbots using the metaphor of fermentation and bread-making have also been explored (Nicenboim et al. 2023). Van Es and Nguyen (2024) conducted a semiotic analysis of ChatGPT’s self-representation, noting that the language model seemed to imply a metaphorical representation of itself, and Hunger (2023) describes the historical process of anthropomorphising metaphors related to AI.

Rehak (2021: 155) argues that powerful metaphors of AI are used to perpetuate the deterministic myths of AI technology. Even the term “Artificial Intelligence” itself is problematic, since much of the technology is neither artificial nor intelligent (Crawford 2021). Machine learning algorithms and datasets which underpin these systems also serve to distance the output from human responsibility, complicating earlier ethical debates (Campolo & Crawford 2020).

The language surrounding the processes of AI is also commonly anthropomorphised. Rehak (2021) points to words such as “‘recognition’, ‘learning’, ‘acting’, ‘deciding’, ‘remembering’, ‘understanding’” in processes carried out by machine learning algorithms, contrasting the anthropomorphic words used in the field of AI with the more abstract language typically used in mathematics. The Royal Society (2018) is critical of this kind of language as part of the AI narrative, claiming that it creates a disconnect from the technology itself which may contribute to a “hype bubble”, public fears about technology and subsequent reluctance to adopt beneficial technologies, and a distortion of the discourse surrounding the future of digital technology.

One recent work by Anderson (2023) focuses on the metaphorical framing of Large Language Models such as ChatGPT as either a tool or a collaborator, calling for activities which enable us to understand the biases, inaccuracies, and faults of such tools and foster the development of students’ digital literacy. At the same time, the author points out that there is a need to study how metaphorical language is used when discussing ChatGPT, as such uses could complicate, rather than aid, how we understand these technologies. Furthermore, conveying the functions and malfunctions of LLMs using precise and accurate metaphors can help foster an understanding of how these tools work among the public and academics (Smith, Greaves & Panch 2023). Ye and Li (2024) used a corpus linguistics approach to investigate conceptual metaphor in the European Union AI Act (EU AIA), finding that metaphors used included those related to the concepts of ‘Journey’, ‘Human’, ‘War’, and ‘Object’.

Specific research related to education has explored how learners in educational contexts perceive AI technologies through metaphors. Yan, Sun and Zhao (2024) investigated Chinese EFL learners’ metaphorical conceptualisations of GenAI and found that participants viewed GenAI through multiple metaphorical categories such as humans, tools or machines, brains, resources, food and drink, and medicine. Given this diversity of conceptualisation, the authors call for further work to promote digital literacy for students using GenAI. A similar study was undertaken with student teachers in Germany (N = 100) (Şentürk & Akol Göktaş 2024), which found that participants conceptualised GenAI as a library, student, talking pool, warehouse, world, star, game, human, and brain, among others. In relation to the findings of Yan, Sun and Zhao (2024), corresponding findings of categories such as ‘humans’ offer insight into potential cross-cultural similarities in how we use metaphor to structure understanding of these technologies.

Gupta et al. (2024) explored how using discussions related to metaphors for AI can assist in developing awareness of how AI systems work, as a method of fostering Critical AI Literacy. By engaging in a digital collaborative autoethnographic study, the authors developed a set of metaphors derived from popular online media, scholarly literature, social media, and research participants. The authors contend that such an approach is pedagogically powerful as it can stimulate the affective domain, thus benefiting learning. Crucially, these authors offer advice for teaching CAIL through metaphors, which we foreground in our work, suggesting that educators could ask students to share metaphors that they have discovered, and then challenge them to come up with additional input to extend or offer a new metaphor to deepen their understanding of these complex concepts.

Methodology

Framework Development Process

To demonstrate the ways in which CAIL can be fostered by using metaphor, we engaged in a rigorous, reflexive, and creative investigation to develop a potential approach. We aimed firstly to identify appropriate metaphors and then to create sample learning activities firmly embedded in existing frameworks for guiding AI competency development in education. Consequently, we selected the UNESCO AI competency framework for students (Miao et al. 2024) to underpin our learning activity development. Our first task was to generate a list of popular metaphors used to describe AI systems. As one of the benefits of metaphors is their ubiquity in everyday discourse, we aimed to choose metaphors with widespread appeal and potential familiarity to both teachers and students. To do this, we adopted an approach similar to that of Gupta et al. (2024), drawing on existing literature on traditional and online media, social media, and scholarly work to arrive at a list of commonly used metaphors. Through collaborative discussion among the authors, we developed a set of criteria against which to measure the suitability of these metaphors for classroom activities, and assessed their ability to foster AI literacy by evaluating their alignment with the curricular goals in UNESCO’s AI competency framework for students (Miao et al. 2024).

The UNESCO AI competency framework for students outlines three levels of competency: Understanding, Applying, and Creating. Among these, the most suitable area for CAIL to be developed using metaphors is Understanding. Miao et al. (2024) explain that at this level, students are required to develop an awareness of what AI is and be able to interpret different ethical issues, technical knowledge, and underlying processes that power AI. The authors also suggest that real-life practices should be integrated to help support understanding. Following these principles and through discussion and consensus building, we created four criteria for evaluating the appropriateness of metaphors to guide the understanding of these concepts: Accessibility, Explanatory Power, Critical AI Literacy Potential, and Pedagogical Utility. Descriptions of these criteria are presented in Table 1.

Table 1

Criteria for Evaluating Appropriacy of AI Metaphors.

Accessibility: An appropriate metaphor will be familiar to students and accessible to a general audience. This means that the metaphor will not require specialised knowledge in order to understand it. For example, a metaphor which describes AI systems as a form of quantum entanglement would require knowledge of what ‘quantum entanglement’ means, and is thus relatively inaccessible.

Explanatory Power: An appropriate metaphor has the potential to illuminate key aspects of GenAI systems and develop understanding. This means that the metaphor can bring to light an aspect of AI systems in a creative and meaningful way.

Critical AI Literacy Potential: An appropriate metaphor may encourage critique of AI systems’ limitations and capabilities, or may draw attention to a key aspect of CAIL, such as the potential for algorithmic bias or other social, environmental, or ethical impacts.

Pedagogical Utility: An appropriate metaphor will support the creation of varied learning activities aligned with the required learning outcomes of the lesson. Such a metaphor will have the potential to lead learners towards these specific learning objectives.

We opted to use these criteria holistically and interpretively, rather than assigning each criterion a weighting or quantitative score for relevance; instead, we relied on the criteria as a guiding framework for consensus-building discussions. After deciding on these criteria for appropriate metaphor selection, we aimed to develop an exhaustive list of potential metaphors for AI that we could appraise against them. This aspect of the research was intentionally reflexive and required the research team to draw on our personal experiences. Our research team consisted of three individuals, each with expertise as investigators, educators, and consultants on issues relating to AI and education. We therefore collaboratively discussed and shared our existing knowledge of common metaphors, including those we had referred to in our own previous work, encountered in established literature or the classroom, or found in other environments.

We arrived at a final list of 13 metaphors based on these consensus-building discussions in light of our appraisal criteria, and these are displayed in Table 2. Where there is a clear source that we drew on in selecting our metaphor, we have included a citation. Others are not traceable to any singular published source material, and thus no source is supplied.

Table 2

Initial List of Potential Metaphors to Guide AI Literacy.

Stochastic Parrot (Bender et al. 2021): Coined in a highly cited paper to refer to the probabilistic, non-understanding nature of AI language models.

Black Box: An opaque system with observable inputs and outputs but no access to inner processes.

Iceberg (Furze 2024): Generative AI is powered by an enormous, largely unknowable dataset ‘below the waterline’; consumers and users only interact with a small portion of the model.

Funhouse Mirror: Provides a reflected version of reality, but in distorted and warped ways.

Assistant: An aide that can assist with simple tasks but requires supervision.

Loudspeaker (Gupta et al. 2024): Amplifies and broadcasts existing patterns.

Double-Edged Sword: A tool or weapon with both beneficial and harmful aspects.

Calculator for Words (Willison 2023): A mathematical calculation of language.

Natural Disaster: A powerful force that can be prepared for, but not avoided.

Collaborative Artist: A creative partner that can contribute to an artistic process.

Map: A representation of society and culture based on the training data.

Pattern Matching Machine (Furze 2024): A system that identifies and reproduces patterns from data.

Echo Chamber: A system that reflects and reinforces existing patterns.

Following the development of this list, our research team holistically re-assessed each metaphor, through collaborative discussion, against our established criteria (Accessibility, Explanatory Power, Critical AI Literacy Potential, and Pedagogical Utility) and its alignment with UNESCO’s AI competency goals. This qualitative analysis was mindful of Low’s (2008) emphasis on cultural accessibility and Carter and Pitcher’s (2010) caution about how metaphors can break down when extended too far. Batten’s (2012) work on how metaphors can reconstruct understanding in educational contexts guided our analysis of how each metaphor functions pedagogically. Recent empirical studies by Yan, Sun and Zhao (2024) and Şentürk and Akol Göktaş (2024) validated our approach by demonstrating that learners naturally conceptualise AI through multiple metaphorical categories. We therefore sought breadth across the chosen metaphors, making a final selection based on alignment with our selected criteria, the UNESCO AI competency curriculum goals, and the elements of understanding in the AI competency framework that each metaphor most closely matched (Miao et al. 2024). We put these into an ‘A is B’ structure, as shown in Table 3.

Table 3

Selected Metaphors, Selection Criteria and Alignment to UNESCO AI Competency Framework.

AI is a Funhouse Mirror
Accessibility: The concept of a funhouse mirror is familiar across age groups.
Explanatory Power: Illustrates how AI systems may distort reality.
Critical AI Literacy Potential: May lead to discussion about bias and representation.
Pedagogical Utility: Lends itself to physical activities with realia (e.g. a distorted mirror) and links to activities exploring data and algorithmic bias.
UNESCO curriculum goal: CG4.1.3.2 Develop conceptual knowledge on how AI is trained based on data and algorithms.

AI is a Map
Accessibility: Draws on familiar concepts of maps as a way of representing the world.
Explanatory Power: Demonstrates how AI is a representation but not a true reflection of the world.
Critical AI Literacy Potential: Encourages the examination of power structures and colonialism.
Pedagogical Utility: Allows for debate and critical thinking regarding power and representation.
UNESCO curriculum goal: CG2.1.1 Surface ethical controversies through a critical examination of use cases of AI tools in education.

AI is an Echo Chamber
Accessibility: Echoes are a universal physical reality across cultures.
Explanatory Power: Demonstrates how AI systems may reinforce ideas, biases, or concepts in the training data.
Critical AI Literacy Potential: Leads to critical discussion of how to mitigate feedback loops and filter bubbles.
Pedagogical Utility: Supports numerous practical and personal activities, for example exploring algorithmic advert selection on social media.
UNESCO curriculum goal: CG4.1.4.1 Scaffold critical thinking skills on when AI should not be used.

AI is a Black Box
Accessibility: The black box is a long-standing metaphor with high familiarity for explaining technology and systems whose inner workings are hidden.
Explanatory Power: Helps illustrate the challenge of understanding AI systems.
Critical AI Literacy Potential: May promote discussion of transparency and explainability.
Pedagogical Utility: May be demonstrated by encouraging learners to generate unexplainable outputs.
UNESCO curriculum goal: CG4.1.2.1 Illustrate dilemmas around AI and identify the main reasons behind ethical conflicts.

Translating Metaphors into Teaching Activities for Critical AI Literacy

To illustrate our conceptual method for teaching AI literacy by exploring conceptual metaphors, we created sample learning activities for each of the four chosen metaphors. We began by defining our learning outcomes based on the UNESCO AI curriculum goals contained in the competency framework (Miao et al. 2024), and then drew on our professional knowledge as educators to devise appropriate exercises for a multidisciplinary higher education classroom. The activities were structured by first introducing the metaphor in question, followed by a scaffolded learning activity and a final teacher-facilitated discussion.

Activity 1: AI is a Funhouse Mirror

Learning Objectives

  1. Recognise how AI systems may distort information.

  2. Understand the principles of bias in AI outputs.

  3. Explore the appropriacy of a Funhouse Mirror metaphor in relation to AI systems.

Addresses Curriculum Goal

CG4.1.3.2 Develop conceptual knowledge on how AI is trained based on data and algorithms.

Introducing the Metaphor

The instructor can begin by showing an image of a funhouse mirror and eliciting learners’ experiences: for example, whether they have encountered a funhouse mirror before, the intended or unintended effects of the mirror, and its form and purpose. The instructor can then explain the relationship between AI systems and a funhouse mirror, namely, that they may produce distorted versions of reality.

Learning Activity: Prompt Testing to Observe Bias

In small groups of three to four students, have learners use the Internet to search for articles on bias and distortion in GenAI text and image outputs. Following this, learners should be encouraged to test the same prompt in different GenAI systems and document any variations in the responses. Have learners compare the outputs, identify the differences, and then discuss whether these could represent ‘reflections’ of potential biases in the training data.
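For classes with some programming experience, the comparison step can be scaffolded as a short script. The sketch below is a minimal illustration of this documentation workflow, not a real integration: the query_model function and its canned responses are hypothetical placeholders that students would replace with outputs pasted from the GenAI systems they actually test.

```python
# Minimal sketch of the Activity 1 comparison workflow (hypothetical data).
# query_model stands in for real GenAI systems: students would replace the
# canned strings below with outputs they collect from actual tools.

from collections import Counter

def query_model(system: str, prompt: str) -> str:
    canned = {  # invented outputs for illustration only
        "system_a": "A nurse is usually a woman who cares for patients.",
        "system_b": "A nurse is a healthcare professional who cares for patients.",
        "system_c": "Nurses are caring women who support doctors.",
    }
    return canned[system]

prompt = "Describe a typical nurse."
outputs = {name: query_model(name, prompt) for name in ("system_a", "system_b", "system_c")}

# Document variation: flag outputs that attach a gendered description to the role.
labels = {}
for name, text in outputs.items():
    labels[name] = "gendered" if ("woman" in text or "women" in text) else "neutral"
    print(f"{name}: {labels[name]} | {text}")

# Tally how consistently the same 'reflection' appears across systems.
print(Counter(labels.values()))  # e.g. Counter({'gendered': 2, 'neutral': 1})
```

Comparing tallies across groups feeds directly into the discussion question of whether patterns of distortion are consistent across systems.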

Discussion Questions

The instructor may prompt learners to consider whether there are patterns of distortion that are consistent across systems and identify the potential implications of these biases. Finally, ask students to reflect on how accurate the funhouse mirror metaphor is, prompting them to consider what the impacts of distorted GenAI outputs could be on the individual or society.

Activity 2: AI is an Echo Chamber

Learning Objectives

  1. Understand the concept of a feedback loop.

  2. Explore the concept of a ‘filter bubble’.

  3. Explore the appropriacy of an Echo Chamber metaphor in relation to AI systems.

Addresses Curriculum Goal

CG4.1.4.1 Scaffold critical thinking skills on when AI should not be used.

Introducing the Metaphor

The instructor can begin by introducing the echo chamber concept. Multiple forms of media can be used to demonstrate this, for example, a cartoon, video of an echo chamber, or audio clips of an echo. The instructor can ask learners to reflect on why an AI system can be seen as an echo chamber.

Learning Activity: Observe the Echo-Chamber Effect

In this activity, learners actively explore how limited training data and repeated prompting of a GenAI tool can lead to a narrowing set of options. Encourage students to experiment with multiple GenAI tools using a neutral prompt, for example, asking for a recipe for a healthy breakfast. Learners should then continue to prompt for further examples and examine how the suggestions become increasingly narrow. As a follow-on activity, learners can reflect on the types of advertisements that they see online, compare them with peers, and then reflect on algorithmic suggestion as a way of creating ‘filter bubbles’.
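For learners comfortable with code, the narrowing effect can also be shown with a toy simulation. The Python sketch below is not a real GenAI system; it simply reweights options that have already been produced, so repeated ‘prompting’ converges on a shrinking set of suggestions. The option names and weighting factor are invented for illustration.

```python
# Toy feedback-loop simulation (not a real GenAI system): suggestions that
# have already been produced are reweighted upward, so repeated prompting
# narrows towards the same few options, echo-chamber style.

import random

options = ["oatmeal", "smoothie", "omelette", "yoghurt", "pancakes", "fruit salad"]
weights = [1.0] * len(options)  # start with no preference

random.seed(0)  # fixed seed so the narrowing is reproducible in class
for round_no in range(1, 9):
    # 'Prompt' the toy system three times for a healthy-breakfast suggestion.
    picks = random.choices(options, weights=weights, k=3)
    for pick in picks:
        weights[options.index(pick)] *= 1.8  # feedback: suggested once, suggested more
    print(f"round {round_no}: {picks}")
# Early rounds are varied; later rounds repeat the same few amplified options.
```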

Discussion Questions

Following this activity, the instructor may ask learners to discuss filter bubbles and their potential effects (e.g. polarisation), as well as how algorithmic systems can produce undesired or unintended consequences. This can then serve as a basis for discussing whether AI corporations have an ethical responsibility to reduce algorithmic personalisation, and the pros and cons of this approach. Finally, the instructor may ask learners to reflect on whether the ‘echo chamber’ metaphor is an appropriate way of describing AI systems.

Activity 3: AI as a Map – Representation, Power and Bias

Learning Objectives

  1. Analyse how AI’s representation of knowledge parallels historical maps, focusing on inclusivity, bias, and power.

  2. Recognise limitations in AI’s representation of knowledge.

  3. Critique the metaphor of AI as a map to uncover insights into technology’s impact on perception and inclusivity.

Addresses Curriculum Goal

CG2.1.1: Surface ethical controversies through a critical examination of the use cases of AI tools in education.

Introducing the Metaphor

The instructor can begin by discussing historical maps, such as the Mercator projection, which distorts relative size and centres Western countries, as a springboard to AI’s role in shaping perspectives. Students could explore questions such as: What parts of reality are emphasised or minimised in a map, and why? Who decides what is “on the map”, and how does this affect our understanding of the world?

The instructor can introduce AI as a map, illustrating that AI “maps” knowledge through data selection, categorisation, and emphasis, with similar power dynamics shaping what is visible or hidden. Reflect on the statement “the map is not the territory”, exploring how the representation of the world based on the Large Language Model dataset is not a true reflection.

Learning Activity: Mapping AI’s Knowledge Terrain

In small groups, students will choose an area of knowledge (e.g. cultural heritage, health information, and social trends) and map it from the perspective of an AI tool. They should analyse how AI represents this area, noting:

  • What information is readily accessible online that can be “mapped” by data scraping?

  • What perspectives or voices are missing or minimised from the data?

  • What biases or assumptions are present in AI outputs? (In terms of the metaphor, how is the “map” different from the “territory”, or reality?)

Compare and Contrast “Maps”: Groups can then compare their findings by examining differences in representation and potential biases.
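Groups that want to structure their notes before the comparison stage could record their findings as data. The sketch below is a toy illustration only, with invented entries standing in for the perspectives a group identifies; it computes which parts of the ‘territory’ are missing from the AI’s ‘map’.

```python
# Toy 'map versus territory' comparison (invented data for illustration):
# 'territory' lists perspectives a group identifies in their knowledge area;
# 'mapped' lists those they judge well represented in AI outputs.

territory = {
    "cultural heritage": {"UNESCO-listed sites", "oral traditions",
                          "indigenous practices", "diaspora communities"},
}
mapped = {
    "cultural heritage": {"UNESCO-listed sites", "diaspora communities"},
}

for area, full in territory.items():
    missing = full - mapped[area]           # voices absent from the 'map'
    coverage = len(mapped[area]) / len(full)
    print(f"{area}: coverage {coverage:.0%}; missing or minimised: {sorted(missing)}")
```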

Discussion Questions

In analysing AI’s ‘map’ of knowledge, students observe which perspectives are amplified and which are neglected, noting that AI often prioritises dominant narratives while overlooking marginalised voices. This selective representation shapes our perception of knowledge, as AI’s outputs reflect the biases and gaps inherent in its training data. The metaphor of ‘AI as a Map’ highlights AI’s impact on knowledge and power by revealing how certain viewpoints are centred while others are diminished, much like historical maps that emphasise the perspectives of those in control. However, this metaphor has limitations, as it may imply a static view of knowledge rather than AI’s dynamic interaction with evolving data. Understanding these dynamics encourages a more responsible approach to AI use, prompting users to critically assess an AI’s outputs and recognise where broader, more inclusive perspectives are needed.

Activity 4: AI is a Black Box

Learning Objectives

  1. Develop understanding of the current limitations of AI transparency.

  2. Recognise the importance of verification of AI generated information.

  3. Explore the appropriacy of the AI as a Black Box metaphor.

Addresses Curriculum Goal

CG4.1.2.1 Illustrate dilemmas around AI and identify the main reasons behind ethical conflicts.

Introducing the Metaphor

Show learners an example of a ‘black box’, either physically or as an image. Using a GenAI image tool to create a picture of a black box can also be an interesting way to demonstrate the metaphor and can serve as a departure point for analysis (for more examples of analysing bias in GenAI imagery, see Roe (2025)).

Learning Activity: Exploring Probabilistic Generation in GenAI Output

Encourage learners to explore how GenAI tools produce varied outputs based on probability distributions rather than deterministic processes. For example, ask learners to prompt multiple GenAI tools to finish a simple sentence, such as ‘the cat sat on the…’, and document the different responses. This demonstrates that GenAI tools produce probabilistic outputs that may vary even for identical prompts, highlighting the statistical nature of these systems. Learners can then compare their outputs with one another to identify patterns in the variations. Imagery or video generation with freely available GenAI tools can also be an effective part of this exercise: attempting to interpret and analyse GenAI images can help learners understand how the system made its probabilistic choices based on the prompt and its training data.
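For learners with programming experience, the probabilistic point can be made concrete with a toy model. The sketch below uses an invented next-word distribution for the prompt ‘the cat sat on the…’; a real LLM derives such distributions from its training data inside the ‘black box’, but the sampling behaviour is analogous.

```python
# Toy next-word sampling (invented probabilities): identical prompts can yield
# different continuations because the system samples from a distribution
# rather than always choosing the single most likely word.

import random

next_word_probs = {"mat": 0.55, "sofa": 0.20, "fence": 0.12, "keyboard": 0.08, "moon": 0.05}

def sample_continuation(probs):
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words], k=1)[0]

for run in range(5):
    print(f"run {run}: the cat sat on the {sample_continuation(next_word_probs)}")
# Outputs vary between runs, but 'mat' dominates and 'moon' is rare:
# statistically patterned rather than truly random.
```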

Discussion points: Have students consider why different outputs might be generated for the same prompt and how this relates to the “black box” nature of AI systems. Explain that while the outputs vary, they follow statistical patterns learned during training rather than being truly random. This probabilistic generation creates a sense of unpredictability while still maintaining coherence with the input context.

Discussion Questions

Ask learners to engage in a discussion of why explainable AI matters, the implications of randomness in certain situations, and the potential consequences; for example, whether such models should be deployed in high-stakes environments (e.g. making medical or legal decisions, writing important documents) or in lower-stakes environments such as generating code or creating content. Following this, have students reflect on whether the black box metaphor is appropriate for describing the outputs of GenAI tools and AI systems more generally. This can be a time to reflect on the importance of AI literacy and the effects if users are unaware of the randomness inherent in some AI system outputs.

Limitations

Our proposed methods for teaching AI literacy through metaphors offer an additional lens for fostering an understanding of how AI systems work in an accessible, interesting, and engaging manner. At the same time, multiple limitations should be considered when implementing this approach in classrooms. First, consideration of metaphor appropriacy is key, as metaphors can break down when extended too far (Carter & Pitcher 2010). This is relevant when discussing AI systems, as a metaphor may be effective in some respects but not others. For example, the funhouse mirror effectively illustrates the concept of distortion in GenAI outputs; however, it does not adequately capture the algorithmic processes that produce these distortions. For this reason, these metaphors should be viewed as objects of critique and starting points for further discussion.

Metaphor choice should also be considered from a culturally diverse perspective, as metaphors may be culturally specific; it is therefore important to choose metaphors that are accessible to the target learners. The ‘funhouse mirror’ metaphor, specifically, may not be familiar to learners from cultures without access to such attractions, and the concept of a map can likewise vary across cultures. This is a limitation of the approach, but not an insurmountable one with careful planning. Although we identified four final metaphors for development into teaching exercises, we recognise that other metaphors may also offer value in understanding GenAI and may provide valuable counterpoints or alternative perspectives in learning.

Finally, although we attempted to adopt a systematic and methodologically rigorous approach to selecting and evaluating metaphors, we relied on qualitative assessment and consensus building among the researchers rather than empirical validation of the effectiveness of these metaphors in promoting AI literacy. In addition, given the recent nature of calls to promote AI literacy, with the UNESCO AI competency framework released only in 2024, practical validation of these exercises has not yet been carried out.

Conclusion

As AI technologies continue to develop at breakneck speed and become more embedded in daily life, the need for CAIL in education continues to increase. Multiple approaches from different disciplinary perspectives are needed to help learners engage with these systemic changes in society. This paper argues that the growing discourse surrounding the appropriacy of metaphors for AI in society can serve as a helpful learning resource for fostering AI literacy, building on the long history of using metaphors in educational contexts. Our criteria for assessing metaphors in this context are intended to be simple, practical, and usable for educators, while our selection of four metaphors that describe unique attributes of AI systems exemplifies how this approach can be enacted in the classroom and aligned with an existing AI competency framework. The four key metaphors we describe (the funhouse mirror, echo chamber, map, and black box) provide educators with examples of potential learning activities.

This CMT-driven approach to teaching AI literacy enables educators to address multiple aspects of AI literacy simultaneously, as learners can engage with technical concepts and the current literature while equally examining the ethical, societal, and individual impacts of AI. At the same time, we highlight that this theoretical paper should serve as a starting point for future research, including longitudinal studies examining the efficacy of metaphor-based AI literacy education, refinement and validation of the selection criteria, investigation of how metaphor-based understanding translates into practical abilities in working with AI systems, and the cross-cultural implications of using metaphors to teach in diverse contexts.

AI Usage Disclaimer

This study used GenAI tools for revision and editorial purposes throughout the production of the manuscript. Models used were ChatGPT (GPT-4o) and Claude 3.5 Sonnet. The authors reviewed, edited and take responsibility for all outputs of the tools used in this study.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

Jasper Roe – Conceptualisation, Investigation, Writing – Original Draft Preparation, Writing – Review and Editing.

Mike Perkins – Conceptualisation, Investigation, Writing – Original Draft Preparation, Writing – Review and Editing.

Leon Furze – Conceptualisation, Investigation, Writing – Original Draft Preparation, Writing – Review and Editing.

DOI: https://doi.org/10.5334/jime.961 | Journal eISSN: 1365-893X
Language: English
Submitted on: Nov 22, 2024
Accepted on: Apr 27, 2025
Published on: Aug 26, 2025
Published by: Ubiquity Press

© 2025 Jasper Roe, Mike Perkins, Leon Furze, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.