
A Metaphor for Rethinking Artificial Intelligence in/and Education


1. Introduction

In Lewis Carroll’s “Through the Looking-Glass”, Alice steps into a world that mirrors her own—only to discover that familiar truths are upended and comforting assumptions unravel. The metaphor of the “looking glass” invites us to consider the strange yet revealing reflections that technologies collectively referred to as “Artificial Intelligence” (AI) offer when applied in educational settings. Much like Alice’s journey, where mirrors distort and exaggerate, these technologies provide a reflection of our dominant values, systemic inequities, and entrenched biases, often exposing uncomfortable truths we have yet to confront. This journey is not one of fantasy, however; it is a tangible exploration of our collective values, revealing how technology can amplify injustice, reshape notions of fairness, and redefine what we consider merit.

The term “AI” is less a technical category than a rhetorical chameleon—adapting to evoke wonder, stoke fear, or legitimize corporate interests. Its meaning has always been unstable: from Cold War-era dreams of machine “reasoning” to today’s marketing of stochastic parrots as “intelligent” (Bender et al. 2021). This slipperiness is not accidental; it obscures the mundane realities of pattern-matching systems—systems that rely on data drawn from the past, often reinforcing historical patterns and existing social norms—while amplifying their perceived authority. In education, this chameleonic term legitimizes technologies as “innovative” while masking their role in outsourcing pedagogical judgment—from grading essays to policing student attention.

Under the ambiguous umbrella of “AI” fall a range of computational software and systems that use pattern recognition, statistical inference, and optimization to simulate tasks such as language processing, prediction, and decision-making. These include, but are not limited to, Large Language Models (LLMs) for content generation, machine vision systems for exam proctoring and behavior monitoring, adaptive learning platforms for performance-based content delivery, recommender systems for personalized material suggestions, automated grading systems using predefined rubrics, and predictive analytics dashboards for identifying at-risk students. Each of these technologies offers a distinct reflection of educational priorities—some reinforcing dominant norms, others distorting the values they claim to uphold. These technologies are not merely instructional aids—they are reflective surfaces that reproduce the ideologies and inequalities embedded in the systems that design and deploy them.

Extending Vallor’s (2024) mirror metaphor, this paper argues that so-called AI technologies do not merely reflect educational systems—they actively shape them, reinforcing dominant logics under the guise of neutrality. Unlike the “coach” metaphor (which implies agency improvement) or the “tool” metaphor (which suggests neutral utility), the mirror exposes how technologies freeze existing power relations into infrastructure. Ultimately, the mirror compels us to ask: What future do we tacitly normalize through the integration of these technologies? More crucially, what possibilities for education do we surrender in the process?

2. Metaphors in education and for (educational) technology

Metaphors are not simply figures of speech—they are conceptual frames that shape how we perceive, understand, and act upon the world. Lakoff and Johnson’s seminal work “Metaphors we live by” (Lakoff & Johnson 1980) illustrates how metaphors influence not only language but also thought and action, embedding themselves deeply into our cognitive and cultural frameworks. Along the same lines, Weller suggests that there are at least two significant elements of metaphors: “they are fundamental in shaping our interactions with the world” and “they can be used to understand a new domain” (Weller 2022: 7). In a similar vein, just as the design of an object shapes the interaction between the object and the user (Norman 1988), so a metaphor shapes our understanding of an entity (object, process, person, or environment) and its functions.

These insights become especially salient in education, where metaphors not only describe but actively shape pedagogical relationships—between teachers, students, and technologies. For example, the “students as vessels” metaphor reflects a teacher-centered approach, framing learning as passive knowledge transfer—where teachers “fill” students rather than engage them. Conversely, the “students as explorers” metaphor positions learners as active agents of discovery, framing teachers as guides rather than authorities—a perspective aligned with constructivist and humanistic approaches.

In the context of technology, metaphors serve as powerful interpretive frameworks that shape how students, educators, and policymakers understand its roles and implications. Yet, they also risk obscuring the nuanced realities of technology. To some extent, this is inevitable. As scholars such as Sfard (1998) have noted, metaphors also carry limitations, often constraining perspectives and reinforcing reductive or problematic views. For instance, positioning digital technologies as “tools” or “tutors” can oversimplify their nature, downplaying critical issues such as ethical dilemmas and inherent biases. If they are perceived as “tools”, one might assume they are value-free, ignoring the human choices and biases embedded in the algorithms and data selection. Similarly, framing them as “tutors”, “coaches”, “assistants”, or “partners” risks anthropomorphizing the technology, suggesting an agency or empathy it does not possess. What is more, these metaphors often frame technology as subordinate or benign, downplaying its embeddedness in power relations, labor practices, and institutional agendas. Lastly, it is important to recognize that “metaphors are so embedded in our language and models of thinking, we often do not even recognize something as a metaphor” (Weller 2022: 6). For instance, even the term “Artificial Intelligence” is a metaphor—one that borrows from human cognition to imply capabilities machines do not possess. This raises a critical question: If our foundational language is metaphorical, how might alternative metaphors, like the “mirror”, reframe our understanding? This topic will be analyzed further in the next sections.

It is important to emphasize that the ethical stakes of these metaphorical choices extend far beyond semantics—they shape funding priorities, policy decisions, and institutional practices. When technologies are framed as “efficiency tools” or “intelligent tutors”, policymakers may invest in automated grading systems over teacher professional development, or prioritize surveillance tools under the guise of “personalization”. Such metaphors naturalize technical solutions for deeply human challenges, often diverting resources toward corporate EdTech platforms while obscuring the labor, bias, and power relations embedded within them (Crawford 2021; Williamson 2024a). These oversights highlight the need for a critical and reflective approach to metaphors used to describe particular technologies, ensuring they are used not to obscure but to illuminate the multifaceted implications of integrating these technologies into education.

3. Two critical perspectives on metaphors

The literature offers numerous classifications of metaphors in education (Botha 2009; Chen 2003; Lukeš 2019; Mason 2018; Nardi & O’Day 1999). From these categorizations, two elements are particularly significant for the discussion in this paper.

The first is the metaphor of ecology by Nardi and O’Day (1999), which positions technology as part of a complex ecosystem, interacting with other elements in the environment. Given that different metaphors lead to different perceptions of technology’s power and our relationship to it, viewing technology as a tool implies human control. On the other hand, viewing “technology as part of an ecology, surrounded by a dense network of relationships in local environments” (Nardi & O’Day 1999: 27) suggests a more complex and intertwined relationship between humans and technology. This perspective can better support our understanding of how technology influences and is influenced by its surroundings, and it can lead to a focus on sustainability, balance, and the long-term consequences of technological development.

The second posits that metaphors can be used in three broad ways: a) metaphor as invitation; b) metaphor as instrument; and c) metaphor as catalyst (Lukeš 2019). In the first, the metaphor is used to introduce a complex concept through a familiar one. While this can be helpful for initial engagement, it is crucial to move beyond this surface-level understanding. When a metaphor is used as an instrument, a deeper dive into both the target concept and the metaphor is required. By exploring the similarities and differences, learners can develop a more nuanced understanding. Lastly, when metaphors are used as catalysts, they transcend explanation or analysis to inspire transformative thinking. For instance, framing the so-called AI technologies as mirrors invites deep self-reflection about the ethical and pedagogical implications of integrating them into education. It compels stakeholders to question not only how these technologies can enhance education but also what they reveal about existing practices and values. This dynamic use of metaphor can foster critical dialogue, guiding both educators and policymakers toward more thoughtful and sustainable adoption of technology.

4. Rethinking the nature of “AI”: Moving beyond “Artificial” and “Intelligent”

The conventional framing of so-called AI as an autonomous, intelligent entity has long shaped public and professional perceptions of these technologies. The term “artificial” suggests a replication of human abilities, while “intelligent” implies cognitive capacities akin to human thought. This rhetorical framing lends computational systems an aura of agency they do not possess, fueling widespread misconceptions. In both media and academia, these systems are described as “judging”, “predicting”, “deciding”, or even “thinking”—obscuring the fact that they operate through statistical inference and pattern recognition, not human reasoning. Such anthropomorphic language fosters unrealistic expectations, suggesting that computational software can independently innovate, navigate complex human contexts, or replace the relational and affective dimensions of education.

This problematic framing is compounded by descriptions invoking “magic” or “alchemy” (Campolo & Crawford 2020; Diamant 2024) that deepen the mystique and eclipse the human labor and infrastructures that make these systems possible. Meanwhile, the fervent pursuit of Artificial Superintelligence (ASI) reflects a techno-utopian narrative bordering on theomorphic reverence, casting ASI as a redemptive force or even a divine surrogate (Altman 2024). Tellingly, public misconceptions tend either to overestimate these systems’ capabilities or to misunderstand their constructed nature.

Research by the FrameWorks Institute (Conklin et al. 2021) highlights how public misconceptions often frame “AI” as an infallible “divination” tool, obscuring its limitations and its role in reproducing systemic inequities. This mystification depends on erasure as it hides the extractive reality: machine outputs require human data labor, environmental resources, and cultural bias laundering (Crawford 2021). These layered metaphors, both anthropomorphic and theomorphic, deepen corporate narratives, amplifying quasi-spiritual dependence while sidelining critical discussions about power and governance.

Against this backdrop, it is crucial to recall that computational systems operate on the basis of human-defined algorithms, parameters, and curated data. Their outputs reflect the limitations of the data and the biases of their designers, rather than an autonomous or universally “intelligent” capacity.
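
To make this concrete, consider a deliberately trivial sketch in Python. Everything in it is hypothetical (the corpus, the occupations, the associations), and it stands in for no real product; it simply shows that a purely statistical “predictor” can do nothing more than echo the associations already present in whatever data it was given.

```python
from collections import Counter, defaultdict

# A minimal, hypothetical sketch: a purely statistical "predictor" that can
# only echo associations already present in its tiny training corpus.
sentences = [
    "the nurse said she was ready",
    "the nurse said she was tired",
    "the engineer said he was ready",
    "the engineer said he was late",
]

pronouns = {"she", "he"}
cooccurrence = defaultdict(Counter)

# "Training" is nothing more than counting which pronoun co-occurs with
# which occupation in the past data.
for sentence in sentences:
    words = sentence.split()
    for occupation in ("nurse", "engineer"):
        if occupation in words:
            for word in words:
                if word in pronouns:
                    cooccurrence[occupation][word] += 1

def predict_pronoun(occupation: str) -> str:
    """Return the pronoun most frequently seen with this occupation."""
    return cooccurrence[occupation].most_common(1)[0][0]

# The output mirrors the corpus, not any act of reasoning or judgment.
print(predict_pronoun("nurse"))     # -> "she"
print(predict_pronoun("engineer"))  # -> "he"
```

However sophisticated the model, the same logic applies at scale: the “prediction” is a statistical reflection of the training data, including whatever social associations that data happens to encode.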

As Crawford (2021) and Tucker (2022) argue, their so-called “intelligence” is less autonomous cognition than it is the scaled product of human labor and data extraction. Selwyn (2022) similarly critiques this performativity as “AI theatre”, wherein inflated claims mask the banal mechanics behind the curtain. In response, many scholars now call for retiring the generic term “AI” altogether, favoring more precise descriptions that avoid mystification (Pretz 2021; Tucker 2022). As Tucker (2022) succinctly observes: “Whatever the merit of the scientific aspirations originally encompassed by the term ‘artificial intelligence,’ it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize”.

These critiques underscore the need to strip away metaphorical hype, not just for terminological accuracy, but to properly recognize these systems as reflective surfaces exposing the ethical, political, and epistemic assumptions embedded within the educational contexts they inhabit. Recognizing these technologies for what they are—complex artefacts reflecting past human decisions and societal structures—allows for more honest and critical engagement. Their outputs inevitably function less like independent thoughts and more like mirrors, reflecting the values, biases, and priorities embedded within their design and deployment. Shifting attention away from the illusion of internal “intelligence” and toward the societal forces they mirror is crucial for critically evaluating their role in education—and for imagining more just technological futures.

5. Reframing “AI” in education: The mirror metaphor

The mirror metaphor fundamentally reorients how we understand so-called “AI” technologies, rejecting notions of autonomous intelligence to frame them instead as reflective surfaces that expose embedded values, biases and power structures. This perspective aligns with critiques showing how technologies amplify inequality (Noble 2018), magnify surveillance capitalism (Zuboff 2019), and disproportionately harm vulnerable populations (Crawford 2021)—demonstrating that these systems reflect and entrench societal injustices rather than transcend them.

Within education, the metaphor has profound implications. It challenges educators and policymakers to ask: What do these technologies reveal about our pedagogical models? Are these models equitable, inclusive, and aligned with the diverse needs of learners—or do they perpetuate outdated and narrow definitions of teaching, learning, and achievement? It also invites critical scrutiny of assessment practices, questioning whether technological tools fairly evaluate student performance or amplify biases inherent in traditional metrics like standardized testing.

This extends to deeper constructs of educational value, such as notions of merit and failure. If digital technologies mirror systemic inequalities, such as disparities in access to educational resources or culturally biased measures of intelligence, then we must reconsider how we define and reward success. Are we unintentionally using technologies to reinforce the status quo, privileging certain groups while marginalizing others, or can we harness these tools to challenge and reshape systemic inequities?

Beyond pedagogy and assessment, the mirror metaphor prompts us to question whether the integration of digital technologies genuinely fosters learning and growth—or whether it serves less student-centered priorities such as administrative efficiency, compliance, and institutional optimization. By illuminating these tensions, the metaphor compels a deeper evaluation of the values we embed in our educational technologies.

This metaphorical approach resonates with the ecological view of technology (Nardi & O’Day 1999), which sees digital tools not as isolated innovations but as elements deeply intertwined with broader educational ecosystems. Technologies interact dynamically with pedagogy, institutional priorities, and societal structures, shaping and being shaped by them. But where ecology emphasizes interdependence, the mirror demands that we confront what these interdependencies reveal about our priorities, much as polluted water reflects environmental neglect.

Thus, the mirror metaphor serves as both a critique and a call to action. It exposes the illusion of so-called “AI” as an independent transformative force, urging us instead to recognize these technologies as reflections—often distorted—of the priorities, assumptions, and power dynamics of the societies that create and deploy them.

By fostering critical reflection, the mirror metaphor lays essential groundwork for reimagining education toward more ethical, inclusive, and human-centered futures.

The mirror metaphor ultimately compels us to confront an uncomfortable question: When we do not like what we see in technology’s reflection of education, its biases, inequities, and misplaced priorities, will we have the courage to change not just the mirror, but the system it reveals?

To fully grasp the implications of viewing digital technologies as mirrors, we must analyze the properties of this reflection more closely. The following analysis proceeds in four parts: examining the limitations of what the mirror shows, analyzing the inherent distortions and biases within the reflection itself, exploring how policy and implementation choices further shape or mishandle this reflection, and considering how the mirror can catalyze critical pedagogical practice.

Mirror, mirror: What don’t you see?

Just as a mirror reflects a physical image, computational systems reflect quantitative data on student performance. They analyze test scores and homework submissions to identify patterns and trends. But these ostensibly clean metrics come at a cost: like a mirror that captures only an outline but erases the nuances of character, they strip away much of what makes learning human.

While computational systems proficiently capture measurable products of learning, they cannot grasp its invisible yet vital dimensions: the spark of curiosity, the resilience cultivated through failure, the deep understanding forged through collaboration, or the empathy developed in human relationships. They struggle to reflect student creativity, as algorithms trained on past data are ill-equipped to recognize true novelty or divergent thinking that defies established patterns. Crucially absent is the reflection of the learner’s emotional state—the curiosity fueling inquiry, the anxiety hindering performance, or the frustration signalling a learning bottleneck. Similarly, the richness of social interaction in collaborative learning—encompassing empathy, negotiation, and shared problem-solving—is reduced to shallow proxies that flatten the complexity of genuine collaboration.

This surface-level reflection is not a neutral limitation; it reproduces and intensifies education’s broader tendency toward metric capture—the reduction of complex human development to easily quantifiable indicators (Selwyn & Gašević 2020; Williamson, Bayne & Shay 2020). In doing so, these technologies reinforce reductive visions of learning and success, enacting what Biesta (2010) terms “learnification”: the problematic reduction of rich human development to simplistic data points.

If left unexamined, this shallow mirroring risks narrowing educational aspirations, fostering environments where creativity, critical inquiry, and holistic development are sidelined in favor of producing data-friendly outcomes. Over time, it risks reconfiguring education into a system optimized for compliance and output, rather than for cultivating critical, empathetic, and transformative citizens—pressures that are further exacerbated by broader audit cultures and accountability regimes.

Recognizing these limitations is crucial. Without careful reflection, we risk allowing technologies to redefine educational values by default—valorizing efficiency, surveillance, and control over exploration, diversity, and human flourishing.

Not the fairest of them all: Distortion in the mirror

Much as the fidelity of any mirror depends on the quality of its materials and the artisan’s skill in grinding and polishing the glass, so the capacity of computational systems for equitable reflection is fundamentally constrained by the datasets they are based upon and the algorithmic logics guiding their design (Holstein et al. 2019). Crucially, these distortions do not originate from the computational systems themselves, which process inputs with mechanical indifference, but from the cultural fingerprints embedded in their construction (Acerbi & Stubbersfield 2023). In this sense, bias is not merely an artefact of flawed machine architectures, but a deeper cultural inheritance from the societies that create and deploy these technologies. The distortion, therefore, is less an inherent flaw of the “mirror” itself and more a faithful, albeit uncomfortable, reflection of historical inequities, systemic prejudices, and pervasive cultural assumptions.

Thus, when facial recognition models systematically misidentify minority and non-white individuals (Leslie 2020; Wehrli et al. 2022) or when hiring algorithms discriminate against women and Black applicants (Hofmann et al. 2024; Wan et al. 2023; Wilson & Caliskan 2024), we are not witnessing isolated technological malfunctions, but rather the algorithmic reproduction of societal injustices.

Similarly, in educational contexts, predictive models designed to identify “at-risk” students often display gender and racial biases (Anderson, Boodhwani & Baker 2019; Gándara et al. 2024; Lee & Kizilcec 2020), while writing detection software disproportionately flags texts produced by non-native English speakers (Liang et al. 2023). These cases underscore that algorithmic harm is rarely random: it systematically reflects and reproduces existing structures of privilege and exclusion.
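
The mechanism behind such disparities can be illustrated with a deliberately simplified sketch; the records, groups, and scoring rule below are entirely hypothetical and are not drawn from any of the systems cited above. The point is only that a predictor fitted to inequitable historical records re-presents that inequity as a seemingly neutral “risk” score.

```python
from collections import defaultdict

# Minimal sketch with synthetic, hypothetical records (no real dataset or
# institution): a naive "at-risk" predictor that scores a student by the
# historical flag rate of their demographic group.
historical_records = [
    # (group, prior_gpa, was_flagged_at_risk)
    ("A", 3.1, False), ("A", 2.9, False), ("A", 2.4, True),  ("A", 3.3, False),
    ("B", 3.1, True),  ("B", 2.9, True),  ("B", 2.4, True),  ("B", 3.3, False),
]

totals = defaultdict(int)
flags = defaultdict(int)
for group, _gpa, flagged in historical_records:
    totals[group] += 1
    flags[group] += int(flagged)

def risk_score(group: str) -> float:
    """Predicted 'risk' is simply the group's historical flag rate."""
    return flags[group] / totals[group]

# Two students with identical grades receive very different risk labels,
# purely because the mirror reflects how their groups were treated before.
print(risk_score("A"))  # 0.25
print(risk_score("B"))  # 0.75
```

Real systems are of course far more elaborate, but the underlying move of projecting past patterns forward as “objective” predictions is the same, which is why technical fixes alone cannot resolve the distortions discussed below.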

As these examples illustrate, the effects of these technologies are “relational in nature” (Selwyn 2022: 624), empowering some learners while disadvantaging others. Much like a mirror reflects clearly the person standing directly in front of it but distorts those farther away, educational technologies tend to better serve students who already align with their underlying assumptions and data, while marginalizing those who deviate from them.

Recognizing these distortions, or rather, these accurate reflections of systemic inequality, is crucial. It reminds us that addressing bias cannot be reduced to technical fixes or better datasets alone; rather, it demands a deeper reckoning with the cultural and institutional values that are inscribed into the technologies we build. Without such critical engagement, attempts to “de-bias” educational technologies risk merely polishing the mirror—leaving the structures it reflects fundamentally unchanged.

Mishandling the mirror: Distortion in policy and implementation

While technical biases originate in the mirror’s glass, the most damaging distortions emerge from how institutions position it. Like a museum curator angling a mirror to showcase only prized artworks while leaving marginalized pieces in shadow, policymakers selectively amplify certain reflections through implementation choices.

Thus, recognizing the reflective nature of digital technologies is merely one step. The real challenge lies in how governments and institutions choose to handle that mirror—what they prioritize, what they polish, and what distortions they ignore. Too often, policies and implementation strategies misinterpret or mishandle the reflection, reinforcing rather than mitigating systemic inequities.

This distortion becomes acute when a “police-catch-punish approach” is adopted with regard to the use of LLMs in relation to plagiarism (Kramm & McKenna 2023; Weiner, Lake & Rosner 2024). In such cases, the reflection being acted upon is shaped by fear and a desire for control, leading to policies that prioritize prohibition and detection over fostering critical understanding, ethical engagement, and the constructive integration of digital technologies as learning aids. This mishandling breeds suspicion rather than encouraging responsible innovation.

Likewise, when administrators are tempted to deploy software solutions primarily as cost-saving replacements for educators, they are selectively focusing on a narrow reflection of potential efficiency gains. This handling dangerously ignores the vital reflections of pedagogical nuance, mentorship, faculty concerns about job displacement, and the profound ethical considerations surrounding the mass collection of confidential student and faculty data needed to power such systems (Doğan, Celik & Arslan 2025). The mirror, in this case, is held at an angle that privileges institutional austerity and managerial convenience, rather than educational care or social justice.

Similarly, when universities deploy predictive analytics that explicitly use race as a “high-impact factor” to determine student success (Feathers 2021), they do not just reflect systemic inequities—they cement them into algorithmic policy. Like a mirror angled to magnify stereotypes, these systems codify racial disparities as “objective” risk factors, directing fewer resources to the very students they label as likely to fail.

These examples reflect the dynamics Andrejevic (2019) critiques, where automation serves not liberation, but rather control. Decisions to deploy plagiarism detectors punitively or replace staff with machines represent a political “offloading” (Andrejevic 2019) of complex educational responsibilities onto systems optimized for surveillance, efficiency, or cost-saving. Framed as a technical necessity, this choice masks institutional priorities and obscures the crucial question of who gains and who loses. Unlike industrial automation displacing manual labor, information-age automation seeks “to pre-empt agency, spontaneity, and risk: to map out possible futures before they happen so objectionable ones can be foreclosed and the desirable ones selected” (Andrejevic 2019: 9).

In education, this manifests as sidestepping pedagogical dialogue in favor of automated detection, or replacing nuanced human interaction with programmed responses. The result is often social de-skilling, steering educators and students toward simplified binaries (human/machine, cheat/pass) dictated by the technology’s limits, diminishing the capacity for critical engagement with complexity.

Ultimately, mishandling the mirror means failing to ask not only what the system reflects, but whose priorities it serves and who is reflected accurately within it. If policy implementation remains reactive—concerned more with managing outputs than interrogating inputs—it risks enshrining the very inequalities it claims to solve. A more thoughtful and pedagogically grounded engagement with digital technologies is needed—one that does not merely polish the surface of the mirror, but also critically questions its shape, angle, and purpose. What is needed is not just a clearer reflection, but a reimagined mirror—one built for educational care, inclusivity, and justice.

Technology as a catalyst for pedagogical reflection

Viewing computational technologies as a mirror encourages educators to reflect on their own pedagogical practices and assumptions. Like an unforgiving backstage mirror revealing every flaw before a performance, computational systems expose hidden biases in our pedagogical practices—if we dare to look.

If a computational system reveals grading biases, it invites a deeper interrogation of classroom expectations, assessment norms, and the broader educational culture. This mirror-like effect not only highlights gaps in instruction but also prompts educators to evaluate whether their curriculum or teaching strategies may inadvertently favor certain student groups.

When educators recognize that technology mirrors their own practices, they are better positioned to adopt a critical stance toward these tools. This shift moves beyond viewing technology as neutral or inherently beneficial, prompting deeper scrutiny of its role as a complex, socially embedded artefact requiring thoughtful, contextualized integration.

This metaphor demands radical educator agency—not just using “tools”, but interrogating their political dimensions: Who designed this technology? Who benefits from its specifications or limitations? What realities does it render invisible?

By fostering skepticism, the metaphor highlights the need for educators to weigh technology’s potential benefits against its limitations, ensuring its use aligns with pedagogical goals, ethical standards, and the diverse needs of students. This perspective supports more deliberate and responsible applications of technology in education and transforms technology integration from technical training to ethical practice.

6. Educational futures and the way forward

The advent of technologies collectively referred to as “AI” has profoundly influenced education, sparking intense debates over their role: from revolutionizing creativity and personalization (Arenaza 2022; Novy-Marx & Velikov 2024) to amplifying bias, inequity, and shallow learning (Bozkurt et al. 2024; Cormier 2024; Kerr 2024; Williamson 2024b).

Nevertheless, this debate is more than a pedagogical disagreement; like a mirror held up to civilization itself, these technologies expose not just educational possibilities or pitfalls, but the foundational fractures of our society. As we stand before this technological mirror, the question is no longer whether the tool works as intended, but what its reflection reveals: systems that promise “personalization” while standardizing humanity, software that claims to foster equity while deepening historical divides, and reforms that tweak the frame without questioning the image itself. Predictably, institutional responses have focused on managing these contradictions—some imposing restrictive bans, others pursuing integration through digital literacy and ethical guidelines (Standing Committee on Employment, Education and Training 2024).

However, these educational dilemmas pale before the existential challenges we are now facing as a human society: climate collapse, food insecurity, deforestation, accelerating loss of biodiversity, and the urgent need to transition to clean energy systems. These pressures unfold alongside deepening demographic and economic fractures—from aging populations to mass displacement—and have been intensified by recent seismic shifts: the COVID-19 pandemic, a long-overdue racial reckoning, the global resurgence of xenophobic nationalism and far-right extremism, and escalating global conflicts.

At the heart of these challenges lie the systemic influence of capitalism and our prevailing conceptualization of progress. Capitalism, while fostering economic growth, perpetuates the unfair distribution of wealth and power, enabling monopolies and oligarchies that erode democratic governance (Varoufakis 2023). It fuels imperialism, counter-revolutionary wars, and various forms of exploitation—economic, cultural, and environmental (Robra, Pazaitis & Latoufis 2021). Repressive practices against workers, coupled with social alienation, economic inequality, and unemployment, underscore its inherent flaws.

Simultaneously, our current notion of progress—centered on unbounded technological and economic growth—ignores its cascading side effects (Akbulut 2021; Civilization Research Institute 2024). This immature definition of progress often prioritizes short-term gains at the expense of long-term sustainability, harming essential systems for life on Earth. Progress narrowly focused on metrics like economic, technological, and military growth neglects the profound interconnectedness of human and ecological well-being (Alonso et al. 2023; Koch, Buch-Hansen & Fritz 2017; Pereira et al. 2005).

As technology evolves, its ability to shape and transform reality becomes increasingly consequential (Varoufakis 2023; Zuboff 2019). Many of our gravest challenges—climate change, nuclear war, and species extinction—are unintended outcomes of past attempts to solve other challenges. These technical solutions, while addressing immediate concerns, introduced unforeseen and insufficiently mitigated side effects. Paradoxically, these global crises stem not from a failure to achieve goals but from the destructive efficiency with which humanity pursues them (Schmelzer 2022).

These interconnected crises expose the inadequacy of incremental reforms and underscore the need for education systems capable of preparing individuals and communities not just to adapt to change, but to reimagine and shape more just, resilient, and sustainable futures.

Nevertheless, current education systems are ill-equipped to address the interconnected challenges of a rapidly evolving world (Bates 2024). Fundamentally outdated, they are inherited from an industrial-age model characterized by “standardisation and compliance” (Schleicher 2018: 15). Designed for a factory-based economy, these systems prioritize uniformity, rote memorization, and efficiency over quality, while suppressing creativity and critical thinking and eroding student well-being. Under the influence of capitalism, education has increasingly become a commodity, prioritizing marketable skills over humanistic learning, placing a narrow emphasis on “success” measured by easily quantifiable outcomes, and driving reliance on proprietary software controlled by tech giants.

Caught between the Scylla of capitalism and the Charybdis of immature progress, we are compelled to rethink our values and reimagine education. However, institutional reactions—whether through policies, strategic initiatives, or innovative tweaks—are inherently limited because they operate within established frameworks of meaning, dictated by existing societal and institutional goals such as efficiency, optimization, and measurable outcomes (Stone & Scharff 2024). This limits the scope of what can be achieved and fails to question the foundational assumptions underpinning the issues at hand. Consider, for example, the prevalent question of “How do we AI-proof assessment?” This very framing remains rooted in a system that prizes control over transformation. This reactive stance treats symptoms rather than addressing root causes, perpetuating the very challenges it seeks to solve. Instead, what is needed is a deeper response—a collective reimagining of education’s purpose, ethos, and praxis (International Commission on the Futures of Education 2021).

As the mirror metaphor suggests, technology reflects human choices and priorities rather than being an independent force. Therefore, the trajectory we take depends entirely on our willingness to rethink not only our solutions but also the values and practices that define our goals. This ontological resistance—challenging the frameworks of what we consider problems and solutions—demands a more profound shift. It requires asking foundational questions such as “What kind of world would we like to live in?” and “How can higher education contribute to building that world?” and being open to answers that transcend instrumental objectives like job preparation or economic outcomes.

I believe that education needs an expanded vision of its purpose and relationship to society and nature—one that reimagines it “as being for, in, with, and by the world” (Nørgård & Holflod 2024: 2). Education should challenge systems of oppression and inequality, fostering critical awareness and empowering individuals to question and transform societal structures for justice, inclusivity, and equity (Freire 1985; hooks 1994), while shifting from anthropocentric goals to a focus on planetary well-being and ecological interconnectedness (Jickling et al. 2018; Naess 1973).

In practice, educational institutions should advocate for, support, and foster small and local communities as a means of living harmoniously with nature, achieving ecological balance, and addressing the harms caused by centralized systems (Bollier & Helfrich 2019; Mistry 2009; Robra et al. 2023; Transition Movement 2021; Warren 2015). At the same time, they should empower students to become agents of social change by equipping them with the skills to identify and solve local challenges, ask meaningful questions, and embrace lifelong learning (Freire 2000; Gallagher & Savage 2023). This vision calls for pedagogies that emphasize understanding both natural and human-made environments while highlighting the importance of local communities and holistic learning (Jickling et al. 2018; Lave & Wenger 1991; Mukuka 2010; Woolombi Waters 2018). It also requires fostering partnerships between academia and community members to identify problems, design and implement interventions, and assess their outcomes. Such partnerships must simultaneously challenge and dismantle systems of power and privilege that have historically marginalized and excluded certain groups (Centre for Social Justice and Community Action & National Coordinating Centre for Public Engagement 2022; Martinez-Vargas 2022; Shahjahan et al. 2022), and eventually contribute to collaboratively “reinventing social organization, economics, infrastructure, politics, and state power itself” (Bollier & Helfrich 2019: 3). Finally, this vision requires a strong commitment to grassroots collaboration, recognizing communities as co-creators of knowledge rather than passive beneficiaries. Research and pedagogy must actively support campaigns for social, economic, and environmental justice, fostering mutual learning and empowering communities to shape equitable futures (Bollier & Helfrich 2019; Christens, Gupta & Speer 2021; Weil, Reisch & Ohmer 2013).

The mirror metaphor ultimately demands more than reflection—it requires shattering and reblowing the glass itself. Technology reveals the flaws in our current systems: the standardization masquerading as personalization, the surveillance framed as care, the extraction branded as innovation. But the metaphor’s true power lies in its imperative: if we dislike what we see, we must change not just the mirror, but the world it reflects. This means pedagogies that prioritize ecological interdependence over human dominance, assessment that values unmeasurable growth over algorithmic efficiency, and technologies that amplify community wisdom rather than corporate control. The mirror will not clear itself; we must polish it with justice, rehang it with empathy, and continually ask what our deepest aspirations for human flourishing and planetary well-being truly are.

Competing Interests

The author has no competing interests to declare.

DOI: https://doi.org/10.5334/jime.981 | Journal eISSN: 1365-893X
Submitted on: Jan 2, 2025
Accepted on: May 27, 2025
Published on: Aug 26, 2025
Published by: Ubiquity Press

© 2025 Angelos Konstantinidis, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.