Abstract
This article reports on a human~AI intra-action comprising a thought experiment on sustainability, ethics, and curriculum, followed by a tracing of these topics. The authors and ChatGPT (we) conceptualised and performed a thought experiment in intra-action. The process involved prompting ChatGPT to generate a speculative vignette, which was then analysed through agential realist tracing, focusing on how concepts evolved relationally over time, their material affects/effects, and why certain relations materialised while others did not. ChatGPT was provided with the tracing exercise and offered their opinion on the implications for sustainability, ethics and curriculum. This human~AI intra-action contributes to postqualitative research by expanding tracing as an experimental approach, and demonstrates how agencies such as knowledge emerge relationally rather than as fixed entities. While the experiment generated no entirely novel insights into sustainability, ethics, and curriculum, it confirmed AI’s capacity to surface connections, generate alternatives, and prompt reflexivity. However, ChatGPT tended to reproduce dominant, Western-centric discourses and showed limited ability to unsettle hegemonic ways of knowing, doing and thinking in matters of sustainability, ethics and curriculum. The intra-action highlighted the risk of reinforcing dehumanising technologies through uncritical use, while simultaneously revealing human anthropomorphic projections onto AI. We conclude by suggesting a shift in focus from (an obsession with) prediction and control towards exploring relational, indeterminate possibilities and experimenting with new human~AI intra-actions.