In recent decades, digital media have profoundly transformed what it means to read, write and interpret texts inside and outside of school. In many contexts of subject-matter education, learners move between school textbooks, learning platforms and search engines, often within a single task. They encounter texts as hyperlinked, multimodal or algorithmically created artefacts rather than as fixed, linear pages. Research on new and digital literacies has highlighted that literacy is not a stable set of decontextualised skills but a changing repertoire of practices that develops alongside new technologies and social uses of text (Lankshear & Knobel, 2011; Leu et al., 2013). In first, second and foreign language education, this means that reading and writing are increasingly embedded in digital ecologies where learners must locate, evaluate and synthesise information across multiple sources, move flexibly between modes, and critically reflect on how digital infrastructures shape what they can see and say.
Multimodal and digital perspectives on literacy further emphasise that students no longer only write continuous prose on paper. They design slides, dashboards, emails, posts and interactive documents, and they work within learning management systems that structure both tasks and feedback. From a social semiotic view of communication, literacy in this context involves the orchestration of multiple modes—language, image, layout—in ways that are responsive to specific purposes and audiences (Kress, 2010). These developments pose new challenges for subject-matter teaching and learning. Teachers must help learners coordinate language, layout, images and data; scaffold comprehension of complex, non-linear texts; and address questions of participation, authorship and ownership in environments where texts are easily copied, remixed and circulated (Kress, 2010).
The rapid emergence of large language models (LLMs) and generative artificial intelligence (henceforth: AI) has added a further layer to this landscape. Digital tools are no longer only platforms for accessing and distributing texts; they have become co-writers, language tutors, assessment engines and dialogue partners for teaching and learning (Kasneci et al., 2023). Recent work in first and second language writing and language education highlights both the potential of generative AI to support planning, drafting and revising of texts (Meyer et al., 2024; Steinhoff, 2023) and the risks it poses for over-reliance, loss of agency and opaque text production (Brüggemann et al., 2025; Li, 2025; Michel et al., 2025; Warschauer et al., 2023). For learners in foreign and second language contexts, such tools can lower barriers to participation by offering language support, feedback and exemplars (“worked examples”) that scaffold learning processes at an individual level. At the same time, they raise pressing questions about cognitive offloading, epistemic trust, data protection and the validity of assessment (Kosmyna et al., 2025). The challenge for research and practice is therefore not simply to adopt new tools, but to understand how digital and AI-based environments reshape the conditions under which texts are produced, interpreted, evaluated and assessed in different learning contexts.
This Special Issue of RISTAL brings together contributions that examine reading, writing and interpreting texts in such digital contexts from different angles. In doing so, it addresses focus areas in first and foreign language teaching such as the following: How can learners be equipped to use AI-based feedback for writing in ways that actually foster learning? How reliable are AI-generated text interpretations, and how confidently can they support learners in interpreting texts? How can digital assessment environments support fair and informative judgements of students’ texts? How might generative AI be systematically integrated into learning progressions for argumentative writing? What affordances and constraints do immersive technologies such as virtual reality offer for language learning and teacher education? And how do genre-based online environments support learners in mastering genres such as email writing in English as a foreign language? Taken together, the five articles in this issue offer empirically grounded and design-oriented answers to these questions and invite an interdisciplinary dialogue on the future of text-based learning in digital settings.
The first two studies in this issue of RISTAL deal with student writing in L1 and L2 in digital contexts. In their article on “Enhancing argumentation writing instruction in secondary school through artificial intelligence integration”, Thorben Jansen, Hannah Pünjer, Nils-Jonathan Schaller, Luca Bahr and Lars Höft focus on argumentative writing as a key competence for participation in democratic and knowledge-based societies. Building on research on learning progressions for scientific argumentation (e.g., Osborne et al., 2016), the authors develop a detailed design for an AI-supported learning progression that spans the full argumentative process: from understanding a socio-scientific problem and identifying relevant criteria, through constructing claims, evidence and warrants, to developing rebuttals and engaging in dialogic argumentation. For each step, they specify the role that AI can play – as tutor, feedback provider or argumentation partner – and formulate concrete student activities and prompt structures. A central design principle is that AI support is always “student-first”: students must initially produce their own ideas, drafts or arguments before AI intervenes, in order to avoid cognitive offloading. The article is complemented by an open online course and a custom AI “argumentation trainer” that operationalise the framework for classroom use. Rather than treating AI as a generic writing aid, this contribution shows how AI functionalities can be systematically aligned with evidence-based pedagogy on learning progressions in argumentation. It thus offers both a conceptual bridge between research on argumentation and AI, and a practical guide for teachers who wish to integrate AI into argumentative writing instruction in a principled way.
In their article on “Mastering the genre of English emails”, Stefan Keller, Ruth Trüb, Andrea Horbach, Thorben Jansen and Johanna Fleckenstein present a linguistic analysis of learner uptake in an online learning environment, focusing on semi-formal request emails as a key genre of writing in English as a foreign language (EFL). In a web-based learning environment, students in grades 8 and 9 completed three email-writing tasks based on realistic communicative scenarios, such as requesting information about a language course or a summer job. Between tasks, they worked with a genre-specific email framework and received rubric-based feedback that highlighted key elements of successful emails. Rather than focusing only on holistic ratings of text quality, the study offers a fine-grained linguistic analysis of seven genre-specific elements: subject line, salutation, information about the writer, matter of concern, number of task questions addressed, concluding sentence and closing. The results show substantial gains in task completion and in formulaic elements such as subject lines, salutations and closings, while more open, meaning-rich elements (self-introduction and explanation of the concern) improved more slowly and remained challenging for many learners. This pattern is consistent with genre-based approaches to second language writing, which suggest that learners often first appropriate conventional schematic structures and formulaic language before gaining control over more flexible, ideationally complex parts of a genre. The authors discuss implications for the design of digital writing environments and for the development of genre-specific, potentially automated feedback in EFL writing.
The second part of the issue focuses on the activities of teachers as they assess texts, perform complex tasks or plan teaching in digital environments. In their article on “Judging Students’ Texts in a Digital Research Tool”, Frederike Stahl, Jörg Kilian and Jens Möller address the question of how teachers judge students’ writing when assessments are mediated by a digital environment. Using the digital research tool “Student Inventory”, they examine whether text quality, students’ gender and migration background influence teachers’ assessments of texts. The question is relevant because teachers’ evaluations of text quality are a crucial precondition for adapting instruction to individual learning needs, yet prior research has repeatedly documented only moderate accuracy and potential biases when different teachers rate the same texts (Südkamp et al., 2012; Urhahne, 2021). The authors present two experimental studies with student teachers of German in which participants rated authentic student reports of low, medium and high quality. The names attached to the texts were systematically varied to signal either male or female gender in the first study, and migration or non-migration background in the second. Across both studies, teachers reliably distinguished between levels of text quality on holistic and analytic rating scales, while there was no evidence that gender or migration background systematically affected judgements. These findings suggest that, in this carefully designed digital environment, text quality rather than social categories of the writer drives assessments. The article thus contributes to debates on fairness and bias in digital assessment of writing and demonstrates how research tools can be used to monitor and improve the quality of teacher judgements.
In his article “Reliability of AI in the domain of literary literacy and literary text interpretation: an empirical study”, Volker Frederking presents the results of an experimental study in the field of L1 education. The study used literary literacy and the interpretation of literary texts as test cases to examine how reliably AI can help learners in literature classes overcome complex challenges. Due to the ambiguity of literary texts, the reliability of AI faces particular challenges in this domain. Based on an empirically verified five-dimensional model of literary literacy, two chatbots considered particularly powerful—ChatGPT-5 (OpenAI) and Claude Sonnet-4.5 (Anthropic)—were prompted to solve complex tasks of literary interpretation. Two lyrical texts and 82 empirically tested LUK (sub)items (open, semi-open, MC, and FC) were used. In four test runs, ChatGPT-5 and Claude Sonnet-4.5 each worked on the test tasks for the two poems. On this basis, both chatbots processed a total of 820 items and subitems across the two units. The item solutions were evaluated using the coding grid from the LUK project. The findings show a significant improvement over the earlier versions ChatGPT-3 and ChatGPT-4. The total percentage of correctly solved items for the two poems was 88.9% in the experiments with ChatGPT-5 and 87.4% with Claude Sonnet-4.5. However, there are some significant differences across the individual dimensions of literary competence. The limitations of the study and the need for further research are discussed.
In their article on “Virtual Reality in Language Learning and Teaching”, Claudia Finkbeiner, Wiebke Sophie Ost and Claudia Schlaak present the results of an explorative study of four VR applications from a student perspective. Against the backdrop that immersive technologies, particularly virtual reality, have become valuable tools in foreign language learning and teaching, the pilot study was conducted in an interdisciplinary, multilingual higher education context. It aims to introduce prospective foreign language teachers to developing, implementing and assessing VR-based teaching ideas for FL classrooms using four distinct VR applications. To evaluate the impact on teachers’ professional competences, quantitative and qualitative methods are combined in a mixed-methods design. The findings are of high relevance for school teaching and learning in foreign languages. The results for three of the four applications show that participating prospective teachers preferred applications with explicit language learning offerings for direct pedagogical use. At the same time, they rated immersive applications such as BRINK Traveler or National Geographic VR positively because of their potential to promote intercultural competence. The study demonstrates that VR applications hold considerable potential in FL teaching, particularly when they address language-specific learning objectives or motivate learners through interactive, immersive elements.
Across these five contributions, the Special Issue of RISTAL highlights how deeply digitalisation in general, and AI in particular, now shapes the conditions for reading, writing and interpreting texts in foreign and second language education. The articles show that digital and AI-based tools are not neutral add-ons but reconfigure who reads and writes, how feedback is produced and taken up, how texts are judged, and which semiotic resources can be mobilised for learning. At the same time, they underline that technology does not determine pedagogy. Productive use of digital environments depends on learners’ and teachers’ literacies, on carefully designed tasks and scaffolds, and on a critical awareness of biases, limitations and ethical questions. We hope that this collection will stimulate further dialogue between subject-matter didactics, educational research and classroom practice, and support readers in navigating – with curiosity and critical reflection – the evolving landscape of reading, writing and interpreting texts in digital contexts.