
Exploring the Use of Generative AI in Computer Science Education in a Technological University in Ireland

By: Michael Gleeson
Open Access | Feb 2025


Introduction

As the field of artificial intelligence (AI) continues to advance, its application in higher education is becoming increasingly prevalent. The release of the Chat Generative Pre-Trained Transformer (ChatGPT) in November 2022 has been a pivotal moment that has generated much discussion and debate about how ChatGPT and similar technologies will affect all aspects of society. The rapid emergence of these tools and their capabilities has also generated a significant increase in academic research in the specific domain of generative artificial intelligence (GenAI). In the ever-evolving environment of higher education, the increasing utilisation of GenAI by faculty, while still in its infancy, has ushered in a new era of scholarly exploration and can be viewed as a major pedagogical disruptor (Schiff, 2021).

This research emerged from recent observations of professional practice as a computer science lecturer, where the adoption and use of GenAI are not fully acknowledged or understood. The integration of GenAI tools into higher education remains fragmented, ad hoc, and often individualised among academic faculty. The purpose of this study is to examine the current use of GenAI in higher education, specifically by computer science lecturers in an Irish Technological University (TU). By exploring the literature to identify the current use of GenAI in education and surveying the current use of GenAI by computer science faculty at a TU in Ireland, this research aims to answer the following research questions:

  • RQ1: How is GenAI currently being used in educational practices in higher education?

  • RQ2: What trends or patterns emerge from the integration and use of GenAI by computer science faculty at an Irish TU?

By answering these research questions, this research aims to provide a specific focus for computer science lecturers in an Irish TU, enabling them to reflect on and inform their scholarly professional practice. This study provides a narrative exposition of GenAI within the realm of higher education in general and, more specifically, its applicability to computer science. It then provides insights through the use of Rogers' Diffusion of Innovation Theory (DIT) as a theoretical lens to examine the results of a survey of computer science lecturers in an Irish TU. One of my beliefs as a lecturer is to be informed, prepared, and proactive in my academic practice. The emergence of recent developments in GenAI represents a transformative shift, and it is imperative that academic practices evolve to match it. A study such as this is essential not only for computer science but for all academic domains.

This paper is structured as follows. The “Literature Review” presents a narrative literature review, and the “Research Design” details the research design, including the theoretical framework and survey design. The “Results and Analysis” section presents the results of the survey instrument, accompanied by analysis. The “Discussion” section offers discussion on the findings, presented through the lens of the theoretical framework and related back to the literature review. Finally, the “Conclusion” provides a conclusion to the research.

Literature Review

To provide some context for this narrative literature review, it is important to illustrate the significance of a pivotal moment at the end of November 2022, when OpenAI released ChatGPT (Hines, 2023). In the aftermath of the release of this publicly available and free-to-use GenAI resource, research in the domain has undergone an extraordinary surge, characterised by an exponential rise in academic study and exploration. To provide a sense of the scale of this increase, a preliminary analysis was undertaken on three relevant academic databases using the following search terms: ‘Generative AI’ or ‘GenAI’, as part of the Abstract, Title, or Keywords; the results are presented in Table 1, a rudimentary snapshot which displays the magnitude of the increase in research in the domain.

Table 1.

Snapshot of percentage increase of research in the domain

Database               Number of publications                      % Increase
                       Nov 2018–Nov 2021     Nov 2022–Nov 2023
SCOPUS                 35                    786                   2,146%
ACM Digital Library    8                     197                   2,363%
IEEE Xplore            4                     89                    2,125%
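
For clarity, the final column applies the standard percentage-increase formula, (new - old) / old × 100; for SCOPUS, for example, (786 - 35) / 35 × 100 ≈ 2,146%.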

To offer a more nuanced exploration of the literature around the research domain, a more focused search of SCOPUS, ACM, and IEEE using the key terms of ‘Generative AI’ or ‘GenAI’ or ‘ChatGPT’ and ‘Computer Science’ or ‘Computing’ was subsequently undertaken. This search returned 167 results; the subsequent inclusion of ‘higher education’ as a search term narrowed this to 31 papers, which were reviewed as part of this research. The examination of ACM and IEEE broadened the scope of relevant literature and helped ensure a comprehensive review.

While it is widely accepted that, at its core, the fundamentals of AI are not new (Tuomi et al., 2018), advances in computational power, the availability of large datasets, and the use of Large Language Models (LLMs) mean that many of the capabilities and much of the performance of current GenAI systems are unprecedented (Prather et al., 2023). This has created the potential for AI, and specifically GenAI, to have a significant impact in an educational context. The history of digital technologies in education offers many examples of undelivered promises, with many innovations accompanied by conflicting positive and negative consequences. Furthermore, educational technology has a record of being hyped without ever fully fulfilling its initial promise, with critiques of, for example, the use of the Internet (Iseke-Barnes, 1996), virtual reality (Evans, 2001), and social media (Sunstein, 2018).

As the sensationalising of new technology in education is not unheard of, with reactions ranging from ‘Utopian reformers’ to ‘dystopian cynics’ (Schiff, 2021), it is beneficial to lay out in plain terms what GenAI can actually do. When a GenAI tool is prompted with a question such as ‘Who is the current head of state of the UK?’, what is actually being asked of the machine learning algorithms embedded within the LLM is, ‘based on the words available in your dataset, what are the word(s) most likely to follow this sequence of words …’, in this instance the answer being ‘Queen Elizabeth II’. This highlights that there is no ‘intelligence’ as such, but simply a machine learning algorithm and statistical analysis of words in the vast corpus of human text currently available to the LLM (Shanahan, 2022). This example was chosen to demonstrate how GenAI can be both correct (in its correct implementation of the machine learning algorithms and statistical analysis) and incorrect (based on the outdated dataset provided). Most GenAI models come with a caveat that the information might be outdated and advise checking the latest sources, as shown via OpenAI ChatGPT v3.5 in Figure 1.

Figure 1.

GenAI prompt displaying user interaction with an LLM. GenAI, generative artificial intelligence; LLM, large language model.
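
To make the ‘most likely next word’ behaviour described above concrete, the following minimal sketch asks a language model which tokens it considers most probable after a prompt. It is an illustration only, not the system shown in Figure 1: it assumes Python with the Hugging Face transformers and PyTorch libraries and the small, publicly available GPT-2 model rather than the model behind ChatGPT, and the prompt and output format are arbitrary choices.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load a small, publicly available causal language model
    # (illustrative only; not the model behind ChatGPT).
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The current head of state of the UK is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # The model assigns a probability to every token in its vocabulary as the
    # possible next word; text generation repeatedly picks from this distribution.
    probabilities = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probabilities, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(token_id)])!r}  p={prob.item():.3f}")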

The use of GenAI in higher education presents both opportunities and challenges, and a number of systematic literature reviews have been undertaken in this domain recently. Chiu et al. (2023) presented a systematic literature review on the use of these AI systems in education, providing a number of example applications such as personalised learning, improved assessment practices, and increased efficiency in administrative tasks. Walczak and Cellary (2023) and Wang et al. (2023) not only highlighted the potential for personalised learning experiences and improved academic support but also emphasised the need for digital literacy and ethical considerations. Johri et al. (2023) further explored student perceptions and the potential impact on education, underscoring the importance of addressing concerns such as privacy, accuracy, and ethical implications. These studies collectively show the need for a balanced approach that harnesses the potential of generative AI while mitigating its risks. A challenge facing academic faculty is how to leverage these new tools in addition to adapting to the capabilities they offer. Koh and Doroudi (2023) highlight the need for pedagogical adaptation to incorporate these tools effectively and the need for more empirical studies to ascertain the gains and downsides of using generative AI for learning and teaching. Accompanying this pedagogical adaptation, resource limitations are also exposed as an issue in the research, where Chiu et al. (2023) discussed the lack of relevant learning resources for academics who are considering adopting personalised or adaptive learning facilitated through the use of GenAI.

Content creation is one area that has been identified in the literature, with studies exploring the potential of creating case study examples and developing solutions (Van Slyke et al., 2023) and others finding that AI-generated content led to higher levels of engagement by learners (Colace et al., 2018). However, these studies also noted that content could not always be relied upon and needed to be reviewed and revised. Of specific applicability to computer science is the use of tutoring systems and chatbots, especially given the nature of programming and software engineering. Studies have reported the positive use of chatbots for debugging code, tutoring, scaffolded learning, and enhanced comprehension (Aljanabi et al., 2023; Rajala et al., 2023). In these studies, reliability concerns emerged again, in addition to inconsistency issues, but overall, the positive use of GenAI-powered chatbot and tutoring systems outweighed the limitations.

GenAI offers significant opportunities for academic faculty to enhance teaching and learning through personalised user experiences. By focusing GenAI's capacity on specific domains, it can offer individualised answers and personalised prompts, providing effective learning experiences to students (Sohail et al., 2023). However, Sohail et al. (2023) also expressed concerns about the responsible and safe use of personalised learning resources facilitated through GenAI. There are concerns that students may use ChatGPT to cheat on assignments or exams by relying on LLMs to generate answers for them, in addition to concerns about the honesty of the generated content and the potential for manipulation of the LLM to produce biased or inaccurate responses (Sohail et al., 2023). Other studies have examined personalised learning resources in terms of assessment (Van Slyke et al., 2023), where GenAI can dynamically generate personalised tests for individual students based on specific parameters, negating the risks of ‘test-bank-question’ availability and cheating. This is particularly relevant considering the extensive use of test bank questions in existing computer science educational practices. Further concerns include privacy, as the use of ChatGPT may involve the collection and storage of personal data, which must be handled responsibly and comply with applicable regulations (Sohail et al., 2023).

Zastudil et al. (2023) examined the potential of using GenAI for automated grading and feedback. They found that GenAI can save time, provide more consistent grading, and offer immediate feedback, but they also caution against overreliance on these tools and the unintended outcomes of algorithms. Laato et al. (2023), while examining the implications of GenAI in computer science education, submitted exam questions to an LLM for automatic grading; the findings show that there are limitations to its ability to fully assess and grade complex and nuanced answers. Research also shows that there is technical complexity associated with integrating automated capabilities, such as combining GenAI capabilities with style analysis for feedback, as well as the duty of care a lecturer has in providing constructive feedback that incorporates human understanding along with personalisation (Sohail et al., 2023).

GenAI's ability to offer interactive learning experiences allows students to engage more in the learning process, improving self-efficacy. The use of GenAI in programming can facilitate self-learning by using LLMs in a conversational manner, providing guidance and suggestions for problems. Yilmaz and Yilmaz (2023) performed an experiment with two groups of students, a control group and an experimental group using GenAI, and showed that the experimental group had significantly higher scores for computational thinking, self-efficacy, and learning motivation compared to the control group.

This literature review explores the evolving role of GenAI in education, particularly its unprecedented capabilities and potential for significant impact in educational settings. The review highlights various applications of GenAI in education, such as chatbot/tutoring-style systems, content generation, and automated grading and feedback. Other roles are presented, such as personalised learning experiences and GenAI's positive impact on student self-efficacy and learning motivation. It also identifies issues related to GenAI in education, such as ethical concerns, issues with reliability and consistency, and the impact on student plagiarism and overreliance on AI-generated content.

Research Design

My ontological perspective is based on social constructivism (Vygotsky, 1978), where there are multiple different realities created by individuals in groups. Therefore, my epistemological position for this study is to interpret the actuality of computer science lecturers' use of GenAI tools as part of professional practice. I achieved this through the use of an inductive methodology, developing truth based on quantitative and qualitative data, without assuming that the findings can be generalised to other cases or contexts.

Theoretical framework

DIT (Rogers, 1995) provides a framework for understanding the process of innovation adoption within a specific context and has been widely applied in educational research (Miller, 2015). The theory posits that the adoption of innovation occurs through a dynamic process of diffusion, involving various stages, and it categorises different groups within a population based on their willingness and speed of adopting innovations. Other theories, such as the technology acceptance model (TAM) (Venkatesh et al., 2003) and Ely's (1999) eight conditions of implementation, were considered. TAM was not selected as it primarily focuses on understanding user acceptance of technology based on factors such as usability and utility, while academic practice involves more nuanced objectives, such as knowledge generation and dissemination. Additionally, Ely's (1999) eight conditions are broad, overlap in places, and are not all equally important in every context. DIT provides a structured way to analyse the adoption process and the factors influencing it and is an appropriate theoretical lens for this study. A graphical representation of DIT is presented in Figure 2.

Figure 2.

DIT (adapted from Rogers, 1995). DIT, diffusion of innovation theory.

Qualitative research study

This research employs a qualitative approach, incorporating a survey to gather primary data. Using an inductive methodology, it aims to provide a comprehensive understanding of the current use of GenAI by computer science lecturers. The balance offered by the narrative examination of the literature, juxtaposed with the primary data provided by a survey, offers a complete picture of the phenomenon.

Participant selection

Participants were selected using purposeful sampling (Creswell & Plano Clark, 2011), targeting individuals with direct experience in considering the use of GenAI within their academic practices. The sampling criteria focused on identifying an appropriate representative population of faculty who have knowledge of GenAI and exhibit diverse perspectives on its use in academic practice, thus enriching the depth and breadth of the data collected. Ethical approval for this research was obtained prior to any data collection. The survey was conducted for 2 weeks in November/December 2023, with a total of 24 respondents, as detailed in Table 2.

Table 2.

Breakdown of participants' years of lecturing experience

Years of experience    Number of participants
<1 year                1
1–3 years              2
4–7 years              4
8–12 years             8
Over 12 years          9

Survey design and data collection

The survey instrument for this research was guided by the theoretical framework, with questions aligned to various components and dimensions of DIT. The process began by delineating the applicable components and dimensions within the theoretical framework that are specifically relevant to this research, as presented in Table 3.

Table 3.

Components and dimensions within DIT framework

DIT dimension
Adoption process component    Factors influencing adoption
Knowledge                     Relative advantage
Persuasion                    Compatibility
Decision                      Complexity
Implementation                Trialability
Confirmation                  Observability

DIT, diffusion of innovation theory.

Following this exercise, survey questions were then structured in a way to elicit responses that would capture the perceptions, attitudes, and behaviour of computer science lecturers regarding their use of GenAI in their academic practices. The alignment between the survey's content and the theoretical underpinnings of DIT ensures that the data collected effectively probe the use of GenAI within the context of computer science education.

The Likert scale (Likert, 1932), a psychometric scale for gauging the attitudes, values, and opinions of participants, was employed for the survey. This approach allows subjective feelings to be translated into quantifiable data, making them easier to analyse and interpret, and it is also a powerful tool when combined with qualitative data analysis. Analysing Likert scale data qualitatively involves interpreting patterns and trends that emerge in responses beyond the numerical values. Data were interpreted on an ordinal basis, as each level of the scale indicates an order of preference, without implying equal intervals between levels (South et al., 2022). This approach delves into understanding the reason and context behind the participants' choices, exploring the narratives and themes that emerge from their selected options on the scale (Sullivan & Artino, 2013). Examining quantifiable data using qualitative data analysis not only enriches the research findings by providing both breadth and depth but also ensures a comprehensive understanding of the participants' perspectives.
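
As a brief illustration of this ordinal treatment (a hypothetical example, not the study's instrument or data), the following sketch summarises responses to a single five-point item using frequency counts, the mode, and the median category rather than a mean, which would assume equal intervals between scale points.

    from collections import Counter
    import statistics

    # Hypothetical responses to one five-point Likert item (illustrative only,
    # not the survey data collected for this study).
    scale = ["Never", "Rarely", "Sometimes", "Often", "Always"]
    responses = ["Sometimes", "Never", "Sometimes", "Often", "Sometimes",
                 "Rarely", "Sometimes", "Never", "Often", "Sometimes"]

    # Frequency counts preserve the ordinal structure of the scale.
    counts = Counter(responses)
    for level in scale:
        print(f"{level:<10} {counts.get(level, 0)}")

    # Ranks are ordinal positions, used only to locate the median category;
    # no equal spacing between scale points is assumed.
    ranks = sorted(scale.index(r) for r in responses)
    print("Mode:", counts.most_common(1)[0][0])
    print("Median category:", scale[statistics.median_low(ranks)])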

Results and Analysis
Quantitative results

Quantitative results are described and presented here. In places, questions are grouped together where appropriate, based on related components and/or dimensions of the theoretical framework. Analysis of questions 1, 2, and 3, grouped based on the ‘Adoption Process’ of DIT, provides an overall snapshot of lecturers' familiarity, perception, and current use of GenAI in academic practice related to the adoption process of the theoretical framework. Predominantly, it can be stated that of the 24 lecturers, 21 are moderately familiar (9), very familiar (9), or extremely familiar (3) with GenAI in computer science education and 20 are either moderately concerned (8), very concerned (11), or extremely concerned (1) about the ethical considerations around the use of GenAI. While all of the 24 respondents have familiarity with the use of GenAI in computer science education, over half state that they use GenAI in academic practice only ‘sometimes’ (17). These results indicate that while GenAI has emerged as a major discussion point and its existence appears to be omnipresent in the ether, its actual use in academic practice is not reflective of its prominence in the discourse.

Questions 5, 8, and 13, grouped based on ‘Factors Influencing Adoption’, interrogate the compatibility and relative advantage of using GenAI in academic practice, and the data offer some contrasting observations. The responses to question 5, regarding the extent to which GenAI aligns with current teaching and pedagogy, are slightly inconclusive, showing little middle ground. For question 8, only three respondents said that integrating GenAI was ‘very challenging’ or more, as displayed in Graph 1.

Graph 1.

Challenge of integration of GenAI into existing academic practice. GenAI, generative artificial intelligence.

Question 13 examines the relative advantage of using GenAI over traditional methods; the results show that most respondents find the same or better advantage in utilising GenAI to enhance academic practice, as shown in Graph 2.

Graph 2.

Rate overall advantage of using GenAI over traditional methods. GenAI, generative artificial intelligence.

Overall, the results for questions 8 and 13 indicate that there is an even spread of opinion related to the challenge of integrating GenAI (compatibility) and the advantage of using GenAI over traditional methods (relative advantage).

Question 6 explores ‘where’ GenAI is in use within the educational practices of teaching, learning, and assessment; the results show that the selection of ‘sometimes’ or ‘never’ is a clear pattern among respondents, as is visible in Graphs 3, 4, and 5.

Graph 3.

Use of GenAI for teaching. GenAI, generative artificial intelligence.

Graph 4.

Use of GenAI for learning. GenAI, generative artificial intelligence.

Graph 5.

Use of GenAI for assessment. GenAI, generative artificial intelligence.

These predominant responses underscore a notable gap between GenAI's potential and its current adoption within each of these educational practices. This suggests a cautious approach by educators in embracing GenAI as a core tool in their teaching methodologies, learning facilitation, and assessment practices. The predominance of ‘sometimes’ implies ad hoc or occasional use, where educators experiment with GenAI but have not fully integrated it into their academic practice. This relates to question 7 on trialability, which shows a relatively straightforward result: of the 24 respondents who have incorporated GenAI in academic practice, 22 have utilised trials prior to wider adoption.

Question 9, which was informed by the literature review identifying several common uses of GenAI, addresses ‘how’ GenAI is currently being used by academics. The results give a clear indication of the predominant use of GenAI by the participants, showing that content creation (16) is by far the most commonly used capability of GenAI, followed by leveraging GenAI to facilitate students' self-learning. Graph 6 visually highlights this.

Graph 6.

How GenAI is currently used by computer science lecturers in an Irish TU. GenAI, generative artificial intelligence; TU, technological university.

The emphasis on content creation showcases GenAI's potential to alleviate the time-consuming task of creating educational content, while leveraging GenAI to encourage self-directed learning indicates that educators are adapting to a student-centric approach by empowering learners to explore content at their own pace and style. Question 10 set out to discover the ease with which the outcomes of any integration of GenAI in academic practice can be observed, with the results appearing to represent a normal distribution. However, this does not take into consideration participants' individual interpretations of easy/difficult, and the difference between points on the scale may not be equal. Nevertheless, at a high level, this result indicates a balance or consensus across survey participants, as shown in Graph 7.

Graph 7.

How easy can you observe outcomes which result from the use of GenAI? GenAI, generative artificial intelligence.

Question 12 offered a ranking scale indicating the importance or significance of a list of issues identified in the literature review: technical complexity, resource limitations, ethical issues, pedagogical adaptation, and student engagement. Of these, ethical issues scored highly as the most significant issue facing academics when integrating GenAI in academic practice, followed by pedagogical adaptation. At the other end of the scale, technical complexity and student engagement were rated as less significant, whereas resource limitation concerns were consistently prevalent across all participants. This is shown in Graph 8.

Graph 8.

Ranked significance of issues when integrating GenAI into academic practice. GenAI, generative artificial intelligence.

These results portray a clear hierarchy of concerns among lecturers regarding the adoption of GenAI as part of academic practice. The prominence of ethical concerns underscores the complexities and dilemmas inherent in the use of GenAI technologies within an educational context. Pedagogical adaptation is shown to be another significant issue, highlighting the need to align GenAI with effective teaching methodologies to ensure successful use in educational settings. The lower significance attributed to technical complexity and student engagement could be due to prioritisation or suggest that these are less formidable barriers in the eyes of the participants.

Qualitative results

Analysis of open-ended, qualitative survey questions also provided the basis for some findings as part of this research. Thematic analysis (Braun & Clarke, 2006) of participants' experiences of specific instances of GenAI use in academic practice revealed the following themes.

Variable performance of GenAI

Concerns about the effectiveness and reliability of GenAI are evident in the data. GenAI is described as being useful in certain scenarios but inadequate or even counterproductive in others. This inconsistency of performance across the application of GenAI affects the trustworthiness and usefulness of GenAI, as shown in the following comments:

“Some of the explanations were good and valid. However, it also halucinated, and apologised when I corrected it. It was happy enough to repeat the hallucination though.”

“I used generative AI for coding assistance in a research project. It was worse than useless most of the time.”

While GenAI provides some viable content, it often struggles with nuanced or complex tasks, producing very basic outputs or even incorrect outcomes. This theme ultimately presents an issue for lecturers: the quality of AI-generated material can affect the learning process, and there is a perception that students rely heavily on generative AI output without critical assessment, assuming its correctness without further verification.

Educational utility of GenAI

From the data, it is evident that GenAI has the potential to serve a number of useful functions within the realm of educational practices, across teaching methodologies, learning facilitation, and assessment practices. For example, students can apply AI-generated content in their own environment, assisting their understanding of computer science concepts; it can also facilitate reflective exercises for students and generate exam questions.

“Allow students to use it to gather ideas for completing problem solving exercises. Students had then to demonstrate their understanding of their research by applying to their own examples.”

“I have used it to generate an explanation of a topic to see how it compares to what I would give.”

“I use it to create code dynamicaly in class based on what we are covering, most of the code is good but some is bad. This allows me to show the code to students who can then critique it.”

Additionally, GenAI serves as a teaching aid by checking student answers for accuracy, providing feedback, and creating exam questions, in addition to supporting self-efficacy by assisting students in problem-solving through prompt engineering.

Limitations and challenges of AI use

Notwithstanding the utility offered by GenAI, the results also indicate that there is a caveat to this. While GenAI will create content, it appears to struggle with deeper understanding, producing content that can be rudimentary and not appropriate for higher-level assessments or complex problem-solving exercises. This is a practical limitation and challenge related to the use of GenAI in real terms and is demonstrated by the following excerpts from the primary data.

“I used it to generate revision questions, usualy MCQs. It's ok at generating shallow questions, but poor at generating questions that require deeper understanding”

“I have utilised AI to devise payloads to access security controls. Performance: AI could provide viable but rudimentary payloads that would succeed if no security controls were in place but struggles with more challenging tasks.”

This theme of limitations and challenges also encompasses higher-order concerns related to the use of GenAI by computer science faculty in teaching, learning, and assessment. Issues related to the responsible use of GenAI, such as ethics and plagiarism, are identified. There are concerns about whether AI-generated text can be detected by plagiarism-checking systems. Users note the effectiveness of such systems but highlight the emergence of AI detection removers, making it challenging to detect AI-generated content. Similarly, there are concerns regarding maintaining responsible use and ethical standards and avoiding biases when integrating generative AI into academic practice.

“I have used the Turnitin system in Blackboard which does a check for AI generated text.

It works pretty well. But I note now there are AI Detection removers which wil make it extremely hard to detect such plagiarism.”

“Some challenges I have found include ensuring responsible use, avoiding biases, and maintaining ethical standards”

Discussion

The results and analysis presented in this study offer valuable insights into the current landscape of GenAI adoption in academic practice by computer science lecturers at a Technological University in Ireland. The findings are categorised into quantitative and qualitative results and discussed in terms of the appropriate components or dimensions of the theoretical framework for the study (DIT), shedding light on the use of GenAI in academic practice.

One notable finding from the quantitative results is that while all respondents are aware of GenAI in computer science practice, a significant portion of them (over half) use GenAI in academic practice only ‘sometimes’. This suggests that despite the widespread awareness and hype surrounding GenAI and its transformative potential for education, its integration into everyday teaching, learning, and assessment is not as pervasive as one might expect. When reflecting upon this finding and relating it to the theoretical framework for the paper, it is possible to posit that while there are visionaries and technologists evident in both trialling and experimenting with GenAI, computer science faculty are more realistically pragmatic or sceptical in their actual incorporation of GenAI as part of teaching, learning, or assessment. This points to the knowledge and persuasion elements of DIT, where innovators and early adopters do exist but, for the reasons identified, are not following through with full adoption. This also corresponds to the literature review, which shows a range of concerns (Koh & Doroudi, 2023; Sohail et al., 2023). This finding raises questions about the actual impact and practicality of the use of GenAI tools in educational settings.

The study's qualitative findings delve into the experiences of participants using GenAI in academic practice, offering a deeper analysis. GenAI's educational utility is recognised by this study, particularly in content generation, reflective exercises, and exam question generation, in addition to offering students opportunities for self-directed learning and assistance in understanding complex concepts. This is reflective of the dimensions of the theoretical framework, where GenAI is compatible with existing practices and the results are observable as reflected in the literature (Van Slyke et al., 2023; Yilmaz & Yilmaz, 2023). However, similar to Laato et al.'s (2023) findings, this study also shows concerns about GenAI's variable performance, with respondents highlighting its effectiveness in some scenarios and inadequacy in others. Further limitations and challenges are evident, as GenAI struggles with deeper understanding and produces very basic content. These findings can be related to the relative advantage dimension of DIT, where GenAI can appear to be seamlessly integrated into current academic practice; however, this comes with a caveat that the relative advantage is only observable at lower, rudimentary levels and is not universally true.

Overall, it is clear that the premise of this research is valid. There is an individualised and ad hoc approach to using and integrating GenAI into academic practice by computer science lecturers. Faculty state that they have used GenAI to ‘create’, ‘test’, ‘generate’, ‘check’, ‘ascertain’, and ‘troubleshoot’; this all occurs in an individual capacity. Equally, while faculty see the benefits of GenAI, they are reluctant to engage with it fully in academic practice; this is represented by the following quotes from different participants.

“I have concerns that overuse of generative AI can undermine or short-circuit student learning.”

“I have so far been unwilling to integrate generative AI with academic practice other than using it as tool to learn about technical topics. I think it can reduce students wilingness to investigate and research.”

“It cant be trusted so what's the point.”

These opinions underscore the complexities of the ethics and trust issues related to the use of GenAI. Possible solutions can be extrapolated from the findings of this study: addressing resource limitations through staff development, developing a full understanding of the appropriate use of GenAI in academic practice that addresses ethical considerations, sharing best-practice guidelines on pedagogical adaptation, and establishing organisation-wide regulations on the acceptable use of GenAI. This can then facilitate a more holistic and uniform integration and use of GenAI in academic practice.

Conclusion

This study provides a narrative review of the current state of GenAI adoption in higher education in general, with a specific focus on computer science. It shows that while GenAI is utilised in educational practices across teaching, learning, and assessment, educators are reluctant to fully integrate it. This cautious embrace indicates that GenAI is not yet a core tool for computer science lecturers in this TU in Ireland. The findings highlight that trust issues can occur at a higher-order level (such as bias or ethics) or at a practical level (such as reliability and quality), showing the need for further research to explore how to address these issues. A limitation of this research is its specific focus on one TU in Ireland, which can be addressed by expanding the research to a case study across multiple institutions.

Page range: 107 - 118
Published on: Feb 26, 2025
Published by: Sciendo

© 2025 Michael Gleeson, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 License.

Volume 26 (2024): Issue s1 (August 2024)