1. Introduction
The use of generative artificial intelligence (GenAI) in academic research is a topic of increasing commentary and speculation. The literature surrounding GenAI tends to focus on its technical use in research—particularly its potential to improve research skills and the risks arising from its misuse. The discourse around the doctoral research journey, including student–supervisor knowledge exchange and relationships, and the wider implications of GenAI use in doctoral studies, currently remains limited. The intent here is to widen this debate.
This research focuses on what doctoral researchers and their supervisors think of GenAI, how they use it and how these changes challenge the status quo. The starting point here is that the use of GenAI in research is at present largely hidden, and this rather covert situation feeds confusion and mistrust, rather than bringing into the open the issues and challenges arising from the adoption of this technology. By exploring the nature of these challenges, and by understanding that it is not a question of preventing its use, this paper considers what the use of GenAI might mean for doctoral research practices and relationships, and how its emerging place can be understood within doctoral research.
This study does not attempt to evaluate the benefits, potential or inadequacies of GenAI tools per se. Instead, it explores the impact of GenAI tools on doctoral studies and the potential knock-on effects for early-career researchers' research publication.
1.1 Background
GenAI refers to artificial intelligence tools and techniques that search and synthesise data, images and text from existing datasets/databases to produce human-like, contextually relevant outputs in response to user prompts. These prompts can vary from simple commands to more complex prompts which can be nuanced and extended. It is GenAI’s ability to contextualise and synthesise data which allows its use in many areas of doctoral research and sets it apart from more standard AI tools, including search engines such as Google or Bing. Different forms of GenAI now offer targeted applications, which provide doctoral researchers with a panoply of tools to assist them with their research. ChatGPT is one of the most widely used tools, with different generations available. More advanced versions of GenAI tools offering greater processing power and wider, more up-to-date datasets tend to sit behind a paywall, with lesser versions being free of charge. Some newer arrivals on the GenAI scene (e.g. DeepAI) promise low-cost/free access to GenAI and focus on efficiency, lightweight deployment and open-source AI.
Academic discussion around the use of GenAI centres largely around concerns of plagiarism and poor academic practice (Huallpa 2023), with some debate around considerations of bias and reliability and accuracy of outputs (Rane et al. 2023), although its use and impacts in higher education are gaining traction for pedagogic research agendas. This focus omits consideration of its significance to doctoral students in terms of student–supervisor relationships, the development and exchange of knowledge, and its effect on future academic aspects of research publication.
2. Context
To put this study into context, this section explores the literature regarding the doctoral research journey, focusing on the development of doctoral researcher–supervisor relationships in the era of GenAI and exploring knowledge development and exchange within the context of doctoral supervision. It examines the emerging and rapidly developing literature around the use of GenAI in research, and reflects on the literature associated with GenAI and academic aspects of publishing research.
The doctoral supervisory process has been widely acknowledged as being a highly personal, complex and often emotionally charged exercise shaped by institutional authority, discipline-specific norms and interpersonal dynamics (Manathunga 2007; Maher et al. 2008). Navigating this complexity requires a rich mixture of skills from the supervisor (Jackson et al. 2021) and researcher (Sambrook et al. 2008), with their relationship operating within unspoken expectations (Maher et al. 2008).
Learning on the doctoral journey involves the acquisition of both specialist subject knowledge and personal learning and development, with the former often taking precedence in practice and also in research (Lindén et al. 2013). However this personal learning is widely acknowledged as being stressful, complex and often highly charged (Baptista 2014). The giving and receiving of direction, the development of academic debate and the acquisition of research skills all require a delicate balance and a skilful two-way relationship (Ribau 2020; Wichmann-Hansen et al. 2011).
Research into researcher–supervisor relationships often focuses on what supervisors should do in terms of skill and knowledge-sharing, with less emphasis on the process of developing the relationship (Buirski 2022). The importance of mentorship and trust in doctoral supervisory relationships is acknowledged (Hemer 2012; Robertson 2017), although an overemphasis on the importance of mentoring is thought to mask the important dynamic of power in supervision relationships (Manathunga 2007). The sensitivity and importance of power and emotion within researcher–supervisor relationships has been extensively recognised and variously described in terms of: relational power, resulting from a dynamic relationship between the parties (McNamee & Tilson 2021); hierarchical power, directly resulting from the position within hierarchies of institutions (Robertson 2017); and institutional power, derived from institutional norms and legitimation in terms of policy, rules, culture and expectation (Jones & Blass 2019).
This interplay between power and emotion is well-documented, making supervision an emotionally loaded practice. At its best the supervisory process is recognised as a relational and developmental undertaking that fosters researcher identity and enhances confidence (McAlpine & Amundsen 2011). However, this transition to independent research is also recognised as a critically vulnerable process that can bring fear, confusion and vulnerability, sometimes resulting in student withdrawal or disengagement (Lovitts 2008). In parallel with these complex dynamics, there is an acknowledgment that within the current time-poor environment of higher education, these relationships have increasingly assumed a transactional character. This can reduce attention on growth and development and can diminish the everyday practices and emotions of doctoral research (Doloriert et al. 2012).
New technology brings a challenge to the status quo and the possibility of disruptive change. GenAI is a developing technology, with its impact on academic research pedagogy only coming to the fore over the last two years. Two major streams of literature are emerging with regard to this technology and doctoral research practices: academic integrity, ethics and limitations; and the potential positive and negative impacts.
The literature encompassing academic integrity and the ethical implications of the use of GenAI on doctoral research centres on the need for guidance on the acceptable use of GenAI in academia (Atlas 2023) and the opportunities for misuse (Huallpa 2023). Limitations in the data and synthesis carried out by GenAI resulting from bias and rules imposed by human trainers of the technology are also studied (Kocoń et al. 2023; Megahed et al. 2024). Other concerns centre around issues of accountability and the dissemination of misinformation (Rane et al. 2023). Tang et al. (2024) highlight that the lack of consensus and clarity in institutional guidelines places the student in an ethically precarious position which they must navigate in isolation.
In contrast, an emerging body of research points to the potential benefits of GenAI in research. These include benefits in time and quantity of materials processed, positive impacts on researcher self-esteem, and reduced stress (Bin-Nashwan et al. 2023). Broad implications of GenAI on supervision relationships are characterised in terms of normative practice and the possibility of shifts in the roles and responsibilities within doctoral supervisory relationships. Cowling et al. (2023) argue that the use of GenAI can lead to improved psychological need fulfilment and student autonomy, whilst Dai et al. (2023) point to possibilities of GenAI accelerating research. Harding & Boyd (2024) propose that ChatGPT may function as an unacknowledged disruptive mentor, or ‘covert third wheel’, by becoming part of the hidden infrastructure of academic writing—quietly shaping ideas, sentences and confidence without being formally acknowledged.
To understand the fragmented nature of the dissemination of GenAI knowledge and application, it is useful to understand the types of knowledge groups forming around the tools, and the tools themselves. At present, the knowledge groups surrounding the application and use of GenAI centre around focused web-based groups that demonstrate and suggest the latest techniques and tips to optimise the use of GenAI. These include AI Stack Exchange (a questions-and-answers site), Quora (AI topics or experts) and LinkedIn AI-related groups. There are also dedicated platforms and websites such as Hugging Face (a platform to discuss research and projects) and Kaggle (which hosts forums and kernels), with conferences and 'meet ups' also gaining traction (e.g. Meetup.com, GitHub, and open-source, Discord and Slack channels).
GenAI tools are constantly developing and specialising, and at present there are several groups of applications: conversational large language models (LLMs), research-specific applications, tools for specific professions and educators, and graphical design tools. The first two of these groups are particularly relevant here: LLMs are trained on huge sets of data and can recognise and generate human-like text. These include tools such as ChatGPT, Gemini, Claude and DeepAI. Tools particularly aimed at researchers include those focused on developing literature reviews (e.g. Litmap, Rayyan, Evidence and Elicit), those providing general research support (e.g. Zapier, ResearchPal, Consensus, Semantic Scholar, Research Rabbit and Iris.ai), and tools that assist academic writing (e.g. Scholarly, Quillbot, Data Robot, Roam Research and Jenni AI). Given that these tools are increasingly targeted to service particular skill sets required by researchers, and that the doctoral supervisory relationship is intricately bound up in developing these skill sets, it seems inevitable that GenAI tools will have an impact on the doctoral research journey and the supervisory relationship.
Given the already controversial nature of GenAI, the issues surrounding its use in doctoral studies and academia are complex and sensitive. Many academics and supervisors consider the use of AI to be divisive and are uneasy in talking about it or openly giving it a place in academia (although privately they may use it in their own work). Doctoral researchers are very aware of institutional and academic disapproval of its use and are familiar with views loaded with concerns of plagiarism and cheating. By mobilising an ethnographic approach and structuration theory, this study explores whether and how GenAI tools might impact developing doctoral journeys and supervisory relationships, and highlights future directions for the emerging research agenda.
3. Research approach and method
Both researchers and supervisors can find the subject of using GenAI difficult to discuss because it can touch on issues of trust, knowledge and experience. Because of this sensitivity, the original study used a mixed-methods ethnographic exploration to unfold experiences and develop emerging themes, which were then analysed through structuration theory.
The research data come from two sources. First, the study returns to a previous scoping study (Harding & Boyd 2024) to explore the ethnographic data within the context of doctoral student practices and relationships (n = 12). Second, it expands this exploration with new data from surveys of doctoral students and supervisors (n = 14) drawn from a broader pool of institutions.
3.1 Digging deeper: an ethnographic exploration
This research mines data from a scoping study with two sources of data: the analysis of diarised notes from a doctoral researcher’s eight-month journey using GenAI; and a series of semi-structured interviews with both doctoral researchers and academic supervisors.
The diarised entries give a longitudinal account of a first-year doctoral student’s experiences of using GenAI intensively. These data were written at a time of great uncertainty and anxiety over progress. Out of curiosity, the student began exploring ChatGPT, using a developing series of prompts and counter-prompts. To balance this single source of data, semi-structured interviews provided access to a greater range of experiences of GenAI use. The semi-structured interviews were developed around structuration theory, looking at the practice of supervision and the structures and agency that hold this practice together. In total, seven academic supervisors and five doctoral researchers at different stages of their doctoral research, from four institutions in the UK and Ireland, were interviewed in 50-minute sessions.
Emergent themes from both sets of ethnographic data (the diarised entries and the semi-structured interview data) were explored and analysed using elements of structuration theory.
3.2 A broader exploration: a survey of supervisors and doctoral students
The initial research was expanded to include an open survey on the use of GenAI and its impact on relationships and practice among a wider range of academic institutions and practitioners. Designed to explore how GenAI is used and perceived, the questionnaires had 16 closed questions that used a five- or six-point rating scale and were designed and analysed using elements from structuration theory. For the survey questions, see the supplemental data online. Respondents were asked about their roles, which GenAI tools were used, changes in team relationships, what the GenAI tools represented, and the norms and standards around their use of GenAI tools. Issues of agency in terms of knowledge, the communication of that knowledge and constraints on its use were included. The second part of the questionnaire used five open-ended ‘fill-in-the-box’ questions and was designed to draw out information and to understand the views and experiences of the respondents.
The questionnaire was open for three weeks, hosted on Survey Monkey, distributed through one UK institution’s School of the Built Environment (not one of the original institutions), and advertised via LinkedIn and personal networks. The 14 respondents were evenly split between supervisors and doctoral researchers.
3.3 Drawing on structuration theory
This research is concerned with the changing nature of the practices and relationships within doctoral supervision, and these are considered using the themes of power, trust and knowledge identified in the literature. These themes tie in well with the elements of structuration theory—where human agency and societal structures are considered as mutually acting constructs rather than constructs that act separately on practice (Giddens 2014). Giddens’ theory is complex and nuanced, and this research merely draws upon its major elements. Here, structure (the structures that hold together the practice under consideration) is considered in terms of power, signification and legitimation, while agency is considered in terms of knowledge (reflexive and discursive) and the constraints at play. Both datasets were analysed using the same elements of structuration theory. This continued use of one theory gives coherence and structure to both phases of the research.
The following sections set out the findings from the two data sources, using the lens of structuration theory. These are followed by reflections on what the findings might signify for doctoral researchers and their supervisors. The conclusions section uses these reflections to point to the resulting challenges and opportunities for academia in leveraging GenAI in doctoral research.
4. Findings
This research aims to understand whether and how GenAI tools might impact the process of developing the doctoral journey and supervisory relationships and highlight future directions for the emerging research agenda. Rather than giving individual findings for the dataset of the research, the following section uses elements of structuration theory to explore the findings for the two sets of actors involved: doctoral students and supervisors. These findings are brought together in the reflections section to draw out key themes and implications.
To give perspective to the spread of GenAI tools used, survey respondents were asked both their role (doctoral student or supervisor) and the types of GenAI tool they used. Types of GenAI tools were classified in the survey as follows: writing assistance (e.g. Grammarly), data analysis tools (e.g. Python, BIM), synthesising and organising data tools (e.g. ChatGPT) and project management tools (e.g. Trello, Asana). All but two of the student respondents reported using GenAI tools, with most using two or three of the classification types and the majority using ChatGPT-type tools. All responding supervisors reported using multiple GenAI classification types, again with the majority using ChatGPT-type tools. Interview participants were much less prepared to discuss their use of GenAI and in general reported keeping it at a distance. There is no data-driven explanation for this differential.
It is interesting to note that the majority of discussion in the following section centred around the use of ChatGPT, although responses were invited about other types of GenAI tools. The majority of respondents (through both interview and survey) found the issues surrounding the use of tools such as ChatGPT to be particularly complex.
4.1 Doctoral students
4.1.1 Structure
Power
Students perceived power relationships within the supervision process as significant—with supervisors setting standards, asking difficult questions and having a knowledge base far exceeding that of the student. Fear of failure, fear of disappointing their supervisors and worries about making sufficient progress gave rise to anxiety before supervisory meetings, particularly when the supervisor’s style was more outcome-focused:
He’s very […] just get the work done. He reviews it and it doesn’t go beyond that.
Students described how using GenAI—in this case ChatGPT—helped to reduce this fear:
It gave me the courage to get started when I felt too stupid even to begin.
I only used ChatGPT because I was scared my supervisor would think I was stupid.
It helped me start. That is all I needed to start.
Whilst another stated:
It does not judge me like my supervisor might […] I can ask the same thing ten times and not feel embarrassed.
If I use ChatGPT as a sort of safety buffer I can better relate to what supervisor wants.
Another described ChatGPT as ‘My invisible safety net’.
Another form of fear arose from the use of GenAI itself—with doctoral researchers fearing the consequences of using the technology and being caught. Trust was often spoken about—both personal trust in the integrity of the relationship and mistrust of the data generated by GenAI.
Data from the diarised case study show how the use of GenAI shifted the power base towards empowering the student to move ahead, despite the power imbalance.
Encounters with non-communicative supervisors highlighted their significance for guidance in navigating academic challenges […] leveraging AI tools like ChatGPT for academic support steered me to a pivotal turn in my academic pursuit.
Signification
Doctoral researchers ascribed clear meanings to supervision relationships and to some extent were comfortable (or at least resigned) with the time-pressured nature of these relationships. They tended to ascribe even greater significance to the meaning of the relationship because of this.
They’re excellent [the supervision sessions] but contact would be minimal, and the expectation and responsibilities is on me […] you prepare more for them rather than going for a social chat. You’re more prepared because you want to get as much out of it as you can because it might be quite a while [before the next one].
In most cases, GenAI was not seen to fit within this meaning, and it was cast as an untrustworthy interloper.
What’s very important, I feel, is supervisor’s guidance, and ChatGPT can’t tell you what has to go into your PhD.
But it’s not thinking for itself. It’s only regenerating what has been described before.
Conversely the diarised study showed that the experience of using GenAI allowed a re-evaluation of the meaning and value of supervisory relationships.
My engagement with ChatGPT, transformed my research approach, offering a private, judgment-free platform to clarify doubts and explore complex research questions.
Legitimation
A key concern over the use of GenAI related to legitimation norms and rules. Doctoral researchers were anxious about breaching trust and being caught, and about how the use of GenAI might break the existing internalised norms and rules surrounding their research and relationships.
This was demonstrated by many of the doctoral researchers in their avoidance of, and reluctance to use, GenAI tools, or even to talk about them in supervision sessions:
I’ve never had it used it in supervisions apart from to point out how it doesn’t work. […] I am afraid of AI in other senses—supervisors and academics constantly point out its weaknesses, the danger of being corrupted by AI […] discussion is not encouraged.
I can’t see a scenario myself where I’d bring up to my supervisor about using AI, and it being positive. […] But I could be completely wrong like I don’t know what [their] opinion is.
Unspoken rules around the use of GenAI led to confusion and its covert use:
I did not know if I was allowed to use it. There were no rules, but I did not want to be accused of cheating.
I felt like I was doing something wrong even though it was helping me learn.
My supervisor has no time for GenAI particularly ChatGPT and gets very annoyed if I mention I have used it.
One student from the survey demonstrated their own concerns about the legitimate use of GenAI tools:
Generative AI is based off of huge data. […] I consider that to be highly unethical. Furthermore, […] its results are not as trustworthy as people tend to assume—the creation of false citations is one notable example. […] When people use these tools for data analysis, they open the risk of notable errors creeping in. I consider tools like Grammarly to be more benign, but even they are problematic—tending, for instance, to make people’s writing styles sound overly generic. Learning to write and properly express oneself is a crucial part of the dissertating process, and quite frankly I view people who rely heavily on those tools to be cheating.
Interestingly the diarised case showed that where time was spent understanding and developing new rules of engagement relating to the use of GenAI, a deeper understanding and faster coverage of ground was possible within the ethical permissions of doctoral research.
Engaging with ChatGPT also informed my methodological and theoretical orientations but also reinforced the significance of ethical research practices.
4.1.2 Agency
Reflexive knowledge
Doctoral researchers relied on supervisors to lead the development of their reflexive knowledge, but were also committed to building their own fund of knowledge through traditional research methods. Their views of the benefits of gaining reflexive knowledge from GenAI tools varied:
I don’t trust any of them, and they don’t write the way I write. I would write very plainly. […] I know myself that even on the best day I couldn’t write like that.
It’s more like a clever, a cleverer person than me.
I want to form my own ideas, and AI may guide me in the wrong way.
I’d be a bit terrified of going off track from the review done from literature. And then you know, essentially wasting the time.
There were clear advocates of how the tools were used as a passive instructor:
Sometimes I would use it to translate what I thought my supervisor meant—like, I understood the words, but not what they wanted from me.
I was never sure I belonged in academia until ChatGPT helped me understand my thinking.
With the exception of the diarised case, doctoral researchers invested little time in developing reflexive knowledge around GenAI. For the diarised case, the extensive use of GenAI helped to develop generalised reflexive knowledge about the landscape of research:
My engagement with ChatGPT, transformed my research approach, offering a private, judgment-free platform to clarify doubts and explore complex research questions. This interaction also alleviated the isolation I experienced in PhD studies. ChatGPT served as a digital interlocutor, aiding in the conceptualisation and refinement of my research focus.
Discursive knowledge
This describes the ability to communicate knowledge. Doctoral researchers generally found this a hurdle in their supervisory relationships as they had so many new terms and ideas to integrate into their research ‘vocabulary’, whilst striving to appear intelligent. There were clear advocates of students using GenAI tools to help develop their discursive knowledge:
I used it to rehearse my response before I emailed my supervisor—not to fake it, but to be clearer and more confident.
I needed ChatGPT to say I was on the right track before I dared share anything with my supervisor.
Constraints
Doctoral researchers in all cases mentioned the impact that lack of time and physical distance had on the relationship. In most cases this meant that the relationships were often stressed by being compressed into bite-sized episodes. Some doctoral researchers acknowledged that GenAI might play a role in pushing work further and faster by improving the quality of researcher–supervisor interactions.
Note that while there was general uniformity of response from doctoral students about the value and limitations of GenAI tools, it was noticeable that only the respondents to the online survey admitted openly to using these tools in their work. This underlines the complex nature of the debate and speaks to the continuing complexity of the structure of supervisory relationships in terms of power and legitimation.
4.2 Supervisors
4.2.1 Structure
Power
Supervisors acknowledged the disparity in power between themselves and their doctoral researchers, but considered that this power shifted throughout the doctoral research journey. They recognised that this was a developing relationship where their power diminished as the student progressed.
They also acknowledged the impact of supervisory style on the relationship:
I have some colleagues who will focus on squeezing the Ph.D. student like a lemon to get all the juice out of and get the publications. My interest is more developing them so that they can if they wanted to pursue an academic career.
No supervisors evinced concern that the use of GenAI tools eroded their power in the relationship. Fear and trust were also mentioned as part of this power dynamic, with trust forming a key part of the dynamic:
Relationships are built on trust, I think trust, is a key word here. And you know we’re talking about these people as if they’re kind of static.
Academics also pointed to a broader hesitation within the academic community about discussing or leveraging GenAI for more complex tasks owing to the stigma associated with its use. Their fear centred on two aspects: their lack of understanding of the technology, and the stigma attached by other academics to the use of GenAI.
There’s a bit of a stigma, I’d say at this moment. That’s affecting the environment.
Signification
Academic supervisors ascribed clear meanings to supervision relationships and to some extent were comfortable (or at least resigned) with the time-pressured nature of these relationships. They emphasised the importance of facilitating growth rather than merely transferring knowledge and were sceptical about GenAI’s depth and reliability for facilitating academically rigorous work and critical thinking. Academics recognised the need for different styles of supervision dependent on their students’ needs:
So my role has become very similar to a project management role, developing people, providing pastoral care, keeping them on track and through regular meetings. It’s very much trying to develop them as academics, trying to develop their techniques and trying to manage this project and see whether we can get it finished in 4 years.
I think different supervisors have different styles and will adopt those styles depending upon the student […] this person needs more support, so I’ll be a bit more prescriptive […] this person knows what they’re doing, so I’ll be a bit more of a coach.
Most mentioned concerns about quality, authenticity and detection. There was some recognition that the use of GenAI tools might affect the supervisory relationship:
If my students use AI and are honest about it I have no issue it is only when they are not honest they there are challenges.
GenAI could be better integrated into [doctoral] supervision by ensuring it was complementary to the supervisory relationship, rather than viewed as a replacement for human mentorship.
Human relationships are a key part of the supervisor–student relationship. I think if AI is used in a transparent way it improves the relationship.
Supervisors were mixed in their response to the impact of GenAI on the supervision process. There was a cautious acceptance and recognition that GenAI may be a tool that can improve the process:
[It adds to the process] where students are able to see a general overview of a topic, and, thus, are able to be more focused in their work going forward.
You know the hat I’m wearing is the guidance hat, not the policeman hat, and it hasn’t come up [yet].
Legitimation
Academics were concerned that the use of GenAI might break the existing internalised norms and rules surrounding research and relationships. Institutions are rolling out guidance and procedures covering teaching practices and requirements, but there remains little guidance on the use of this technology in doctoral research. Supervisors were secure in believing in their researchers’ integrity (which usually meant not using GenAI), although one academic was clear about dealing with breaches in integrity:
When students utilize GenAI in unethical ways, it requires me to step into a role that I rarely need to step into: Calling out inappropriate behaviour (that often the supervisee knows is inappropriate).
Participants raised the need for clear policies and guidelines to ensure the responsible use of GenAI in research. Interestingly, participants voiced their general trust in their doctoral researchers’ integrity and judgement regarding self-policing in the use of GenAI, although there were some more cautious voices:
[We need] clear policies established by the institution and to openly acknowledge when AI is being used rather than being a ‘secret’.
I want full transparency from my research students.
It is hard to know now what the student’s writing is and what the machine’s is.
We should be careful in terms of how we encourage its use, and we should be specifically careful in terms of how it might be, detrimental to academic rigor.
One respondent summarised the rather covert presence of GenAI in academia:
I have not had a single conversation with any of my PhD students, in the last year at least, and before that about the use of AI in terms of in relation to their work. They haven’t raised a question about it. I haven’t given advice about it.
4.2.2 Agency
Reflexive knowledge
Supervisors acknowledged their role in developing doctoral researchers' reflexive knowledge, but recognised too that students found it difficult to become independent thinkers in unfamiliar areas. Supervisors could see the value of using GenAI to develop student reflexive knowledge:
I think it helps my researchers understand the main areas thus allowing me as supervisor to develop this.
[By using GenAI,] my student understands the main components of a PhD which allows for deeper discussion in meeting.
Most participants tended to refer to GenAI in a dismissive and negative context and had not encouraged its use, but admitted to being under-informed about GenAI and its capabilities. Those who had dabbled in the technology had only used the free ChatGPT 3.5 version and described themselves as too time-poor to explore its potential. Most supervisors were unversed in GenAI tools and, although curious, had not developed any deep knowledge of the tools themselves, expressing uncertainty about how these tools would challenge the status quo. This gave rise to concerns about their ability to give feedback and comment at the right level:
If a student submits a polished draft every time but will not talk about how they got there, what does that mean for feedback?
Lack of familiarity with the tools and concern over ethics, privacy, etc. prevent me from using GenAI in my research.
Am I doing anything [now] that I wasn’t doing [before GenAI]? I don’t think I might be, which may be quite surprising actually, because maybe I ought to be doing something.
Academics also acknowledged how guiding students to use GenAI could save the time spent transmitting often repetitive basic knowledge and terminology:
You can kind of get into this thing of haven’t I told you this before? You know we just use that thing about epistemology. Haven’t we had this discussion before?
One academic expressed a rather more negative position on the use of GenAI:
I see no need for GenAI, except as a tool to identify cells or structures in imaging (relevant to my field) ensuring that a larger bulk of data can be generated faster.
Discursive knowledge
Academics generally understood how GenAI could help students communicate their work, and so benefit more from input during supervision sessions, but were unsure how to gauge the true level of their students' understanding:
My student was able to use AI to learn about the points and then was able to discuss concepts easier with myself.
I can tell when students have had help—they are clearer, more confident. However, they do not always say how.
[The key challenge will be] that it should not be used to write a thesis. The contribution still needs to [be] original and to come from the student.
Constraints
The main constraint cited by supervisors was the time-poor nature of their work, and they could see the benefits of using GenAI here, although they also acknowledged that they themselves had little time to invest in understanding and mastering the GenAI tools:
It is a tool to be used, and as a supervisor I’m quite thankful if they are all learning about epistemology and ontology, because then I don’t have to repeat it for the millionth time, you know, when I might only have 5 min before my next meeting to do it. So if they come to the meetings better equipped, then we can tackle the actual projects that they’re getting into.
Note that all supervisor respondents to the questionnaire reported themselves as either moderately or very comfortable with the use of GenAI by their doctoral students.
5. Reflections
The data-collection process captured instances where GenAI’s contributions were unacknowledged, subtly shifting relational dynamics and processes. Conversely, it revealed cases in which GenAI was actively integrated into workflows, enhancing collaboration and transparency.
Time constraints in modern academia make it increasingly hard for supervisors to find the time to enact the three roles of supervision: academic supervision, coaching and mentoring (Wadee et al. 2010). Before the advent of GenAI, it was not unusual for doctoral researchers to enlist external coaching and mentoring support—either informally (social networks) or formally (paid coaches)—to spare embarrassment and to translate supervisors' technical demands. There is evidence from the research that doctoral researchers are filling this gap with GenAI—particularly ChatGPT—and that the enrolment of this surrogate mentor is largely covert. When the use of GenAI is unacknowledged, its impact on the student–supervisor relationship becomes very powerful, fostering autonomy but risking the sidelining of relational depth and the developmental aspects of supervisory relationships. In effect this imbues GenAI with a rather sinister power behind the scenes. These findings on the relationship between trust and power support Manathunga's (2007) ideas on the masking of power behind other constructs.
Conversely, when GenAI is effectively given 'a seat at the table' in the supervisory dynamic, it can become a transparent and collaborative resource that complements human expertise and has the potential to make the research more effective by improving the quality of researcher–supervisor interactions and covering more ground faster. The diarised study also showed that using GenAI allowed a re-evaluation of the meaning and value of supervisory relationships, supporting Bin-Nashwan et al.'s (2023) assertion of the positive impacts of GenAI on researcher growth. Both doctoral researchers and supervisors were very concerned about how the use of GenAI might break the existing internalised norms and rules surrounding their research and relationships.
Doctoral researchers relied on supervisors to lead the development of their reflexive knowledge, but were also committed to developing their own fund of knowledge through traditional research methods. Neither supervisors nor doctoral researchers generally invested time in developing reflexive knowledge around GenAI. For the diarised case, the extensive use of GenAI helped to develop generalised reflexive knowledge about the landscape of research and further supports Cowling et al.’s (2023) assertions on improved student autonomy.
Doctoral researchers generally found communicating their knowledge to be a hurdle in their supervisory relationships. In the diarised study the researcher interrogated ChatGPT relentlessly and systematically to understand concepts and terms and to internalise their understanding. Supervisors articulated the view that GenAI could take away the researcher's voice and so affect their student's future publication potential.
This research shows that, as Harding & Boyd (2024) suggest, students increasingly frame GenAI as an invisible tutor: present at drafting sessions but absent from the official narrative of supervision. This surrogate emotional and pedagogic support gives GenAI technologies unrecognised agency in shaping and mediating doctoral research practice, and this has important implications for both the formation of the researcher and their relationships with their supervisors. If GenAI is shaping academic practice so thoroughly, failing to acknowledge its presence may represent an oversight and an institutional blind spot.
6. Conclusions
The starting point for this research was that the use of generative artificial intelligence (GenAI) in research is at present largely hidden—and this rather covert situation feeds confusion and mistrust, rather than bringing into the open the many concerns felt by institutions, academics and doctoral researchers. By exploring these issues and challenges, a wider perspective is provided on what its use might mean for relationships and practices between doctoral research students and their supervisors, and how these may have an impact on their future academic publishing. Three key themes emerged from the research: the influence of GenAI tools on relationships; the influence of GenAI on practices and processes; and ethical considerations and challenges:
Influence on relationships
The use of GenAI tools has the potential to reshape relationships and power dynamics in doctoral research. Researchers can gain greater independence through using GenAI tools, and this can challenge existing hierarchies. The key finding from this research is that doctoral students are (often covertly) using GenAI tools to support their emotional and discipline-specific development. This risks blurring the boundaries between supervisory knowledge and surrogate machine-learning support and could lead to presumed knowledge and imperfect understandings.
Influence on practices and processes
GenAI tools have the potential to significantly enhance productivity by automating repetitive tasks, allowing users to allocate more time to advanced intellectual or strategic activities. These tools can simultaneously enable autonomy and challenge institutional norms, with participants reflecting on the lack of institutional encouragement or regulation of GenAI use. GenAI tools can be used to streamline literature reviews, refine drafts, synthesise data, and correct grammar and syntax, but reported adoption of these tools is surprisingly patchy. Once supervisors are able to direct their doctoral students to ethical and accepted uses of these tools, doctoral students could perhaps be expected to leverage them more, pushing harder and reaching further into their research whilst removing the need to keep their use hidden.
Ethical considerations and challenges
Ethical concerns surrounding originality, accountability and data security underscore the normative structures that guide GenAI’s integration. Doctoral researchers and their supervisors express apprehension about an overreliance on GenAI, fearing it may risk data security and academic integrity, with a concern about the effect of GenAI on developing the authentic voice of the researcher. Above all, greater clarity and courage from institutions to provide solid guidance and procedures for the use of GenAI tools is sorely needed.
Supervision, at its best, is not about control: it is about accompaniment. GenAI cannot offer the mentorship, critical perspective or human warmth of a good supervisor. However, its uses in this study do show where doctoral researchers need those human qualities the most: students can be afraid to ask, unsure how to begin or unable to interpret what they have been told. These are signals that GenAI may have a legitimate role to play in supporting supervision practices and doctoral researcher development within the context of clear, ethical and well-thought-out institutional guidance.
Although this study has a small dataset, and is limited in its scope, it does bring into the open the many concerns felt by academics and doctoral researchers. Whilst important questions about the accuracy and potential of GenAI remain unanswered, the study brings into focus some clear implications for doctoral research, its supervisory practices and broader research processes.
The study points to several interesting opportunities for further research. These include developing practices for legitimising the use of GenAI in doctoral research, exploring the phenomenon of academic stigma associated with GenAI in research, and exploring issues of developing unique author voices in the context of academic writing. Additionally, research into understanding differences in uptake and the use of GenAI tools between academia and industry would be a useful extension of the research.
In conclusion, this study has unpacked the complex impacts of GenAI on relationships and practice within doctoral research. The findings reflect an understanding by participants of the changing landscape of academic practices in the age of GenAI. They point towards a future where GenAI tools could play a significant role in shaping these practices, but they also raise important questions about the tools' effect on relationships and the openness of their use. The research points to key areas of concern in the concealment of GenAI in academia, and shows how doctoral researchers and their supervisors could benefit from collaboration in the open enrolment of GenAI into practice.
Acknowledgements
The authors thank all the participants in their research for their open and enthusiastic contributions.
AI declaration
GenAI has not been used in this paper—neither in the synthesis of data nor in the development of text. The authors’ use of AI was restricted to standard search engines and checks for spelling and grammar.
Author contributions
Both authors contributed to the research and authorship of the submission in equal parts.
Competing interests
The authors have no competing interests to declare.
Data accessibility
Given the confidential nature of the research and assurances of anonymity given to participants, data are not made available to readers. Please contact the corresponding author for more information.
Ethical approval
The research was granted ethical consent by the University of the Built Environment (number 240307).
Supplemental data
Supplemental data for this article can be accessed at: https://doi.org/10.5334/bc.560.s1
