Abstract
Introduction: Large language models (LLMs), including ChatGPT, are estimated to already be widely used in academic paper writing. This study examined whether certain words and phrases reported as frequently used by LLMs have increased in frequency in the medical literature, comparing their trends with those of common academic expressions.
Methods: A structured literature review identified 135 potentially AI-influenced terms from 15 studies documenting LLM vocabulary patterns. For comparison, 84 common academic phrases in medical research served as controls. PubMed records from 2000 to 2024 were analyzed to track the frequency of these terms. Usage trends were normalized using a modified Z-score transformation.
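The abstract does not give the exact form of the transformation; the sketch below assumes the standard Iglewicz–Hoaglin modified Z-score, in which values at or above 3.5 flag unusually high years (matching the threshold used in the Results). The yearly frequency values are illustrative, not taken from the study.

```python
import numpy as np

def modified_z_scores(counts):
    """Modified Z-scores (Iglewicz & Hoaglin) for a series of yearly
    normalized term frequencies; scores >= 3.5 mark unusual years."""
    counts = np.asarray(counts, dtype=float)
    median = np.median(counts)
    mad = np.median(np.abs(counts - median))  # median absolute deviation
    if mad == 0:
        return np.zeros_like(counts)          # avoid division by zero for flat series
    return 0.6745 * (counts - median) / mad

# Illustrative per-10,000-article frequencies of one term, 2000-2024
freq = [1.1, 1.0, 1.2, 1.1, 1.3, 1.2, 1.2, 1.4, 1.3, 1.5,
        1.4, 1.5, 1.6, 1.5, 1.7, 1.6, 1.8, 1.7, 1.9, 2.0,
        2.3, 2.6, 3.0, 5.5, 9.8]
print(modified_z_scores(freq)[-1])  # score for 2024; >= 3.5 would count as an increase
```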
Results: Of the 135 potentially AI-influenced terms, 103 showed meaningful increases (modified Z-score ≥3.5) in 2024. Terms with the highest increases included “delve,” “underscore,” “primarily,” “meticulous,” and “boast.” A linear mixed-effects model revealed significantly higher usage of potentially AI-influenced terms than of control phrases (β = 0.655, p < 0.001). Notably, these terms began increasing in 2020, preceding ChatGPT’s 2022 release, with marked acceleration in 2023–2024.
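The abstract does not describe the model specification behind the reported β. A minimal sketch, assuming a long-format table with one row per term-year, a group indicator (AI-influenced vs. control), and a random intercept per term, might look like the following; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per term per year with a
# normalized usage score and a group flag. Column names are assumptions,
# not the study's actual variable names.
df = pd.read_csv("term_usage_by_year.csv")  # columns: term, year, group, usage

model = smf.mixedlm(
    "usage ~ group * year",   # fixed effects: group, time, and their interaction
    data=df,
    groups=df["term"],        # random intercept for each term
)
result = model.fit()
print(result.summary())       # the group (or interaction) coefficient corresponds
                              # to the kind of effect reported as beta
```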
Discussion: Certain words and phrases have become more common in the medical literature since ChatGPT’s introduction. However, many of these terms were already increasing before 2022, suggesting that the emergence of LLMs amplified existing trends rather than created entirely new patterns. By understanding which terms are overused by AI, medical educators and researchers can promote more careful editing of AI-assisted drafts and help maintain diverse vocabulary in scientific writing.
