Abstract
Background: Artificial intelligence (AI) is now applied across many areas of healthcare, but its role in occupational medicine remains unclear. The available literature is scattered across fields that are only loosely connected and is rarely examined from an occupational-health perspective, which makes it difficult to judge how such technologies could support prevention, follow-up, or workplace adjustments.
Aim: This paper explores how AI has been used or proposed to help identify early changes in workers’ health needs and to guide the planning of workplace accommodations for those with chronic illnesses, with particular attention to evidence most relevant to occupational medicine.
Methods: A narrative review was conducted using predefined questions and eligibility criteria, and the material was organised thematically. Searches in major databases and several professional sources identified studies on AI in workplace health, monitoring of chronic conditions, return-to-work models, and accommodation planning. The selected publications were analysed along four themes: early identification of needs, workplace accommodation, potential contributions to occupational medicine, and the methodological and ethical limits that influence current developments.
Results: The included studies describe a range of AI tools, including real-time monitoring systems, predictive models, wearables, decision-support applications, and digital platforms for self-management. These have been used to detect changes in functional capacity and to support more tailored workplace adjustments. Reported benefits include improved surveillance, more consistent diagnostic support, and some organisational advantages. However, the evidence remains limited: few tools have been tested under routine workplace conditions, and concerns about data quality, bias, privacy, confidentiality, and opaque algorithms persist.
Conclusions: AI is being explored as a complementary support for early needs detection and workplace accommodation, but current evidence is insufficient to draw firm conclusions about its practical impact. Progress requires stronger validation, greater algorithmic transparency, reliable ethical safeguards, and continued interdisciplinary collaboration.