
Artificial Intelligence and the Reconfiguration of Organizational Communication in the Context of the Knowledge Society

Open Access | Sep 2025

Introduction

The knowledge society, defined by the centrality of information, innovation, and intangible assets, has profoundly redefined how organizations operate and communicate. Within this dynamic environment, artificial intelligence has emerged not merely as a set of technical tools but as a transformative force reshaping communicative practices across sectors. AI systems are increasingly woven into the fabric of organizational processes, from automated content generation and smart data analytics to predictive modelling and real-time audience interaction. These shifts are not just about accelerating tasks; they reflect a deeper transition toward data-driven, adaptive, and personalized modes of engagement. In particular, technologies such as natural language processing, machine learning algorithms, and AI-powered chatbots are enabling communicators to tailor messages with unprecedented precision. They facilitate audience segmentation at scale, automate routine interactions, and generate insights that inform strategic decision-making. For organizations navigating the pressures of competitiveness and informational complexity inherent in the knowledge economy, AI offers the potential to enhance responsiveness and relevance in their communication efforts (Beane & Anthony, 2024). Yet alongside these capabilities comes a set of challenges that demand critical reflection.

The adoption of AI in communication settings surfaces complex issues related to privacy, transparency, and the erosion of human nuance in digital interactions. Automated communication systems, while efficient, risk oversimplifying relational dynamics and undermining trust when they are perceived as impersonal or manipulative. Moreover, ethical considerations about bias in algorithmic outputs and the responsible use of user data remain central to any discussion on AI integration (Korinek & Suh, 2024).

Another key consideration is the evolving skillset required of communication professionals. In order to work effectively with AI systems, practitioners must go beyond technical literacy to cultivate a deeper understanding of how these technologies shape meaning, affect perception, and influence organizational culture. Media literacy, in this context, becomes a foundational competence, not simply about decoding messages, but about interrogating the tools and processes through which messages are now created and delivered.

In light of the growing integration of artificial intelligence (AI) tools into communication and public relations practices, this research seeks to address a critical gap in existing literature: the lack of empirical insight into how individual-level psychological and experiential variables shape the adoption and perception of AI technologies in these fields. While previous studies have examined the operational advantages of AI, such as automation, efficiency gains, and enhanced targeting, few have explored how factors like media literacy, media locus of control, and satisfaction influence engagement with these tools in professional communication contexts. This research responds to this gap by investigating the extent to which communication professionals’ cognitive and affective dispositions impact the frequency and manner of AI usage, particularly in environments defined by knowledge intensity and digital transformation. By shifting the analytical lens from a purely technological focus to one that incorporates human factors, the study contributes to a more holistic understanding of AI’s role in reshaping communicative dynamics within the knowledge society.

To explore these dimensions, the study adopts a quantitative survey-based methodology aimed at generating structured and comparable data across a sample of 227 communication and public relations professionals. This methodological approach is well-suited to examining relational patterns among key constructs such as media literacy, perceived accessibility of AI tools, user satisfaction, and the complexity of knowledge-intensive tasks. The research design allows for the analysis of non-normally distributed variables using non-parametric statistical techniques, particularly Spearman’s rho correlation, enabling the identification of significant associations without requiring data normality. The inclusion of professionals from diverse roles, ranging from corporate communication and PR consultancy to digital marketing, ensures a broad and relevant perspective on how AI is perceived and utilized across sectors. In doing so, the study not only tests five theoretically grounded hypotheses but also provides evidence-based insights that can inform both organizational strategy and future academic inquiry into human-technology interaction in the digital communication sphere.

This paper seeks to explore the intersection of artificial intelligence and organizational communication through the lens of the knowledge society. It investigates how communication professionals are engaging with AI technologies in their daily practices, how they perceive the balance between automation and authenticity, and how media literacy mediates these interactions. At a broader level, the research aims to contribute to ongoing debates about how AI is redefining the communicative architecture of organizations, and what this means for the future of human interaction in a digitalized world.

Literature review

The growing presence of artificial intelligence (AI) across organizational functions has notably reshaped how communication strategies are designed and implemented. Within digital campaigns, AI enhances operational efficiency, supports accurate audience segmentation, and enables personalized message delivery. In the domains of marketing and public relations, AI integration facilitates the automation of repetitive tasks such as content production, performance monitoring, and interaction management, allowing professionals to shift focus toward strategic decision-making and innovation. As digital transformation accelerates, understanding how AI supports and modifies communication processes has become a necessary consideration for both academic inquiry and industry application.

Strategic planning remains the cornerstone of any successful communication effort, as it involves the systematic identification of audience-specific challenges and potential opportunities (Tichindelean, 2015). Within this framework, strategy acts as the foundational layer upon which all subsequent objectives and tactics are constructed. The effective use of AI in this context is directly linked to the existence of a well-defined strategic approach. AI tools function best when communication is guided by a coherent plan that aligns objectives with audience insights, enabling professionals to implement data-informed, audience-centric messaging frameworks. Through AI-enabled segmentation, communicators can classify and analyze consumer behavior, crafting messages that are more likely to generate engagement and conversion.

Among the most significant contributions of AI to communication is its ability to enhance content personalization. By leveraging technologies such as machine learning and natural language processing (NLP), organizations can process large volumes of user data to produce customized messages that align with recipients’ preferences and behaviors (Hegner-Kakar, Richter, & Ringle, 2018). This shift toward hyper-personalized communication has had a substantial impact in areas like content marketing, where AI-driven systems not only generate individualized content but also identify the most effective channels and timings for message delivery. The use of predictive analytics further strengthens personalization, as it enables the anticipation of user responses and the continuous refinement of communication strategies based on data feedback loops.

The increasing adoption of AI in communication is closely tied to the broader digitalization of society and the exponential growth of big data. Digital platforms generate immense quantities of behavioral and interactional data, providing the raw input needed for AI systems to function effectively. AI tools can now analyze engagement metrics, detect sentiment trends, and forecast consumer actions, thereby enabling more adaptive and responsive communication planning (Frost, Fox, & Strauss, 2018). These capabilities support data-driven decision-making and contribute to the creation of more agile communication strategies that evolve in real time in response to audience behavior.

Another important aspect of AI integration in communication involves the transformation of audience engagement practices. The deployment of AI-powered chatbots and virtual assistants has significantly altered how organizations interact with stakeholders. These tools allow for immediate, automated interactions, reducing response times and improving user experience. In public relations and customer service settings, chatbots are now central to frontline communication, capable of handling a wide range of queries without human intervention. Additionally, sentiment analysis powered by AI offers a more nuanced understanding of public attitudes and emotional reactions, enabling organizations to adapt their messaging in real time and manage reputation more proactively (Hänninen & Karjaluoto, 2017).

Beyond engagement, AI also contributes to greater cost-efficiency in communication operations. Automating functions such as content scheduling, data analytics, and report generation reduces the workload on communication teams and minimizes the need for manual input. As Kotler and Armstrong (2018) observe, organizations that implement AI in their communication processes benefit not only from improved targeting and engagement but also from considerable reductions in operational costs. These economic advantages strengthen the case for integrating AI into modern communication infrastructures.

However, despite its evident benefits, AI adoption presents several challenges. Ethical concerns regarding transparency, algorithmic bias, and surveillance are increasingly prominent in the discourse on AI in communication. Data privacy regulations and the need for responsible data handling introduce additional complexity for communication professionals. Moreover, the use of AI raises questions about the erosion of human authenticity in message delivery, a concern especially relevant in contexts that rely on emotional connection and trust. As AI continues to evolve, finding the right balance between automation and human oversight will be essential to maintaining the credibility, ethical integrity, and relational quality of communication strategies.

The growing integration of artificial intelligence (AI) into organizational communication reflects a profound shift in the structure and function of knowledge-driven enterprises. Within the context of the knowledge society and the knowledge economy, communication is not merely about information transmission; it is increasingly a matter of interpreting, generating, and managing intangible assets. Foundational contributions to the economic understanding of knowledge work, particularly the models of Garicano (2000) and Garicano and Rossi-Hansberg (2004, 2006), underscore the hierarchical organization of tacit and codified knowledge. These models establish the framework through which firms allocate problem-solving tasks, optimize expertise distribution, and respond to information asymmetries. Building on these theoretical underpinnings, recent studies have turned to examine how AI reconfigures these traditional structures by enabling machines to handle tasks that previously required human cognitive effort.

The integration of artificial intelligence into organizational communication cannot be fully understood without situating it within the broader context of the knowledge society. The term "knowledge society" refers to a socio-economic configuration in which the production, distribution, and application of knowledge become the primary drivers of development, innovation, and competitiveness. In contrast to earlier industrial paradigms based on material capital or physical labor, the knowledge society values intangible assets such as expertise, information, data, and relational capital as the foundations of organizational performance and societal transformation. As Del Giudice, Scuotto, and Papa (2023) argue, this shift has been significantly accelerated by the rise of digital infrastructures and smart technologies, particularly artificial intelligence, which enables organizations to manage and mobilize knowledge in ways previously unimaginable.

Within this framework, AI plays a dual role: both as a facilitator of knowledge work and as a reconfigurator of what constitutes knowledge itself. AI systems, especially those capable of processing natural language and generating content, are not merely tools for information retrieval or automation; they function as agents capable of interpreting, synthesizing, and even producing knowledge. This raises fundamental epistemological questions about the boundaries between human cognition and machine reasoning. As Peterson (2025) suggests, the growing reliance on AI in knowledge-intensive domains risks triggering a phenomenon he calls "knowledge collapse", a condition in which the human ability to critically evaluate, contextualize, and challenge machine-generated knowledge is eroded by overdependence on automated systems. In this context, the knowledge society becomes paradoxically vulnerable: the very tools that were designed to expand our cognitive capacity might undermine the foundations of critical thought and informed decision-making.

Nevertheless, the evolution toward a digitally mediated knowledge society is not merely defined by risks; it also opens up new opportunities for organizational learning and strategic differentiation. Beane and Anthony (2024) note that in times of technological disruption, senior professionals often reinvent their roles by developing new forms of expertise that allow them to engage with, rather than resist, innovation. This observation holds particular relevance in communication environments, where AI can absorb and operationalize tacit knowledge, once considered non-transferable, through training on large datasets. As AI increasingly mediates communication flows, professionals must not only master technical competencies but also engage in continuous epistemic reflexivity: questioning the origins, validity, and consequences of machine-assisted knowledge production.

The knowledge society also requires a rethinking of institutional and communicative practices. Traditional boundaries between sender and receiver, expert and audience, human and machine are blurred as knowledge circulates across digital ecosystems. In this landscape, communicators are no longer solely content creators; they are also curators, validators, and mediators of algorithmically shaped messages. Bachmann (2019) argues that the automation of communication processes, when uncritically adopted, can induce a form of moral disengagement or “moral blindness,” whereby ethical scrutiny is displaced by efficiency imperatives. The speed and scale of AI-generated content can obscure issues of bias, manipulation, and misinformation, particularly when organizations rely on opaque models whose decision-making processes remain inaccessible to most users (Russell & Norvig, 2009; Zerfass, Hagelstein, & Tench, 2020). In a society increasingly dependent on automated knowledge flows, the need for transparency and accountability in AI-assisted communication becomes not merely a regulatory concern but a cultural imperative.

Furthermore, AI’s role in the knowledge society must be interpreted through the lens of power and access. Knowledge, while theoretically abundant in the digital age, is not evenly distributed. The ability to interpret and act upon AI-generated insights depends on a range of contextual factors, including digital infrastructure, media literacy, organizational culture, and individual motivation. Zong and Guan (2025) emphasize that the effectiveness of AI-driven analytics in industry depends not only on the sophistication of the models but also on the readiness of human actors to understand and apply the insights generated. Similarly, Duan, Edwards, and Dwivedi (2019) point out that the full potential of AI in decision-making is realized only when there is a high level of synergy between machine capabilities and human judgment, underscoring the need for hybrid intelligence rather than full automation.

Finally, the emergence of foundation models and general-purpose AI further amplifies the centrality of knowledge orchestration in organizations. The capacity of these models to perform across domains invites new strategies for standardizing communication processes, optimizing knowledge flows, and reducing operational fragmentation. However, as Berger, Cai, Qiu, and Shen (2024) caution, these transformations also generate tension between technological efficiency and professional identity. Employees may simultaneously experience empowerment through augmentation and dislocation due to the shifting valuation of their cognitive contributions. In communication fields, where context sensitivity, emotional nuance, and relational judgment remain critical, this duality poses complex challenges that demand thoughtful organizational responses.

Contemporary research recognizes AI not as a mere extension of automation but as a transformative mechanism that reshapes the very fabric of communication and decision-making. Beane and Anthony (2024) argue that senior professionals often respond to technological disruption by developing new practical expertise and defending their roles, a dynamic that becomes increasingly complex in the face of AI systems capable of performing cognitive, interpretive tasks. Unlike earlier technological tools, modern AI can absorb tacit knowledge, previously seen as non-transferable, and deploy it at scale through model training and inference. As Goldfarb and Tucker (2019) point out, digital information is nonrival and scalable, which means that the once individual-bound nature of tacit expertise can now be embedded within machine-learning systems.

This evolution is evident in the rise of foundation models such as GPT-4, Claude Sonnet, and Gemini. These large-scale, pre-trained models can be fine-tuned for various organizational tasks, from content creation and internal documentation to client-facing communication. Their versatility undermines the traditional belief in the superiority of highly customized solutions and highlights a trend toward homogenization across industries (Li et al., 2023). The adoption of general-purpose AI by companies, often in lieu of developing proprietary tools, underscores a new economic calculus, wherein access to shared AI infrastructures reduces costs and accelerates deployment. This standardization also invites new questions about differentiation, brand authenticity, and the erosion of organizational uniqueness in communication strategies.

AI's influence extends beyond task automation. Berger et al. (2024) show that generative AI adoption induces both optimistic and defensive responses among employees, reshaping expectations around roles and productivity. In the communication sphere, this duality manifests in the way human actors adjust their functions, delegating routine exchanges to bots while focusing on high-stakes interactions. This dynamic reallocation of labor suggests a shifting boundary between automated and human-mediated communication. Moreover, Peng et al. (2023) provide evidence that AI-enhanced workflows can significantly improve output quality and speed, especially in knowledge-intensive domains such as software development and research, which have strong parallels in communication-intensive professions.

At a more strategic level, Grant (1996) and Nonaka (1991) have long emphasized that firms compete through their ability to integrate, coordinate, and apply knowledge. The infusion of AI into these processes does not simply add another layer of tools but challenges the firm's epistemological architecture. AI systems, especially when autonomous, begin to act not just as instruments but as participants in organizational routines. Korinek and Suh (2024) argue that this shift necessitates a reconceptualization of AI systems as economic agents, potentially capable of interacting with markets, contracts, and internal stakeholders with minimal human intervention. This reconceptualization brings with it a host of concerns about transparency, accountability, and ethical governance. While Goldman Sachs (2023) projects massive productivity gains and task transformation across nearly two-thirds of occupations, these benefits come with the risk of deskilling, job displacement, and growing knowledge asymmetries within organizations. In response, organizations are called to foster not just technical fluency but also media literacy: the ability to critically engage with AI-generated content and understand its limitations. This aligns with NVIDIA's (2025) articulation of "Physical AI," where digital reasoning must align with real-world constraints, contextual understanding, and human oversight.

In essence, the literature suggests that AI is altering the foundations of how organizations communicate internally and externally, redefining roles and introducing new forms of asymmetry in information processing and access. The blending of tacit and codifiable knowledge into a hybrid digital format presents novel opportunities for scalability and efficiency yet simultaneously disrupts established communication hierarchies and expectations. As firms continue to embed AI agents into their operations, the challenge will lie in balancing the efficiencies of automation with the imperatives of human connection, ethical integrity, and sustainable organizational identity.

An essential theoretical construct underpinning this study is Media Locus of Control (MLOC), which represents an adaptation of the broader psychological theory of Locus of Control (LOC) to the context of media influence. LOC, originally developed by Rotter, describes an individual’s belief about the extent to which they have control over events in their life. Those with an internal LOC tend to believe that outcomes result from their own actions, efforts, or decisions, while individuals with an external LOC attribute outcomes to external forces such as fate, luck, or powerful institutions (Hamzah & Othman, 2023). MLOC narrows this focus to examine how individuals perceive the role of media in shaping their knowledge, beliefs, and behaviors. It captures the degree to which a person believes they can control or critically assess the influence of media content on their lives, distinguishing between those who see themselves as active, discerning consumers and those who view media as an overwhelming, uncontrollable influence.

Scholarly research on MLOC has increasingly highlighted its relevance in understanding how individuals engage with media in the digital age. Those with a high internal MLOC tend to evaluate news content critically, cross-check sources, and take deliberate action to inform themselves using multiple perspectives. In contrast, individuals with a high external MLOC often perceive themselves as passive recipients of information, feeling subject to the persuasive or even manipulative power of media messages (Xiao & Yang, 2024). This distinction has important implications for understanding behaviors such as susceptibility to misinformation, engagement with social media, and even levels of trust in journalistic institutions. Xiao and Yang (2024) demonstrated that MLOC, particularly when combined with media literacy, plays a significant role in shaping how people identify, reject, or propagate false information. The internal MLOC construct can thus be interpreted as a predictor of critical media engagement, and its integration into the current research model allows for a deeper understanding of why some professionals are more likely to use AI tools reflectively and strategically than others.

The decision to include MLOC in this study is further supported by evidence that it mediates various behavioral and cognitive outcomes related to digital communication. As Hamzah and Othman (2023) note in their work on psychological constructs in professional settings, individuals with an internal LOC, and by extension, internal MLOC, are more likely to demonstrate entrepreneurial and adaptive behavior in complex environments. Applying this logic to communication professionals, it can be hypothesized that those with internal MLOC are not only more confident in handling media flows but also more proactive in adopting emerging technologies such as AI, which require a critical and autonomous mindset. MLOC thus functions as both a psychological trait and a socio-cognitive filter through which communication professionals assess technological utility, credibility, and ethical responsibility.

Closely related to MLOC and equally critical to this study is the concept of media literacy. In its broadest sense, media literacy refers to the ability to access, analyze, evaluate, create, and act upon information across various forms of media. It involves understanding how media content is constructed, the intentions behind it, and its effects on individuals and society. Far from being a passive skill, media literacy is inherently participatory and reflexive, encompassing both critical thinking and the capacity for active media production (Xiao & Yang, 2024). In today's digitally saturated environment, where content is generated and disseminated rapidly through algorithmically curated platforms, media literacy has emerged as a fundamental competency for informed and responsible participation in the public sphere.

Recent studies emphasize that media literacy is not only about recognizing false or biased information but also about understanding the infrastructures and technologies that shape how messages are delivered. Peterson (2025), for instance, argues that the growing reliance on AI-driven communication systems has led to a “knowledge collapse,” where the abundance of information paradoxically undermines individuals’ ability to discern credibility and synthesize meaning. In such a context, media literacy must evolve beyond traditional critical reading to include algorithmic awareness, data ethics, and AI literacy. This extension of media literacy is particularly relevant to communication professionals, who are increasingly expected to use AI tools like ChatGPT, Jasper, or Gemini in content creation, audience targeting, and message personalization. The ability to engage with these tools critically, not just operationally, requires a high level of media literacy that encompasses both interpretive skill and technological fluency (Lee & Park, 2023).

Moreover, media literacy has been found to correlate positively with the ability to adapt to emerging communication technologies. As Del Giudice, Scuotto, and Papa (2023) note in their discussion on Society 5.0 and knowledge management, digital transformation is not only technological but epistemological. In this new landscape, knowledge is co-constructed between humans and machines, and the lines between content producers and consumers are increasingly blurred. This makes media literacy a strategic competency for professionals operating in knowledge-intensive environments, such as public relations, corporate communication, and digital media. Those who possess advanced media literacy are better equipped to interpret AI-generated content, detect embedded biases, and ensure ethical message delivery, capabilities that are central to maintaining organizational credibility and public trust.

In addition, the role of media literacy as a predictor of AI engagement is increasingly supported by empirical studies. For example, Lee and Park (2023) found that users with higher levels of ChatGPT literacy, an applied form of media literacy, reported significantly higher satisfaction with AI use. This satisfaction was mediated by their intrinsic motivations and ability to set informed expectations about what AI tools can and cannot do. Their findings affirm the notion that media literacy is not a static skill but an adaptive resource that enables users to navigate evolving technological ecosystems. Similarly, Zong and Guan (2025) emphasize that AI-driven data analytics and predictive technologies are most effective when users are capable of interpreting and acting upon machine-generated outputs with a degree of critical oversight, a capability fundamentally linked to media literacy.

Together, MLOC and media literacy form the psychological and cognitive foundation upon which this study is built. They explain not only why communication professionals engage with AI tools but also how they approach such tools, as active agents capable of evaluation and adaptation, or as passive recipients overwhelmed by technological complexity. By integrating these two constructs, the study moves beyond simplistic models of technology acceptance and offers a more layered understanding of the human dimensions shaping AI integration in organizational communication.

In light of these developments, this study explores how AI tools are currently used in the execution of communication campaigns, particularly in areas such as audience segmentation, message personalization, and efficiency optimization. By focusing on these dimensions, the research contributes to a more comprehensive understanding of AI’s impact on communication strategy and offers actionable insights for professionals navigating digital transformation in their organizations.

Research framework
Research aim and objectives

The purpose of this study was to examine how the adoption of artificial intelligence (AI) tools in communication and public relations contributed to the broader reconfiguration of organizational communication within the evolving landscape of the knowledge society and the knowledge economy. Specifically, the study explored the interrelations between media literacy, media locus of control, and the usage patterns of AI software among communication and public relations professionals. In the knowledge economy, where intangible assets, information flows, and data-driven decisions are central to value creation, the integration of AI technologies into communication practices has become not only a matter of operational optimization but also a strategic factor in knowledge generation, interpretation, and dissemination. The study further assessed how individual perceptions of accessibility and user satisfaction influenced engagement with AI-driven tools, while also examining the extent to which knowledge-intensive roles within the field correlated with the frequency and manner of AI software adoption.

Research hypotheses
  • H1:

    There is a significant positive relationship between media literacy and AI software usage among communication professionals. This hypothesis assumes that professionals with higher media literacy levels are more likely to effectively integrate AI applications into their communication strategies, critically assess AI-generated content, and optimize AI-driven tools for enhanced message precision.

  • H2:

    There is a significant positive correlation between media locus of control and AI software usage. This hypothesis assumes that communication professionals with a higher internal media locus of control, who perceive themselves as having greater influence over their media exposure, are more likely to adopt AI tools strategically, ensuring responsible and accurate content dissemination.

  • H3:

    There is a significant positive relationship between AI software usage and the perceived accessibility of AI-driven communication tools. This hypothesis assumes that the more frequently communication professionals use AI applications, the more they perceive these tools as accessible, user-friendly, and effective in optimizing communication workflows.

  • H4:

    There is a significant positive relationship between AI software usage and user satisfaction in communication and public relations. This hypothesis assumes that communication professionals who regularly engage with AI tools experience increased efficiency, reduced workload, and enhanced content customization, leading to higher levels of satisfaction with AI-driven communication processes.

  • H5:

    There is a significant positive correlation between engagement in knowledge-intensive communication tasks and the frequency of AI software usage. This hypothesis assumes that professionals whose roles involve high levels of content creation, analysis, and strategic decision-making are more likely to adopt and benefit from AI tools, given their alignment with the cognitive demands of the knowledge economy.

Research methodology

To investigate these relationships, the study employed a quantitative approach based on survey methodology, selected for its ability to generate structured, comparable data from a broad sample of professionals. Given the exploratory and relational nature of the study, the survey method enabled the assessment of self-reported experiences and attitudes toward AI usage in real-world communication contexts. By using a standardized questionnaire, the research ensured consistency across responses, facilitating valid statistical analysis of correlations between the key constructs of interest. Data collection was conducted online, ensuring efficient reach and accessibility for participants.

Research sample

The sample included 227 communication and public relations professionals, selected through non-probabilistic convenience sampling. This method allowed direct access to individuals actively engaged in corporate communication, public relations consultancy, media relations, or digital marketing, all of whom had at least some exposure to AI-driven tools in their professional practice. The inclusion of diverse communication roles enhanced the relevance of the findings by capturing a broad perspective on how AI was perceived and applied across the field. The sample size was adequate for conducting non-parametric statistical analyses, particularly Spearman’s rho correlation, which was used to explore associations between ordinal variables. Preliminary tests of data normality, followed by these correlation analyses, made it possible to identify significant relationships among media literacy, locus of control, AI usage, perceived accessibility, and user satisfaction. Overall, the methodology was designed to produce meaningful, interpretable insights into the evolving role of AI in communication and public relations, grounded in current professional experience.

Research instrument

The research instrument used in this study was a structured questionnaire, developed specifically for the purposes of this investigation. It was designed from the ground up to align with the study's conceptual framework and objectives and to capture relevant data on psychological and experiential variables that may influence the adoption and perception of AI tools in communication and public relations. The instrument was original and did not rely on standardized scales from prior studies, although its construction was informed by concepts and constructs discussed in the academic literature on media psychology, technology adoption, and communication practices in the knowledge economy.

The final questionnaire consisted of six main sections, each corresponding to one of the core research variables: automatic versus conscious thought processing (used here as a proxy for media literacy), media locus of control, AI software usage, perceived accessibility of AI tools, satisfaction with AI software use, and engagement in knowledge-intensive communication tasks. Each section included between two and six items, designed as single-choice statements measured on five-point Likert scales. Most statements asked participants to express agreement or perceived extent, with scale options ranging from “strongly agree” to “strongly disagree” or from “to a very large extent” to “to a very small extent,” depending on the conceptual focus of the item. These Likert-type scales allowed for the generation of ordinal data suitable for non-parametric analysis. Additionally, demographic data were collected using nominal and ordinal scales, including gender (nominal), area of residence (nominal), and age range (ordinal, with five intervals).

In total, the instrument comprised 29 closed-ended items: 5 items measuring preference for cognitive effort (thought processing), 6 items addressing media locus of control, 4 items assessing AI software usage, 4 items on perceived accessibility, 4 on user satisfaction, 3 items evaluating engagement in knowledge-intensive tasks, and 3 demographic questions. Composite variables were created for each of the six constructs, allowing the research to test the relationships between them using correlation analysis. The use of composite variables was essential for capturing multi-item representations of complex constructs like satisfaction or accessibility, and for enhancing the internal consistency and interpretive validity of the data.
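As a minimal sketch of how such composite variables can be formed, the example below averages hypothetical five-point Likert responses across the items of a single construct. The data values and the choice of the per-participant mean (rather than a sum) are illustrative assumptions, not the study's actual scoring rule.

```python
import numpy as np

# Hypothetical responses from three participants to a four-item
# Likert section (1 = "to a very small extent" ... 5 = "to a very
# large extent"); values are illustrative only.
responses = np.array([
    [4, 5, 4, 4],
    [2, 1, 2, 3],
    [5, 5, 4, 5],
])

# A common way to build a composite variable is the per-participant
# mean across the items belonging to one construct.
composite = responses.mean(axis=1)
print(composite)  # one composite score per participant
```

Averaging (or summing) item scores in this way yields one ordinal score per respondent per construct, which is what a rank-based test such as Spearman's rho then operates on.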

Prior to the main data collection, the instrument underwent a pretesting phase with a pilot group of 15 communication professionals. This pretest aimed to assess the clarity, relevance, and internal coherence of the questionnaire items. Feedback received from this initial group was used to refine the wording and logical flow of certain questions, ensuring the questionnaire’s usability and comprehensibility. The successful completion of the pretest helped establish the instrument’s initial reliability and face validity.

Data were collected online using a secure digital form, allowing participants to respond anonymously and at their convenience. The choice of an online format was consistent with the digital nature of the study and ensured greater accessibility for professionals working in various geographical locations. Once data collection was finalized, responses were exported and analyzed using JASP version 0.19.3, a statistical software platform suited for advanced non-parametric testing and correlation analysis.

The analytical process included multiple steps. First, descriptive statistics were calculated for each variable to understand general trends and distribution characteristics across the sample. Second, the Kolmogorov-Smirnov and Shapiro-Wilk tests were applied to assess the normality of the data, which confirmed non-normal distribution across all key variables. Based on these results, Spearman’s rho correlation coefficients were computed to test the study’s five hypotheses, allowing for the examination of relationships between ordinal variables without assuming linearity or normality. This approach ensured that statistical conclusions were both robust and appropriate to the nature of the collected data.
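The two-step logic described above (normality screening, then a rank-based correlation) can be sketched with SciPy as follows. The simulated scores, seed, and variable names are assumptions for illustration only; they do not reproduce the study's data or its JASP output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate skewed, discrete composite scores for two constructs
# (n = 227), mimicking five-point Likert data; illustrative only.
ai_usage = np.clip(np.round(rng.normal(4.0, 0.8, 227)), 1, 5)
satisfaction = np.clip(np.round(0.6 * ai_usage + rng.normal(1.5, 0.6, 227)), 1, 5)

# Step 1: normality screening with Shapiro-Wilk; a small p-value
# indicates a significant departure from normality.
sw_stat, sw_p = stats.shapiro(ai_usage)

# Step 2: with normality rejected, use Spearman's rank correlation,
# which assumes neither normality nor linearity.
rho, p_value = stats.spearmanr(ai_usage, satisfaction)
print(f"Shapiro-Wilk p = {sw_p:.4g}, Spearman rho = {rho:.3f}")
```

Because Likert composites are discrete and bounded, normality tests on realistic samples of this size will typically reject, which is exactly the situation that motivates the switch to a non-parametric coefficient.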

In summary, the research instrument was carefully designed to capture the multidimensional reality of AI adoption in communication work, with close attention paid to construct validity, data type, and analytic compatibility. The structure of the questionnaire, the pretesting process, and the statistical treatment of the responses collectively ensured the methodological rigor of the study and the reliability of its empirical findings.

Results

The results of this research provide a comprehensive analysis of the relationship between artificial intelligence (AI) applications and their impact on communication and public relations strategies. By examining key variables such as media literacy, AI accessibility, and user satisfaction, the findings offer valuable insights into how professionals integrate AI tools into their daily workflows. The statistical analysis, based on nonparametric correlation tests, highlights the extent to which AI adoption influences communication efficiency, message personalization, and overall strategic effectiveness (Table 1).

Table 1.

Normality statistics

Variables                  Kolmogorov-Smirnov(a)            Shapiro-Wilk
                           Statistic   df    Sig.           Statistic   df    Sig.
Media literacy             .300        227   <.001          .767        227   <.001
Media locus of control     .230        227   <.001          .798        227   <.001
AI software usage          .283        227   <.001          .772        227   <.001
Perceived accessibility    .499        227   <.001          .469        227   <.001
User satisfaction          .288        227   <.001          .791        227   <.001

(a) Lilliefors Significance Correction

Source: own processing

The outcomes of the normality assessment, as shown in Table 1, indicate that none of the variables under investigation conforms to a normal distribution. Both the Kolmogorov-Smirnov and Shapiro-Wilk tests return significance values (Sig.) below the .001 threshold for all measured variables, highlighting statistically significant departures from normality across the dataset. In particular, media literacy (Shapiro-Wilk=.767, p<.001) and media locus of control (Shapiro-Wilk=.798, p<.001) reveal non-normal distributions, pointing to a wide range of variation in participants' digital competencies and their perceived influence over media content.

A similar pattern is observed for AI software usage (Shapiro-Wilk=.772, p<.001), which suggests that professionals engage with AI tools to varying degrees, likely due to differences in organizational context, familiarity with the technology, or individual attitudes toward digital integration. Notably, the variable perceived accessibility displays the lowest Shapiro-Wilk value (.469, p<.001), indicating a pronounced asymmetry or concentration in responses. This may reflect disparities in access to AI tools, technological infrastructure, or comfort with digital interfaces among the surveyed professionals.

User satisfaction also exhibits significant non-normality (Shapiro-Wilk=.791, p<.001), suggesting that respondents hold divergent opinions regarding the efficiency, relevance, and overall value of AI-driven solutions in communication tasks. In light of these results, the data violates the assumption of normality, which justifies the use of nonparametric statistical techniques. Consequently, Spearman’s rho correlation will be employed in subsequent analyses to ensure robust and valid interpretation of the relationships among the variables.
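For reference, when the paired ranks contain no ties, Spearman's rho reduces to a simple function of the rank differences; with ties (common for Likert data), it is instead computed as the Pearson correlation of the ranks:

```latex
% Spearman's rank correlation for n paired observations without ties,
% where d_i is the difference between the two ranks of observation i:
\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^{2}}{n\,(n^{2} - 1)}
```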

Table 2 presents important findings regarding the connection between media literacy and the use of AI software among communication and public relations specialists, directly aligning with the study’s first objective. The Spearman’s rho correlation coefficient (ρ=.486, p<.001) reflects a moderate, statistically significant positive relationship. This result indicates that individuals with higher levels of media literacy are more inclined to incorporate AI-based tools into their communication practices. This outcome highlights the role of media literacy as a key enabler of AI integration, rather than a limiting factor. Professionals who are adept at interpreting digital media and critically assessing online content appear more confident and capable in adopting technologies such as ChatGPT within their strategic workflows. As AI becomes increasingly embedded in tasks like automated messaging, engagement analysis, and audience profiling, this skillset becomes essential for effective and responsible usage. The observed correlation, while moderate, also suggests that media literacy is only one piece of a more complex puzzle. Organizational culture, access to technology, and levels of professional training may also influence the extent to which AI tools are adopted. These findings suggest that enhancing media literacy, potentially through targeted education in AI ethics, data interpretation, or content verification, could support more effective and informed use of AI tools in communication roles. Future studies might investigate how such educational interventions influence the depth and quality of AI tool adoption across varied organizational contexts.

Table 2.

AI software usage and media literacy

Spearman's rho                                   Media literacy    AI software usage
Media literacy       Correlation Coefficient     1.000             .486**
                     Sig. (2-tailed)             .                 .000
                     N                           227               227
AI software usage    Correlation Coefficient     .486**            1.000
                     Sig. (2-tailed)             .000              .
                     N                           227               227

Source: own processing

The results shown in Table 3 explore the link between media locus of control and AI software usage, corresponding to the study’s second objective. Spearman’s rho value (ρ=.399, p<.001) identifies a statistically significant moderate positive association, implying that professionals who perceive a higher degree of personal influence over media exposure and interpretation are more likely to utilize AI tools in their communication work. This finding implies that communication practitioners who view themselves as actively shaping their media environment are also more open to integrating AI technologies into their daily tasks. A stronger internal media locus of control may contribute to a sense of agency and critical awareness when interacting with AI-generated content, empowering these individuals to evaluate its reliability and appropriateness before application. This sense of control can be particularly valuable given the opaque and algorithmic nature of many AI systems, which often require informed oversight to mitigate biases and ensure content aligns with ethical standards. Moreover, the presence of this correlation supports the idea that professionals who feel in command of their media consumption are also better positioned to adapt to technological innovations, using them not passively but strategically. Future studies might further examine whether this perception of control enhances not only adoption rates but also the quality of outcomes achieved through AI implementation in communication settings.

Table 3.

AI software usage and media locus of control

Spearman's rho                                        Media locus of control    AI software usage
Media locus of control    Correlation Coefficient     1.000                     .399**
                          Sig. (2-tailed)             .                         .000
                          N                           227                       227
AI software usage         Correlation Coefficient     .399**                    1.000
                          Sig. (2-tailed)             .000                      .
                          N                           227                       227

Source: own processing

Table 4 presents the correlation between AI software usage and the perceived accessibility of AI-based communication tools, aligning with the third objective of this research. The analysis reveals a statistically significant, moderate-to-strong positive correlation (Spearman’s rho=.511, p<.001), indicating that professionals who engage more frequently with AI technologies tend to regard these tools as more accessible. This result suggests a reciprocal relationship between use and usability perception: as professionals become more accustomed to integrating AI into their communication routines, they are likely to perceive these technologies as increasingly approachable and user-friendly. Regular interaction with AI-driven functionalities, such as automated content creation, audience analytics, or workflow optimization, appears to foster a sense of familiarity and operational ease, which in turn reinforces continued engagement. From a practical standpoint, this dynamic illustrates how adoption may be self-reinforcing. Those already utilizing AI tools are not only gaining functional benefits but also developing a perception that these tools are simple and effective to use. This can contribute to deeper integration of AI into strategic communication practices. However, the inverse may also hold: professionals who view AI as complex or inaccessible may hesitate to adopt such tools, potentially leading to disparities in digital competence within the field. To address this, future investigations might examine the role of organizational support systems, user training, and onboarding processes in shaping perceptions of accessibility. By improving initial user experiences, the communication industry can facilitate broader and more equitable adoption of AI innovations.

Table 4.

AI software usage and perceived accessibility

Spearman's rho                                         Perceived accessibility    AI software usage
Perceived accessibility    Correlation Coefficient     1.000                      .511**
                           Sig. (2-tailed)             .                          .000
                           N                           227                        227
AI software usage          Correlation Coefficient     .511**                     1.000
                           Sig. (2-tailed)             .000                       .
                           N                           227                        227

Source: own processing

Table 5 outlines the correlation between the frequency of AI software usage and user satisfaction within communication and public relations roles, directly corresponding to the study’s fourth objective. The analysis reveals a statistically significant, moderate-to-strong positive association (Spearman’s rho=.526, p<.001), indicating that professionals who more regularly utilize AI tools tend to express higher satisfaction with these technologies. This outcome suggests that consistent use of AI tools leads to more positive evaluations, likely driven by increased familiarity and confidence in navigating their features. As communication specialists incorporate AI into their routine tasks, they become more proficient in harnessing their capabilities, whether through automation, personalized messaging, or data-driven optimization, which in turn enhances overall satisfaction. AI can relieve professionals of repetitive or time-consuming functions, enabling them to redirect efforts toward more strategic and creative communication endeavors. Viewed in a broader context, this finding underscores the role of hands-on experience in shaping user perceptions. Those who engage with AI tools on a regular basis are not only more likely to benefit from improved workflow efficiency but also to perceive these tools as integral assets in their professional toolkit. Conversely, limited engagement may prevent some professionals from realizing the full potential of AI, leading to neutral or even negative user experiences. These insights point to the need for structured support, such as training programs and resource accessibility, that encourage meaningful interaction with AI technologies. Enhancing users' technical confidence and functional understanding could, over time, raise satisfaction levels across the profession and ensure a more inclusive transition toward AI-integrated communication strategies.

Table 5.

AI software usage and user satisfaction

Spearman's rho                                   User satisfaction    AI software usage
User satisfaction    Correlation Coefficient     1.000                .526**
                     Sig. (2-tailed)             .                    .000
                     N                           227                  227
AI software usage    Correlation Coefficient     .526**               1.000
                     Sig. (2-tailed)             .000                 .
                     N                           227                  227

Source: own processing

The interpretation of the correlation data presented in Table 6 offers a meaningful extension to the current investigation, especially in relation to the fifth hypothesis formulated in this study. The analysis reveals a statistically significant, moderate positive correlation between engagement in knowledge-intensive communication tasks and the frequency of AI software usage (Spearman’s rho=.446, p<.001). This result confirms Hypothesis 5, which posited that professionals who are more involved in cognitively demanding communication activities, such as content creation, critical analysis, and strategic planning, are more likely to make consistent use of AI-driven tools in their workflows. From a statistical perspective, the correlation coefficient of .446 signifies a moderate positive association, indicating that as engagement in knowledge-intensive tasks increases, so does the frequency of AI tool usage. The significance value below .001 further affirms that this relationship is unlikely to have occurred by chance, thus reinforcing the robustness of the observed pattern. Importantly, this finding validates the theoretical link between the nature of professional tasks and the technological solutions employed to execute them efficiently.

Table 6.

Engagement in knowledge-intensive communication tasks and AI software usage

Spearman's rho                                                                      Engagement in knowledge-intensive communication tasks    AI software usage
Engagement in knowledge-intensive    Correlation Coefficient                        1.000                                                    .446**
communication tasks                  Sig. (2-tailed)                                .                                                        .000
                                     N                                              227                                                      227
AI software usage                    Correlation Coefficient                        .446**                                                   1.000
                                     Sig. (2-tailed)                                .000                                                     .
                                     N                                              227                                                      227

Source: own processing

Professionals whose work requires continuous intellectual input, interpretation of complex information, and the generation of high-quality content are more likely to recognize the potential of AI to augment these processes, particularly in the context of the knowledge economy. This correlation can be interpreted through the broader lens of knowledge-based organizational theory, where communication is not only an operational function but a core element of value creation. In such environments, individuals are expected to navigate high volumes of information, synthesize data for decision-making, and craft persuasive messages tailored to various stakeholders. AI applications, especially those powered by large language models, natural language processing, and content optimization algorithms, provide exactly the kind of cognitive support that knowledge workers require. By automating routine aspects of communication, such as drafting initial versions of documents or analyzing audience sentiment, AI enables professionals to allocate more time to critical thinking and strategic execution. The moderate strength of the correlation found in this study aligns with the idea that AI does not replace human judgment in knowledge-intensive contexts but instead complements and enhances it.

Discussion of the findings

Based on the correlation analyses presented, the results of this study offer a comprehensive, empirically grounded validation of the theoretical insights outlined in the literature review. Each of the statistically significant relationships identified through Spearman’s rho analysis reflects broader patterns and mechanisms discussed in the academic literature on AI and communication strategy, suggesting a strong degree of conceptual convergence between empirical data and theoretical frameworks.

The first correlation, between media literacy and AI software usage, supports the proposition that digital competence is a facilitator of AI integration in professional communication contexts. These findings echo arguments advanced by Hegner-Kakar, Richter, and Ringle (2018), who highlight the role of media literacy in enabling personalized, data-informed communication strategies. Professionals who demonstrate a higher capacity to interpret digital content and navigate technological systems are more confident in using AI tools to manage complex tasks such as message segmentation and content generation. This correlation also aligns with Frost, Fox, and Strauss (2018), who emphasize that media-literate users are better equipped to manage AI-driven feedback loops, refining strategy through real-time data. The empirical evidence thus confirms that media literacy is not merely a technical skill but a strategic asset in knowledge-intensive communication.

The second correlation, between media locus of control and AI usage, adds a psychological and behavioral layer to the discussion, indicating that professionals who perceive themselves as having control over their media environment are more proactive in adopting AI tools. This dynamic is consistent with findings from Beane and Anthony (2024), who argue that professionals respond to AI disruption by reclaiming agency through new forms of expertise. The notion of internal control becomes particularly significant in environments where algorithmic opacity may otherwise erode user confidence. The current data reinforces that those with a stronger internal locus are more inclined to approach AI adoption strategically, echoing the idea from Korinek and Suh (2024) that AI tools function best when guided by informed, self-directed human users rather than passive operators. Hence, the psychological readiness to control media use plays a measurable role in AI implementation.

The third result concerns the link between AI usage and perceived accessibility. This relatively strong correlation confirms the existence of a feedback loop wherein increased exposure to AI tools enhances perceptions of usability. This finding resonates with the argument made by Goldfarb and Tucker (2019), who note that digital information systems, once perceived as complex, can quickly become user-friendly when deployed at scale. The empirical result here supports the idea that accessibility is not only a function of design but also of habit formation and routine exposure. Professionals who regularly interact with AI applications for tasks such as predictive analysis or engagement tracking gradually internalize these tools as extensions of their daily workflow, a process that both reflects and reinforces their perceived ease of use.

Concerning the fourth hypothesis, the correlation between AI usage and user satisfaction provides robust support for the notion that frequent engagement with AI contributes to more positive evaluations of its utility and effectiveness. This aligns strongly with Peng et al. (2023), who demonstrate that AI-enhanced workflows in knowledge domains lead to improved output and greater satisfaction. The communication professionals in this study appear to experience similar benefits, reduced task load, improved accuracy, and enhanced speed, which directly affect their satisfaction with AI tools. This further confirms Kotler and Armstrong’s (2018) assertion that automation, when applied appropriately, can improve not only efficiency but also the perceived value of communication operations. As in other domains of knowledge work, repeated interaction builds familiarity, which leads to confidence and contentment, reinforcing sustained adoption.

Finally, the correlation between engagement in knowledge-intensive tasks and AI usage validates a central proposition of the knowledge economy perspective articulated by Garicano (2000) and Garicano and Rossi-Hansberg (2004, 2006). According to their models, firms distribute cognitive labor in hierarchical patterns to optimize problem-solving, and AI serves to redistribute some of that labor from humans to machines. The current finding confirms that professionals in cognitively demanding roles, those involving synthesis, interpretation, and strategic thinking, are more likely to adopt AI tools as cognitive extensions of their role. This supports Grant’s (1996) knowledge-based view of the firm, where value is created through the effective coordination of distributed knowledge resources. AI’s ability to absorb and re-deploy codified and tacit knowledge at scale, as discussed by Li et al. (2023) and Nonaka (1991), makes it especially relevant in roles where abstract judgment and interpretation are required.

Conclusions
Theoretical implications

The findings of this research offer valuable insights into how artificial intelligence is currently being adopted and perceived within the fields of communication and public relations, particularly regarding key psychological and experiential variables. By examining the interconnectedness between AI usage and dimensions such as media literacy, media locus of control, perceived accessibility, and user satisfaction, the study contributes to a more nuanced understanding of the human-technology dynamic shaping digital communication strategies today. One of the most compelling outcomes is the clear link between professionals’ ability to critically interpret and analyze media content and their propensity to engage with AI tools. This suggests that media literacy functions as both a gateway and amplifier for responsible AI integration, enabling professionals to navigate AI-generated content with discernment and confidence. As AI systems increasingly contribute to the production and dissemination of organizational messages, media literacy becomes an essential theoretical construct for assessing communication quality, credibility, and ethical considerations.

In parallel, the relationship between AI usage and media locus of control reinforces the idea that technological adoption is shaped not only by external exposure but also by internal perceptions of agency. Professionals who believe they influence the information they consume and distribute are more likely to approach AI technologies proactively and strategically. This insight emphasizes the psychological dimension of AI engagement and supports future theoretical models that account for individual cognitive and attitudinal factors in technology acceptance within communication environments.

Moreover, the correlation between AI engagement and satisfaction underscores that satisfaction is not merely a passive reaction to technological exposure, but a dynamic outcome shaped by utility, efficiency, and perceived relevance. AI tools, when integrated meaningfully, enhance professionals' experience by reducing redundancy and elevating the strategic quality of their work. This has implications for understanding satisfaction as both a motivator for sustained AI adoption and a potential predictor of long-term organizational innovation in communication fields.

Lastly, the findings related to knowledge-intensive tasks illustrate that AI is increasingly embedded in the core activities of content creation, strategy, and analysis. The ability of AI to support complex, cognitively demanding functions challenges traditional dichotomies between automation and creativity, calling for theoretical re-evaluations of the boundaries between human and machine intelligence in professional communication practices.

Organizational implications

From an organizational perspective, the study highlights several actionable insights for enhancing the effectiveness and ethical integration of AI in communication and public relations. One key implication is that digital competence and user education are vital to successful adoption. Professionals with higher levels of media literacy and stronger internal media locus of control are more likely to embrace AI strategically and responsibly. Therefore, fostering a culture of critical thinking and proactive digital engagement, through professional development programs, mentorship, and internal training, should be a priority for communication departments.

The findings also show that perceptions of accessibility and user satisfaction grow with experience, indicating that frequent, guided exposure to AI tools can help reduce apprehension and increase adoption. Organizations should consider implementing onboarding initiatives, hands-on learning opportunities, and user support systems that enhance familiarity and confidence in working with AI-driven platforms. Additionally, the correlation between AI usage and knowledge-intensive tasks reinforces the need for organizations to align AI implementation strategies with job role demands. AI should not be viewed as a one-size-fits-all solution, but rather as a flexible resource tailored to the cognitive complexity and strategic requirements of communication roles. Supporting this alignment may involve customizing AI features for content planning, message framing, sentiment analysis, and audience segmentation based on specific team needs.

Crucially, as AI technologies increasingly act not only as tools but as co-creators in communication processes, organizations must actively address questions of transparency, bias, and ethical responsibility. Clear guidelines for AI usage, ongoing ethics training, and inclusive design practices can help safeguard trust and relational authenticity in digital interactions. This approach supports a sustainable model of AI integration, one that balances technological innovation with human-centered values in the knowledge economy.

In conclusion, this study affirms that human factors are central to successful AI adoption in communication environments. By investing in digital literacy, ethical awareness, and strategic alignment, organizations can fully harness the potential of AI to augment, rather than replace, the critical thinking, creativity, and relational intelligence of communication professionals.

Limitations and future research directions

While this study offers important insights into the relationship between artificial intelligence adoption and key psychological and experiential factors in the field of communication and public relations, several limitations must be acknowledged. First, the use of a non-probability convenience sampling method constrains the generalizability of the findings. Although the sample size of 227 professionals provides sufficient statistical power for the analyses conducted, the participants were selected based on accessibility rather than randomization, which introduces the possibility of selection bias. Consequently, the results may reflect the behaviors and perceptions of a specific subgroup of communication professionals who are already predisposed toward digital tool engagement, rather than the broader professional population.

Another limitation lies in the exclusive reliance on self-reported data collected through an online survey. While this method is effective for capturing individual perceptions and behavioral tendencies, it is inherently subject to issues such as social desirability bias and the limitations of introspective accuracy. Respondents may have overestimated their AI usage or media literacy levels, either consciously or unconsciously, which could affect the precision of the correlations identified. Furthermore, the quantitative nature of the research design, though advantageous for identifying statistical associations, does not allow for deeper exploration of how communication professionals experience, interpret, or negotiate the use of AI tools in their daily practices. As such, the study cannot fully capture the complexities and contextual subtleties of human-technology interaction within organizations.

Additionally, the variables examined, such as media literacy, locus of control, accessibility, and satisfaction, were operationalized using standardized items that, while valid, may not reflect the full spectrum of meanings these constructs carry in different organizational cultures or communication subfields. The cross-sectional design of the study also precludes causal inference. Although significant correlations were identified, the study does not determine the directionality or temporality of these relationships, limiting the extent to which conclusions about influence or impact can be confidently drawn.

In light of these limitations, future research would benefit from adopting a mixed-method or qualitative approach to complement the findings of this quantitative study. One particularly valuable direction would be the use of focus groups or in-depth interviews with senior professionals in communication and public relations agencies. Engaging decision-makers in leadership roles could provide richer insights into how AI tools are integrated into strategic workflows, how ethical and operational concerns are navigated at the organizational level, and what internal dynamics shape the adoption of new technologies. These qualitative methods would allow researchers to explore not just what professionals think or do, but how and why those practices take shape within specific institutional and cultural contexts.

Future studies might also investigate the evolution of AI adoption longitudinally, observing changes in perceptions, satisfaction, and skillsets over time. Such an approach would help identify patterns of adaptation and resistance, as well as the long-term effects of AI integration on organizational communication strategies. Moreover, expanding the scope of research to include cross-national or cross-sectoral comparisons could uncover important contextual factors that mediate the adoption and use of AI in communication professions. By incorporating more diverse methodological perspectives and research populations, subsequent studies can build on the present findings to construct a more comprehensive and multidimensional understanding of how AI is transforming the communicative architecture of contemporary organizations.

DOI: https://doi.org/10.2478/mdke-2025-0017 | Journal eISSN: 2392-8042 | Journal ISSN: 2286-2668
Language: English
Page range: 301 - 322
Submitted on: May 23, 2025
Accepted on: Aug 26, 2025
Published on: Sep 26, 2025
Published by: Scoala Nationala de Studii Politice si Administrative
In partnership with: Paradigm Publishing Services
Publication frequency: 4 times per year

© 2025 Cosmin-Sebastian RĂDULESCU, published by Scoala Nationala de Studii Politice si Administrative
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.