Artificial intelligence (AI) has gained significant attention both in public discourse (e.g., Special Committee on AIDA, 2022; White House, 2022; OECD, 2019) and in academic research (e.g., Gamma & Magistretti, 2025; Querci et al., 2022). Despite this attention, many organizations continue to face challenges, particularly in the early stages of AI implementation (Atsmon et al., 2021). One potential reason for this slow adoption is the public’s worries and concerns about AI and other emerging technologies. People remain concerned about AI applications in areas such as facial recognition, driverless cars, and detecting false information on social media (Rainie et al., 2022). While AI-driven innovations promise significant progress, widespread anxiety about their societal impact persists (Schiavo et al., 2024); meanwhile, managers often lack a clear scientific grasp of the conditions required for AI to generate organizational value (Gamma & Magistretti, 2025).
Acceptance of AI is closely related to general acceptance of new technology. The literature on new technology acceptance highlights the social, economic, policy, and ethical challenges that arise with emerging technologies (Dwivedi et al., 2021). New technology acceptance is a complex process influenced by various motivators and inhibitors (Blut & Wang, 2020). Researchers have analyzed this process using models that consider both technology-related factors, such as perceived usefulness, ease of use, and risk (Davis et al., 1989; Hubert et al., 2019), and individual-related factors, including anxiety, uncertainty, hedonic motivation, and emotional responses (Tamilmani et al., 2021). Notably, worries and concerns about new technology may act as significant inhibitors to adoption in organizations (Blut & Wang, 2020). Although such affective factors have long been recognized, existing theories often treat worries and concerns as having only a weak or indirect impact on adoption decisions. Beyond AI as a specific new technology, its diffusion is also embedded within the broader digital transformation driven by information and communication technologies (ICT). The adoption of AI thus reflects not only technical change but also the evolution of organizational ICT capabilities and infrastructures that enable intelligent data use and automation (Chugh et al., 2025; Mariani & Dwivedi, 2024). Positioning AI within the ICT continuum underscores that AI is a critical and integral component of modern digital ecosystems in organizations.
Despite growing public and organizational attention to AI, limited empirical research has systematically examined how individuals differ in their concerns about emerging technologies. Understanding such variation is important because individual perceptions and anxieties shape broader public attitudes and, indirectly, the environment in which organizations adopt AI. Such perceptions may also influence how individuals within organizations respond to AI initiatives, affecting their openness, trust, and readiness for change. Therefore, the aim of this paper is to identify and classify patterns of individual worries and concerns about AI and related technologies, and to explore how these attitudinal clusters differ across technology domains.
Using data from over 10,000 U.S. adults collected through the Pew Research Center’s American Trends Panel (ATP) survey (Pew Research Center, 2021), this study examines worries and concerns about new technologies, particularly AI. We ask whether individuals worry uniformly about all technologies, or whether their worries vary depending on the specific technology in question. Identifying such differences, and grouping individuals accordingly, may help firms accelerate AI adoption and provide insights into how organizational members perceive AI in practice.
The acceptance of new technology is often a gradual process, influenced by a variety of factors (Blut & Wang, 2020). In many organizations, adoption takes considerable time, as seen in higher education (Skoumpopoulou et al., 2018), healthcare (Rahimi et al., 2018), media (Youn & Lee, 2019), and the food sector (Siegrist & Hartmann, 2020). The literature recognizes that technology acceptance in organizations often depends on individual-level acceptance. Models such as the Technology Acceptance Model (TAM) (Park et al., 2021), the Technology Readiness Model (TRM) (Blut & Wang, 2020), and Innovation Diffusion Theory (IDT) (Hubert et al., 2019) highlight the impact of individual traits and perceptions on technology adoption. The TRM, in addition, distinguishes motivators (innovativeness, optimism) from inhibitors (insecurity, discomfort) (Blut & Wang, 2020). Related frameworks such as the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) similarly integrate affective and risk-related beliefs but often model them as indirect antecedents of behavioral intention (Tamilmani et al., 2021).
In the literature, both internal factors (e.g., in the Technology Readiness Model, or TRM) and external factors (e.g., in the Technology Acceptance Model, or TAM) are recognized as shaping individuals’ attitudes toward new technologies. Individuals may hold varying attitudes toward different technologies: the TRM captures a general predisposition toward technology, but this predisposition has only a limited impact on attitudes toward specific technologies (Park et al., 2021; Blut & Wang, 2020; Wixom & Todd, 2005). Among the internal factors, individuals’ worries and concerns about new technologies often hinder acceptance. These concerns include privacy and trust issues (Dhagarra et al., 2020), as well as fears of security breaches in online transactions (Mousavizadeh et al., 2016). In this context, AI adoption can be viewed as part of a wider trajectory of ICT innovation and digital transformation, where human and organizational readiness play central roles (Chugh et al., 2025). Insecurity and discomfort, in particular, hinder technology acceptance, and the TRM has been refined alongside technological developments to capture these inhibitors (Parasuraman & Colby, 2015). Worries and concerns are thus not minor factors but often central barriers to adoption.
Artificial Intelligence (AI), defined as a machine-based system that generates outputs such as predictions, recommendations, or decisions (White House, 2022; OECD, 2019), has received worldwide attention from governments and organizations (Special Committee on AIDA, 2022). It is projected to contribute $13 trillion to global economic growth by 2030 (AI Commission, 2023), driving competition for global leadership (Special Committee on AIDA, 2022). Applications range from AI-driven chatbots (Morsi, 2023) to digital platforms (Gamma & Magistretti, 2025) and process automation (Jha et al., 2019). AI has the potential to enhance organizational performance, with employee productivity serving as a key mediator between AI adoption and performance outcomes (Kassa & Worku, 2025).
Despite this potential, organizational adoption of AI remains challenging, particularly because it is shaped by organizational members’ attitudes and dispositions. AI’s impact on productivity and innovation differs markedly across organizations (Kim et al., 2025). Individuals’ resistance to change and lack of skills (Romeo & Lacko, 2025), as well as social dynamics among organizational members, including team motivation and leader-follower bonds (Booyse & Scheepers, 2024), may affect the adoption of AI. Individuals’ worries regarding privacy and the collection of personal data (Querci et al., 2022), potential disruptions to social norms (Dwivedi et al., 2021), and concerns such as the perceived immaturity of the technology (Morsi, 2023) often deter adoption. A Qualtrics study (2023) similarly highlights worries about privacy, transparency, and AI’s lack of emotional understanding (Ozsevim, 2023). Managers often lack a grounded understanding of these dynamics, limiting their ability to capture AI’s value (Gamma & Magistretti, 2025), while AI may reduce employees’ psychological safety and increase stress (Kim et al., 2025). Recent research on organizational adoption of AI highlights not only technical challenges but also organizational readiness and human-AI collaboration (Raisch & Krakowski, 2021). Understanding how different individuals perceive AI is therefore essential for mapping realistic adoption trajectories.
Generative AI (GenAI) represents a new stage in AI development, using large-scale models to generate novel outputs – such as text, images, or code – beyond the training data (Feuerriegel et al., 2024). It enhances creativity and efficiency (Stokel-Walker & Van Noorden, 2023) but also introduces new challenges around governance, trust, and human–AI collaboration (Mariani & Dwivedi, 2024; Romeo & Lacko, 2025), and its adoption requires safeguards addressing risks such as bias, privacy, and intellectual property (IBM Institute for Business Value, 2024; Smith, 2025). While recent studies emphasize GenAI’s potential to facilitate employee performance and corporate innovation (Rana et al., 2024), with employee productivity identified as a key mediating factor (Liu et al., 2025; Kassa & Worku, 2025), GenAI assistants such as ChatGPT, Grok, and DeepSeek also raise concerns that may hinder adoption among organizational members (Monteverde et al., 2025; Hornung & Smolnik, 2021).
In our study, the AI technologies examined (e.g., facial recognition, driverless vehicles, brain–computer interfaces, robotic exoskeletons) represent applied or embodied systems whose functional focus differs from that of GenAI; many individuals may find these systems both more impressive and more unsettling than GenAI. Informed by recent discussions on the evolution of AI and Generative AI (e.g., Chugh et al., 2025; Pandy et al., 2025; Rashidi et al., 2025; Reddy et al., 2025; Smith, 2025; Feuerriegel et al., 2024; Mariani & Dwivedi, 2024), Table 1 synthesizes and extends these perspectives to highlight the contrasts most relevant to our study.
Table 1. Comparison between Applied AI (as in this study) and GenAI.
| Aspect | Applied AI (as in this study) | Generative AI (GenAI) |
|---|---|---|
| Core Function | Perception, prediction, and autonomous decision-making in real-world contexts | Creates new and complex outputs such as text, images, music, or code |
| Typical Examples | Facial recognition, driverless cars, brain–computer interfaces, misinformation detection | ChatGPT, Copilot, DeepSeek, Grok, Gemini |
| Learning and Data | Typically multimodal and task-specific | Trained on massive datasets to learn patterns of human expression |
| User Interaction | Often indirect (embedded in devices and platforms); outputs are felt via actions or decisions | Direct interaction via prompts; outputs are visible artifacts (text, images, code) |
| Risks | Physical harm, systemic bias, privacy, liability; rare-event risks with high consequences | Misinformation, bias, privacy, automation of creative or knowledge work |
| Primary Concerns | Safety, accountability, data privacy, bias in decision outcomes, etc. | Authenticity, transparency, accuracy, intellectual property, ethical use, etc. |
Source: original compilation based on recent AI literature
This contrast clarifies that the type of AI examined in this study represents a parallel branch of AI development, rather than an earlier stage. While sharing core concerns with GenAI, such as privacy, bias, and trust, these systems differ in their forms of interaction and risk emphasis, focusing more on safety, accountability, and real-world consequences than on generated content. Such distinctions suggest that the technologies analyzed in our dataset tend to elicit stronger public anxieties and concerns, thereby making individual differences in perception more salient and offering a sharper lens for identifying attitudinal clusters than would be possible with GenAI at this stage. Because the dataset used here (Pew Research Center, 2021) predates the mainstream rise of GenAI, it captures public attitudes toward applied AI systems that were already provoking strong societal reactions.
While the existing literature suggests that individual concerns often stem from a technology’s features and risks (Park et al., 2021; Blut & Wang, 2020), the most recent research on AI has often mentioned the importance of internal factors such as emotions (Hornung & Smolnik, 2021) and social anxiety (Yuan et al., 2022).
Recent studies have begun to examine individuals’ attitudes and emotional responses toward AI, including both traditional and generative forms, confirming that personal dispositions, perceived risks, and trust strongly influence adoption intentions, including within organizations (e.g., Daly et al., 2025; Montag et al., 2025). Several works in the past two years have also explored the psychological underpinnings of AI resistance and acceptance, such as trust (Daly et al., 2025), existing general attitudes to AI (Montag & Ali, 2025), personality traits (Grassini & Koivisto, 2024; Stein et al., 2024), personal experiences (Grassini & Koivisto, 2024), demographic factors (Kaya et al., 2024), individual perceptions such as self-efficacy and perceived job threat (G. Wang et al., 2025), and social perceptions (C. Wang et al., 2025), as well as broader anxieties and societal concerns, including social anxiety (Yuan et al., 2022) and ethical or governance concerns (Mariani & Dwivedi, 2024; Rashidi et al., 2025).
However, most of these studies rely on small or context-specific samples (e.g., healthcare, education, artwork, or organizational case studies) and do not systematically test whether distinct clusters of individuals exist across the general population. This gap highlights the need for a broader, data-driven approach to verify attitudinal heterogeneity in public concerns about AI; our study addresses this need through cluster analysis of a nationally representative dataset. Our research builds on the existing research stream by emphasizing that worries and concerns are not simply by-products of features, but are deeply shaped by individuals’ dispositions. Understanding such heterogeneity is increasingly important for organizations, where individual-level acceptance may shape adoption outcomes. Nevertheless, several important gaps remain.
The first such gap lies in attitudinal heterogeneity. While the TAM/TRM frameworks (e.g., Blut & Wang, 2020) and some demographic segmentation (e.g., Park et al., 2021) exist, most large-scale surveys treat the public as uniform. Surveys on AI adoption often report aggregate percentages (Rainie et al., 2022) but rarely uncover latent clusters of attitudes. What is missing is systematic, cluster-based evidence of how individuals’ worries diverge at scale. The second gap concerns the consistency of individual worries and concerns. Prior studies often examine AI acceptance in narrow contexts such as healthcare, education, or surveillance (Rahimi et al., 2018; Skoumpopoulou et al., 2018). Yet, few have tested whether clusters show stable differences across diverse AI technology domains, or whether their relative concerns follow a consistent directional structure. The third gap involves the individual–organizational link. Organizational studies acknowledge privacy, bias, and intellectual-property worries as barriers to AI adoption (IBM Institute for Business Value, 2024), but these are often discussed at the organizational capability level. The connection between individual dispositions and organizational adoption therefore remains underexplored.
Drawing on prior research, we advance the following hypotheses:
H1: Respondents can be grouped into distinct clusters reflecting different attitudes toward AI and related technologies.
H2: These clusters differ significantly in their reported concerns across a range of AI- and technology-related variables.
H3: Cluster differences will exhibit a consistent directional order across domains, rather than varying unpredictably by context.
To analyze individuals’ opinions on new technologies, particularly AI, we used data from the Pew Research Center’s American Trends Panel (ATP) survey (Pew Research Center, 2021). The ATP is a nationally representative online panel of over 10,000 U.S. adults surveyed on current issues, with questions available in both English and Spanish (Rainie et al., 2022; Keeter, 2019). This study uses data from Wave 99 of the ATP, conducted from November 1 to November 7, 2021, with 10,260 U.S. adult respondents, including residents of Hawaii and Alaska (Rainie et al., 2022).
This dataset is particularly well suited for our study because it is one of the largest nationally representative surveys focusing on AI and emerging technologies, and it contains uniquely detailed attitudinal measures across multiple domains. Respondents came from diverse demographic backgrounds (e.g., age, gender, race/ethnicity, education, income, and region). Pew provides survey weights to align the panel with U.S. population benchmarks; however, in this study we analyzed the unweighted dataset. This choice reflects our focus on relative patterns across clusters of respondents rather than on nationally generalizable point estimates. Accordingly, the results should be interpreted as revealing attitudinal structures within the sample, while acknowledging that weighted distributions would more closely reflect the demographic profile of the U.S. adult population. Basic demographics (e.g., age, gender) closely aligned with U.S. Census benchmarks (United States Census Bureau, 2023), confirming sample reliability (Pew Research Center, 2021). Respondents were members of the general U.S. adult population, not sampled by occupational role or management level. While the survey does not provide organizational subgroups, the findings are nonetheless relevant for organizational research, as employees and managers emerge from the broader public and bring these predispositions into organizations. To identify latent attitudinal clusters (H1), we used two composite variables capturing respondents’ excitement or concern about (a) AI applications (POSNEGAI) and (b) potential human enhancements (POSNEGHE).
At the outset of the survey (Pew Research Center, 2021), all participants answered two multi-item questions: one asked how excited or concerned they would be if AI performed six specific types of work, while the other asked about potential new techniques that could change human abilities in six ways. For each item, respondents selected from five options ranging from “Very excited” to “Very concerned,” with nonresponses coded separately and excluded from analysis. These two question blocks offered a uniquely detailed set of attitudinal measures, each with six items and five-point response options, making them well suited for clustering analysis. Notably, while the first block focused directly on AI applications, the second addressed broader technological changes that may be facilitated by AI. Table 2 summarizes the two multi-item questions that served as clustering inputs for H1:
Table 2. Variables for Clustering Analysis (H1).
| Variable | Question / Item |
|---|---|
| POSNEGAI | How excited or concerned would you be if artificial intelligence computer programs could do each of the following? |
| a | Know people’s thoughts and behaviors |
| b | Perform household chores |
| c | Make important life decisions for people |
| d | Diagnose medical problems |
| e | Perform repetitive workplace tasks |
| f | Handle customer service calls |
| POSNEGHE | How excited or concerned would you be about potential new techniques that could change human abilities in the following ways? |
| a | Slow the aging process to allow the average person to live decades longer |
| b | Allow some people to far more quickly and accurately process information |
| c | Prevent some people from getting serious diseases or health conditions |
| d | Allow some people greatly increased strength for lifting heavy objects |
| e | Allow some people to see shapes and patterns in crowded spaces far beyond what the typical person can see today |
| f | Allow some people to hear sounds far beyond what the typical person can hear today |
Source: Pew Research Center (2021)
To test whether the clusters differed significantly in their overall attitudes (H2), we examined four additional variables measuring general orientations toward technology, science, and AI. These items extend beyond the specific clustering inputs (POSNEGAI and POSNEGHE) and provide a broader attitudinal context, allowing us to validate whether the clusters reflect meaningful differences in respondents’ general views. Two items assessed attitudes toward AI (CNCEXC and ALGFAIR), while two others measured general attitudes toward technology (TECH1) and science (SC1). Because TECH1 appeared only in Form 1 and SC1 only in Form 2, these variables also serve as a robustness check across subsamples. Table 3 below presents the wording of the four variables used to test H2 (CNCEXC, ALGFAIR, TECH1, and SC1), which capture these broader general attitudes.
Table 3. The 4 Variables about General Attitudes toward Technology, Science and AI (H2).
| Variable | Question |
|---|---|
| CNCEXC | Artificial intelligence computer programs are designed to learn tasks that humans typically do, for instance recognizing speech or pictures. Overall, would you say the increased use of artificial intelligence computer programs in daily life makes you feel… |
| | Response options: 1 More excited than concerned |
| ALGFAIR | Do you think it is possible or not possible for people to design artificial intelligence computer programs that can consistently make fair decisions in complex situations? |
| | Response options: 1 Possible |
| TECH1 | Overall, would you say technology has had a mostly positive effect on our society or a mostly negative effect on our society? |
| SC1 | Overall, would you say science has had a mostly positive effect on our society or a mostly negative effect on our society? |
| | Response options: 1 Mostly positive |
Source: Pew Research Center (2021)
To assess whether cluster differences followed a consistent directional order across domains (H3), we analyzed six items from the Pew Research Center survey (2021) covering widely discussed AI-related technologies. These included three noninvasive applications (social media, facial recognition, and driverless vehicles) and three more invasive or embodied applications (brain-implanted chips, gene editing, and robotic exoskeletons). The survey was split into two forms, with approximately half of respondents answering the first set (noninvasive, Form 1) and the other half the second set (invasive, Form 2). This design enables us to test not only whether clusters differ in their overall orientations toward AI but also whether their relative positions remain consistent across contrasting domains. Table 4 below shows the six domain-specific variables analyzed to assess H3:
Table 4. The 6 Variables about AI Technologies in Various Domains (H3).
| Variable | Question |
|---|---|
| SMALG2 | Do you think widespread use of computer programs by social media companies to find false information on their sites has been a… |
| FACEREC2 | Do you think the widespread use of facial recognition technology by police would be a… |
| DCARS2 | Do you think widespread use of driverless passenger vehicles would be a… |
| BCHIP2 | Do you think widespread use of computer chip implants in the brain allowing people to far more quickly and accurately process information would be a… |
| GENEV2 | Do you think the widespread use of gene editing to greatly reduce a baby’s risk of developing serious diseases or health conditions over their lifetime would be a… |
| EXOV2 | Do you think widespread use of robotic exoskeletons would be a… |
| | Response options: 1 Good idea for society |
Source: Pew Research Center (2021)
Note: Some variables (e.g., CNCEXC, BCHIP2) originally used codings such as 1 = Good idea, 2 = Bad idea, 3 = Not sure. For comparability across items, we recoded them as 1 = Positive, 2 = Neutral, 3 = Negative, so that lower scores consistently reflect more positive attitudes and higher scores more negative attitudes. All ANOVAs and post-hoc tests were conducted on the recoded variables. For all variables in Table 3 and Table 4, nonresponse codes (e.g., “99”) were excluded.
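To make this recoding concrete, a minimal R sketch follows; the helper name recode_idea and the example vector are hypothetical and not the actual processing code used for the Pew file:

```r
# Hypothetical helper mapping the original idea-coding
# (1 = Good idea, 2 = Bad idea, 3 = Not sure) onto the unified scale
# (1 = Positive, 2 = Neutral, 3 = Negative).
recode_idea <- function(x) {
  out <- rep(NA_integer_, length(x))
  out[x == 1] <- 1L  # Good idea -> Positive
  out[x == 3] <- 2L  # Not sure  -> Neutral
  out[x == 2] <- 3L  # Bad idea  -> Negative
  out                # nonresponse codes (e.g., 99) stay NA and drop out of the ANOVAs
}
recode_idea(c(1, 2, 3, 99))  # returns 1 3 2 NA
```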
As mentioned earlier, each of the two clustering variables (POSNEGAI and POSNEGHE) comprises six sub-questions (a–f). To facilitate clustering, we averaged these responses into two composite variables, which were then standardized. We acknowledge that averaging may reduce variance and mask item-level heterogeneity (Hair et al., 2019); however, in this study the averaged variables still produced distinct and interpretable clusters, suggesting that substantive group differences were preserved. Using these standardized variables, we applied k-means clustering, chosen because it is computationally efficient for larger samples, produces non-overlapping clusters that are straightforward to interpret, and is widely applied in marketing and management research (Hair et al., 2019). To select the number of clusters, we used the elbow method, a widely used approach for this purpose (Johnson & Wichern, 1992), which indicated that three clusters best fit the data.
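As an illustration of this sequence, the R sketch below runs the elbow method and the final k-means solution on simulated composites rather than the actual ATP data; the variable names and settings (e.g., nstart = 25, the range k = 1–8) are our assumptions:

```r
# Minimal sketch of the clustering step, using simulated 1-5 composites.
set.seed(42)
n <- 10000
composites <- data.frame(
  posnegai = rnorm(n, mean = 3.2, sd = 0.8),  # stand-in for the AI composite
  posneghe = rnorm(n, mean = 2.6, sd = 0.9)   # stand-in for the enhancement composite
)
X <- scale(composites)  # standardize both composites

# Elbow method: total within-cluster sum of squares for k = 1..8
wss <- sapply(1:8, function(k) kmeans(X, centers = k, nstart = 25)$tot.withinss)
plot(1:8, wss, type = "b",
     xlab = "Number of clusters k", ylab = "Total within-cluster SS")

# Final three-cluster solution
km <- kmeans(X, centers = 3, nstart = 25)
table(km$cluster)  # cluster sizes
```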
To assess whether the identified clusters differed significantly in their attitudes toward AI-related technologies, we employed a series of one-way analyses of variance (ANOVAs) (Hair et al., 2019). Each ANOVA tested mean differences across clusters for a given survey item. Where overall F-tests were significant, we conducted Tukey’s Honest Significant Difference (HSD) post-hoc comparisons to identify which cluster means differed from one another while controlling for familywise error. This allowed us to distinguish whether differences followed a linear pattern or whether reversals emerged. Beyond significance testing, we reported eta squared (η2) for each ANOVA, which quantifies the proportion of variance explained by cluster membership; following the benchmarks recommended by Hair et al. (2019), values of approximately .01, .06, and .14 indicate small, medium, and large effects, respectively. These effect sizes convey the substantive strength of differences beyond statistical significance. This analytic sequence allowed us to (a) identify latent clusters (H1), (b) test their overall attitudinal differences (H2), and (c) assess whether these differences followed a consistent directional structure across domains (H3).
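The R sketch below illustrates this testing sequence for a single recoded item; the data are simulated (so the F-test shows no real effect) and the variable name item is hypothetical:

```r
# Illustrative ANOVA / Tukey HSD / eta-squared sequence for one item.
set.seed(1)
df <- data.frame(
  cluster = factor(sample(c("Skeptics", "Cautious", "Optimists"), 5000, replace = TRUE)),
  item    = sample(1:3, 5000, replace = TRUE)  # 1 = Positive ... 3 = Negative
)
fit <- aov(item ~ cluster, data = df)
summary(fit)    # overall F-test across the three clusters
TukeyHSD(fit)   # pairwise comparisons controlling familywise error

# Eta squared = SS_between / SS_total
ss <- summary(fit)[[1]][["Sum Sq"]]
eta_sq <- ss[1] / sum(ss)
eta_sq  # near zero here because the simulated data carry no cluster effect
```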
The dataset was collected in November 2021, prior to the widespread emergence of generative AI applications such as ChatGPT. Survey items focused on technologies that were prominent in public debate at the time, including facial recognition, driverless cars, brain chip implants, gene editing, and robotic exoskeletons. Accordingly, the results capture attitudes toward these technologies rather than generative AI specifically. This temporal scope should be borne in mind when interpreting the findings.
We conducted the analysis using R version 4.5.0. As described above, we performed a cluster analysis based on POSNEGAI and POSNEGHE (Table 2) to examine whether different clusters exhibit distinct attitudes toward issues related to new technologies. Under the elbow method, the total within-cluster sum of squares declined sharply up to three clusters before stabilizing, indicating that three clusters provided the optimal solution. The sample was therefore divided into three clusters based on responses to POSNEGAI (focused on potential AI technologies) and POSNEGHE (focused on potential new technologies in general). Given this classification, differences in responses to these variables are expected.
The k-means analysis yielded three clusters of sizes 2,951, 4,336, and 2,639 respondents, respectively. Table 5 presents descriptive statistics for the two clustering variables (POSNEGAI and POSNEGHE) by cluster. The results show clear differentiation. These distinctions indicate that averaging did not obscure meaningful differences but produced interpretable clusters.
Table 5. Descriptive Statistics for the Clustering Variables (POSNEGAI and POSNEGHE) by Cluster.
| Cluster | Size | POSNEGAI (Mean ± SD) | POSNEGHE (Mean ± SD) |
|---|---|---|---|
| 1 Skeptics | 2951 | 4.28 ± 0.37 | 3.38 ± 0.96 |
| 2 Cautious | 4336 | 3.26 ± 0.28 | 2.56 ± 0.78 |
| 3 Optimists | 2639 | 2.18 ± 0.45 | 1.80 ± 0.70 |
Source: original compilation, based on data from Pew Research Center (2021).
For interpretability, we labeled the clusters according to their dominant attitudinal patterns. Cluster 1 (n = 2951) scored highest on both measures of concern and was labeled the “Skeptics.” Cluster 3 (n = 2639) scored lowest on both sets of concern items and was therefore labeled the “Optimists,” as they appeared more excited about AI and related technologies. Cluster 2 (n = 4336) showed moderate scores and was labeled the “Cautious.” POSNEGAI captures attitudes toward AI-specific applications, while POSNEGHE reflects attitudes toward potential human enhancements, neither of which includes invasive technologies such as brain chips, gene editing, or exoskeletons. These results provide strong evidence for the existence of three distinct attitudinal clusters, consistent with H1.
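A self-contained R sketch of this labeling rule is shown below; it ranks cluster centers on simulated standardized composites, so the objects and values are illustrative only:

```r
# Hypothetical labeling step: rank cluster centers by mean standardized
# concern and attach labels (lowest = Optimists, highest = Skeptics).
set.seed(42)
X <- scale(matrix(rnorm(2000), ncol = 2,
                  dimnames = list(NULL, c("posnegai", "posneghe"))))
km <- kmeans(X, centers = 3, nstart = 25)
lab <- c("Optimists", "Cautious", "Skeptics")[rank(rowMeans(km$centers))]
cluster_label <- factor(lab[km$cluster],
                        levels = c("Skeptics", "Cautious", "Optimists"))
table(cluster_label)  # sizes of the labeled clusters
```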
To further validate the clustering analysis, we examined general attitudes toward technology (TECH1), science (SC1), and AI (CNCEXC and ALGFAIR) with ANOVA. The results are shown in Table 6:
Table 6. ANOVA Results for General Attitudes toward Technology, Science and AI (H2).
| Variable | Cluster | M | SD | n | η2 | Tukey results |
|---|---|---|---|---|---|---|
| CNCEXC_W99 | Skeptics | 2.65 | 0.52 | 2951 | .29 | Skeptics > Cautious > Optimists |
| | Cautious | 2.21 | 0.63 | 4336 | | |
| | Optimists | 1.62 | 0.66 | 2639 | | |
| ALGFAIR_W99 | Skeptics | 2.35 | 0.69 | 2928 | .14 | Skeptics > Cautious > Optimists |
| | Cautious | 2.06 | 0.73 | 4309 | | |
| | Optimists | 1.58 | 0.72 | 2621 | | |
| TECH1_W99 | Skeptics | 1.87 | 0.68 | 1541 | .11 | Skeptics > Cautious > Optimists |
| | Cautious | 1.55 | 0.62 | 2157 | | |
| | Optimists | 1.29 | 0.52 | 1289 | | |
| SC1_W99 | Skeptics | 1.57 | 0.67 | 1403 | .08 | Skeptics > Cautious > Optimists |
| | Cautious | 1.29 | 0.53 | 2175 | | |
| | Optimists | 1.15 | 0.43 | 1348 | | |
Source: original compilation, based on data from Pew Research Center (2021).
The three clusters identified by responses to POSNEGAI and POSNEGHE also differ significantly in their responses to the four variables listed, even when divided across two questionnaire forms. While POSNEGAI and POSNEGHE do not address invasive technologies, our results show that these clusters maintain distinct response patterns even when invasive technologies are considered.
As expected, the clusters differed significantly on CNCEXC (F(2, 9923) = 2046, p < 2 × 10−16). As shown in Table 6, clusters also differed significantly on ALGFAIR (F(2, 9855) = 817.2, p < 2 × 10−16), TECH1 (F(2, 4984) = 318.4, p < 2 × 10−16), and SC1 (F(2, 4923) = 209.3, p < 2 × 10−16). On all variables, the Optimists scored lowest, the Skeptics highest, and the Cautious fell in between, mirroring the patterns observed for POSNEGAI and POSNEGHE. These results confirm that the clusters not only exist but also differ systematically in their general orientations toward AI, technology, and science, consistent with H2.
As outlined above, we next ran ANOVAs on six domain-specific items covering both noninvasive and invasive technologies (SMALG2, FACEREC2, DCARS2, BCHIP2, GENEV2, EXOV2): social media, facial recognition, driverless vehicles, brain-implanted computer chips, gene editing, and robotic exoskeletons (Pew Research Center, 2021). The results are shown in Table 7:
Table 7. ANOVA Results for AI Technologies in Various Domains (H3).
| Variable | Cluster | M | SD | n | η2 | Tukey results |
|---|---|---|---|---|---|---|
| SMALG2_W99 | Skeptics | 2.20 | 0.81 | 1538 | .06 | Skeptics > Cautious > Optimists |
| | Cautious | 1.90 | 0.84 | 2156 | | |
| | Optimists | 1.66 | 0.84 | 1288 | | |
| FACEREC2_W99 | Skeptics | 1.92 | 0.83 | 1537 | .02 | Skeptics > Cautious > Optimists |
| | Cautious | 1.81 | 0.83 | 2156 | | |
| | Optimists | 1.64 | 0.83 | 1287 | | |
| DCARS2_W99 | Skeptics | 2.62 | 0.62 | 1540 | .19 | Skeptics > Cautious > Optimists |
| | Cautious | 2.14 | 0.81 | 2153 | | |
| | Optimists | 1.64 | 0.80 | 1287 | | |
| BCHIP2_W99 | Skeptics | 2.76 | 0.48 | 1405 | .16 | Skeptics > Cautious > Optimists |
| | Cautious | 2.51 | 0.63 | 2162 | | |
| | Optimists | 2.03 | 0.80 | 1346 | | |
| GENEV2_W99 | Skeptics | 2.33 | 0.72 | 1403 | .13 | Skeptics > Cautious > Optimists |
| | Cautious | 1.97 | 0.76 | 2170 | | |
| | Optimists | 1.56 | 0.73 | 1347 | | |
| EXOV2_W99 | Skeptics | 2.28 | 0.67 | 1406 | .17 | Skeptics > Cautious > Optimists |
| | Cautious | 1.83 | 0.69 | 2173 | | |
| | Optimists | 1.47 | 0.66 | 1345 | | |
Source: original compilation, based on data from Pew Research Center (2021).
Clusters differed significantly across all six measures: SMALG2 (F(2, 4979) = 152.9, p < 2 × 10−16), FACEREC2 (F(2, 4977) = 40.83, p < 2 × 10−16), DCARS2 (F(2, 4977) = 592.4, p < 2 × 10−16), BCHIP2 (F(2, 4910) = 452.8, p < 2 × 10−16), GENEV2 (F(2, 4917) = 374, p < 2 × 10−16), and EXOV2 (F(2, 4921) = 498.8, p < 2 × 10−16). Across all the variables above, Skeptics reported the highest concern levels, Cautious respondents fell in between, and Optimists consistently scored lowest, similar to the pattern observed in POSNEGAI and POSNEGHE. These findings indicate that the clusters’ relative positions remain stable across diverse and contrasting domains, consistent with H3.
Our research suggests that individuals can be meaningfully clustered by their attitudes toward new technology, and these attitudinal differences remain relatively stable across diverse AI-related domains. This consistency indicates that technology attitudes are not merely context-specific reactions but reflect deeper, enduring dispositions. The robustness of these patterns aligns with prior research highlighting the influence of individual characteristics on technology acceptance. For instance, studies have linked personality traits such as openness, conscientiousness, and agreeableness to technology adoption (Fuglsang, 2024; Stein et al., 2024; Park & Woo, 2022; Barnett et al., 2015). However, while earlier work often reported only weak correlations (Fuglsang, 2024; Park & Woo, 2022), our results suggest that individuals’ broader orientations toward new technology manifest as distinct and stable clusters. This implies that dispositional differences – potentially shaped by personality but not reducible to it – underlie how people respond to emerging technologies.
Consistent with H1, the cluster analysis revealed three distinct groups with differing attitudes toward AI. ANOVAs further confirmed significant between-group differences (H2). Moreover, in line with H3, these differences followed a consistent directional order across domains: Skeptics expressed the greatest concerns, Optimists the least, and Cautious respondents fell in between. This suggests that attitudinal clusters not only exist but also exhibit stability across diverse AI-related technologies, reinforcing the view that individuals’ underlying dispositions shape their responses more strongly than contextual variation.
Across a large, nationally representative sample, we identified three stable attitudinal clusters based on concern/excitement composites about AI and human-enhancement technologies. These clusters differed systematically on general evaluations of AI, technology, and science, and the directional ordering (Skeptics > Cautious > Optimists in concerns) persisted across multiple domains of AI (e.g., social media detection, facial recognition, driverless vehicles, brain chips, gene editing, exoskeletons). This pattern supports the view that technology attitudes reflect enduring dispositions rather than context-specific reactions. Whereas most prior studies are context-bound, our cluster analysis of a large public sample verifies population-level heterogeneity and shows that the same ordering of clusters appears across qualitatively different technology domains, strengthening external validity for segmentation approaches in future work.
Classical models (TAM/TRM/UTAUT2) often treat inhibiting emotions (e.g., anxiety, discomfort) and risk beliefs as indirect or weak drivers of adoption (Tamilmani et al., 2021; Parasuraman & Colby, 2015; Davis et al., 1989). Our findings suggest that concern-based dispositions operate more directly, segmenting the population into groups with distinct baseline orientations that persist across contexts. This complements literature about individual traits, particularly individual worries and concerns (e.g., Grassini & Koivisto, 2024; Stein et al., 2024; Blut & Wang, 2020), as well as recent evidence on individual-level factors encompassing demographic, perceptual, and socio-psychological dimensions (e.g., Daly et al., 2025; Montag & Ali, 2025; C. Wang et al., 2025; G. Wang et al., 2025; Kaya et al., 2024; Yuan et al., 2022), by demonstrating that such factors aggregate into stable attitudinal profiles at scale.
Our findings also enrich the broader research about ICT and AI/GenAI. Framing AI within the broader ICT continuum clarifies that acceptance depends not only on technical features but also on organizational ICT readiness and data/analytics infrastructures that shape perceived risks and value realization (Chugh et al., 2025; Mariani & Dwivedi, 2024). Our results show that individual predispositions toward applied or embodied AI (as in this study) form coherent, domain-general profiles that likely carry over as organizations integrate both applied AI and GenAI. Because the technologies analyzed here, such as facial recognition, driverless vehicles, or brain–computer interfaces, directly affect physical, ethical, and personal domains, they evoke more salient attitudinal differences, offering a sharper lens for identifying the underlying structure of individual concerns.
Our analysis captures individual-level attitudes in the general population. These are directly relevant for organizations, since employees bring these predispositions into the workplace, and managers must account for heterogeneous employee and consumer attitudes when implementing AI strategies. Our findings suggest that resistance to adoption stems more from psychological predispositions than from technological attributes. While organizations aim for the rapid adoption of emerging technologies such as AI, individual worries and concerns are often overlooked, despite their crucial influence on adoption outcomes (Meuter et al., 2003; Querci et al., 2022). As recognized inhibitors in models like TAM and TRM, these concerns create psychological barriers to adoption (Blut & Wang, 2020; Park et al., 2021). While past studies often treated such factors as indirect influences, our findings show that concerns remain stable across technologies, indicating a more direct impact. Accordingly, organizations should focus less on modifying product features and more on building trust and reducing uncertainty. Targeted engagement strategies, such as tailored education, trust-building initiatives, and identifying resistant individuals based on prior technology attitudes, may help organizations improve adoption outcomes.
This study is subject to several limitations. Our dataset captures only concerns, not motivators, so future research should test whether positive drivers of adoption show the same stability across technologies. The analysis also averaged multiple item-level responses into composite variables, which may have masked some heterogeneity, though the resulting clusters remained distinct and interpretable. In addition, while clustering provided meaningful attitudinal profiles, group boundaries are statistical rather than categorical, and individual variation within clusters should be expected. Finally, the data were collected in 2021 in the U.S. context, before the rise of generative AI. Although the findings capture enduring patterns of concern, future work should validate them in updated datasets and cross-cultural settings.
Our results also advance technology adoption research by linking the findings more closely to prior studies. Prior studies often treated concerns as weak or indirect inhibitors (e.g., Davis et al., 1989; Blut & Wang, 2020; Parasuraman & Colby, 2015), but our analysis demonstrates that they represent enduring dispositions rather than context-specific reactions. By identifying Skeptics, Cautious, and Optimists, we highlight systematic attitudinal heterogeneity beyond aggregate survey percentages. This provides both theoretical and methodological value, while also pointing to future research opportunities to refine cluster-based approaches and extend them internationally.
This study provides population-level evidence that attitudinal heterogeneity toward AI is structured, stable, and cross-domain. Three clusters (Skeptics, Cautious, Optimists) differ consistently in concern across both applied/embodied AI domains and general orientations toward AI, technology, and science. In line with our hypotheses, the analysis identified three distinct clusters with differing attitudes toward AI (H1) and confirmed significant between-group differences across multiple variables (H2). Importantly, H3 was also supported: cluster differences followed a consistent directional order across domains, with Skeptics expressing the greatest concerns, Optimists the least, and Cautious respondents falling in between. This consistency underscores that individuals’ orientations toward AI remain stable across diverse contexts, reinforcing the role of underlying dispositions in shaping technology attitudes. The findings make theoretical advances and yield actionable managerial implications. The results highlight why organizational adoption hinges not only on technical performance but also on aligning governance, communication, and rollout strategies with personal dispositions. Practically, segmentation and tailored engagement may reduce resistance and accelerate value realization.
Limitations include the focus on concerns rather than motivators, the use of composite measures that may mask item-level nuances, and the pre-GenAI timing of the dataset. These limitations nonetheless point to meaningful avenues for future research using post-2023 data, cross-cultural samples, and designs that connect attitudinal segments with actual adoption behaviors. Overall, the results underscore a central insight: in organizational contexts, the decisive constraint is often not the capability of AI itself but the diversity of human dispositions toward it. Addressing this challenge requires not only strategic and governance-aligned interventions but also a deeper sense of empathy and understanding toward the individuals whose experiences ultimately shape the success of AI adoption.