Adolescents live increasingly digital lives. As of 2020, 80% of Swedish 16-year-olds spend more than three hours per day online, up from 42% ten years earlier (1). This roughly coincides with a two-fold increase in the number of 16- to 29-year-olds reporting severe worry or anxiety (2). These simultaneous increases have spurred speculation that adolescents’ increased time spent online could be the cause of the increase in mental health problems (3). Indeed, longitudinal studies have found screen use to be linked to depressive symptoms (4,5). According to the World Health Organization (WHO) report Health Behaviour in School-aged Children (HBSC), with data from 38 different countries, on average 11% of adolescents could be described as ‘problematic social media users’, with another 32% as ‘intense users’ (6). Excessive screen time also risks worsening affected youths’ school attendance, physical activity level, and social development (7).
Screen time measures have been criticized for being overly broad and failing to distinguish between qualitatively different activities (8). Increasingly, researchers emphasize the importance of distinguishing between active and passive use rather than focusing solely on total duration. The WHO has developed guidelines for ‘sedentary screen time’, which in essence concerns time spent passively watching screen-based entertainment (9). Previous studies have identified certain screen activities, such as texting friends, to be positively associated with well-being (10), while passive scrolling is conversely associated with ill-health (11,12). Passive scrolling has been hypothesized to negatively affect well-being through mechanisms such as increased social comparison, reduced active social engagement, and displacement of rewarding offline activities (13,14). Passive consumption of social media content may increase feelings of inadequacy and exclusion, while providing fewer opportunities for social reinforcement compared to active engagement. The motives for engaging in screen activities might also predict ill-health: escapism and social compensation, for example, might be linked more strongly to mental health problems than pure entertainment motives (15).
It should also be noted that associations between screen time and mental health are often non-linear. For example, some studies have found that moderate levels of screen use are associated with similar or even slightly better well-being than very low or very high levels, challenging simple assumptions that more screen time is uniformly harmful (8).
Twenge and Farley (16) have argued that there are gender differences in how various types of screen time activities are linked to mental health. In their cross-sectional study, social media and internet use were found to be more strongly associated with negative mental health outcomes, such as depressive symptoms, low self-esteem, and low life satisfaction, among girls. The authors speculate that this might be due partly to excessive social comparisons. In contrast, gaming, which was more common among boys, had weaker links to negative mental health indicators, possibly because it involves more real-time social interaction. Watching fictional series exhibited the weakest associations with mental health outcomes for both genders. These findings suggest that the type of screen activity may be more important than total screen time per se, and that observed gender differences may partly reflect differences in the content and nature of screen use (16).
Previous studies on the bond between parents and adolescents suggest that there is an association between excessive forms of screen time and less emotional closeness (17). Many parents experience difficulties in limiting their children's screen time and seek assistance to reduce it, indicating a need for the development of evidence-based treatments for this group (18). Interventions typically work through parents, and there is a need to know what these interventions should focus on to make them relevant to parents. However, the complex interactions in family dynamics and the multiple problems related to screen use and mental health make it hard to point to “root causes” that should be targeted.
Since excessive screen time is a transdiagnostic problem, it might be useful to make use of recent developments in the network approach to psychopathology (19). In this approach, a person's problems with living (e.g. symptoms) are seen as mainly being caused by each other, rather than by underlying disease constructs. This can be analyzed as networks, with symptoms, contextual factors, and problems as nodes, and their causal connections visualized as edges (i.e. arrows). Symptom networks thus treat problems such as excessive screen use as a phenomenon that needs to be analyzed as part of a larger system. Typically, networks for groups are created using cross-sectional data (20). In this type of study, the edges represent partial correlations between nodes (e.g. respondents who score high on X also tend to score high on Y, controlling for all other nodes in the network). These studies can thus answer how problems tend to co-occur, but nothing about what is causing what.
An alternative is PErceived CAusal Networks (PECAN), making use of respondent perceptions about causal relations (21,22). In this methodology, the respondent is asked whether X is causing Y. This has the benefits that data can be collected quickly, and individual-level insights can be aggregated to say something generalizable about a group. Obviously, the major limitation is that respondents need not be aware of all causal relations between problems, meaning that one must be mindful that the resulting network represents the mental model of how problems interact, not their true interaction. For example, a parent might perceive that passive scrolling contributes to sleep problems, or that sleep problems contribute to irritability. The resulting network thus reflects how respondents understand the interrelations among problems, rather than statistical associations or objectively verified causal effects. Such networks can be analyzed to identify problems that are perceived as particularly influential within the system.
This method has been used in a series of studies exploring the PECANs of adolescents with broad mental health issues (23,24), social anxiety specifically (25), adults with depression and comorbid disorders (22), adults in primary care psychotherapy (26), and adults with chronic pain (27).
PECAN is still a novel method, and many details on how to best set up data collection are yet unknown (28). A researcher has many options for how to select problems relevant to the specific respondent, how to phrase questions about causal relations between these problems, and how to aggregate individual networks into group-level networks. For example, when collecting data on causal connections between problems, respondents can be asked either about the causes of a problem (arrows going into the problem from other problems) or about its effects (arrows going out from the problem to other problems). Studies have used both methods, and combinations (for example (29)). Even though the end result is the same (a network), it is unknown whether one of the two ways of asking might be more intuitive and yield more reliable results.
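To make concrete why the two question formats yield the same end result, the following is a minimal illustrative sketch (not the study's actual pipeline; the problem names and data structures are hypothetical). In the “causes” format, selecting Y when asked about problem X asserts the edge Y → X; in the “effects” format, selecting Y when asked about X asserts X → Y.

```python
def edges_from_causes(answers):
    """answers: {problem: [selected causes]} -> set of (source, target) edges.
    A selected cause points INTO the problem being asked about."""
    return {(cause, problem)
            for problem, chosen in answers.items() for cause in chosen}

def edges_from_effects(answers):
    """answers: {problem: [selected effects]} -> set of (source, target) edges.
    A selected effect points OUT of the problem being asked about."""
    return {(problem, effect)
            for problem, chosen in answers.items() for effect in chosen}

# The same perceived structure, reported in the two formats:
causes_version = {"Insomnia": ["Passive scrolling"], "Unfocused": ["Insomnia"]}
effects_version = {"Passive scrolling": ["Insomnia"], "Insomnia": ["Unfocused"]}

# Both map onto the identical directed-edge set.
assert edges_from_causes(causes_version) == edges_from_effects(effects_version)
```

Whether respondents find one direction of questioning more intuitive is an empirical question, which the first research question below addresses.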
The present study aims to explore both the best way to set up a PECAN study for exploring parental understanding of child problems, and an application to a specific population of respondents: parents answering about their child with excessive screen time.
When asking respondents about causal relations, does asking about problem causes or effects differ in terms of comprehensibility and response consistency?
How do parents perceive that excessive screen time and related problems are causally linked, and are there differences between boys and girls?
Approval from the national ethics board was obtained before any data was collected (The Swedish Ethical Review Authority, reg. no. 2024-02630-01). Qualtrics was used in all data collection, with anonymous surveys (not collecting IP addresses). The study was preregistered (https://aspredicted.org/fhff-wsqm.pdf) and data and materials used (i.e. Qualtrics questionnaires) can be found at OSF (https://osf.io/3b4nc).
Participants were recruited through advertisements on social media, aimed at men and women aged 30 to 65 years. The ads asked for participants who were worried about their child's screen time use, and ran for three week-long periods between September and October, 2024. No requirement on child age was defined. Prospective respondents followed a link to an informed consent form describing the study. Respondents did not receive any compensation for their participation.
Parents consenting to participate (n = 316) were asked background demographic questions about themselves and the child they worried about. Parents were asked whether their child had any diagnosed psychiatric or neurodevelopmental conditions. They were presented with a checklist including ADHD, autism spectrum disorder, depression, anxiety disorders, dyslexia, and an “other” option, and could select multiple responses. The respondents then completed the PECAN, and were randomized to two versions of this survey, differing only in whether causal questions were about problem causes or problem effects (described in detail below). Next, respondents evaluated how well they understood the PECAN questions and were asked a randomly selected PECAN item again, in order to assess consistency of answers (also described in detail below). Respondents finally completed two standardized surveys: the GSMQ-9 and the SDQ (described below). Respondents who did not complete the PECAN part of the survey were excluded from all analyses, leading to 146 respondents being excluded at this step. Completers did not differ significantly from non-completers on parent age, child age, child gender, daily screen time, school attendance, parental education level, living situation, or presence of a psychiatric diagnosis (all p > .06). Effect sizes were uniformly small (Hedges g ≤ 0.20; Cramér’s V ≤ 0.10).
The method used in the present study was a simplified version of that previously described (e.g. in (22)). Most importantly, selected problems were not rated for severity or frequency, and edges between them were only rated as binary (not graded with regards to frequency, causal strength or certainty). These simplifications were implemented to make the method time-efficient. In contrast to previous versions of PECAN, respondents were allowed to indicate “auto-causation”, that is: a problem could be given as a cause / effect of itself.
Two versions of PECAN were created, one asking about problem causes (arrows going into the problem from other problems), and one asking about problem effects (arrows going out from the problem to other problems). Respondents were randomized to one of these versions.
First, respondents were presented with a list of 20 problems commonly associated with excessive screen time in adolescence (see Table 1). The list of 20 problem areas was developed through a multi-step process. First, problem areas were identified based on previous PECAN studies in adolescents (23) and literature on screen time and adolescent mental health (5,14). Second, the preliminary list was reviewed by two clinical psychologists (authors A.N. and L.K.) to ensure clinical relevance. Third, overlapping or redundant problems were merged, resulting in the final set of 20 problem areas. The goal was to cover child emotional, behavioural, social, and parental factors commonly discussed in relation to excessive screen use. These were presented in a random order. The respondent was instructed to indicate which of these had been problematic on a daily basis during the current semester.
Table 1. Frequency of problem areas (in descending order of frequency in total sample)
| Problem area | Illustrative specifications | Girls; n = 44 | Boys; n = 82 |
|---|---|---|---|
| Passive scrolling | Tiktok, Youtube, Streams | 88.6 % | 75.6 % |
| Gaming | Roblox, Fortnite, Minecraft | 11.4 % | 65.9 % |
| Physically inactive | Used to do sports but stopped | 65.9 % | 59.8 % |
| Unfocused | Doesn’t listen, forgetful, absent | 61.4 % | 56.1 % |
| Parent worries | Loneliness, future independence | 56.8 % | 57.3 % |
| Parents not keeping boundaries | Can’t control screen or enforce rules | 59.1 % | 50.0 % |
| Parents stressed | Parents not synced, own mental health | 59.1 % | 48.8 % |
| Somatic concerns | Tired, headaches, nausea | 63.6 % | 36.6 % |
| Sad | Low self-esteem, self-critical | 61.4 % | 35.4 % |
| Parent too accommodating | Avoids conflicts, doesn’t work | 52.3 % | 31.7 % |
| No IRL friends | No initiatives, online friends only | 34.1 % | 48.8 % |
| Insomnia | Goes to bed late, awakenings | 45.5 % | 32.9 % |
| Unhealthy eating | Skips meals, very selective | 45.5 % | 32.9 % |
| School too hard | Unstructured, oral instructions | 34.1 % | 22.0 % |
| Aggressive | Yells insults, breaks things | 27.3 % | 26.8 % |
| No IRL interests | Never had any, interest stopped | 18.2 % | 32.9 % |
| Social anxiety | Avoids peers, avoids new people | 27.3 % | 15.9 % |
| School absence | Often late, doesn’t go at all | 25.0 % | 13.4 % |
| Socially awkward | Eye-contact, quiet, childish | 20.5 % | 15.9 % |
| Active on social media | Snapchat, Discord, Tiktok | 31.8 % | 3.7 % |
Next, respondents were informed that the questionnaire would now go through the selected problems and ask about how they are causally linked for the child. Respondents completed a training item to make sure they understood the logic of the questionnaire. In the “causes” version, this training question was as follows: “Why does it snow?” with response options being “Piles of snow”, “Freezing temperature”, “Slippery roads”, “It snows”. In the “effects” version, the training question instead was “When it snows, what might this lead to?”, with response options being “Dark clouds”, “Slippery roads”, “Sun-shine”, and “It snows”. In both versions, only the second choice was considered correct. Respondents picking any choice other than the correct one were excluded from analysis. Using this criterion, 9 respondents were excluded from the “Causes” version and 14 from the “Effects” version.
Finally, respondents were presented, in random order, the previously selected problems. For each, respondents were asked to specify the problem in their own words (e.g. for Excessive gaming, the respondents were asked “Which games?”); for a summary of these specifications for each problem area, see Table 1. The respondents were then asked either “What are contributing causes to this problem?” or “What does this problem lead to?”. Response options were all the selected problems, which were presented in a random order (randomized for each respondent but then presented in that consistent order across all causes / effects questions). The combined time spent on the instructions page, training item, and all causes / effects questions was counted: the “Causes” version took on average 8.2 minutes to complete (SD = 7.7), and the “Effects” version slightly longer, with an average of 11.1 minutes (SD = 9.5). Respondents who completed this faster than 2 minutes were excluded from analysis, as this was deemed too fast to give reliable answers. 19 respondents were excluded by this criterion.
The respondents then answered an evaluation question: “The questions about how my child’s problems are related were easy to understand”, answered on a 5-point Likert scale: “Not at all”, “No”, “Somewhat”, “Yes,” and “Yes, completely”.
To assess the consistency of responding for each respondent, a retest question was asked. For this, one causal question (whether about causes or effects) was repeated, randomly selecting either the question about excessive gaming, social media, or passive scrolling. These three were picked as we expected each respondent to have picked at least one of these problems as relevant to their child. The answer to this retest question was compared, for each respondent, to the previous answer on the same question, yielding 49 comparisons used to calculate consistency of answers (26 for the “Causes” version, and 23 for the “Effects” version). We applied a conservative criterion, counting an answer as consistent only if all edges were reported identically in both the original and the retest question (i.e. if the exact same other nodes were selected in both questions).
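The conservative all-or-nothing criterion can be sketched as follows (an illustrative sketch, not the study's analysis code; the problem names are hypothetical). An answer counts as consistent only if the same set of nodes was selected both times, regardless of order:

```python
def is_consistent(original_selection, retest_selection):
    """Conservative criterion: consistent only if the exact same set of
    other problems was selected in both administrations."""
    return set(original_selection) == set(retest_selection)

def consistency_rate(pairs):
    """pairs: list of (original, retest) selections -> proportion consistent."""
    return sum(is_consistent(a, b) for a, b in pairs) / len(pairs)

pairs = [
    (["Insomnia", "Sad"], ["Sad", "Insomnia"]),  # same set, order differs: consistent
    (["Insomnia"], ["Insomnia", "Aggressive"]),  # one extra edge on retest: inconsistent
]
# consistency_rate(pairs) -> 0.5
```

Note that under this criterion a single added or dropped edge renders the whole answer inconsistent, which partly explains the low absolute consistency rates reported in the results.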
In the present study, the Gaming and Social Media Questionnaire (GSMQ) was used descriptively to report the proportion of adolescents meeting proposed criteria for Gaming Disorder and Social Media Disorder, based on the recommended cut-off. The GSMQ is a concise, 9-item tool designed to assess the extent and impact of gaming and social media use among adolescents and whether the proposed diagnostic criteria for “Internet Gaming Disorder” or “Social Media Disorder” are fulfilled (30). The GSMQ captures key aspects of digital engagement, focusing on frequency, duration, and the emotional and behavioral effects associated with these activities. It includes questions on the time spent on gaming and social media, as well as the perceived influence of these activities on academic performance, sleep patterns, social interactions, and overall well-being. Its brevity makes it an efficient instrument for both research and practical settings, allowing for a quick yet comprehensive assessment of adolescents’ digital habits. The GSMQ is scored separately for gaming and social media, and diagnostic criteria are considered fulfilled if more than five items are scored 3 or higher (on a 0 to 4 Likert scale). Internal consistency was excellent in the present sample: Cronbach’s alpha was .95 for the Gaming subscale and .96 for the Social Media subscale.
The Strengths and Difficulties Questionnaire (SDQ; (31)) is a widely used behavioral screening tool designed to assess the emotional and behavioral functioning of children and adolescents. It consists of 25 items that cover five key domains: emotional symptoms, conduct problems, hyperactivity/inattention, peer relationship problems, and prosocial behavior. Each domain includes both strengths and difficulties, providing a balanced view of a child’s psychological wellbeing. The SDQ can be completed by parents, teachers, or the young person themselves, making it versatile for use in various contexts such as clinical assessments, research, and educational settings. The results from the SDQ can help identify potential mental health concerns and inform targeted interventions or referrals. Its concise nature and strong psychometric properties make the SDQ a valuable tool for early detection of emotional and behavioral challenges, as well as for tracking changes over time in response to therapeutic or educational interventions. Internal consistency for the SDQ Total Difficulties score was acceptable, with a Cronbach’s alpha of .73, indicating satisfactory reliability in the present sample.
For our first research question (comparing the two versions of PECAN), we used Bayesian independent-samples t-tests with uninformative priors to compare the two randomized groups (causes vs effects version) on two outcome variables: (1) ratings of comprehensibility of the PECAN questions, and (2) within-person response consistency, assessed using the retest item. Consistency was operationalized as the proportion of participants whose repeated causal question yielded identical responses to the original question. These analyses assess differences in comprehensibility and reliability between the two versions, not differences between the resulting group-level networks. As we shall see, there were no differences between the versions, and thus data were combined for all subsequent analyses.
Group-level networks were constructed by aggregating individual perceived causal networks. Each directed edge (i.e. arrow) between two nodes represents the proportion of parents who reported that specific causal relation. To facilitate interpretability and reduce visual clutter, only the eight most frequently reported causal relations were visualized. This approach and rule-of-thumb cutoff has been used in previous perceived causal network studies (28). Note that the group networks thus represent typical parental perceptions, not statistical associations. These were visualized as two separate networks, one for girls and one for boys, plus a combination of the two.
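The aggregation step can be sketched as follows (an illustrative sketch under stated assumptions, not the study's analysis code; the edge data are hypothetical). Each parent contributes a set of directed edges; edges are counted across parents, converted to proportions, and only the most frequent ones are retained:

```python
from collections import Counter

def group_network(individual_edge_sets, top_k=8):
    """Aggregate individual perceived causal networks into a group network.
    individual_edge_sets: list of sets of (source, target) edges, one per parent.
    Returns the top_k most frequent edges with the proportion of parents
    reporting each."""
    counts = Counter(edge for edges in individual_edge_sets for edge in edges)
    n = len(individual_edge_sets)
    return [(edge, count / n) for edge, count in counts.most_common(top_k)]

parents = [
    {("Passive scrolling", "Insomnia"), ("Insomnia", "Unfocused")},
    {("Passive scrolling", "Insomnia")},
    {("Passive scrolling", "Passive scrolling")},  # auto-causation is allowed
]
# group_network(parents, top_k=2) ranks ("Passive scrolling", "Insomnia")
# first, reported by 2 of 3 parents.
```

Keeping only the eight most frequent edges mirrors the rule-of-thumb cutoff used in the visualized networks.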
Finally, we computed out-degree centralities split by gender. In perceived causal networks, out-degree centrality refers to the number of outgoing causal relations from a node and reflects the extent to which a given problem is perceived to influence other problems (Vogel et al., 2025). In this analysis, for each problem area, the average number of perceived outgoing arrows was computed across the individuals who had selected that problem area. Since parents of girls tended to perceive more causal edges between selected problems, the centrality metrics were normalized within individuals to make them comparable across groups. Thus, the out-degree centrality of a problem reflects how strongly it is perceived to cause other problems.
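A minimal sketch of this normalized out-degree computation (illustrative only, not the study's analysis code; node labels and data structures are hypothetical): each respondent's out-degrees are divided by that respondent's total edge count, so parents who report many edges overall do not dominate the group average.

```python
from collections import defaultdict

def normalized_out_degree(individual_networks):
    """individual_networks: list of (selected_problems, edge_set) per respondent,
    where edge_set contains (source, target) tuples.
    Out-degree of a problem = number of its outgoing edges, normalized within
    each respondent by their total edge count, then averaged across the
    respondents who selected that problem."""
    sums, counts = defaultdict(float), defaultdict(int)
    for selected, edges in individual_networks:
        total = len(edges)
        for problem in selected:
            out_deg = sum(1 for src, _ in edges if src == problem)
            sums[problem] += out_deg / total if total else 0.0
            counts[problem] += 1
    return {p: sums[p] / counts[p] for p in counts}

# One respondent selected problems "A" and "B" and reported a single edge A -> B:
nets = [(["A", "B"], {("A", "B")})]
# normalized_out_degree(nets) -> {"A": 1.0, "B": 0.0}
```

Under this normalization, a score near 1 means a problem accounts for most of a respondent's perceived outgoing causal influence, while 0 means it is perceived purely as an effect.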
After applying exclusion criteria as described above, 128 respondents were included for further analysis (66 who answered the “causes” version of PECAN, 62 who answered the “effects” version). Respondent relationship to the child in question was mostly the mother (89 %), followed by father (9 %). Remaining relationships stated were “godfather” and “grandmother”, one of each. Respondent age ranged from 37 to 65 years, with the average being 48.1 (SD = 5.3). Respondent education level was mostly university degree (91 %), followed by upper secondary (8 %) and compulsory only (1 %). It should be noted that this level of education is higher than the general population of Sweden, where roughly half of adults have an education higher than upper secondary (“Gymnasiet”).
Children were reported as identifying as male (64 %), female (34 %) and other gender (2 %; the last group was omitted from group calculations). Ages ranged from 12 to 19 years, with a mean of 14.1 (SD = 1.8). Most children lived with both parents (77 %), with a minority alternating between parents (13 %) or living with one parent only (9 %). One child did not live with a parent at all (e.g. foster home). Average school attendance was 81 % of expected hours (range 0 to 100 %; SD = 23.9 %). Half of children were reported to have a psychiatric diagnosis (48 %), the five most frequent being:
ADHD 23 %
Autism 11 %
Depression 8 %
Social Anxiety 5 %
Dyslexia 5 %
According to the GSMQ, 26 % of the final sample fulfilled proposed diagnostic criteria for gaming disorder, and 34 % fulfilled proposed criteria for social media disorder. Average SDQ score was 22.1 (range 13 to 35; SD = 4.4; n = 119). Missing data were limited to the SDQ, which was not mandatory, with 9 participants (7 %) missing these data. Comparisons between participants with and without SDQ data showed no significant differences in parent age, child age, screen time, or school attendance (all p > .50), suggesting no clear systematic pattern of missingness. The GSMQ had no missing data. Children were reported to spend on average 5.7 hours on screen time per day, with a range of 2 to 16 hours (SD = 2.2).
Respondents selected between 2 and 19 problems from the list, with an average of 8.1. How often problems were selected is shown in Table 1, split by child gender. We also show how parents typically specified each problem area (showing the top 3 categorized responses).
1. When asking respondents about causal relations, does asking about problem causes or effects differ in terms of comprehensibility and response consistency?
The Bayesian t-tests showed moderate evidence for the null hypothesis (that is, the two versions performing equally well), both with regard to answers to the evaluation question (the “Causes” version averaged 2.94, SD = 0.82; the “Effects” version averaged 3.06, SD = 0.90; BF10 = 0.257) and the consistency of answers on edges (23.1 % identical answers in the “Causes” version vs 21.7 % in the “Effects” version; BF10 = 0.287). Both Bayes factors thus give some reason to believe that the two versions perform equally well.
2. How do parents perceive that excessive screen time and related problems are causally linked, and are there gender differences?
Our study aimed both to evaluate two versions of the PECAN method (asking about causes and asking about effects) and to use PECAN to explore how parents perceive screen time to be linked to psychopathology in children.
We found that the two versions of PECAN performed equally well with regard to evaluations by respondents and the consistency of their answers. Both visual inspection of averaged group networks and out-degree centrality metrics showed that parents perceive passive scrolling to be the most influential problem, causing other psychopathology in children. This was true for both boys and girls. Aside from this, the two groups differed markedly. Parents of boys perceived the problems as originating in lacking IRL friends and being stuck with gaming, leading to other problems. Parents of girls, on the other hand, perceived a feedback loop between physical inactivity and passive scrolling. Centrality analysis also showed that girls were perceived as being more influenced by being sad and having somatic concerns.
In sum, for boys the excessive screen time (gaming) seems to be perceived as a root cause, whereas for girls the screen time is perhaps part of a depressive clinical picture. Notice however that parents of both girls and boys perceived passive scrolling to be strongly caused by itself (in fact, this was the most commonly reported edge in the entire dataset). It is also interesting to note that parents did not perceive their own nodes (being stressed or worried) as causing child nodes, with the exception of parents not keeping boundaries. This latter node might thus be a promising focus for interventions aimed at helping parents who want to reduce child screen time.
The finding that passive scrolling is perceived to be particularly detrimental is in line with previous studies that have found passive scrolling to be of particular interest when studying possible links between screen time and psychological ill-health (10,11,12). Verduyn et al. (13) proposed that the use of social networking sites could be dichotomized into an active-passive model, where passive use is negatively associated with subjective well-being and active use more positively associated with well-being. In this model, passive scrolling is linked to harmful social comparison resulting in a sense of envy and inferiority, while active use promotes social connectedness and an increase in social capital. This model has been met with some criticism (14), leading to updates that nuance the dichotomization (13). Our results lend support to the dichotomization model, since passive scrolling is much more often perceived as a central problem than active social media use. However, this model is primarily concerned with social media content creating harmful comparison and envy, which in turn is related to reduced well-being. The results from our study instead suggest that passive scrolling could be perceived as associated with, for example, physical inactivity and difficulties in concentration, which in turn could affect well-being negatively. This points to a general challenge in studying the relationship between screen use and well-being: not only do the types of screen use differ greatly, but there are also numerous potential outcomes to study.

Averaged network for boys (64 % of sample, blue, left) and girls (34 % of sample, red, right). For each network, only the eight most common perceived causal relations and their connecting nodes are visualized: for boys, the eight most frequently perceived causal relations connected six different nodes; for girls, they connected four different nodes. In the middle is a combined network overlaying the two (purple indicating nodes and edges common to both groups).
A major limitation of the study is the somewhat low reliability of individual ratings by parents (as indicated by the low consistency of answers, i.e. the test-retest questions), which limits the conclusions that can be drawn (for example, it is probably not advisable to analyze individual networks). Further, an inherent limitation in the PECAN method is that it asks parents about their perceptions of causal relations. This introduces biases such as reporting causal relations that are discussed in the public domain (28). Although we did not analyse this, fathers and mothers very likely expect, and thus perceive, different causal relations. Low reliability can also result from parents misunderstanding causal questions and confusing the direction of causal effects (29). This latter problem has indeed been observed when collecting data using interviews (24). The use of a set list of problems to choose from is another limitation. Individual children will have an endless list of idiosyncratic problems that could serve as nodes in personalized networks (32). Using completely personalized nodes would increase the validity of each network and would likely make it more intuitive to report perceived edges between them. Such networks could also be aggregated to answer group-level research questions, but this would require more work on the analysis side (grouping similar nodes together, etc).

Group-level out-degree centralities, split by gender and sorted by gender difference.
Future studies could explore the more bottom-up approach of creating completely personalized networks with nodes formulated freely by the respondent, and then aggregating these by identifying nodes similar across respondents. Preferably, this would be done by using PECAN in an interview format, which would ensure respondents understand causal directions. Parents and children could create networks together, or separately and then combine them. Another method to explore causal models of children's screen time and related psychopathology is to interview researchers and experts and let them create network models, a method previously used to map factors important for quality of life for individuals with an autism diagnosis (33). These two methods could also be combined, i.e. asking experts what causal relations they would expect given certain problem areas for a particular individual, and then updating this with the perceptions of a specific individual. Possibly, this could ensure validity of a person-specific network so that it could even be used to personalize an intervention.
Possibly, our findings could be used to inform future interventions, such as supportive programs for parents of children with excessive screen time. Based on the present study, such an intervention would be wise to focus on passive scrolling specifically, as this is perceived by parents to be central. Given the perceived differences between girls and boys, it might also make sense to target these groups with different interventions.