
Assessment of Open Educational Resources Instruments: A Systematic Review

Open Access | March 2026


Introduction

The rapid development and widespread adoption of Information and Communication Technologies (ICT) have had profound effects across many sectors of society, and education is no exception. The infusion of ICT into education has paved the way for significant advancements aimed at enhancing the quality of education (Lim, Chin & Wang 2020). By leveraging digital tools and resources, educators have been able to create more engaging and interactive learning experiences, fostering deeper understanding and knowledge retention among students (Goldingay & Boddy 2017).

However, while the integration of ICT has brought about positive transformations in education, it has also exacerbated the digital divide that exists within society. The digital divide refers to the gap between those who have access to and can effectively use digital technologies and those who do not (Wang, Zhou & Wang 2021). This divide is often rooted in socioeconomic disparities, geographical location, and limited infrastructure (Cabero-Almenara & Ruiz-Palmero 2017; Gómez Navarro et al. 2018; Rodríguez-Abitia et al. 2020). As a result, certain communities and individuals face significant barriers to accessing the benefits of ICT in education, hindering their ability to fully participate and thrive in the digital age.

In response to the pressing need to bridge this digital divide, the Open Educational Movement emerged as a powerful force for change (Patiño, Ramírez-Montoya & Buenestado-Fernández 2023). This movement aims to share educational information and resources openly, freely, and inclusively to mitigate the access gap that exists in underserved communities (Ramírez-Montoya 2013; Ramírez-Sánchez et al. 2020; Valle Jiménez et al. 2016). By embracing the principles of openness, collaboration, and sharing, the movement seeks to provide equal opportunities for all learners, regardless of their socioeconomic background or geographical location (Chiappe & Adame 2018).

Among the manifestations of the open educational movement are Open Educational Resources (OER). The term OER was first used in 2002 in the forum organized by UNESCO, which addressed issues related to the sharing of digital resources (Mishra 2017; UNESCO 2002). OER are educational materials that support teaching, learning, and research, and that are either in the public domain or published under an open license so that anyone can use, reuse, adapt, and redistribute them (UNESCO 2019).

The use of OER in higher education has brought significant benefits for users. The literature reports that OER have reduced costs through the standardization of their application, increased opportunities for access to educational materials and resources, reduced teachers' class preparation time, improved the quality of education, and favored lifelong learning for users in general (Al Abri & Dabbagh 2018; Henderson & Ostashewski 2018; Hilton III et al. 2014; Idrissi et al. 2018; Pitt 2015). This implies that OER have not only benefited the academic and student population: their benefits have also reached non-formal education, promoting lifelong learning and skills development.

In 2019, UNESCO published guidelines for developing open educational resources policies, proposing eight typical OER policy constituent elements. Element number seven states that research based on empirical data should be promoted, with which the impact of OER can be observed; as a substantial activity, researchers are urged to design institutional quality assurance procedures that promote the evaluation of the effectiveness of OER use (UNESCO 2019). An essential element for generating research with empirical data is to have instruments that have solid evidence of validity and reliability to ensure that the instruments used measure effectively (Valdes et al. 2019).

The field of OER research has grown exponentially in recent years. Systematic literature reviews provide an account of how a field of research has developed over a given period. In the case of OER, previous studies have focused on analyzing aspects of teaching practices and accessibility for people with disabilities, while other reviews have provided an overview of trends and gaps in OER research (Al Abri & Dabbagh 2018; Connolly & Svoboda 2023; Henderson & Ostashewski 2018; Idrissi et al. 2018; Otto et al. 2021). However, no systematic literature review has been identified that analyzes the measurement of OER, despite the recognized need for research with empirical data on the impact of OER in education. The present study therefore develops a systematic literature review of studies that addressed the construction or validation of instruments on OER during the years 2018 to 2023.

Method

The present study employs a systematic review of the literature using the guidelines suggested in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement (Page et al. 2021), as well as the phases suggested by Ramírez-Montoya & Lugo-Ocando (2020).

Eligibility criteria

For the systematic review, studies were considered that addressed the development or validation of instruments on OER, which were published from 2018 to 2023 in English and Spanish.

Exclusion criteria

Studies that related OER to other variables and did not report the psychometric properties of the scale used to measure OER were excluded.

Search strategy

The initial exploration of studies was conducted in the Elsevier and PubMed databases and the Google Scholar search engine. These sources were chosen to obtain more open and recently published resources. The search strings “measure,” “scale,” “validity,” “OER,” and “Open Educational Resources” were used. A manual search was also performed in ResearchGate with the keywords “psychometric properties” and “open educational resources”. The search ended on May 13, 2023, and no additional studies were included after this date. To avoid bias in the inclusion of studies, each was analyzed by two independent evaluators, with doubts about inclusion resolved by a third evaluator. This selection process culminated in the inclusion of 17 articles that met the inclusion criteria (see Figure 1).

Figure 1

Flowchart presenting the process of selecting the articles.

Codification of the information

Once the articles considered in the study had been obtained, they were analyzed by compiling their information in an Excel matrix, which recorded: authors and year of publication; country where the scale was developed; application sample; name of the instrument; dimensions of the scale; number of items; type of reliability; and type of validity evidence.

Results

General characteristics of the studies

The articles included in the study were published from 2018 to 2023. Between 2018 and 2020, publications on measurement instruments remained stable; within those three years, only six studies (35%) were identified, two in each year. Publication growth was evident from 2021 onwards: between 2021 and 2022, eleven studies were published (64%), with 2021 having the highest number of publications, six studies in total (35%). The distribution by country reflects that the United States has the highest number of published studies, with five studies (29%) in total (Brasley 2018; Jaggars, Folk & Mullins 2018; Redcay, Pfannenstiel & Albert 2022; Tang & Bao 2023; Tipton 2020). In second place, Spain and China stand out with two studies each (12% each); the remaining studies were distributed across eight countries: Mexico, Germany, Pakistan, Ecuador, Arabia, Sri Lanka, Colombia, and Nigeria (Alkhasawneh 2020; Asghar, Erdoğmuş & Seitamaa-Hakkarainen 2021; Canchola et al. 2021; Osang 2019; Ramirez-Montoya & Tenorio-Sepulveda 2021; Sandanayake, Karunanayaka & Madurapperuma 2021; Sarango-Lapo et al. 2020; Zawacki-Richter, Müskens & Marín 2022).

Sample

Of the 17 articles included in the present study, 16 (94.1%) reported characteristics of the sample in which the study was developed. The average sample size across the reviewed studies is 290 participants, with 16 being the smallest sample and 1142 the largest (Brasley 2018; Ramirez-Montoya & Tenorio-Sepulveda 2021; Redcay, Pfannenstiel & Albert 2022). Regarding the type of participants, of the 16 studies reporting this information, eight (50%) focused on teachers, four (25%) on university students, two (12.5%) combined teachers, students, and researchers, and the remaining two (12.5%) focused on students and researchers.

Subscales

Of the total of 17 studies, five (29%) reported instruments with three dimensions, related to (a) attitudes and beliefs; (b) knowledge, use, and reuse; and (c) student engagement (Alkhasawneh 2020; Canchola et al. 2021; Jaggars, Folk & Mullins 2018; Tang & Bao 2023).

Meanwhile, only one instrument (6%) used a unidimensional scale (Redcay, Pfannenstiel & Albert 2022), and only one instrument (6%) included 16 dimensions, making it the scale with the highest number of dimensions. That scale addresses (a) content, in terms of academic foundation, target group, and reusability; (b) didactic design, in areas such as alignment, interaction, applicability, student support, and evaluation; (c) accessibility, in terms of open licensing, access for students with different abilities, compatibility, and technical reusability; and (d) usability, covering elements such as structure, navigation, orientation, layout, readability, and interactivity (Zawacki-Richter, Müskens & Marín 2022). Regarding the number of items on the scales, only 16 studies (94%) reported this information; on average, the instruments have 25 items, with a minimum of 11 and a maximum of 47 (Redcay, Pfannenstiel & Albert 2022; Sandanayake, Karunanayaka & Madurapperuma 2021).

Reliability

Concerning evidence of the reliability of the instruments, only 14 studies (82%) reported reliability. Ten studies (58%) reported reliability using only Cronbach's alpha; the rest (23%) reported reliability using both Cronbach's alpha and composite reliability (Asghar, Erdoğmuş & Seitamaa-Hakkarainen 2021; Pozón-López et al. 2021; Zhang et al. 2021). Regarding the values obtained, 13 of the studies (92%) reported values within the acceptable range of .73 to .96 (George & Mallery 2003). Two studies (14%) reported reliability values for one of their dimensions well below acceptable thresholds (Asghar, Erdoğmuş & Seitamaa-Hakkarainen 2021; Osang 2019).
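For reference, the two reliability coefficients reported across these studies are conventionally defined as follows. These are the standard psychometric formulations, not expressions taken from the reviewed studies themselves:

```latex
% Cronbach's alpha for a scale of k items, where \sigma^2_{Y_i} is the
% variance of item i and \sigma^2_X is the variance of the total score
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)

% Composite reliability, where \lambda_i are the standardized factor
% loadings and \varepsilon_i the error variances from a fitted measurement model
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k} \varepsilon_i}
```

Because composite reliability is computed from the loadings of a fitted factor model rather than from raw item variances, it is usually reported alongside, not instead of, Cronbach's alpha, as in the three studies cited above.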

Validity

Of the 17 studies, only 13 (76%) reported some validity evidence for their instruments. Five studies (38%) reported construct validity, calculated through exploratory and confirmatory factor analysis (Jaggars, Folk & Mullins 2018; Redcay, Pfannenstiel & Albert 2022; Sarango-Lapo et al. 2020; Tang & Bao 2023; Yi & Tan 2022). Two studies (15%) reported both content and construct validity (Canchola et al. 2021; Tipton 2020). Three studies (23%) described construct, discriminant, and convergent validity (Asghar, Erdoğmuş & Seitamaa-Hakkarainen 2021; Osang 2019; Sandanayake, Karunanayaka & Madurapperuma 2021). One study reported evidence of construct and discriminant validity (Pozón-López et al. 2021), one construct and convergent validity (Zhang et al. 2021), and the remaining two studies (15%) reported only content validity (Brasley 2018; Ramirez-Montoya & Tenorio-Sepulveda 2021).
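Convergent and discriminant validity, as reported in several of the reviewed studies, are commonly assessed with the average variance extracted (AVE) and the Fornell–Larcker criterion. These are standard criteria in the psychometric literature, not formulations taken from the reviewed studies:

```latex
% Average variance extracted for a construct measured by k items
% with standardized loadings \lambda_i
\mathrm{AVE} = \frac{\sum_{i=1}^{k} \lambda_i^2}{k}

% Convergent validity is typically claimed when \mathrm{AVE} \geq .50.
% Discriminant validity (Fornell--Larcker): the square root of a construct's
% AVE should exceed its correlation r_{jl} with every other construct l:
\sqrt{\mathrm{AVE}_j} > r_{jl} \quad \text{for all } l \neq j
```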

Dimensions

The in-depth review of the studies shows a great variety of dimensions or topics related to the measurement of OER, among which five main groups stand out: quality and openness; use and usability; attitudes and emotions; student and teacher performance; and institutional aspects (see Table 1).

Table 1

Summary of studies included in the review.

| Author, year and country | Sample | Instrument | Dimensions | Items | Reliability | Validity |
|---|---|---|---|---|---|---|
| Jaggars, Folk & Mullins 2018, USA | 611 undergraduate students | Students' perceptions of open and affordable course materials | Quality, integration, and expertise | 21 | Cronbach's alpha (α = .21–.89) | Construct |
| Brasley 2018, USA | 16 expert teachers | Perceptions of Open Educational Resources (OER) | Importance and likelihood of implementing OER | 35 | N/A | Content |
| Osang 2019, Nigeria | NM | Determining Task Technology Fit (TTF): impact on faculty usage, satisfaction, and performance | Use, satisfaction, Task Technology Fit, and performance | 29 | Composite reliability (α = .21–.89) | Construct, convergent and discriminant |
| Sarango-Lapo et al. 2020, Ecuador | 271 university professors | Digital Competence and Use of Open Educational Resources (CD-REA) | Competence in search; competence in selection and evaluation of information; competence in storage and retrieval; competence in the communication and dissemination of information; competence in the use of OER | 16 | Cronbach's alpha (α = .78–.89) | Construct |
| Tipton 2020, USA | 392 university professors | Attitude Towards Open Educational Resources (ATOER) | Attitudes, subjective norms, perceived behavioral control | 16 | Cronbach's alpha (α = .75–.85) | Construct and content |
| Alkhasawneh 2020, Arabia | 256 professors and research assistants | Perception of academic staff toward barriers, incentives and benefits of OER | Barriers, incentives, and benefits | 36 | Cronbach's alpha (α = .85) | NM |
| Canchola et al. 2021, Colombia | 123 university professors | Attitudinal Scale of Open Educational Practices (ASOEP) | Conceptual domain of OER, procedural management of OER, practices of use of OER | 20 | Cronbach's alpha (α = .83–.88) | Construct and content |
| Asghar, Erdoğmuş & Seitamaa-Hakkarainen 2021, Pakistan | 376 preservice teachers | Preservice teachers' behaviour towards the use of Open Educational Resources | Attitude (ATT), perceived behavioral control (SEF), culture at a personal level (CPE), culture at a professional level (CPR), culture at the organizational level (COR), intentions to use OER (INT), actual use behavior of OER (USE)… | 25 | Cronbach's alpha and composite reliability (α = .61–.80, CR = .80–.87) | Construct, convergent and discriminant |
| Ramírez-Montoya & Tenorio-Sepulveda 2021, Mexico | 16 researchers | e-Open instrument | Capacity building; development of support policies; promotion of effective, inclusive, and equitable access; creation of sustainability models; promotion of international cooperation | 36 | N/A | Content |
| Sandanayake, Karunanayaka & Madurapperuma 2021, Sri Lanka | 102 undergraduate students | OER-integrated online courses | Information design aspects, instruction design aspects, interface design aspects, and interaction design aspects | 47 | Cronbach's alpha (α = .75–.83) | Construct, convergent and discriminant |
| Pozón-López et al. 2021, Spain | 210 students, professors, and researchers | User satisfaction and intention to use Massive Open Online Courses (MOOCs) | Perceived ease of use, perceived usefulness, emotions, vividness of content, perceived interactivity, controlled motivation, autonomous motivation, perceived entertainment, perceived course quality, perceived satisfaction, and use intention | 44 | Cronbach's alpha and composite reliability (α = .77–.93, CR = .78–.94) | Construct and discriminant |
| Redcay, Pfannenstiel & Albert 2022, USA | 1142 undergraduate students | Zero Textbook Satisfaction Scale (ZSS) | Satisfaction | 11 | Cronbach's alpha (α = .94) | Construct |
| Yi & Tan 2022, China | 380 students and professors | Student satisfaction with MOOC | Satisfaction and quality | 16 | Cronbach's alpha (α = .91–.95) | Construct |
| Tang & Bao 2023, USA | 512 professors | College instructors' value belief about using OER | Customizing classroom materials, supporting professional development, and engaging students | 17 | Cronbach's alpha (α = .76–.82) | Construct |
| Zawacki-Richter, Müskens & Marín 2022, Germany | NM | Quality assessment of OER | Content: academic foundation, target group orientation, reusability of content. Instructional design: alignment, collaboration and interaction, applicability, student support, assessment. Accessibility: CC license, accessibility for students with disabilities, reliability and compatibility, technical reusability. Usability: structure, navigation and orientation, design and readability, interactivity | NM | NM | NM |
| Tlili et al. 2022, Spain | 57 professors | Personality on educator attitudes toward open educational resources | Openness, conscientiousness, extraversion, agreeableness, neuroticism, attitudes towards OER, intention to use OER | NM | Cronbach's alpha (α = .73–.85) | NM |

Quality and openness: This dimension addresses issues related to integration, expertise, academic foundations, conceptual mastery and OER content, reusability, barriers, customization, accessibility, and equity, and finally aspects of instructional design and alignment. The studies that cover these aspects are Alkhasawneh (2020); Canchola et al. (2021); Jaggars, Folk & Mullins (2018); Pozón-López et al. (2021); Ramirez-Montoya and Tenorio-Sepulveda (2021); Sandanayake, Karunanayaka and Madurapperuma (2021); Tang & Bao (2023); Tlili et al. (2022); Yi & Tan (2022) and Zawacki-Richter, Müskens and Marín (2022).

Use and usability: This dimension addresses issues related to the importance and probability of using OERs, practices and intentions of use, ease of use, interactivity, readability, structure, and navigation. In addition, instruments that account for incentives, benefits and cooperation are referenced. The studies that cover these aspects are Alkhasawneh (2020); Asghar, Erdoğmuş and Seitamaa-Hakkarainen (2021); Brasley (2018); Canchola et al. (2021); Osang (2019); Pozón-López et al. (2021); Ramirez-Montoya and Tenorio-Sepulveda (2021); Zawacki-Richter, Müskens and Marín (2022).

Attitudes and emotions: This dimension addresses issues related to satisfaction, perceived entertainment, motivation, engagement, conscientiousness, subjective norms, and perceived behavioral control. The studies that cover these aspects are Asghar, Erdoğmuş and Seitamaa-Hakkarainen (2021); Osang (2019); Pozón-López et al. (2021); Redcay, Pfannenstiel and Albert (2022); Tang and Bao (2023); Tipton (2020); Tlili et al. (2022); Yi and Tan (2022).

Student and teacher performance: This dimension addresses issues related to task technology fit, digital and informational competencies, target orientation and supporting professional development. The studies that cover these aspects are Osang (2019); Sarango-Lapo et al. (2020); Tang and Bao (2023) and Zawacki-Richter, Müskens and Marín (2022).

Institutional aspects: This final dimension addresses issues related to management of OER, cultural aspects, capacity building and support policies. The studies that cover these aspects are Asghar, Erdoğmuş and Seitamaa-Hakkarainen (2021); Canchola et al. (2021) and Ramirez-Montoya and Tenorio-Sepulveda (2021).

Discussion

This systematic review analyzed studies that developed or validated measurement instruments on open educational resources from 2018 to 2023. The findings show that, although instrument construction and validation studies have increased, the body of evidence remains limited. Regarding the general characteristics of the studies, 2021 saw the largest number of articles published on the topic of interest. The results also show that the United States stands out for its scientific productivity in the validation of instruments on open educational resources (Brasley 2018; Jaggars, Folk & Mullins 2018; Redcay, Pfannenstiel & Albert 2022; Tang & Bao 2023; Tipton 2020).

These findings imply that studies on the development and validation of scales are increasing but that the scientific community’s interest is recent. Likewise, although there is a country that stands out in productivity, about ten countries report studies on the subject; this confirms the idea that the scientific community is increasingly interested in the validation of instruments.

With regard to the participants targeted by the instruments in the studies included in our review, about half focus on teachers. While teachers are a crucial factor in the adoption of open educational resources, less attention has been paid to students' perceptions of the use of open educational resources in their learning process (Jaggars, Folk & Mullins 2018; Redcay, Pfannenstiel & Albert 2022; Yi & Tan 2022). These findings point to a research opportunity: it is essential to develop or adapt scales that explore students' perceptions of open educational resources.

The results on the subscales or dimensions of the instruments included in the studies show a preponderance for measuring aspects related to students’ attitudes, beliefs, knowledge, uses, reuse, and engagement (Asghar, Erdoğmuş & Seitamaa-Hakkarainen 2021; Canchola et al. 2021; Jaggars, Folk & Mullins 2018; Tang & Bao 2023). This may reflect the state of knowledge about open resources, which implies the need to continue developing studies with empirical data in which models with diverse variables are integrated.

Concerning the evidence of reliability, most of the studies presented acceptable reliability indexes by the Cronbach’s Alpha method; a few studies included, in addition to Cronbach’s Alpha, the composite reliability method (Pozón-López et al. 2021; Zhang et al. 2021). Although acceptable indices were obtained, an area of opportunity would be to reinforce the evidence of reliability with other methods that provide more robust evidence.

Likewise, the validity evidence found in the studies is diverse. Although not all studies reported validity evidence, the review shows a predominance of studies that demonstrate only the construct validity of their scales (Sarango-Lapo et al. 2020; Tang & Bao 2023; Yi & Tan 2022), mainly by applying exploratory factor analysis. A limitation of some of the studies performing this type of analysis is their sample size; future studies should therefore use representative and sufficient samples to guarantee adequate model fit. On the other hand, only a small number of studies combined two or more types of validity evidence, a limitation that should be addressed in the measurement of open educational resources.

Conclusions

The results of this study are significant as they provide an overview of the measurement of open educational resources. However, although some studies present evidence of the psychometric properties of the scales developed up to the period in which the search was conducted, this evidence remains insufficient. There is a need to develop research instruments for measuring and evaluating the use of OER, and it is essential that these instruments undergo rigorous psychometric analysis to ensure that the resulting measurements are reliable and applicable across different contexts. By taking these actions, we align with UNESCO’s recommendations (UNESCO 2019), which emphasize the importance of fostering scientific research on the use of OER.

Limitations and future research

Despite the contributions of this study, it is important to consider its limitations: the search period was restricted to the last five years, and the search strings, languages, and databases were limited. Future studies could therefore include a broader search period and analyze other categories associated with the measurement of OER.

Data Accessibility Statement

The data are available upon request to the authors.

Acknowledgements

The authors would like to thank Tecnológico de Monterrey for the financial support provided through the ‘Challenge-Based Research Funding Program 2023’, Project ID #IJXT070-23EG99001, titled ‘Complex Thinking Education for All (CTE4A): A Digital Hub and School for Lifelong Learners.’ The authors also acknowledge the technical and financial support of the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, México.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

Leonardo David Glasserman Morales: writing of first draft, literature review. Carolina Alcantar-Nieblas: literature review, analysis of information, writing of first draft, and continuous revision. Andres Chiappe-Laverde: review of the last draft and supervision of the analysis.

DOI: https://doi.org/10.5334/jime.1009 | Journal eISSN: 1365-893X
Language: English
Submitted on: Feb 24, 2025
|
Accepted on: Mar 27, 2025
|
Published on: Mar 20, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Leonardo David Glasserman Morales, Carolina Alcantar-Nieblas, Andres Chiappe-Laverde, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.