
FIP Check: A Rubric-Based Tool for Assessing FAIR Implementation Profiles and Enabling Resources

Open Access | Feb 2026


FIP Check is publicly available through the fip-check GitHub repository (GO FAIR US, 2025).

Introduction

In today’s data-intensive research environments, stakeholders increasingly need structured and transparent ways to assess how repositories are engineered, governed, and made interoperable (Wilkinson et al., 2016). These assessments are essential not only for understanding and improving data-sharing within a community, but also for enabling meaningful comparisons across systems, increasing the value of data, and enhancing collaboration across communities (Wilkinson et al., 2016).

The FAIR Guiding Principles—15 Principles categorized by their contributions to Findability, Accessibility, Interoperability, and Reusability—provide widely referenced guidance for enabling machine-actionable and interoperable data (GO FAIR Foundation, n.d.-c). While they are intentionally high-level and avoid prescribing implementation details, this flexibility can lead to inconsistency in implementations and significant challenges in assessment (Hoebelheinrich et al., 2025; Jacobsen et al., 2020). Over the past years, a variety of tools have been developed to address these challenges, including automated evaluators such as F-UJI (Devaraju and Huber, 2020), the FAIR Maturity Indicators (FAIRmetrics and FAIRsharing, n.d.), FAIRChecker (Gaignard et al., 2023), and FAIR Enough (Maastricht University, n.d.). Each of these plays a valuable role—often focusing on assessing digital objects directly, frequently through automated tests tied to a specific interpretation of FAIR.

FIP Check addresses a different but complementary need. Rather than evaluating datasets or repositories in their entirety, FIP Check is grounded in the FAIR Implementation Profile (FIP) ontology (Magagna et al., 2024). This ontology enables communities to declare the technical solutions, called FAIR Enabling Resources (FERs), that they use to implement the FAIR Principles (Schultes et al., 2020).

Building on this foundation, FIP Check offers a structured, rubric-based method for assessing how FAIR the declared FERs are and, in turn, how these FERs shape the FAIRness of the associated FIP. This shift in focus supports a more granular understanding of FAIRness and highlights targeted opportunities for improvement. The tool is openly available through GitHub (GO FAIR US, 2025), and is intended primarily for research communities and data infrastructure developers who wish to make concrete, comparable FAIR implementation choices. For clarity, Table 1 summarizes the acronyms and key terms used throughout this paper.

Table 1

Acronyms and Key Terms Used in This Study.

ACRONYM/TERM | FULL NAME | BRIEF DESCRIPTION
FAIR | Findable, Accessible, Interoperable, Reusable | Guiding principles for improving the reuse of digital research objects
FIP | FAIR Implementation Profile | A structured description of how a community implements FAIR Principles
FER | FAIR Enabling Resource | Technical solutions that support FAIR implementation
FIP Wizard | N/A | Web-based tool that supports the collaborative creation and publication of FAIR Implementation Profiles (FIPs)

This functional distinction is important because evaluating FERs within a FIP allows communities to: (1) identify strengths and gaps at the level of specific FERs, and (2) compare and converge on shared solutions across domains, advancing interoperability.

In the following sections, we describe the rationale and structure of the FIP Check, present its innovations and scoring framework, and demonstrate its application through a pilot study. By grounding the assessment in declared, community-authored FIPs, FIP Check offers a structured and transparent way to interpret how FERs contribute to FAIRness.

Background

FIPs are a structured way for communities of practice to document how they implement the FAIR Principles for digital objects. While their primary purpose has been to support FAIR implementation planning for data and metadata, FIPs can be more broadly applied to any digital objects, including the FERs themselves. FIPs contain answers to specific implementation questions that are relevant to the FER(s) being documented. FIPs are constructed collaboratively through the FIP Wizard (GO FAIR Foundation, n.d.-a; Magagna et al., 2024).

FERs are the building blocks of a FIP, capturing the technical solutions—standards, services, specifications, or policies—used to enable FAIR implementations. There are 13 FER types, each linked to a specific FAIR Principle and named according to its function (Magagna et al., 2024). Definitions and the corresponding FAIR Principles are provided in Appendix 1.

A notable feature of this model is that the FIP itself is included as one of the 13 FER types, specifically under FAIR Principle R1.3. This reflects the understanding that the act of creating and maintaining a FIP is itself a community practice that demonstrates a commitment to FAIRness. In the FAIR2Adapt project (European Commission, 2024), ongoing efforts to provide input for automated assessment tools use the FIP to check whether the FERs it declares are actually applied to the evaluated digital objects, thereby adding a validation layer to the assessment process. The FIP Check validation approach is described in the Interpretive Decisions and Methodological Considerations section.

By making the FERs explicit, FIPs enable communities to communicate their FAIR implementation approach, showing which FERs are in place and how they relate to specific FAIR Principles. However, there is no systematic way to assess how well each of these resources contributes to FAIRness, nor to view the FIP in a consolidated and intuitive visualization. This lack of a tool makes it difficult for communities to recognize the effectiveness of their FERs and to pinpoint areas for improvement.

Conceptual Innovations of FIP Check

FIP Check addresses this limitation by introducing a principled, rubric-based approach that shifts the assessment focus from entire repositories or datasets to individual FERs declared in a FIP. In doing so, it brings the underlying structure of FIPs to life, enabling communities to make informed decisions about where to focus their FAIR improvement efforts.

Granular and practical evaluation of FERs within FIPs

Unlike other FAIR assessment tools such as F-UJI or FAIR Maturity Indicators, which evaluate datasets or repositories against a specific interpretation of FAIR, FIP Check focuses on the individual FERs declared in a FIP. Each FER is scored using a set of FAIR Principle-aligned questions to assess the degree to which each FER advances specific FAIR Principles. These FER-level assessments are then aggregated across the 15 Principles, the four Principle Categories, and the entire FIP.

This layered approach makes it possible to pinpoint the specific FERs responsible for gaps in the FAIRness of a FIP, enabling targeted improvements and informed community decisions. While the FIP Wizard helps communities articulate their FERs, the FIP Check complements it by offering a standardized method to estimate the FAIRness of those choices in practice.

To achieve this, the FIP Check applies questions targeted at each FER type, drawing on established FAIR interpretations and frameworks of Jacobsen et al. (2020) and GO FAIR Foundation (n.d.-c). These targeted questions are detailed in Appendix 1. In particular, the questions are designed to ensure that (1) both FER authors and community-wide independent evaluators can answer them, and (2) a close coupling exists between the FER and its resultant FAIR impact.

Partial FAIRness

FIP Check recognizes and measures partial FAIRness, offering a structured alternative to the predominantly binary scoring approaches implemented in many FAIR assessment tools. Existing automated tools, such as F-UJI (Devaraju and Huber, 2020; Devaraju et al., 2020) and FAIR Maturity Indicators (FAIRmetrics and FAIRsharing, n.d.), often evaluate datasets or repositories as compliant or non-compliant with each FAIR Principle, even though they acknowledge the concept of FAIRness as a continuum (Candela, Mangione and Pavone, 2024; FAIRmetrics and FAIRsharing, n.d.).

This all-or-nothing approach supports machine-actionability, enabling clear automated testing and improvement recommendations (Gargio, n.d.). However, when tests fail, the resulting binary outcome may discourage newcomers to FAIR rather than guiding them towards progressive, context-appropriate improvements.

FIP Check makes the FAIR continuum visible and inspectable by explicitly applying a progressive scale to FERs. Each FER is assessed using a structured set of FAIR Principle-specific questions, with ratings that reflect a progressive level of FAIR alignment (see Table 2).

Table 2

Definitions of FAIR Rating Levels.

FAIR RATING | DESCRIPTION
Beginning FAIR | FAIR Principles have not yet been adopted in any significant way. There is little to no use of FAIR-related services. This level represents a starting point with substantial room for improvement.
Basic FAIR | Some initial FAIR-aligned steps have been taken, but adoption is limited and inconsistent. Only a few practices are in place.
Core FAIR | A solid foundation of FAIR implementation exists. Many important elements are implemented, though several areas still need attention. Indicates a clear commitment to FAIR.
Advanced FAIR | FAIR Principles are applied consistently and effectively. Most practices are well established, with only minor gaps remaining.
Exemplary FAIR | Outstanding implementation of FAIR. Practices are comprehensive, effective, and can serve as models for others in the community.

Partial FAIRness makes the current contribution of each FER visible, while also identifying specific, actionable pathways for improvement. By acknowledging partial FAIRness, the FIP Check recognizes resources for their contributions even when not fully aligned with FAIR Principles. It also ensures the tone of assessment is constructive and not judgmental; rather than simply flagging deficiencies in a dataset or repository, it provides reasonable insight into how a FER is currently contributing to FAIRness and where enhancements can be made. The tool also provides detailed interpretive scales for certain resources (e.g., Table 3 for F2. Metadata Schema), with explicit anchors for each scale point. This descriptive detail ensures that evaluator estimates of partial alignment are captured precisely.

Table 3

Interpretive Scale for Assessing F2. Metadata Schema. The highest entry in each Assessment Aspect is the most FAIR; the lowest entry is the least FAIR. Each row has five interpretive answers ordered from most to least FAIR.

ASSESSMENT ASPECT | QUESTION | INTERPRETIVE SCALE (most to least FAIR)
Field Names | Does the metadata schema define structured representations of metadata attributes? | Semantic Standard; Standardized; Transparent; Opaque; Absent
Field Values | Does the schema support globally unique, persistent, and resolvable identifiers (GUPRIs) for referenced entities? | Formally Profiled; Semantic Standard; Standardized; Described; Undefined
Versioning Support | Does the schema support identification and relation of multiple versions of an entity? | Spec + Instance (Internal); Spec (Internal); Spec + Instance (External); Spec (External); None
Community Adoption | Is the schema widely adopted or compatible with community standards? | Standard or International; Domain or Nation; Multi-Project; Project; Individual
Findability Attributes | Does the metadata schema include at least 20 attributes to support findability? | 20 or more; 15 to 19; 10 to 14; 5 to 9; 0 to 4
Schema Representation | Is the schema itself a computable and structured specification that is described with core metadata? | Formally Profiled; Semantic Standard; Computable; Descriptive; Absent
Schema Flexibility | Can the metadata schema support optional attributes and multiple-valued attributes? | Optional + Ranged Multiple; Optional + Multiple; Optional Attribute; Repeated Attributes; Neither Supported

Transparent and Inspectable Balance

FIP Check deliberately balances objective evidence and structured, expert-informed interpretation. While other FAIR assessment approaches (Candela, Mangione and Pavone, 2024; FAIRsharing, n.d.) often rely heavily on automated mechanisms, the FIP Check incorporates human input to ensure nuanced and context-aware judgments. It requires individuals who are familiar with the FERs and how those FERs are implemented in the FIP, but it does not require them to be ‘FAIR experts.’ The expertise is embedded in the construction of the FIP Check tool itself—its structured questions and clearly defined scoring anchors—so that assessors with knowledge of any FER can readily provide accurate responses.

With FAIR Principle-aligned questions and evaluator answers visible, FIP Check provides a direct and reviewable link from each score back to the reasoning behind it, and this makes the process transparent and inspectable. FIP Check ensures transparency by exposing both the criteria and the human judgments applied to them, allowing others to trace, discuss, and refine the results.

By combining verifiable questions with carefully bounded expert input, the FIP Check produces results that are repeatable, verifiable, and responsive to diverse contexts and evolving standards. The tool also creates a pathway toward future automation. Its verifiable questions and rating scales could inform the development of automatic tests in FAIRO or F-UJI for each FER type, allowing FERs to be annotated with these scores as they are incorporated into a FIP (FAIRification Framework, n.d.).

Operational Framework of the FIP Check

Implemented as a Google Sheets-based tool (GO FAIR US, 2025), the core mechanism operates by scoring each FER through target questions, which are organized into dedicated FER-specific tabs. The results are then automatically pulled into the main ‘FIP Assessment’ tab, where the individual FER scores are combined into an overall FAIRness assessment. Figure 1 (Kang et al., 2026a) shows the aggregation of individual FER scores by FAIR Principle and FAIR Principle Category. This main tab presents the evaluation structured by FIP, FAIR Principle Category, FAIR Principle, and each FER level, ensuring the full assessment remains transparent and easy to inspect.

Figure 1

FIP Check Interface - Primary Assessment Tab “FIP Assessment”.

This view shows the aggregation of individual FER scores by FAIR Principle and FAIR Principle Category. FIP-level scores (e.g., REPO-X, REPO-Y) are calculated using weighted averages across FAIR domains.

In practice, users start by importing a list of FERs for the FIP they wish to assess. This list can be exported from the FIP Wizard and downloaded as a spreadsheet file. Users then import this file directly into the FIP Assessment tab. Once imported, the main assessment tab displays all the FERs linked to the FIP, showing their current assessment status. If a FER has not yet been assessed by other experts, it will be automatically flagged with a note such as ‘No FER (score) in [FER tab].’ In these cases, the user navigates to the corresponding FER-specific tab, where they complete the FAIR Principle-aligned questions for that resource. Each FER tab contains a tailored question set relevant to that resource type, with answers scored using one of two assessment schemes (see Table 4), depending on the community’s preference and the level of detail needed.
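The status check described above can be sketched as a simple membership test; the FER names, tab labels, and function name below are illustrative, not part of the tool:

```python
# Given FERs imported from a FIP Wizard export and the scores already recorded
# in FER-specific tabs, list any FER that still lacks an assessment.
def missing_scores(imported_fers, scored):
    """Return (tab, FER) pairs with no recorded score."""
    return [(tab, fer)
            for tab, fers in imported_fers.items()
            for fer in fers
            if (tab, fer) not in scored]

# Illustrative data: two FER tabs, one resource already assessed.
imported = {"F2. Metadata Schema": ["schema.org", "DataCite"],
            "I1. Knowledge Representation Language": ["RDF"]}
scored = {("F2. Metadata Schema", "schema.org"): 80}

for tab, fer in missing_scores(imported, scored):
    print(f"No FER (score) in {tab}: {fer}")
```

In the spreadsheet, the same check is performed by lookup formulas; the sketch only makes the logic explicit.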

Table 4

FIP Check Assessment Schemes – Standard and Simplified. The ‘Standard Assessment Scheme’ uses an ordinal 5-point scale to capture varying degrees of FAIRness: responses are mapped to a scale from 0 (No Support) to 100 (Full Support), with intermediate levels (25, 50, 75) indicating partial alignment. The ‘Simplified Assessment Scheme’ applies a binary approach: full or strong support scores 100, while any partial or weak support scores 0.

STANDARD ASSESSMENT
SUPPORT LEVEL | SCORE CONTRIBUTION
Yes/Full Support | 100
Mostly/Strong Support | 75
Partially/Moderate Support | 50
Minimally/Weak Support | 25
No/No Support | 0

SIMPLIFIED ASSESSMENT
SUPPORT LEVEL | SCORE CONTRIBUTION
Yes/Full Support | 100
Mostly/Strong Support | 100
Partially/Moderate Support | 0
Minimally/Weak Support | 0
No/No Support | 0
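The two schemes can be sketched in a few lines; the support-level labels follow Table 4, while the example answers and function name are illustrative:

```python
# Score mapping for the Standard Assessment Scheme (Table 4).
STANDARD = {
    "Yes/Full Support": 100,
    "Mostly/Strong Support": 75,
    "Partially/Moderate Support": 50,
    "Minimally/Weak Support": 25,
    "No/No Support": 0,
}

def simplified(level: str) -> int:
    """Simplified scheme: full or strong support counts fully, all else 0."""
    return 100 if STANDARD[level] >= 75 else 0

# Example: three answers for one FER, averaged under each scheme.
answers = ["Yes/Full Support", "Partially/Moderate Support", "Mostly/Strong Support"]
standard_score = sum(STANDARD[a] for a in answers) / len(answers)      # 75.0
simplified_score = sum(simplified(a) for a in answers) / len(answers)  # ~66.7
```

The example shows how the two schemes diverge on the same answers: the simplified scheme discards the partial credit the standard scheme awards for moderate support.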

Once FERs are scored, FIP Check translates raw scores into a ‘FAIR Rating’ (see Table 5). This translation frames scores in supportive, understandable terms rather than presenting raw numbers alone. By grouping scores into five tiers—Beginning, Basic, Core, Advanced, and Exemplary FAIR—the rating system softens the delivery of results and minimizes score competition or score gaming. Detailed instruction is provided in the GitHub repository (GO FAIR US, 2025).

Table 5

FAIR Rating Levels and Score Thresholds.

FAIR RATING | SCORE RANGE (UPPER INCLUSIVE)
Exemplary FAIR | 85–100
Advanced FAIR | 65–85
Core FAIR | 30–65
Basic FAIR | 10–30
Beginning FAIR | 0–10
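The thresholds above, with upper bounds inclusive, can be expressed as a small lookup; the tier names and boundaries follow Table 5:

```python
# (upper bound, rating), checked from lowest tier to highest.
THRESHOLDS = [
    (10, "Beginning FAIR"),
    (30, "Basic FAIR"),
    (65, "Core FAIR"),
    (85, "Advanced FAIR"),
    (100, "Exemplary FAIR"),
]

def fair_rating(score: float) -> str:
    """Translate a 0-100 score into a FAIR Rating (upper bounds inclusive)."""
    for upper, rating in THRESHOLDS:
        if score <= upper:
            return rating
    raise ValueError("score must be between 0 and 100")

print(fair_rating(85))  # Advanced FAIR (upper bound is inclusive)
print(fair_rating(86))  # Exemplary FAIR
```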

A key feature of the FIP Check is its flexible weighting system, which lets communities adjust how much each FAIR Principle or specific criterion contributes to the total score. While we provide suggested (default) weights, these are based on our estimates of the contribution of each FAIR Principle to the overall FAIRness of the repository. We used both internal analyses and the FAIR Data Maturity Model indicator priority to formulate the final weights. These weights can be customized to match local priorities, project goals, or community requirements, as we did in the first project application. The weights are crucial because they ensure this assessment meaningfully reflects what matters most for a given context.
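As a sketch, the weighted aggregation might look like the following; the principle names, scores, and weights are hypothetical, not the tool's defaults:

```python
# Per-Principle scores (0-100) aggregated from FER assessments -- illustrative.
principle_scores = {"F1": 90.0, "F2": 70.0, "A1": 55.0, "I1": 40.0}

# Community-adjustable weights expressing how much each Principle matters.
weights = {"F1": 2.0, "F2": 1.5, "A1": 1.0, "I1": 1.0}

# FIP-level score as a weighted average across Principles.
total_weight = sum(weights[p] for p in principle_scores)
fip_score = sum(principle_scores[p] * weights[p]
                for p in principle_scores) / total_weight
print(round(fip_score, 1))
```

Raising a Principle's weight pulls the FIP-level score toward that Principle's FER scores, which is how a community can make the summary reflect its local priorities.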

Piloting FIP Check

The development of FIP Check is closely tied to GO FAIR US’s work improving the FAIRness of data ecosystems at the National Institute of Allergy and Infectious Diseases (NIAID, 2024). The tool was therefore piloted in the context of biomedical research data, a domain with several attractive characteristics for this purpose: its repository implementations span a range of FAIR-aware practices, it has well-established standards, and the FIP Check results actively informed our analyses of the repositories’ FAIR qualities.

The pilot also allowed us to cross-check the FAIRness levels initially assigned to each FER. These estimated FAIRness levels are necessarily based on engineering judgment and can be refined over time as needed. In the pilot, low FIP Check scores sometimes revealed missing or mis-scored FERs. Generally, a repository’s FIP Check summary score correlated with its technical maturity as well as our other evaluation results. Even in FIPs that describe technically mature and broadly FAIR biomedical repositories, the FIP Check revealed under-addressed FAIR Principles. The FAIR Principles F3 (Metadata-Data Linking Schema), I1 (Knowledge Representation Language), and R1.2 (Provenance Model) scored low across all the repositories.

This highlights a key strength of the FIP Check: it surfaces how FAIR Principles are supported through specific FERs. The tool thus supports both internal reflection and cross-community comparison, fostering targeted improvements to increase interoperability within that community. These numerical assessments were also visualized using radial charts that summarize each repository’s FAIR implementation. Figure 2 (Kang et al., 2026b) illustrates the FAIRness of a repository we label ‘REPO-D’. Each colored wedge represents one of the four FAIR Principle Categories—Findability (red), Accessibility (yellow), Interoperability (green), and Reusability (blue)—with its length indicating FAIR alignment and its width reflecting its relative weight. The visualization helps to clarify differences in emphasis and maturity across FAIR Principles and repositories.

Figure 2

FIP Check Radial Visualization of REPO-D’s FAIR Alignment.
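The geometry behind such a radial view can be sketched with standard-library math alone: each category's wedge gets an angular width proportional to its weight and a radial length equal to its score. The category scores and weights below are illustrative (not REPO-D's actual values), and the resulting tuples could be handed to any plotting library:

```python
import math

# (category, score 0-100, weight) -- illustrative values only.
categories = [("Findability", 80, 2.0), ("Accessibility", 60, 1.0),
              ("Interoperability", 40, 1.5), ("Reusability", 70, 1.5)]

total_weight = sum(w for _, _, w in categories)
start, wedges = 0.0, []
for name, score, weight in categories:
    span = 2 * math.pi * weight / total_weight  # angular width ∝ weight
    wedges.append((name, start, span, score))   # radial length = score
    start += span

# The wedges tile the full circle exactly.
assert math.isclose(sum(w[2] for w in wedges), 2 * math.pi)
```

Encoding weight as angular width and score as radius lets a reader see both a category's priority and its maturity in a single glance, which is the point of the radial summary.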

As domain-specific cultures may influence perceived and measured FAIRness levels, these pilot results should be interpreted as illustrative of the biomedical community we studied. Applying FIP Check across additional domains is an important direction for future work.

Discussion

Emerging Strengths of the FIP Check

FIP Check demonstrated several strengths in its broader application beyond individual repository or digital object assessment. By assessing FERs as declared in a FIP, it enabled evaluation of the FIP as a whole, but also supported comparative analyses that revealed meaningful trends across the ecosystem. Averaging scores across FIPs enabled the identification of which FAIR Principles were commonly supported and which lacked concrete FERs. These comparisons—some visualized in Appendices 2, 3, and 4, and available in Kang et al. (2026c, 2026d, 2026e)—also allowed scores to be compared against peer repositories, similar to the comparisons performed for the ENVRI-FAIR communities (Peterseil et al., 2023).

This comparative lens is a powerful feature of the FIP Check. It moves beyond static scoring to support analysis of systemic trends, implementation strategies, and community norms across the FIP ecosystem. It highlights improvements in the selection or quality of FERs—such as adopting more FAIR-aligned resources or enhancing interoperability of FERs. Community members will have a concrete reference point to guide discussions, whether internally, with other community members, or with external stakeholders. This can foster greater community communication, adoption of common and increasingly FAIR implementations, and prioritization of efforts that create appropriate FAIRness.

These comparisons may also inform the structure of the FAIR Principles themselves, offering insight into how FAIRness is implemented and understood in practice. For example, the FAIR Principle R1.2 (Metadata Provenance Schema) often captures resources describing data but not metadata. This matches our intuition that metadata-related Principles would be supported in fewer repositories (namely, those with more advanced metadata management) than the corresponding Principles about data.

Interpretive Decisions and Methodological Considerations

Several interpretive decisions were necessary in designing and applying the FIP Check. One important decision involved how FAIR Principles are prioritized; assessing the R1.3 (Community Use of FIP) Principle proved challenging. The FAIR Principle states ‘(Meta)data meet domain-relevant community standards’ (GO FAIR Foundation, n.d.-c), and the FIP Wizard assumes that anyone filling out a FIP is declaring those standards, so it asks no questions.

However, FIP Check treats this question as an opportunity to characterize the repository’s level of adoption of its declared FERs. The FIP Check evaluation of each FER refers to the repository itself for its single question—‘Are the FAIR Enabling Resources described in this profile used/followed throughout the described project or system?’—and the answer affects only the R1.3 score. This scoring approach may still understate the broader influence of R1.3 across the FIP. Future work, such as ongoing developments in FAIR2Adapt (European Commission, 2024), aims to address this more comprehensively.

Structural and Technical Limitations

FIP Check assessments do not represent the complete spectrum of FAIRness concerns, and should not be interpreted as a complete or authoritative result. In addition, while some community initiatives (Gargio, n.d.) are developing shared frameworks, there is currently no ‘gold standard’ for FAIR assessment against which to benchmark the FIP-based approach. While the FIP Check team has conducted extensive internal reviews, no formal statistical evaluations have been performed. The engineering judgments—such as FAIR Principle and Category weights and FER FAIRness assessments—have not been externally reviewed.

FIP declarations allow users to indicate when a FER is proposed but not yet in use. This is important for supporting transparent community communication around FER choices. However, the current version of FIP Check assumes that all FERs listed in the FIP are implemented. This limitation can affect assessment accuracy; a future version of FIP Check should represent such distinctions by reporting the result both with and without the planned FERs.

Technical scalability is another constraint. The current Google Sheets format cannot sustainably support large numbers of repositories. A shift to a database-backed application with semi-automation would be required for broader adoption, and calculating and including the results as annotations directly in the FIP Wizard would be valuable. On the other hand, the value of the tool and rubric is demonstrated within the context of the original project, where we performed a detailed analysis and comparison of seven repositories without issue.

Finally, ensuring consistency across the tool becomes increasingly important. While the rubric provides a structured foundation, interpretations can vary. Incorporating inter-expert agreement, shared scoring guidance, or even peer-review mechanisms—potentially in combination with the GO FAIR Foundation’s FSR Curation Pipeline (GO FAIR Foundation, n.d.-b)—could help improve consistency and trustworthiness in the assessments. Future directions include exploring foundational model approaches for scalable alignment.

Conclusion

This article introduces FIP Check, a novel tool for assessing the FAIRness of FIPs and FERs. FIP Check embeds several conceptual innovations, allowing for precise interventions, accounting for partial FAIRness, and being flexible enough to support different communities of experts. We have explained how the FIP Check works in practice and how it was tested through the pilot assessment of seven repositories. Its outputs can be visualized to inform diverse stakeholders—from data scientists to managers and funders—and its metrics can be tailored to reflect the priorities of its users. This paper marks only the beginning of the FIP Check’s development. It is maintained openly within the GO FAIR US GitHub organization (GO FAIR US, 2025) and in collaboration with allied projects. Its design and impact will continue to evolve as more researchers, organizations, and communities engage with and refine the approach. Interested communities and tool developers are encouraged to review and integrate the tool, and to contact the authors directly or via the fip-check GitHub repository.

Additional File

The additional file for this article can be found as follows:

Appendices

Acknowledgements

We are grateful to the wider GO FAIR US team who are contributing to this project; Alyssa Arce, Kevin Coakley, Doug Fils, Keith Maull, Matt Mayernik, Bert Meerman and Barend Mons. A special thanks also goes to Reed Shabman, Darya Pokutnaya, Lisa Mayer and Wilbert van Panhuis for their inputs.

Competing Interests

The authors have no competing interests to declare.

Language: English
Submitted on: Nov 1, 2025 | Accepted on: Feb 9, 2026 | Published on: Feb 27, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Sungha Kang, John Graybeal, Barbara Magagna, Erik Schultes, Nancy Hoebelheinrich, Chris Erdmann, Ismael Kherroubi Garcia, Julianne Christopher, Christine R. Kirkpatrick, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.