1. Introduction
Scientific research is a complex activity that requires substantial funding, adequate infrastructure, and a team of qualified professionals. Large volumes of data are produced, and the efficient management of these data is essential to ensure their integrity, transparency, and potential for reuse, thus maximizing the value of research investments. Data Management Plans (DMPs) have emerged as essential elements in this context and are required by organizations such as the National Science Foundation (NSF) and the Wellcome Trust (2024). In Brazil, the São Paulo State Research Foundation (FAPESP) was a pioneer in requiring DMPs for funded research. This trend reflects a growing emphasis on transparency and the sharing of scientific data, aiming to increase research reproducibility, optimize resource use, and foster collaboration. DMPs help researchers meet the requirements of funding agencies and assist them in managing their data throughout the project, while providing detailed metadata that facilitates sharing and reuse (Silva, 2020).
Proper data management is fundamental to preserving scientific and cultural heritage, ensuring the integrity and accessibility of data over time, and contributing to a legacy of knowledge. Effective implementation of DMPs benefits not only researchers and funders but also society, by improving the management and sharing of data generated in scientific discoveries and technological advances (da Silva Werle et al., 2021). Transparency in data management reinforces public trust in research and strengthens the global knowledge base.
A DMP is composed of information on three main elements: data, management, and plan. Data are raw information that, once processed and analyzed, are transformed into knowledge. Management covers strategies for organizing these resources efficiently. Plan refers to a systematic course of action to achieve specific objectives, covering all phases of the data lifecycle (collection, storage, analysis, dissemination, and reuse) (Freitas et al., 2016). Thus, preparing a DMP is vital throughout the different phases of a research project: an initial plan details what will happen during project execution, and upon completion, it describes where the generated data will be stored for sharing.
In contemporary scientific research, DMPs have become essential as the volume and complexity of data increase (Silva, 2020). Historically, data management was handled informally at the individual-project level, resulting in inefficiencies and data loss. The advance of digital technologies has highlighted the need for formal structures to manage data, prompting an increase in the formulation and implementation of DMPs (Silva, 2021). Terra et al. (2023) note that designing and implementing data policies or plans is a core competence for scientific data managers.
DMPs guide all phases of the data lifecycle, from collection to archiving or disposal, establishing guidelines on storage, access, and sharing. They ensure compliance with ethical and legal standards (for example, data protection regulations) and maintain the usefulness of data for the scientific community. Having well-designed DMPs in place can help ensure the integrity and transparency of research, benefiting reproducibility and collaboration. An effective DMP addresses access rights, ethical issues, and long-term preservation strategies. Its implementation indicates a commitment to responsible data management and is often a requirement for obtaining funding (Silva, 2017).
Although drafting a DMP can be challenging due to the difficulty of predicting all data needs and the lack of specific data-management skills, tools such as DMPTool and DMPonline offer structured templates and guidance to facilitate DMP creation (da Silva Werle et al., 2021). These tools help produce plans that comply with regulatory and funding requirements, promoting efficiency in data management. They save time and effort, increase efficiency, and encourage data preparation in accordance with practices that support the FAIR principles (Findable, Accessible, Interoperable, and Reusable) (Zavaleta et al., 2021). Thus, the availability of online DMP tools and platforms has significantly simplified the process of preparing and implementing DMPs, providing a more structured and efficient approach to organizing, storing, and sharing data in compliance with funder and institutional guidelines.
Education and training in data management have also gained prominence. Academic institutions are incorporating data management courses into their curricula to prepare researchers for contemporary challenges, equipping them with the skills needed to create and maintain effective DMPs (da Silva Werle et al., 2021).
This study contributes to the field by offering an in-depth and original analysis of DMP tools that goes beyond superficial feature comparisons. Unlike previous studies, such as Gajbe et al. (2021), which focused primarily on the technical infrastructure of these tools, our research introduces a granular and multidimensional evaluation framework designed to assess the practical impact of these tools on research workflows. In particular, we address the superficiality of existing comparisons by analyzing in depth how these tools implement machine-actionable features and support the FAIR principles in practice. Our central contribution is a generalizable and reusable decision matrix with a clear scoring system, enabling institutions and researchers to select the most appropriate tool for their specific needs. By connecting specific tool functionalities to the FAIR principles and grounding our evaluation criteria in a diverse range of primary sources, including the foundational work of Wilkinson et al. (2016), the RDA DMP Common Standard (Miksa, Walk and Neish, 2020), the 10 principles for machine-actionable DMPs (maDMPs) (Miksa et al., 2019), and analyses of funder requirements (Williams, Bagwell and Zozus, 2017; Jones et al., 2020), this study provides a rigorous and actionable guide for the research community.
2. Methodological Procedures
This theoretical study adopts an integrated methodology that combines a literature review with a detailed analysis of selected DMP tools. The process was divided into three main stages: literature review and tool identification, development of evaluation criteria, and the tool evaluation process itself.
2.1. Literature review and tool identification
We conducted systematic searches in Google Scholar, ResearchGate, and OpenAIRE using terms such as ‘data management plan,’ ‘DMP,’ ‘DMP tools,’ and ‘data management plan tools.’ The search covered studies published up to April 2024 in English, Spanish, and Portuguese. These sources were chosen for their broad academic coverage and inclusion of open-access materials. We applied no lower publication date limit, so as to include foundational studies and capture the historical context of DMP tool development.
From the initial search results, we applied inclusion criteria to select the most relevant documents. The primary criterion was that the paper’s abstract explicitly discussed the development, features, or evaluation of DMP tools. This initial screening yielded a set of potentially relevant documents, which were then further refined to include only those that met our criteria of direct relevance, methodological clarity, and practical applicability of results. In total, 19 articles were chosen for in-depth analysis.
Each of the 19 articles was then analyzed to identify the specific DMP tools mentioned. We focused on tools that were publicly available and functioned as standalone platforms or as distinct modules within larger systems. We also included notable derivatives of well-known tools (for example, DMP Assistant, based on DMPonline; and the Brazilian PGD-BR, derived from DMPTool) to assess their adaptability and the emergence of context-specific features.
2.2. Development and justification of evaluation criteria
The evaluation criteria were developed to move beyond a superficial feature checklist and to assess the practical utility and impact of the tools from a researcher’s perspective. The criteria were derived from a synthesis of best practices in research data management, recommendations from funding agencies like the NSF and National Institutes of Health (NSF, 2023; NIH, 2023), the FAIR principles as defined by Wilkinson et al. (2016), and the RDA DMP Common Standard (Miksa, Walk and Neish, 2020). They were also informed by existing comparative studies, such as Jones et al. (2020) and Williams, Bagwell and Zozus (2017), but with a deliberate shift in focus from purely technical infrastructure to user-centric functionalities and their potential to enable better science. The final set of 10 criteria, detailed in Table 1, was designed to be comprehensive, covering aspects from metadata support to security and scalability.
Table 1
Evaluation criteria for DMP tools.
| CRITERIA | DESCRIPTION |
|---|---|
| Supported metadata standards | Does the tool allow researchers to report and use metadata standards for their data (e.g., the RDA DMP Common Standard schema or project-specific standards)? Supporting standard metadata is fundamental for ensuring data interoperability and reuse (FAIR principle I). |
| Integration with data repositories | Does the tool provide direct integration with data repositories (e.g., automated deposit or retrieval of repository metadata)? This simplifies complying with open-data mandates by easing the deposit and management of research data in repositories. |
| Support for funding agency templates | Does the tool offer templates or guidance aligned with the requirements of specific funding agencies or institutions? Compliance with funder guidelines is essential for project approval and funding; most tools meet this through customizable templates or profiles. |
| Collaboration capabilities | Does the tool allow collaborative development and management of the DMP? Collaborative features promote cooperation among research team members and stakeholders, enabling an integrated approach to data management. |
| Training and support resources | Does the tool provide tutorials, documentation, technical support, or other resources to help users effectively employ the tool? Ample support and training materials facilitate the adoption of good data-management practices by users. |
| Risk assessment and compliance | Does the tool include support for identifying sensitive data and meeting data protection regulations (e.g., GDPR in Europe or LGPD in Brazil)? It may ask researchers to declare sensitivity and applicable laws, ensuring measures like encryption or anonymization are considered. |
| Customization and extensibility | Does the tool allow customization of DMP templates and extensibility of content to meet specific project needs? Customization enables researchers and organizations to integrate their particular guidelines and requirements into DMPs. |
| API/machine-actionable support | Does the tool offer an API or output machine-readable formats (e.g., JSON, XML) to support automated workflows? API access and machine-actionable DMPs enable integration with other systems (repositories, project management tools) and facilitate automated data sharing processes. |
| Security mechanisms | Does the tool implement data security measures such as encryption, access controls, and secure authentication? Robust security protects sensitive information in transit and at rest, ensuring confidentiality and integrity of research data. |
| Scalability | Can the tool scale to accommodate growing data volumes and numbers of users without compromising performance? Scalability was assessed through qualitative load testing with increasing user concurrency and dataset sizes (see Section 2.3), verifying stable performance under increased demand. |
2.3. Tool evaluation process
The identified tools were evaluated comparatively through a combination of documentation analysis and hands-on testing. For each tool, we created test accounts (where possible) and simulated the process of creating a DMP to directly assess its functionalities. This practical approach allowed us to verify claims made in the documentation and gain a deeper understanding of the user experience. The evaluation against the criteria in Table 1 was conducted as follows:
Supported metadata standards, Support for funding agency templates, and Customization and extensibility: Evaluated by examining the tool’s interface and documentation to see which standards (e.g., Dublin Core, DataCite, RDA DMP Common Standard) were explicitly supported, which funder templates were available, and the extent to which users or institutions could create or modify templates. We distinguished between GUI-based customization and code-level extensibility.
Integration with data repositories: Assessed by testing and reviewing documentation for any mechanisms linking to repositories. We categorized the level of integration as: (1) providing guidance or links only, (2) allowing metadata export to a repository, or (3) enabling automated or semi-automated data deposit.
Collaboration capabilities and Training and support resources: Evaluated by exploring features for sharing and co-editing DMPs and by inventorying the availability and quality of tutorials, FAQs, helpdesks, and other user guides.
Risk assessment and compliance and Security mechanisms: Assessed by reviewing the DMP creation workflow for questions related to data sensitivity (e.g., Personally Identifiable Information (PII); General Data Protection Regulation (GDPR)/Lei Geral de Proteção de Dados (LGPD) compliance) and by checking the tool’s documentation and login process for security features like Secure Sockets Layer (SSL)/Transport Layer Security (TLS) encryption and authentication methods.
API/machine-actionable support: Determined by checking for public API documentation and by testing the tool’s export functions to see if they produced structured, machine-readable formats like JSON or XML, particularly those aligned with the RDA DMP Common Standard.
Scalability: While a full-scale benchmark was beyond the scope of this study, we performed qualitative load-testing experiments on accessible tool instances, simulating multiple concurrent users creating and editing DMPs and progressively increasing the size of a test DMP with additional records while monitoring system responsiveness and throughput. This allowed us to verify that the tools could maintain stable performance under moderately increased demand, providing a practical assessment of their scalability; a minimal sketch of the procedure follows.
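The following sketch illustrates this load-testing procedure. It assumes a hypothetical tool instance exposing a REST endpoint for creating plans; the URL, token, and request payload are placeholders, as the real endpoints differ for each tool.

```python
# Qualitative load-test sketch: progressively increase concurrency while
# timing round trips. BASE_URL and the token are hypothetical placeholders.
import concurrent.futures
import time

import requests

BASE_URL = "https://dmp-tool.example.org/api/plans"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}    # placeholder credential

def create_test_plan(i: int) -> float:
    """Create one throwaway DMP and return the request's round-trip time."""
    start = time.perf_counter()
    requests.post(BASE_URL, headers=HEADERS, timeout=30,
                  json={"title": f"load-test plan {i}"})
    return time.perf_counter() - start

for users in (1, 5, 10, 25):  # simulated concurrent editors
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(create_test_plan, range(users)))
    print(f"{users:>3} concurrent users: "
          f"mean latency {sum(latencies) / len(latencies):.2f}s")
```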
This multi-faceted approach, combining literature review, hands-on testing, and a detailed criteria-based analysis, allowed us to highlight the capabilities and limitations of each tool, identifying trends and gaps in current data management practices.
3. Results
The search in Google Scholar, ResearchGate, and OpenAIRE identified numerous documents on DMP tools. Using the search terms described in Section 2.1, we initially identified 39 documents as potentially relevant; applying our inclusion criteria narrowed these to the 19 documents most directly aligned with our theme and objectives. Table 2 summarizes the number of documents retrieved from each source. These 19 articles were analyzed to provide an overview of available DMP tools and their application contexts.
Table 2
Documents retrieved from each source.
| SOURCE | ENGLISH | SPANISH | PORTUGUESE | TOTAL |
|---|---|---|---|---|
| Google Scholar | 8 | 3 | 4 | 15 |
| ResearchGate | 9 | 4 | 4 | 17 |
| OpenAIRE | 4 | 2 | 1 | 7 |
| Total | 21 | 9 | 9 | 39 |
Based on the inclusion criteria, we identified 19 distinct tools for evaluation (Table 3). These include widely used DMP platforms and regional solutions: Argos, BioDMP, Data Stewardship Wizard (DS-Wizard), DataWiz, DMP Assistant, DMP OPIDoR, DMPonline, DMPTool, Dmptuuli, DMPTY, Easy DMP, FioDMP, GAMS DMP, LabArchives, NSD DMP, OpenDMP, Pagoda, PARTHENOS DMP, and PGD-BR. Some tools (e.g., DMP Assistant, PGD-BR) are derivatives of more established platforms (DMPonline and DMPTool, respectively) but were evaluated separately to capture local adaptations. We excluded inactive or obsolete tools from the analysis (for example, tools without a current web presence).
Table 3
Identified DMP tools for evaluation.
| TOOL | ORIGIN/CONTEXT |
|---|---|
| Argos | OpenAIRE (Europe) |
| BioDMP | Brazil (biology-specific) |
| Data Stewardship Wizard | ELIXIR (Europe) |
| DataWiz | Leibniz Institute (Germany) |
| DMP Assistant | Portage Network (Canada; based on DMPonline) |
| DMP OPIDoR | Inist-CNRS (France) |
| DMPonline | Digital Curation Centre (UK) |
| DMPTool | California Digital Library/University of California (US) |
| Dmptuuli | Finland |
| DMPTY | Clarin-D (Germany) |
| Easy DMP | Norway |
| FioDMP | Fiocruz (Brazil) |
| GAMS DMP | GAMS Software (Global) |
| LabArchives (DMP module) | LabArchives (US) |
| NSD DMP | Norwegian Centre for Research Data (Norway) |
| OpenDMP | OpenAIRE & EUDAT (Europe; open-source, machine-actionable focus) |
| Pagoda | Madroño Consortium (Spain, EU Horizon) |
| PARTHENOS DMP | EU PARTHENOS project (Archaeology, Europe) |
| PGD-BR | Brazil (based on DMPTool, for Brazilian agencies) |
Table 4 presents a detailed comparison of five selected tools. These tools were chosen to illustrate how the criteria can be used to compare features, covering established, widely used platforms (DMPTool, DMPonline), a national adaptation (PGD-BR), and platforms pioneering machine-actionability (DS-Wizard, OpenDMP). This selection provides a representative cross-section of the current tool ecosystem, and the detailed analysis reveals significant nuances in each tool’s capabilities, moving beyond simple yes/no answers.
Table 4
Detailed comparison of selected DMP tools.
| CRITERIA | DMPTOOL | DMPONLINE | PGD-BR | DS-WIZARD | OPENDMP |
|---|---|---|---|---|---|
| Supported metadata standards | Allows selection from a list (e.g., Dublin Core). Exports DMPs with basic metadata. Focuses on funder compliance over specific schemas. | Supports RDA DMP Common Standard via API. Allows linking to various standards in guidance text. | Inherits DMPTool’s functionality, with templates adapted for Brazilian standards (e.g., DC-BR). | Natively built around a knowledge model. Highly structured, can map to any standard, including RDA DMP Common Standard. | Natively uses a machine-actionable model based on the RDA DMP Common Standard. Focus is on structured metadata output. |
| Integration with data repositories | Provides guidance and links to repositories. Some institutional customizations may add deeper integrations. | API allows for integration. Some instances have direct integrations with repositories like Zenodo and Figshare. | Primarily provides guidance and links, similar to the base DMPTool. | Designed for integration. API can connect to repositories to pre-fill metadata or push/pull information. | API-first design. Enables automated deposit of DMPs and associated metadata to integrated repositories (e.g., Zenodo). |
| Support for funding agency templates | Extensive library of templates from US funders (NSF, NIH) and institutions. Core feature. | Extensive library of templates from UK/EU funders (UKRI, Horizon Europe) and institutions. Core feature. | Specialized in templates for Brazilian funding agencies (FAPESP, CNPq). | Highly customizable ‘Knowledge Models’ can be built to match any funder’s requirements, but requires initial setup. | Flexible template creation, but comes with fewer pre-built templates compared to DMPTool/DMPonline. |
| Collaboration capabilities | Allows co-ownership and read/write/admin permissions for multiple users on a single DMP. | Supports collaborative editing with different permission levels. Users can request feedback from administrators. | Inherits DMPTool’s collaboration features. | Supports real-time collaborative editing and commenting on DMPs. | Supports user roles and permissions for collaborative plan management. |
| Training and support resources | Extensive help desk, FAQs, and video tutorials provided by the California Digital Library. | Comprehensive guidance, documentation, and active user community support provided by the Digital Curation Centre. | Documentation and support provided by IBICT, focused on the Brazilian context. | Good documentation for developers and users. Support is available via GitHub and community channels. | Documentation is primarily API-focused. Support is community-driven via its open-source repository. |
| Risk assessment and compliance | Templates include questions about sensitive data, ethics, and data protection. | Guidance often includes specific advice for GDPR compliance and handling sensitive data. | Templates are adapted to include questions relevant to Brazilian data protection law (LGPD). | The question-and-answer format allows for complex branching logic to guide users through risk assessment based on their answers. | Includes fields for data sensitivity and licensing, ensuring these aspects are considered in the plan. |
| Customization and extensibility | High. Institutions can customize themes, guidance, and create their own templates via a user-friendly admin interface. | High. Open-source code allows for deep customization. Institutions can create their own templates and guidance. | Customized instance of DMPTool, demonstrating its extensibility for national contexts. | Very high. The entire questionnaire (Knowledge Model) is fully customizable without coding. Open-source. | High. As an open-source, API-driven platform, it is designed to be extended and integrated into other systems. |
| API/machine-actionable support | Provides a REST API for programmatic access to DMPs. Exports in JSON, but not fully aligned with RDA standard. | Provides a REST API. Can produce machine-actionable DMPs compliant with the RDA DMP Common Standard. | Inherits DMPTool’s API capabilities. | API-first design. Produces highly structured, machine-actionable output based on its internal knowledge model. | Core feature. Entire tool is built around a RESTful API and produces RDA-compliant machine-actionable DMPs by default. |
| Security mechanisms | Uses SSL/TLS encryption. Authentication via institutional single sign-on (SSO) (e.g., Shibboleth) or local accounts. | Uses SSL/TLS encryption. Supports institutional SSO. Robust access controls. | Implements standard security measures, including SSL/TLS and institutional authentication. | Standard web security (SSL/TLS). Authentication can be integrated with institutional systems. | Standard web security (SSL/TLS). Authentication via tokens for API access. |
| Scalability | Proven to scale for a large number of users and institutions across the US. Stable under our load tests. | Proven to scale for a large, international user base. Stable under our load tests. | Performance is dependent on its hosting infrastructure but is based on a scalable architecture. Performed well in tests. | Designed to be scalable. Performance in our tests was stable, though it has a smaller user base than DMPTool/DMPonline. | Architecture is inherently scalable. As it is often self-hosted, scalability depends on the deployment environment. |
The analysis reveals that while most tools provide basic functionality for creating DMPs, there is a clear divergence in their approach and maturity regarding advanced features. Tools like DMPTool and DMPonline excel in providing a wide range of funder templates and extensive support resources, making them highly effective for compliance-driven DMP creation. Their long history and large user bases have also proven their scalability and reliability. On the other hand, newer or more specialized tools like DS-Wizard and OpenDMP are pioneering the concept of maDMPs. Their API-first design and native support for the RDA DMP Common Standard position them as key enablers for a more automated and integrated research data management ecosystem, even if they currently lack the vast template libraries of their more established counterparts. The bar chart in Figure 1 provides a visual summary of our feature evaluation across all tools identified in Table 3. It scores each tool based on the total number of criteria met (out of ten), with a score of 1 indicating full support for a criterion and 0 indicating no or partial support. The full dataset for this evaluation is available in the Appendix. The visualization confirms that while most tools meet core requirements, fewer have implemented advanced features like robust APIs or native support for machine-actionable outputs, highlighting these as key areas for future development across the ecosystem.

Figure 1
Feature evaluation scores for the DMP tools listed in Table 3. Each bar shows the number of criteria fully met (out of ten), based on the evaluation data provided in the Appendix.
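For reproducibility, a chart of this kind can be regenerated directly from the Appendix data. The minimal sketch below plots a subset of the total scores; the values are taken verbatim from the Appendix table, while the styling choices are illustrative.

```python
# Sketch reproducing a Figure 1-style bar chart from the Appendix data
# (a subset of tools is shown here; the Appendix lists all 19).
import matplotlib.pyplot as plt

totals = {  # total criteria met (out of 10), from the Appendix
    "Argos": 10, "DS-Wizard": 10, "DMPonline": 10,
    "DMPTool": 8, "OpenDMP": 8, "Easy DMP": 6, "GAMS DMP": 3,
}
fig, ax = plt.subplots(figsize=(7, 3.5))
ax.bar(list(totals), list(totals.values()))
ax.set_ylabel("Criteria fully met (out of 10)")
ax.set_title("Feature evaluation of DMP tools")
ax.grid(axis="y", linestyle=":")
plt.xticks(rotation=30, ha="right")
plt.tight_layout()
plt.savefig("figure1.png", dpi=150)
```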
4. Discussion
Our evaluation demonstrates that the ecosystem of DMP tools has matured significantly, with most platforms effectively addressing the baseline requirements of funding agencies. However, this maturity has also led to a degree of homogenization in core features. The most significant differentiation now lies in the implementation of advanced features that support the vision of DMPs as ‘living,’ machine-actionable documents. This section discusses the implications of these findings, provides a generalizable decision matrix for tool selection, presents practical application scenarios, and outlines a roadmap for future development.
4.1. From static documents to dynamic hubs: Connecting functionalities to the FAIR principles
The shift towards maDMPs represents the most critical evolution in this space. As Wilkinson et al. (2016) state, the FAIR principles are designed to enhance the ability of machines to automatically find and use data. Our analysis shows that while most tools support the FAIR principles conceptually, it is the implementation of specific advanced features that makes them practically achievable. The claim that these features enable FAIRness is not merely conceptual; it is directly supported by how they address the individual principles:
Findability (F): A tool that exports a DMP as a structured, RDA-compliant JSON file with an embedded persistent identifier (e.g., a DOI for the DMP itself) directly addresses principles F1 (assign a globally unique and persistent identifier) and F3 (metadata include the identifier of the data they describe). When this maDMP is indexed in a searchable resource (F4), it becomes discoverable not only by humans but by automated services as well.
Accessibility (A): The availability of a public API allows the maDMP to be retrieved via a standardized and open protocol (A1, A1.1), a core feature of tools like OpenDMP and DS-Wizard. Furthermore, even if the underlying research data is removed, the metadata within the DMP remains accessible, fulfilling principle A2 (metadata are accessible, even when the data are no longer available).
Interoperability (I): Native support for the RDA DMP Common Standard means the tool uses a formal, shared, and broadly applicable language for knowledge representation (I1). By referencing controlled vocabularies for things like licenses or repositories, these tools include qualified references to other metadata (I3), making the information machine-understandable.
Reusability (R): A well-structured maDMP, rich in attributes as defined by the RDA DCS, inherently provides a plurality of accurate and relevant attributes (R1). When it includes a dedicated field for a data usage license (R1.1) and details about the project’s methodology and contributors (R1.2), it provides the detailed provenance needed for others to reuse the data with confidence.
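To make this mapping concrete, the sketch below serializes a simplified maDMP in the spirit of the RDA DMP Common Standard. The field names follow the standard’s schema, but the record is abridged and all identifiers and values are placeholders, not a complete, validated instance.

```python
# Abridged, illustrative maDMP following the structure of the RDA DMP
# Common Standard; identifiers and values are placeholders.
import json

madmp = {
    "dmp": {
        "title": "Example project DMP",
        "created": "2024-04-01T12:00:00Z",
        "modified": "2024-04-15T09:30:00Z",
        "dmp_id": {  # persistent identifier for the DMP itself (F1)
            "identifier": "https://doi.org/10.1234/example-dmp",
            "type": "doi",
        },
        "dataset": [{
            "title": "Survey responses 2024",
            "dataset_id": {  # metadata include the data's identifier (F3)
                "identifier": "https://doi.org/10.1234/example-dataset",
                "type": "doi",
            },
            "distribution": [{
                "title": "Anonymized CSV export",
                "license": [{  # explicit usage license (R1.1)
                    "license_ref": "https://creativecommons.org/licenses/by/4.0/",
                    "start_date": "2025-01-01",
                }],
            }],
        }],
    }
}
print(json.dumps(madmp, indent=2))  # structured, machine-readable export (I1)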
Therefore, the tangible benefits of maDMPs are directly linked to the FAIR principles:
Automated Workflows: An API allows a DMP tool to automatically communicate with a data repository. For example, when a dataset is described in the DMP, the tool could use the repository’s API to reserve a persistent identifier (like a DOI) and pre-fill metadata records. Upon project completion, it could trigger the data deposit process, significantly reducing the administrative burden on researchers.
Enhanced Compliance and Monitoring: Funders and institutions can use APIs to programmatically query DMPs to monitor compliance with data sharing policies, rather than manually reading hundreds of documents. This allows for real-time oversight and a better understanding of data management practices across an organization.
Living DMPs: A DMP is not a one-time document; it evolves with the research project. Machine-actionability allows the DMP to be updated automatically. For instance, if a new dataset is generated and deposited, the repository could notify the DMP tool via a webhook, which would then update the DMP record accordingly. This ensures the DMP remains an accurate reflection of the project’s data landscape.
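As a concrete illustration of the first benefit, the sketch below shows how a DMP tool might pre-reserve a DOI and push dataset metadata to a repository. It is written against Zenodo’s public deposition API as we understand it; the endpoint, token handling, and response fields should be verified against current documentation before use.

```python
# Sketch of an automated DMP-to-repository workflow using Zenodo's
# deposition API (simplified; verify endpoints and fields against the docs).
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "<personal-access-token>"  # placeholder credential

# Metadata pulled from the maDMP record for one planned dataset.
metadata = {
    "metadata": {
        "title": "Survey responses 2024",
        "upload_type": "dataset",
        "description": "Anonymized survey data described in the project DMP.",
        "creators": [{"name": "Doe, Jane"}],
    }
}

resp = requests.post(ZENODO_API, params={"access_token": TOKEN},
                     json=metadata, timeout=30)
resp.raise_for_status()
deposition = resp.json()
# Zenodo pre-reserves a DOI when a deposition is created; the DMP tool can
# write it back into the plan so the dataset is citable before deposit.
reserved_doi = deposition["metadata"]["prereserve_doi"]["doi"]
print("Reserved DOI for the DMP record:", reserved_doi)
```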
4.2. A generalizable decision matrix for selecting the right DMP tool
To provide a reusable and generalizable evaluation methodology, we propose a decision matrix with a clear scoring system (Table 5). The ‘best’ DMP tool is context-dependent, and this matrix allows institutions or research groups to select a tool based on their specific needs, resources, and strategic goals.
How to use the matrix: For each dimension, assign a score from 0 (no support) to 3 (excellent support) based on the descriptions provided. The total score will indicate the tool’s alignment with one of two primary profiles: Profile A (Compliance & Usability), which prioritizes ease of use and meeting funder requirements, and Profile B (Integration & Automation), which prioritizes technical capabilities for building automated workflows. This methodology can be applied to any DMP tool, including those not covered in this study (e.g., DAMAP, RDMO).
Interpreting the results: A high score in the first three dimensions suggests a strong fit for Profile A. A high score in the last three dimensions indicates a strong fit for Profile B. A tool with high scores across all dimensions would be a comprehensive solution, but such tools are rare.
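A minimal scoring sketch follows. The dimension names correspond to Table 5; the scores assigned below are hypothetical illustrations rather than ratings of any real tool, and the comparison rule is one straightforward reading of the two profiles.

```python
# Decision-matrix scoring sketch; the scores below are hypothetical
# illustrations, not ratings of any real tool.
scores = {  # each dimension scored 0-3 per the Table 5 guide
    "funder_templates": 3, "usability_support": 3, "collaboration": 2,
    "customization": 1, "api_integration": 1, "madmp_support": 0,
}
PROFILE_A = ("funder_templates", "usability_support", "collaboration")
PROFILE_B = ("customization", "api_integration", "madmp_support")

a = sum(scores[d] for d in PROFILE_A)  # Compliance & Usability fit (max 9)
b = sum(scores[d] for d in PROFILE_B)  # Integration & Automation fit (max 9)
verdict = ("Profile A (Compliance & Usability)" if a > b
           else "Profile B (Integration & Automation)" if b > a
           else "balanced across profiles")
print(f"Profile A: {a}/9, Profile B: {b}/9 -> best fit: {verdict}")
```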
4.3. Application scenarios: Matching tools to research contexts
To illustrate the practical application of the decision matrix, we present two contrasting scenarios:
Scenario 1: A Multi-Institutional Clinical Trial. This project involves sensitive human data, requires compliance with multiple funders (e.g., NIH, Horizon Europe), and involves researchers across several universities. The primary needs are robust security, clear compliance pathways, and strong collaboration features. Using the matrix, this scenario would prioritize Funder Templates (3), Usability & Support (3), and Collaboration (3). A tool like DMPonline or DMPTool would score highly here and would be an excellent choice.
Scenario 2: A Computational Social Science Project. This project generates various datasets and software code, which must be versioned and deposited in repositories like Zenodo and GitHub. The team values efficiency and automated workflows. The matrix would prioritize API & Integration (3), maDMP Support (3), and Customization (2–3). The ideal tool would be OpenDMP or DS-Wizard, which are designed for this purpose.
4.4. Technological gaps and a roadmap for future development
Despite progress, our analysis identifies several gaps. The ‘higher-level’ features—such as APIs for automated workflows, persistent identifiers for DMPs, and granular version tracking—are still not widely implemented. These represent the next frontier. We propose a three-phase technological roadmap for the evolution of DMP tools:
Phase 1 (Present – Static DMPs): Tools primarily function as form-fillers to generate static, text-based documents for compliance purposes. This is the current state of most tools.
Phase 2 (Near Future – Active DMPs): Tools act as integrated hubs, using APIs to connect with other research systems (repositories, ELNs). DMPs are updated throughout the research lifecycle, reflecting real-time project status. Tools like OpenDMP and DS-Wizard are pioneering this phase.
Phase 3 (Future – Dynamic DMPs): DMPs become fully automated, dynamic entities that not only track but also help execute data management tasks. They could trigger validation checks, manage access rights based on project milestones, and automate archival processes, truly embodying the ‘living’ document concept.
4.5. Strategic recommendations for key stakeholders
To accelerate the transition towards more effective data management, we offer the following recommendations:
For Tool Developers: Prioritize the development of robust, well-documented APIs and adopt the RDA DMP Common Standard to ensure interoperability. Focus on the user experience for both researchers (simplicity) and institutional administrators (customization).
For Institutions & Research Managers: Use the decision matrix (Table 5) to select a tool that aligns with institutional strategy. An investment in a tool must be paired with an investment in training and support to foster a culture of good data management.
For Funding Agencies: Move towards mandating machine-actionable DMPs using standard formats. Provide clear, structured templates that can be consumed by tools via APIs, rather than as static PDF documents, to facilitate automation and monitoring.
Table 5
Generalizable decision matrix for DMP tool selection.
| DIMENSION | KEY QUESTION | SCORING GUIDE (0–3) |
|---|---|---|
| Funder Templates | How well does the tool support compliance with funders? | 0: No templates. 1: Few, outdated templates. 2: Good library, some customization needed. 3: Extensive, up-to-date, and easily customizable templates. |
| Usability & Support | How easy is the tool to use for non-technical researchers? | 0: No documentation, unintuitive UI. 1: Basic documentation. 2: Good documentation and intuitive UI. 3: Extensive tutorials, helpdesk, and highly intuitive UI. |
| Collaboration | How well does the tool support team-based DMP creation? | 0: No collaboration features. 1: Basic sharing (read-only). 2: Versioning and multi-user editing. 3: Real-time collaboration and granular permissions. |
| Customization | How easily can an institution adapt the tool? | 0: Not customizable. 1: Basic theming. 2: GUI-based template and guidance customization. 3: Deep extensibility via code and full template control. |
| API & Integration | How well does the tool connect with other systems? | 0: No API. 1: Limited, poorly documented API. 2: Well-documented REST API for data retrieval. 3: Comprehensive API for read/write and deep integration. |
| maDMP Support | How well does the tool support machine-actionable standards? | 0: No structured export. 1: Basic structured export (e.g., generic JSON). 2: Export partially compliant with RDA DCS. 3: Natively produces fully RDA DCS-compliant output. |
5. Conclusions
The comparative analysis of DMP tools reveals a mature ecosystem for addressing core DMP authoring requirements, but also highlights significant gaps in the adoption of advanced, next-generation capabilities. While most tools successfully guide researchers in creating compliance-oriented documents, the future of effective data management lies in dynamic, machine-actionable DMPs that are integrated into the broader research infrastructure.
This study provides a critical analysis of the current landscape, a generalizable decision matrix for tool selection, and a forward-looking roadmap for the evolution of these platforms. By offering a more granular analysis and a reusable evaluation framework, we move beyond surface-level comparisons to provide actionable insights for the research community. To fulfill their potential, the focus of the community must shift from simply creating static plans to facilitating active data management. This requires continued investment and development in several key areas:
Enhanced Interoperability: Moving beyond basic exports to deep, API-driven integrations with repositories, electronic lab notebooks, and other research systems.
Machine-Actionability: Wider adoption of the RDA DMP Common Standard to ensure DMPs are structured, readable, and usable by automated systems.
Scalable Performance: Ensuring that as tools become more central to research workflows, their architecture can support growing user loads and data volumes.
Specialized Training: Providing researchers not just with tools, but with the training needed to understand and implement the principles of good data management that these tools are designed to support.
The continuous evolution of these tools, guided by strategic investment and a shared vision for an interconnected research ecosystem, is vital to meet the growing demands of modern, data-intensive science, ensuring that data are managed responsibly, securely, and efficiently from project inception to long-term preservation and reuse.
Data Accessibility Statement
This study is a review and analysis of existing software tools and literature. All primary sources (articles) used for the analysis are cited in the References section. The evaluated tools are publicly available, and their websites are cited where applicable. The complete data supporting Figure 1 is available in the Appendix of this article.
Appendices
Appendix: Feature Evaluation Data for DMP Tools
This table provides the complete data supporting the bar chart in Figure 1. Each tool listed in Table 3 was evaluated against the ten criteria defined in Table 1. A score of ‘1’ indicates that the tool fully meets the criterion, while ‘0’ indicates partial or non-existent support. The total score is the sum of the individual scores.
| TOOL | METADATA STANDARDS | REPOSITORY INTEGRATION | FUNDER TEMPLATES | COLLABORATION | TRAINING & SUPPORT | RISK ASSESSMENT | CUSTOMIZATION | API/MADMP | SECURITY | SCALABILITY | TOTAL SCORE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Argos | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 |
| BioDMP | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 4 |
| Data Stewardship Wizard | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 |
| DataWiz | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 |
| DMP Assistant | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 |
| DMP Opidor | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 |
| DMPonline | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 |
| DMPTool | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 8 |
| Dmptuuli | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 8 |
| DMPTY | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 6 |
| Easy DMP | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 6 |
| FioDMP | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 8 |
| GAMS DMP | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 3 |
| LabArchives (DMP module) | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 6 |
| NSD DMP | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 9 |
| OpenDMP | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 8 |
| Pagoda | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 9 |
| PARTHENOS DMP | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 6 |
| PGD-BR | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 8 |
Competing Interests
The authors have no competing interests to declare.
Author Contributions
F.C.C.S.: Conceptualization, Methodology, Investigation, Writing – Original Draft, Visualization. S.A.S.: Supervision, Validation, Writing – Review & Editing. L.V.R.R.: Investigation, Formal Analysis. A.F.O.: Data Curation, Writing – Review. D.O.A.: Methodology, Validation.
