1 Introduction
2 The structure of the funding review system
- 2.1 A hierarchical funding scheme
- 2.2 Description of the steps involved in the peer review system for funding
- 2.2.1 Submission and initial evaluation
- 2.2.2 Peer review
- 2.2.3 Panel discussions and funding decisions
- 2.2.4 Announcement and possible appeal
3 Stakeholders and their expectations concerning peer review
4 Factors influencing the ultimate decisions
- 4.1 Principal investigators and their teams
- 4.2 The project proposal itself
- 4.3 External influences
5 Far-reaching consequences: Influence on scientists’ careers and the future of humanity
6 Testing peer review for funding decisions and problems found in the funding review system
- 6.1 Difficulties in testing peer review for funding decisions
- 6.2 Solutions for difficulties in testing peer review for funding decisions and their funding
- 6.2.1 The inter-rater problem found in the NSF’s peer review system
- 6.2.2 Measuring the reliability, validity, and fairness of a review process
- 6.2.3 Testing the success of peer review panels
- 6.3 The root cause of the inter-rater problem: Biases of peer review
- 6.4 Problems with the organization of the peer review procedure for funding decisions
- 6.4.1 Lacking recognition for peer review
- 6.4.2 The tragedy of the reviewer commons
- 6.4.3 Lack of transparency in the review procedure
- 6.4.4 Specific problems related to interdisciplinary projects
- 6.4.5 Lack of an efficient mechanism for funding international collaborations
7 Towards a better peer review system for funding decisions
- 7.1 Granting reviewer credibility and credit as a way to solve the problems related to the lack of recognition and the tragedy of the reviewer commons
- 7.2 Developing infrastructure for sharing and accessing data and providing a feedback channel to address the lack of transparency
- 7.2.1 Developing infrastructures for sharing and accessing data
- 7.2.2 Establishing a feedback channel to let the peer review system act in a formative role
- 7.3 Establishing a specific branch for interdisciplinary projects
- 7.4 Establishing a global funding coordinating institution
- 7.5 Constructing better panels
8 New initiatives
- 8.1 Substantiating the funding decision with scientific evidence
- 8.2 Organizing the review procedure loosely
- 8.2.1 Funding: a lottery?
- 8.2.2 A crowd-based funding model
- 8.2.3 Decentralized models
- 8.3 Organizing the review procedure more strictly through more interactions among stakeholders
- 8.3.1 Interaction between applicants: Distributed peer review
- 8.3.2 Interaction between funders, reviewers, and applicants
- 8.3.3 Interaction in the review procedure
- 8.3.4 Deepening the interaction between reviewers and proposers by joining the team
- 8.4 The challenge of choosing between independent and interactive review, as well as between loosely or strictly structured procedures
9 Conclusion
References
In this article, we aim to guide the reader to new ideas related to research funding and the corresponding peer review process. Funding is just one instance in which peer review can play a role. As stated in Lee et al. (2013), the term “peer review” covers a broad range of activities, such as evaluating the suitability of journal submissions, making funding decisions, performing research assessments, reviewing colleagues’ teaching effectiveness, making decisions on fellowship applications for scientific societies, contributing to selection processes for prestigious awards like the Nobel Prize, and deciding on the quality, relevance, and interpretability of datasets. In other words, peer review has become an essential part of the science system.
Grant decisions differ from, e.g., tenure decisions and manuscript decisions. In grant decisions, the allocation of public or private funds is at stake. Funding agencies that rely on the autonomy of the scientific community invite experts to come to a consensus for allocation decisions. As a fundamental part of the research life cycle, grant funding drives the research and innovation ecosystem by providing financial support for scientific investigations.
The aim of funding decisions is always to find the best scientists to perform the best future science, or stated otherwise, to fund future excellence.
This article is organized as follows. We begin by describing the overall structure of the funding review system and then analyze the expectations of the various stakeholders involved. By systematically reviewing the results of tests of various funding agencies’ review systems, we identify several issues within the current process and consider potential improvements. Finally, before concluding, we review recent initiatives, such as the partial lottery approach and identifying originality among proposals by considering non-consensus among reviewers and proposers.
In this subsection, we discuss the overall structure of the funding review system. The specific steps are covered in the next subsection. Typically, funding agencies follow a process that includes the following steps: initial submission check (triage), peer review evaluation, funding decision by a separate panel, and announcement of the decision, often with the possibility of appeal afterward. Although various approaches exist for gathering peer expertise (Bendiscioli, 2019; Bendiscioli & Garfinkel, 2021; Meadmore et al., 2020) and research funding agencies have different structures for peer review, they generally adhere to these steps (Oxley & Gulbrandsen, 2024).
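To make these shared stages concrete, the following minimal sketch (in Python) models the generic pipeline a proposal passes through; the stage names and data fields are our own illustrative assumptions, not any agency’s actual workflow or terminology.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    SUBMITTED = auto()
    TRIAGE = auto()                   # eligibility / formal-requirements check
    PEER_REVIEW = auto()              # independent expert reports
    PANEL_DECISION = auto()           # panel discussion and funding decision
    ANNOUNCED = auto()                # decision communicated; appeal may follow
    REJECTED_WITHOUT_REVIEW = auto()

@dataclass
class Proposal:
    pid: str
    eligible: bool = True
    reviews: list = field(default_factory=list)
    stage: Stage = Stage.SUBMITTED

def advance(p: Proposal) -> Proposal:
    """Move a proposal one step through the generic review pipeline."""
    if p.stage is Stage.SUBMITTED:
        p.stage = Stage.TRIAGE
    elif p.stage is Stage.TRIAGE:
        p.stage = Stage.PEER_REVIEW if p.eligible else Stage.REJECTED_WITHOUT_REVIEW
    elif p.stage is Stage.PEER_REVIEW:
        p.stage = Stage.PANEL_DECISION
    elif p.stage is Stage.PANEL_DECISION:
        p.stage = Stage.ANNOUNCED
    return p
```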
As stated above, funding agencies usually adopt a hierarchical structure to organize their project review procedures, as illustrated in Figure 1. This hierarchical structure is not the only one possible but comes close to most real-world situations. Flowcharts may differ in detail between different funding agencies. Figure 2 shows a flowchart of the NSFC review process. In this example, a flowchart is drawn from the point of view of the funding agency.

Figure 1. Hierarchical schema of a review process in funding systems.

Figure 2. The flowchart of the NSFC review process (NSFC, 1986).
An institution, such as a national science foundation, distributes funds. For this purpose, panels are composed. If the funding institute is a small one, e.g. a malaria fund, it uses one panel of experts to make the decision.
In most cases, especially for large funders, the panels consult a broad group of experts. Each expert is asked to review one or more proposals, and each proposal is sent to at least two experts. During the review process, members of this expert group typically do not meet in person and are not allowed to discuss the proposals with each other; instead, each independently writes a report assessing the merits and weaknesses of the proposals assigned to them. Based on these reports, panel members meet in person, unless this is impossible, as during the COVID-19 pandemic, to discuss the reports and make decisions. As participants in the scientific community, experts and panel members must adhere to strict deontological rules (see e.g., De George & Woodward, 1994; Rousseau et al., 2018).
Funding agencies perform triage for eligibility based on the relevant qualification requirements of different programs.
In this subsection, we describe the various steps of the funding scheme in more detail (see Figure 1).
Once a proposal is submitted, funding agencies first conduct an initial evaluation (triage). For example, before the review process, the DFG (German Research Foundation) Head Office checks proposals to ensure that all formal requirements have been met. If not, the applicant has the opportunity to supply missing information. The NSFC’s (China) eligibility checks are carried out according to the relevant qualification requirements of different programs. Applicants are notified of the results of these eligibility checks, specifying whether their applications are accepted for review or rejected without review. Some proposals may thus be filtered out at this stage.
Although peer review has well-known flaws (discussed further below), the majority of applicants and funders agree that peer review, especially external peer review, is the best method for allocating research funds (Meadmore et al., 2020). Funding agencies must ensure that all the important aspects of the proposal fall within the expertise of the selected reviewers. Reviewers must be recognized as experts in their fields and be capable of providing an objective appraisal of the proposal. In addition, they must carefully avoid conflicts of interest arising from collaboration or competition, teacher-student relations, reciprocal reviews (see also Section 8.3.1 for recent developments), etc. It goes without saying that an expert group usually includes colleagues from all over the world.
Some funders prefer three reviewers per application so that a majority view can emerge among the initial reviewers (Meadmore et al., 2020). In most cases, funding agencies adopt remote reviews, panel reviews (see the next step), or site visits for different proposals. Scientific research fund review practices can vary significantly due to differences in social contexts, institutional structures, and stages of development. This diversity is evident among U.S. funding agencies, each of which tends to align its review approach with its specific mission. For example, the Department of Energy (DoE) considers the laboratory environment, the Department of Defense (DoD) emphasizes technical aspects, and the National Institutes of Health (NIH) prioritizes public health and patient-related outcomes. Such varied evaluation methods contribute to more balanced and appropriate project assessments.
The next step after the review is to evaluate the reviewers’ statements about the submitted proposal. Generally, the corresponding division director and a committee (panel) make the final decision.
The role of the panel is to make a substantiated decision based on the reports of the reviewers or a ranking of projects, leading to a binary outcome: funded or not funded. This is generally done in face-to-face meetings or through environmentally friendlier virtual meetings.
The selection of panel members, perhaps even more important than that of experts, is a key element in organizing a successful review procedure. The first point one has to decide is whether the panel is brought together ad hoc or whether members are assigned for a fixed time. In most funding agencies, members of such panels are specialists in a broad field chosen for several overlapping years. This overlap ensures continuation. This panel must be gender-balanced; it should also have a reasonable age balance, so that it is not dominated by older, even retired, scientists. Although the expert group should include colleagues from all over the world, for the panel, that is another question. Including foreign colleagues may increase the costs and may lead to linguistic problems. While incorporating international colleagues into funding panels can enhance expertise and bring diverse perspectives, it is essential to weigh these benefits against potential drawbacks, especially in the social sciences and humanities. Careful selection of panel members, with attention to cultural competence and contextual knowledge, is crucial to ensure fair and effective evaluations (Baccini & Re, 2025). Other questions related to the composition of panels include: must this panel always include a statistician, a computer scientist, a social scientist, a philosopher of science, and other non-specialists (Luo et al., 2021), besides specialists in the panel’s field? This is a legitimate question as many project proposals include statistics, (big) data handling, and discuss social implications.
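As a purely illustrative aid (not an agency rule set), such composition criteria can be expressed as automatic checks when a panel is assembled; the thresholds and attribute names below are assumptions chosen only to mirror the questions raised above.

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    gender: str      # e.g. "f", "m", "x"
    age: int
    specialty: str   # e.g. "statistics", "computer science", "domain expert"
    term_ends: int   # year in which the mandate ends

GENERALISTS = {"statistics", "computer science", "social science", "philosophy of science"}

def check_panel(panel: list[Member], year: int) -> list[str]:
    """Return warnings about panel composition (illustrative thresholds only)."""
    warnings = []
    share_women = sum(m.gender == "f" for m in panel) / len(panel)
    if not 0.4 <= share_women <= 0.6:
        warnings.append(f"gender balance off target: {share_women:.0%} women")
    if sum(m.age >= 65 for m in panel) / len(panel) > 0.5:
        warnings.append("panel dominated by senior or retired members")
    if not any(m.specialty in GENERALISTS for m in panel):
        warnings.append("no methodological generalist (statistician, etc.) on the panel")
    if not any(m.term_ends > year + 1 for m in panel):
        warnings.append("no member serving beyond next year: continuity at risk")
    return warnings
```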
If panel members do not agree, is there a person who is responsible for making the final decision? Is that person a member of the panel? It may happen that for a difficult decision, such as one about which there exists controversy, or one involving a huge sum of money, the final answer comes from “higher up.” In such cases, there should be a majority and a minority report so that the final decision maker(s) can make a well-founded decision. In some funding agencies, the president represents the funding organization, and their task is to confirm the panel’s decision or make the final decision whenever the panel cannot conclude. Yet in others, e.g., the National Natural Science Foundation of China (NSFC), the president and other officials do not have the right to interfere with the panel’s experts. The NSFC states that it is a democracy of scientists.
After the decision, funding agencies inform applicants and other stakeholders about the final decision as soon as possible. Some funding agencies, such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) in the USA, have also established peer review appeal systems to provide investigators and applicant organizations the opportunity to seek reconsideration of the initial review results.
Understanding stakeholders’ expectations provides valuable insight into the direction a review system should take. Severin and Chataway (2021) highlight that these expectations regarding the roles of peer review can vary widely and, at times, even conflict. In this section, we outline the key stakeholders involved in the review process and their respective expectations. We distinguish six groups of stakeholders: reviewers, applicants, funders, academia as a whole, political leaders, and the public at large.
As reviewers work mostly for free, they want to be taken seriously for the benefit of their field. They would appreciate some forms of recognition, such as “points” for continuing professional development (Turner et al., 2018).
As it usually takes several months to write a project proposal and to assemble collaborators, besides the actual intellectual input, applicants expect that their efforts are appreciated. If their proposal shows clear deficits with respect to the goals of the program for which it has been submitted, they expect reviewers to provide help in preparing a better project proposal (perhaps with another funder). Severin and Chataway (2021) note that early- and mid-career researchers especially value the feedback function of the peer review system.
Those responsible for funding expect the peer review system to perform technical assessments and serve as a decision-making tool (Severin & Chataway, 2021).
Academia as a whole expects the review system to function in line with its intended purposes, even though these purposes may evolve throughout scientific history (Csiszar, 2016). Although the review system was not originally conceived as “the lynchpin about which the whole business of science is pivoted” (Ziman, 1968), academia still relies on it to uphold the principle that science is open to correction. The system is expected to support objective judgment and foster consensus, helping the scientific community reinforce its role and credibility in society.
Political leaders expect the national science system, especially its review process, to contribute to a stronger economic and political nation. Others consider the entire world and hope that peer review favors projects that benefit global interests. When political leaders specify areas of interest, they expect reviewers to pay proper attention to them, since the government is often a primary source of research funding, along with some private companies, foundations, and charities.
The public at large expects science, through its review system, to show that it is worthy of the public’s trust, so that in the case of emergencies the review system can select the right research problems and science is ready to act and provide answers.
The expectations from different stakeholders sometimes conflict with each other, for instance, when funders especially value novelty, while this may not be the case for some reviewers or the public at large, who might prefer further elaborations of existing basic knowledge. Sometimes, expectations point in the same direction; for instance, as custodians of the scholarly record, publishers, journals (whether commercial, society-led, or independent), funders, and learned societies are increasingly expected to engage in data sharing and support the development of infrastructure that strengthens the peer review system. Furthermore, both the public and the scientific community now anticipate that these efforts align with the principles of open science (Squazzoni et al., 2020).
The funding agency must always be clear about the aims of the program(s) they are willing to fund. Submissions that do not correspond to these aims are eliminated in the triage step. Then, the following factors play major roles in funding decisions:
- 1)
The career and scientific standing of the principal investigators (PIs) and the members of their team can be taken into account via their curricula vitae (CVs), bibliometric indicators (either publication-citation, patent, or altmetrics-based) (Rousseau & Rousseau, 2021), or a combination of both.
However, some funders, such as the Investigator Program of the Howard Hughes Medical Institute, explicitly aim to fund people, not projects, to carry out basic biomedical research. This program explicitly encourages researchers to take risks and explore unknown territories. Moreover, funding comes for a long period of seven or more years. It has been pointed out that this program is very successful and that over the years, it has supported 35 Nobel Prize winners (Azoulay et al., 2011; HHMI, 2024).
- 2)
Previous project submissions and their outcomes.
Taking previously accepted submissions into account is a reasonable idea, but it is well known that past performance is no guarantee for future success (Ramos & Sarrico, 2016). Moreover, the efficiency of earlier grants should be taken into account. If high costs in the past led to a relatively small amount of research products or products with relatively small academic and/or social value, then this should be taken into account as a negative point.
- 1)
The general purpose of funding decisions is to advance knowledge in society (Jacobson, 2017). Where the merit of a proposal can be situated and what proposal can contribute to the (existing) knowledge system are major factors that influence the funding decision. Possible results can be considered likely or unlikely from what is already known in the literature, including pilot studies by submitters.
- 2)
The probability that the promise of the proposal can be fulfilled. The scientific content of the proposal is, of course, the main issue, but it should be noted that what is stated in the proposal is just a promise. Are all aspects included that make success reasonable? Or is it a project of the type “high risk, high gain”? Is this clearly stated? Does the review panel have a mandate to support this type of project?
- 3)
Currently, research proposals are generally expected to address their potential societal or broader impact (Ma et al., 2020; Oxley & Gulbrandsen, 2024). While this is relatively straightforward in applied fields such as cancer or climate research, it can be far more challenging in disciplines such as abstract mathematics. Outlining the potential dissemination of results and their broader context may offer a meaningful way to convey societal relevance. For projects involving new instruments or technologies, training colleagues or introducing educational courses on their use may not constitute a direct societal impact but surely contribute to a broader impact. Similarly, creating a video or film related to research with the help of a media professional is now more accessible and can enhance broader dissemination. Finally, especially in multi-disciplinary projects, writing a book for the scientific community or even for a general audience, explaining both the research findings and the collaborative process, can serve as another powerful form of impact.
As some of these methods of diffusion require highly skilled technicians, they should be mentioned in the project submission.
Panels must decide which of the two aspects (proposers or the project itself) should weigh the most. This dilemma is illustrated in the title of Coveney et al. (2017), namely “Are you siding with a personality or the grant proposal?”
External circumstances may influence funding and peer review. A recent example is the COVID-19 pandemic. Most leading journals, especially health or pharmacology-related journals, did everything in their power to speed up the review and publication process (Horbach, 2021; Rousseau, 2021). A similar point can be made here for funding decisions (Nature, 2021).
Another type of external influence consists of policy measures aiming to increase the gender and ethnicity balance in science (UNESCO, 2024). One may also mention age balance here, giving young scientists a chance on the one hand and not wasting years of experience on the other.
Peer review of research proposals has repercussions that extend beyond the outcome of a specific research grant, especially due to the widespread prevalence of the Matthew effect, ensuring that “the rich get richer” (Merton, 1968). Any source of unfairness, such as that discussed in this article, can significantly impact the career trajectory of early career researchers. Whether they receive a particular fund has lifelong implications. Earning tenure at a prestigious university exemplifies these lasting effects. As noted by Triggle and Triggle (2007), an incompetent review may result in the rejection of a grant application and the ultimate failure of the author’s career (see also Squazzoni and Gandelli (2012); Thorngate and Chowdhury (2014)).
The consequences of peer review in funding decisions extend to the future of humanity itself. These decisions aim to identify the most capable scientists to conduct groundbreaking research. The discoveries and knowledge generated through their work are expected to address critical challenges and improve human lives. In this sense, one may say that peer review in the context of funding plays a pivotal role in shaping humankind’s future.
Given the far-reaching impact of peer review, even minor errors can cause serious consequences. Therefore, researchers from various fields strongly want to test the process, identify issues, investigate the causes of problems, and find ways to improve peer review (Nicholas et al., 2015; Smith, 2006; Ware, 2008). In pursuit of this, ongoing efforts have sought to uncover potential procedural flaws and identify barriers to better funding decisions.
However, testing peer review is challenging because there is no gold standard for how peer review should be conducted. Assuming that peer review were perfect, there would be two groups of projects: the good ones, which receive funding, and the poorer ones, which do not. When tested later, the first group would naturally perform better. How can one prove that one group performed better because of its intrinsic quality and not because it received funding?
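A small simulation makes the confound explicit. In the sketch below (all numbers invented), funding is allocated completely at random, yet the funded group still ends up with better average output simply because funding itself boosts output; ex-post comparisons therefore cannot separate intrinsic merit from the effect of the money.

```python
import random

random.seed(1)
N = 1000
quality = [random.gauss(0, 1) for _ in range(N)]     # latent project quality
funded = set(random.sample(range(N), N // 5))        # funding assigned at random here
FUNDING_BOOST = 1.0                                   # assumed effect of money on output

output = [q + (FUNDING_BOOST if i in funded else 0.0) + random.gauss(0, 1)
          for i, q in enumerate(quality)]

mean_funded = sum(output[i] for i in funded) / len(funded)
mean_unfunded = sum(output[i] for i in range(N) if i not in funded) / (N - len(funded))
print(f"funded mean output:   {mean_funded:.2f}")
print(f"unfunded mean output: {mean_unfunded:.2f}")
# The funded group scores higher although selection ignored quality entirely.
```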
Scientists who submitted a proposal but were not funded are indeed at a disadvantage for the following main reasons:
- a)
They do not receive funding and hence are restricted in accessing instruments and/or getting the help of assistants, or at best, receive funding with a delay (from another source, or when reapplying).
- b)
In particular, when focusing on borderline cases (in two directions), funding may be the decisive factor for later success or failure.
One may say that it is difficult to measure meaningful changes before and after funding when trying to determine whether funding itself has been successful.
Moreover, testing peer review is extremely difficult because of the request for anonymity and confidentiality in the peer review for funding decisions. Without the permission of a funding agency, ordinary researchers have no way to test the peer review system for funding decisions, even though many critics claim that scientists who are most capable of advancing science are sometimes denied grants and that scientists who are doing less significant work are given grants.
Nevertheless, to answer the public criticism of the peer review system raised in Congressional hearings and elsewhere, Cole et al. (1977, 1981), working as consultants of the National Academy of Sciences’ Committee on Science and Public Policy (COSPUP), tested the peer review system of the NSF. Their study was divided into two phases. In the first phase, they relied on the peer-review ratings elicited by the NSF program directors as an indicator of quality and combined qualitative and quantitative sociological techniques. In the second phase, they appraised the quality of the ratings through an experiment in which 150 proposals submitted to the NSF were evaluated independently by new panels organized by members of the National Academy of Sciences (the COSPUP experiment).
Cole et al. (1977) reported the findings of the first phase: through 75 extended interviews with NSF staff and an analysis of 1,200 proposals drawn from ten NSF programs, they found that ratings were strongly related to actual funding decisions. On this basis, they concluded that, judging by peer-review ratings, the scientific enterprise, although highly stratified, is exceedingly equitable. However, in the second phase, the COSPUP experiment showed reversal rates of 24 and 30 percent with respect to NSF’s funding decisions, with reversals shifting proposals from the top 25 positions to the bottom 25 positions; around the cut-off point, the reversal rate reached 60 percent in one program. The results indicated that whether or not a proposal is funded depends, in a large proportion of cases, upon which reviewers happened to be selected for it. Cole et al. (1981) reported these experimental results and concluded that substantial disagreement among eligible reviewers may reverse funding decisions.
Two or more honest reviewers coming to divergent opinions is called the inter-rater problem (Hug & Ochsner, 2022). The inter-rater problem is found to exist ubiquitously in the peer review system and has detrimental effects on funding decisions, as it violates the consistency criterion for peer review ratings. If these divergent opinions correspond to certain recognizable groups, the inter-rater effect may lead to an in-group versus out-group effect. A similar problem occurs when calibrating scores by different reviewers, such as when “your 3 is my 9” (Wang & Shah, 2019). The detrimental effects of the inter-rater problem tend to make peer review unreliable.
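The following toy example (invented scores, not real review data) illustrates how the inter-rater and calibration problems interact: two reviewers whose orderings are only moderately correlated, and whose scales differ sharply, would fund noticeably different sets of proposals under a fixed cut-off.

```python
import random
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / var

random.seed(7)
merit = [random.uniform(0, 10) for _ in range(200)]          # latent proposal merit
scores_a = [m + random.gauss(0, 2) for m in merit]           # reviewer A: noisy, full scale
scores_b = [0.4 * m + random.gauss(0, 2) for m in merit]     # reviewer B: harsher scale ("your 3 is my 9") plus noise

print("correlation between raw scores:", round(pearson(scores_a, scores_b), 2))

cutoff_a = sorted(scores_a, reverse=True)[39]                # top 40 under reviewer A
cutoff_b = sorted(scores_b, reverse=True)[39]                # top 40 under reviewer B
both = sum(a >= cutoff_a and b >= cutoff_b for a, b in zip(scores_a, scores_b))
print("proposals 'funded' by both reviewers:", both, "of 40")
```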
Marsh and Ball (1981) proposed to measure the reliability and validity of a review as the correlation between two independent assessors’ ratings of the same submissions across a large number of different submissions, while Mutz et al. (2015) performed a highly technical test in the framework of funding decisions. They set out to answer the following three questions related to the reliability, validity, and fairness of the funding procedure. Is the funding procedure reliable? Specifically, they investigated the extent of agreement among reviewers on the quality of grant applications and whether the proposed research project should be funded. Second, they investigated if the procedure was valid; that is, do ratings correlate with scientific performance measures applied after funding? Finally, they wondered if the procedure was fair. This means: are there external factors that affect the decision-making process, but that have nothing to do with the quality of the grant proposal? Because researchers have, for most applications, only data ex-post, namely for those that are actually funded, they applied a novel method where non-funded proposals are considered as missing data, and, statistically, taken into account in this way. They applied their methods to real data from the FWF (Austrian Science Fund). From their experiment, they came, at least in this particular case, to some recommendations for testing peer review, such as increasing the number of ex-post reviewers from one to two, and these must be different persons than those that acted as ex-ante reviewers; final decisions should be made in different meetings, each related to one large discipline (not in one big meeting for all fields). As reliability, validity, and fairness turned out to be closely related, they recommend that in future studies, these aspects should, as far as possible, be examined together. Finally, they stated that one should make a distinction between potential bias and real bias. Potential bias refers to the ex-ante evaluation of project submissions, for example, potential bias between male and female applicants, while real bias (dealing with fairness) takes the actual performance, ex-post, into account.
In a study in cooperation with the Australian Research Council, Frijters and Torgler (2019) critically evaluated peer reviews of grant applications and the potential biases associated with applicants, assessors, and their interactions. They found that peer reviews lacked reliability. They observed a major systematic bias, and even unreliable and invalid ratings, by assessors nominated by the applicants themselves. To explore how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers’ evaluation of the same application at the U.S. National Institutes of Health (NIH), Pier et al. (2018) replicated all aspects of the NIH peer-review process. They found low levels of agreement among reviewers in evaluations of the same grant applications, not only in terms of the preliminary rating that they assigned, but also in terms of the number of strengths and weaknesses that they identified. Yang (2003) sent a set of ten proposals to three sets of reviewers after the first round of review and found that the peer-review ratings of proposals ranked in the middle changed significantly, and that the only funded proposal was evaluated as mediocre by one set of reviewers. This result indicated that the inter-rater problem also exists in the NSFC review system.
Existing research on the success of peer review panels has focused on understanding whether there is a correlation between good peer-review scores and successful research outcomes, but all in all, these studies yielded mixed results. Li and Agha (2015), for instance, studied whether NIH reviewers yielded high value-added reviews, meaning that differences in scores given to funded grant applications predict differences in subsequent research output, controlling for previous accomplishments of applicants. Their research indicated that this was indeed the case. Specifically, they found that a one-standard-deviation worse peer review score among awarded grants was associated with 15% fewer citations, 7% fewer publications, and 14% fewer patents, proving the effectiveness of the NIH peer-review system. However, some parts of their findings have been questioned by Fang et al. (2016), who, using a subset of the data used by Li and Agha, pointed out that their analysis confirmed that although peer review has some ability to discriminate between the quality of proposals, this does not extend to the critical range of percentile scores.
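A stylized mock-up of this value-added logic (not Li and Agha’s actual specification or data) regresses output on review scores while controlling for past accomplishments; a non-zero score coefficient would mean that reviews carry predictive information beyond the CV. All parameters below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                              # hypothetical funded grants
past = rng.normal(size=n)                            # applicants' previous accomplishments
merit = 0.6 * past + rng.normal(size=n)              # unobserved merit, correlated with track record
score = merit + rng.normal(scale=0.8, size=n)        # peer-review score (here: higher = better)
citations = 2.0 * merit + 1.0 * past + rng.normal(size=n)

# OLS of output on score and past accomplishments
X = np.column_stack([np.ones(n), score, past])
coef, *_ = np.linalg.lstsq(X, citations, rcond=None)
print(f"score coefficient, controlling for track record: {coef[1]:.2f}")
```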
Testing peer review for funding should be part of the evaluation of a country’s research system. However, this would naturally lead to international comparisons, which are notoriously difficult to perform as inputs and outputs are difficult to measure.
What causes this inter-rater problem? In the ideal scenario, reviewers are supposed to arrive at identical evaluations. However, it is well-known that this is rarely the case. Each scientist has his/her own prejudices, misunderstandings, and knowledge gaps. Hence, it is no surprise that, for this reason, peer review is often biased. We recall that the term bias in peer review refers to violation of the ideal of impartiality. The different forms of bias make peer review fundamentally flawed and cause inter-rater problems in the peer review process for funding decisions:
- 1)
Bias against novel research. Recently, Veugelers et al. (2025), studying European Research Council (ERC) grants and using their own measure of novelty (Wang et al., 2017), found a clear bias against novel research proposed by early-career applicants, especially those located in non-top host environments. Sometimes, reviewers do not even recognize a novel approach (Ayoubi et al., 2021).
- 2)
Reputation bias (of authors and institutes), leading to strengthening of the Matthew effect (Merton, 1968). In a famous paper, Peters and Ceci (1982) found clear evidence of bias against authors from less prestigious institutions. The readers may have noticed that this point also plays a role in the study by Veugelers et al. (2025). Considering the prestige of the PI’s affiliation as an indicator of probable research success may trigger a Pygmalion effect (Rosenthal & Jacobson, 1992), where elevated expectations are “confirmed” by seemingly objective evaluations.
- 3)
Bias against minorities (Gallo et al., 2020). Disturbing reports exist of large disparities in funding success across racial groups in the USA. This holds especially for underrepresented minority scientists (Hoppe et al., 2019).
- 4)
Gender bias (Bornmann et al., 2008; Marhoffer et al., 2024; Woolston, 2021). Several studies, such as those mentioned, have shown that male applicants for funding have statistically significantly greater odds of receiving grants than female applicants. This occurs even when the names of the submitters are blinded. This may be due to language use, as female applicants tend to use narrower and more specific terms (Kolev et al., 2020).
- 5)
Topic bias. It has been found that a topic bias (bias with respect to topic choice) also exists, often playing to the disadvantage of women or minorities (Hoppe et al., 2019). Nowadays, certain topics, e.g. disinformation, may no longer be funded by official institutes in the USA (Fasio, 2025).
- 6)
Personal beliefs and intellectual proximity bias. An experiment demonstrated that reviewers were strongly biased against versions of research that contradicted their own theoretical views (Hojat et al., 2003). Such biases can reinforce established perspectives and hinder the acceptance of novel ideas. A related point is that applicants who share intellectual similarities with experts may be either favored or viewed as rivals. In a study of peer review at the NIH, Li (2017) found that the first alternative prevails, and intellectual proximity generally benefits applicants.
We also mention the use of bibliometric indicators to evaluate principal investigators and their teams, leading to what is known as informed peer review. Although the use of bibliometric indicators introduces some (veneer of) objectivity into the process, it is well known that indicators such as journal impact factors, h-indices, and the “reputation” of publishers can be gamed and may be highly biased and not scientific at all (DORA, 2012; Johnson et al., 2012). Moreover, informed peer review may favor mainstream ideas over niche topics. We note that, in those countries where bibliometric indicators played a role in funding decisions, there is a tendency to give them a smaller role, with the Netherlands being a case in point (Kummeling et al., 2024).
These findings show that even with the best of intentions, how and whether peer review identifies high-quality science remains uncertain. Different professional perspectives lead review experts to have various review focuses and blind spots. Their opinions on a project are always influenced by their careers and are further shaped by discipline and culture.
The evaluation of the reliability, validity, and fairness of peer review in funding decisions has fallen short of expectations. Researchers have aimed to identify the underlying issues that contribute to these shortcomings. In this subsection, we will examine several challenges associated with organizing the peer review process for funding decisions. Broadly, these challenges concern the effectiveness (or ineffectiveness) of the system, its objectivity, the burden it places on the scientific community, potential obstacles to innovation, issues of reliability, lack of transparency, and, at times, even ambiguity in its goals (Bendiscioli & Garfinkel, 2021; Guthrie et al., 2018). We will not delve into detail about all these problems but rather discuss some of them.
Assuming that members of the panel are professional scientists and/or administrators of scientific institutes (not specialists working in industry), should they be paid extra, or should sitting on a panel simply be part of their job? Could they, perhaps, be “paid” with points that can be used for promotion? Or, in the case of university professors, with reduced teaching obligations?
A difficult question related to peer review is whether reviewers should be paid, especially for proposals that need a lot of expertise and time to evaluate. In mathematics – a field in which it can take weeks of study before anyone can fully understand and appreciate the contents of a proposal – some colleagues think that fair compensation would be appropriate (Fried, 2007).
A related question is whether reviewers, instead of being paid, are recognized as contributors. This question is related to the expected (or required) role of a scientist at a university, a company, a national or international research institute, or other public or private research-related enterprises, such as a think tank or quality news magazine. The second Publons report (Hayes & Hardcastle, 2019) contains the results of a large survey of scientists and funders. It sketches the framework in which peer review takes place and points out the continued relevance of the peer review process. One of its conclusions is that researchers are dissatisfied with the lack of recognition of their review work. They believe that greater recognition of reviewers would improve the process. Although cash payments may seem an attractive driver of reviewer participation, the report found that cash payments are not a significant means of reviewer motivation.
Hochberg et al. (2009) describe the so-called tragedy of the reviewer commons. This expression refers to the possible misuse of colleagues as reviewers. If everyone were to misuse the time and efforts of colleagues (in the role of reviewers), no one would want to review anymore. This is especially true when reviewers do not receive any credit or monetary rewards for their work. Problems in this respect include authors targeting inappropriate journals or funding organizations, reviewer fatigue, individuals not bearing their share of the overall refereeing responsibility (resulting in overburdened colleagues who do take responsibility), and rejected authors not taking reviewers’ comments into account when resubmitting their manuscript or project.
Before discussing the lack of transparency, we first state what is meant by the expression “transparency in peer review for funding.” According to Meadmore et al. (2020):
Transparency in peer review for research funding refers to the level of openness that the peer review processes provide for recommending research to be funded, who makes the decisions, how and when the decisions are made and the extent to which this is made clear to all involved.
This definition includes several aspects, mostly related to the openness of the different steps involved in the review procedure. Such openness would open up the possibility for outsiders to study the quality of peer review, starting with simple data collection about the time taken to review and the time taken to come to a consensus decision.
We note that the topics of transparency and openness, and their relationship, crop up on different occasions in this article.
Interdisciplinary projects often face specific problems. Besides a lack of institutional appreciation, there is a general bias in the evaluation, as evaluation standards are often discipline-based (De Rijcke et al., 2016; Mallard et al., 2009). Bromham et al. (2016) show that funding often has a bias against interdisciplinarity. If review panels consist of disciplinary experts, this result is no surprise. Indeed, as far as we know, there does not exist a worldwide database in which suitable interdisciplinary experts can be found for assessment purposes.
What happens if the proposals are multi-disciplinary? In Hayes and Hardcastle (2019), an anonymous survey respondent wrote:
Nobody ever wants to pitch in to review interdisciplinary grant proposals because everyone thinks it’s outside of their area. That suffocates all attempts to think outside the box.
During the period 2019–2024, based on the “attributes” of scientific problems, the NSFC experimented with allocating its funding to four categories of scientific problems, of which interdisciplinary research was one. However, as both the number of proposals and the number of funded projects in the interdisciplinary category were significantly lower than those in the other categories (Yang et al., 2024), the NSFC canceled this category in 2024.
Although solving global problems such as climate change and saving biodiversity needs input from all over the globe, international collaborations are, by their nature, more difficult to realize than national collaborations. Moreover, it is a fact that many funding agencies are national, measuring their “results” on the influence on local economies and their own population.
It has been stated in several publications that funders need to experiment with versions of peer review and decision-making (Bendiscioli, 2019; Bendiscioli & Garfinkel, 2021; Bendiscioli et al., 2021). Therefore, in this section, we consider ideas that may lead to a better peer review funding system.
Responsible reviewers who contribute constructive and credible comments should be able to claim credit, which will automatically improve the review process (Hayes & Hardcastle, 2019; Moussian, 2016). A credibility-cherishing reviewer will evaluate applications responsibly and make tangible contributions to the proposed research work. The reviewer’s contributions will then be recorded, further enhancing his or her credibility. As a result, credibility becomes the driving force, motivating reviewers to make meaningful contributions. The recorded contributions help reviewers build a reputation. In the next step, funding agencies will prioritize highly reputed reviewers (Zhou & Zhao, 2019). Wilson and Lancaster (2006) proposed a referee factor, an indicator based on the number of reviews performed as part of a standard performance assessment.
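In the spirit of such a referee factor, a minimal sketch of a reviewer credit ledger might look as follows; the weighting scheme is a hypothetical illustration, not a proposal by the cited authors.

```python
from collections import defaultdict

class ReviewerLedger:
    """Toy record of review contributions ('referee factor' style)."""

    def __init__(self):
        self.reviews = defaultdict(list)   # reviewer -> list of (proposal_id, weight)

    def record(self, reviewer: str, proposal_id: str, weight: float = 1.0):
        # weight could reflect report length, timeliness, or panel feedback (assumed)
        self.reviews[reviewer].append((proposal_id, weight))

    def referee_factor(self, reviewer: str) -> float:
        return sum(w for _, w in self.reviews[reviewer])

ledger = ReviewerLedger()
ledger.record("reviewer_1", "P-001")
ledger.record("reviewer_1", "P-007", weight=1.5)   # e.g. an unusually demanding proposal
print(ledger.referee_factor("reviewer_1"))          # 2.5
```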
It happens that reviewers offer condescending or offensive comments in an anonymous review system (Clements, 2020). Many scholars are inclined to support openness in peer review, believing that open and transparent reviews lead to more constructive reports and more reasonable criticism. Reviewer reports should be made open, possibly consisting of a majority and a minority report, and reviews should be performed within a set of rigorous rules, set in advance and known to all involved in the process. For instance, if collaboration (between different fields, different universities, and different countries) is a goal of funding, then this must be made clear from the start. Moreover, the names of those peers who make the final decision should be made public, though not necessarily in advance. This is done, for example, by the NSFC.
Open review can create challenges, either straining relationships between colleagues through overly sharp comments or weakening the scientific enterprise through overly polite ones. Several measures have been proposed to address this tension.
Funding agencies and researchers often call for more studies on the structure and organization of peer review in order to ensure that funding decisions are made more scientifically (De Vrieze, 2017). However, such research requires access to data, which is typically kept confidential, making systematic investigation nearly impossible. Squazzoni et al. (2020) argued that, as custodians of the scholarly record, publishers, independent journals, funders, and learned societies have a responsibility to engage in data-sharing and to support the development of infrastructures that strengthen the peer review system. They propose the establishment of a broad data-sharing framework to enable systematic research in this area.
To foster institutional collaboration in sharing data on diverse review formats, a group of scholars, publishing professionals, and funders launched a European Union–funded project called PEERE, supported by COST (the European Cooperation in Science and Technology). In 2017, PEERE released a protocol for sharing peer review data that carefully addresses ethical considerations, responsible data management, protection, and privacy (Squazzoni et al., 2020).
In addition, the Transparency and Openness Promotion (TOP) Guidelines highlight the importance of institutional commitments to facilitate meta-research on the effectiveness and integrity of peer review practices (Lee & Moher, 2017).
In addition to playing a summative role, peer review must also play a formative role. It has become unthinkable that the answer to a submitted proposal is simple yes/funded or no/not funded, even when wrapped in a polite letter. The peer review process must be fair, and the act of submitting a proposal should be a learning opportunity for all involved.
An early career researcher (Parrilla-Gutierrez, 2021) complained about the lack of feedback on his applications. Rejections of articles submitted to a journal or of projects submitted as a grant proposal are part of a scientist’s life. Yet, given the large amount of time invested in a grant proposal, and the many favors one has to ask of other academics and sometimes even of commercial suppliers – to write letters of recommendation or to reduce prices so that the project fits within an appropriate budget – it is, as this colleague wrote, rather disrespectful if a letter of rejection does not contain review reports with helpful suggestions for improvement.
Although articles have been written with titles such as “Secrets to writing a winning grant” (Sohn, 2020), that kind of information is nonexistent in our eyes. However, using feedback from colleagues, family and friends (representing the world outside academia), and funding agencies to improve the proposals is surely a way to increase the probability of winning a grant.
Interactive peer review is a valuable form of feedback. Even when submitters and reviewers remain anonymous to each other, it is beneficial to allow submitters to clarify specific points in response to reviewer questions. These clarifications do not constitute changes to the proposal but are added before the panel’s deliberations. If time permits, panel members should also have the opportunity to request clarification.
In the old funding review system, PIs who performed interdisciplinary investigations had to decide for themselves if they considered their proposals as belonging to one field, maybe with essential contributions from one or more other fields, and hence present their proposals to the panel for the main field. As interdisciplinary work becomes more and more important in solving scientific problems, funding agencies are adopting different measures to promote it. For a particular situation, a multi-disciplinary panel may be established. Some funding agencies, such as the NSFC, have even established a new branch of interdisciplinary sciences to evaluate such proposals. This branch is divided into sub-branches for organizing the review of proposals that integrate some specific disciplines (https://www.nsfc.gov.cn/publish/portal0/tab1333/).
Many funding agencies have a branch that provides the necessary support for international cooperation and exchange programs. However, as these branches only deal with programs related to their own funding agencies, an effective global cooperation funding mechanism is needed. The establishment of the Global Research Council (GRC; www.globalresearchcouncil.org) in May 2012, bringing together leaders of key science funding organizations, was a useful step. This group convenes annually to pursue its long-term objective of fostering multilateral research and collaboration across continents to benefit both developing and developed nations.
Rousseau et al. (2017) proposed five methods to construct a panel for research evaluations. In all cases, a cognitive distance is calculated between the evaluatees and evaluators (the panel). The authors expressed a preference for the similarity-adapted approach. For a detailed description of this method, we refer the reader to Rousseau et al. (2017). In another study, Price and Flach (2017) discussed how computer-aided methods can lead to an optimal panel. These methods are also of interest for the construction of funding panels. Feliciani et al. (2022) studied the design of grant peer review panels to increase the correctness of the choices made by such panels using an empirically calibrated simulation model.
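To convey the general idea of distance-based panel construction (this is our simplified sketch, not the similarity-adapted method of Rousseau et al. (2017) itself), one can represent evaluatees and candidate panelists as topic vectors and pick the candidate subset with the smallest average cognitive distance; all profiles below are invented.

```python
from itertools import combinations
from math import sqrt

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1 - dot / norms

# Hypothetical topic profiles (e.g. shares of output per research topic).
evaluatees = {"groupA": [0.7, 0.2, 0.1], "groupB": [0.1, 0.6, 0.3]}
candidates = {"c1": [0.6, 0.3, 0.1], "c2": [0.1, 0.1, 0.8],
              "c3": [0.2, 0.5, 0.3], "c4": [0.4, 0.4, 0.2]}

def mean_distance(panel):
    pairs = [(candidates[c], e) for c in panel for e in evaluatees.values()]
    return sum(cosine_distance(c, e) for c, e in pairs) / len(pairs)

# Choose the two-member panel cognitively closest to the evaluatees.
best_panel = min(combinations(candidates, 2), key=mean_distance)
print(best_panel, round(mean_distance(best_panel), 3))
```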
Caught between the inherent issues identified so far and ongoing efforts to improve the review process, funding agencies and the scientific community are rethinking how to design a more effective funding system. Should the review procedure be grounded more firmly in scientific evidence or allow for greater randomness? Should it be more interactive or maintain reviewer independence? Various new initiatives are emerging, and some are already being adopted by funding bodies.
Most of today’s peer review systems use a set list of review items to keep evaluations clear and consistent. Organizers then require reviewers to follow the correct steps mentioned in this list. Moreover, they ensure that panels include experts who, together, can judge all parts of a project.
However, these measures are not enough. Can the listed items fully reflect the attributes of the scientific problems? Is the procedure used fair? Do panels come to the correct judgment? What evidence can be provided to proposers to justify funding decisions?
Funding decisions should follow strict scientific criteria based on the attributes of the problem. Strauss et al. (2021) provided a box listing the points that reviewers of institutional review boards (IRBs) and research ethics committees (RECs) must consider to promote diversity in subject selection in biomedical research. The authors explain how IRBs use these points in the case of understudied and underserved groups. If there are underrepresented groups, policies should be established, and the necessary resources should be provided, to ensure that reviewing IRBs can meet this obligation. An IRB has the authority to require that a research protocol include study elements relevant to diversity considerations. If the research protocol deviates significantly from the diversity requirements in subject selection, the IRB can require modifications based on the nature, phase, aims, and location of the investigations. Diversity in subject selection is pivotal to understanding how biological variability and social determinants of health contribute to disease prevalence, transmission, course, experience of illness, and treatment outcome. Therefore, diversity is one of the most important attributes of these types of scientific problems. IRBs list the points that substantiate diversity in all aspects and throughout the entire process of the pertinent investigations.
Non-consensus is said to be one of the core attributes of original research (Chen, 2021; Li et al., 2024). Consequently, the NSFC formulated a pilot implementation scheme to identify original projects characterized by non-consensus (Zhao et al., 2025). This scheme shows the NSFC’s ambitions to organize a review procedure based on scientific evidence.
While funding agencies usually have a general idea of the scientific problems they aim to address, the specific attributes of these problems, particularly the key features that determine how they might be solved, often remain unclear. Identifying these attributes throughout the entire process is a complex task. Funders have called for more research into the content and structure of the funding peer review system, and to ground funding decisions on solid evidence (De Vrieze, 2017). However, because funding agencies have not yet fully grasped the essential characteristics of scientific problems (hence, the call for proposals), it remains impossible to design a system that can consistently base funding decisions on robust evidence.
Since funding decisions are intended to support future scientific excellence, and predicting the future is inherently uncertain, no system can guarantee perfect project selection. Acknowledging this uncertainty may justify the use of a (partial) lottery as a potential solution (see Subsection 8.2.1).
A (partial) random selection procedure is already applied by some agencies, such as the Swiss National Science Foundation (Chawla, 2021). Indeed, when experts consider two or more proposals to be of equal value, resorting to a lottery ensures that no bias creeps in, as it is precisely at this point that bias might occur. It often happens that between proposals that definitely should be funded and those that certainly should not, there is a grey zone (Bendiscioli & Garfinkel, 2021). Especially for proposals in this zone, randomization might be a good solution. Hence, combining a lottery system with some aspects of peer review, leading to a modified lottery system, may well be part of a solution to the problems related to the contemporary peer review system for funding allocation (De Peuter & Conix, 2022). Bendiscioli and Garfinkel (2021) noted that this brings transparency to the selection process of otherwise equally deserving proposals. Solving the “grey zone dilemma” by a lottery is an acknowledgment of the limits of precision achievable by classical peer review (Barlösius et al., 2023). However, a lottery treats projects as independent units, while in reality scientific projects are parts of an interconnected system of conceptual, experimental, and technical practices (Bedessem, 2020). For this reason, Bedessem (2020) criticizes the lottery system.
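A minimal sketch of such a modified (partial) lottery, assuming an illustrative 0–10 scoring scale with thresholds that are ours rather than any agency’s: clear winners are funded outright, clear losers are rejected, and the grey zone is decided by lot within the remaining budget.

```python
import random

def partial_lottery(scores, budget, fund_above=8.0, reject_below=5.0, seed=None):
    """Fund clear winners, reject clear losers, draw lots in the grey zone.
    Thresholds and scale are illustrative assumptions."""
    rng = random.Random(seed)
    winners = [p for p, s in scores.items() if s >= fund_above]
    grey = [p for p, s in scores.items() if reject_below <= s < fund_above]
    rng.shuffle(grey)
    funded = winners + grey[:max(0, budget - len(winners))]
    return funded[:budget]

scores = {"P1": 9.1, "P2": 7.4, "P3": 7.2, "P4": 6.9, "P5": 4.0}
print(partial_lottery(scores, budget=3, seed=42))   # P1 plus two proposals drawn from the grey zone
```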
Currently, it is a common problem for funding agencies to spend more time reviewing grants while success rates are dropping to demoralizing lows. This contributes to an inefficient science system in which a lot of energy is wasted. In response, a controversial and innovative suggestion was made by Bollen et al. (2014). These colleagues proposed a crowd-based, decentralized funding model, using the wisdom of the entire scientific community. In their proposal, each scientist receives a fixed and equal amount of money from a national funding agency. However, each scientist is required to pass on a fixed amount (say 50%) of their previous year’s money to other scientists whom they think would make the best use of the money. The details of this procedure can be found in the original article. If this proposal is accepted, it would drastically reduce the costs of funding agencies. Note that here, the term “peers” refers to the whole community of scientists. However, this procedure could easily become a popularity contest, and as such, could favor more visible fields, scientists, or teams with well-known successes in the past. Ideally, one should vote for projects, not for people.
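A toy round of such a crowd-based redistribution (a simplified reading of the idea, with invented numbers and names) can be sketched as follows: everyone receives the same base amount and passes on half of last year’s funds to the colleagues they endorse.

```python
def redistribute(funds, endorsements, base=100_000, share=0.5):
    """One yearly round of a crowd-based funding model (simplified sketch)."""
    new_funds = {s: base for s in funds}                 # fixed equal amount for everyone
    for scientist, last_year in funds.items():
        recipients = endorsements.get(scientist, [])
        if recipients:
            gift = share * last_year / len(recipients)   # pass on a fixed share, split evenly
            for r in recipients:
                new_funds[r] += gift
    return new_funds

funds = {"alice": 100_000, "bob": 100_000, "carol": 100_000}
endorsements = {"alice": ["carol"], "bob": ["carol"], "carol": ["alice", "bob"]}
print(redistribute(funds, endorsements))
# Carol, endorsed by two colleagues, ends the round with more than the others.
```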
The crowd-based funding model is an example of a decentralized funding model. Discussing decentralized models, Bedessem (2020) poses the question, “Who should have the right to vote?” Should it be a balanced group of scientists, direct stakeholders, citizens, or their chosen representatives? The chosen answer reflects a perspective on participation in science and is an important political and ethical issue.
Finding suitable reviewers is often a difficult and time-consuming task; recall that many potential reviewers decline (Gallo et al., 2020). A radical solution, proposed by Merrifield and Saari (2009) and discussed by Guthrie (2019), is to require submitters to review several other proposals and to deliver these reviews on time as a condition for their own application to be considered. Because the review tasks are distributed among the applicants, this mechanism is called distributed peer review (DPR).
Distributed peer review has obvious advantages for funders. It reduces the burden of identifying reviewers, and timely delivery of reviews is guaranteed. Moreover, more reviewers than usual are available, so extreme opinions carry less weight. In this way, DPR is better suited to selecting proposals that can reach a consensus across the community. The main disadvantage is that submitters must perform more work than they would otherwise.
Because applicants and reviewers act within the same review system in DPR, reciprocal reviews are unavoidable. Since reciprocal reviews can be used to exchange benefits and are therefore strictly forbidden in a typical review system, mechanisms are needed to ensure that no gaming takes place, a point discussed in Guthrie (2019) and Pearson (2025). Moreover, the fact that researchers interact both overtly and covertly must be taken into account. Pearson (2025) and Brainard (2025) report that distributed peer review is now being tested by the Volkswagen Foundation and UK Research and Innovation (UKRI), and that the gains in speed and efficiency appear substantial.
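One simple constraint that rules out direct reciprocal pairs is sketched below: applicants are placed in a fixed circular order and each reviews only the next few proposals. This is a hypothetical illustration of the kind of assignment constraint such mechanisms rely on; real DPR pilots add randomization, anonymity, and conflict-of-interest checks on top of it.

```python
def assign_distributed_reviews(applicants, reviews_per_proposal):
    """Illustrative assignment scheme for distributed peer review.

    Applicants are placed in a fixed circular order and each reviews the next
    `reviews_per_proposal` proposals. No one reviews their own proposal, every
    proposal receives the same number of reviews, and direct reciprocal pairs
    (A reviews B while B reviews A) cannot occur as long as
    2 * reviews_per_proposal < number of applicants.
    """
    n = len(applicants)
    if 2 * reviews_per_proposal >= n:
        raise ValueError("too few applicants to rule out reciprocal reviews")
    return {
        applicants[i]: [applicants[(i + offset) % n]
                        for offset in range(1, reviews_per_proposal + 1)]
        for i in range(n)
    }

# Example: five applicants, each reviewing two other proposals
print(assign_distributed_reviews(["A", "B", "C", "D", "E"], reviews_per_proposal=2))
```

A fixed circular order is of course predictable and therefore gameable; shuffling the order secretly before assignment keeps the no-reciprocity guarantee while making collusion harder.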
Wang et al. (2011) proposed a so-called heuristic review mechanism. This mechanism is not meant to deal with all types of funding; it is specifically designed for pioneering research, that is, non-conventional, highly exploratory, possibly groundbreaking, and relatively high-risk basic research with the potential to be transformative. The proposed method responds to the claim that peer review tends to be conservative, in China but also in the USA (Packalen & Bhattacharya, 2020). Topics are determined by the NSFC. A leader and reviewers are selected for a given topic; among them are multidisciplinary scientists and experts with comprehensive scientific perspectives. The leader must help to create an open-minded communication atmosphere. The key step is the organization of a forum in which candidates present their innovative ideas, even if they do not yet have data to support them. The review panel can ask questions directly, eliciting the information it deems necessary for selection. The panel discussion includes brainstorming, in-depth discussion, and debate. After these interactions, the reviewers and the leader deliberate, vote, and jointly identify outstanding candidates. Finally, the selected candidates are invited to write and submit a full research plan, which is almost certain to be approved by the NSFC.
A somewhat similar idea has been implemented in Norway, informally named a “research sandpit” and more formally Idélab (idea lab). This model has indeed proven useful for generating multidisciplinary research as part of a multifaceted approach to funding scientific research (Maxwell & Benneworth, 2018).
The National Natural Science Foundation of China (NSFC) is undertaking a profound reform of its review system for funding applications, aiming at a more efficient and fairer evaluation system. In this framework, three major tasks must be accomplished: identifying funding categories, improving evaluation mechanisms, and optimizing the layout of research areas for the funding system (Tang et al., 2021). To fulfill its reform aims, the NSFC is trying to optimize its review procedure so as to make interactions between the funder, the applicants, and the reviewers possible. In this new review procedure, applicants can evaluate whether the reviewers’ scientific comments are helpful and state the reasons why they are or are not. For the moment, these evaluations do not influence the funding decision, but they help the funders assess the reviewers’ responsibility and credibility. Based on these evaluations, funders expect to be able to select responsible and credible reviewers.
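As a minimal sketch of how such applicant feedback might be aggregated into a reviewer credibility signal: the record format and the scoring rule (share of comments rated helpful) below are hypothetical choices for illustration, not NSFC policy.

```python
from collections import defaultdict

def reviewer_credibility(feedback):
    """Illustrative aggregation of applicant feedback on reviewer comments.

    `feedback` is a list of (reviewer_id, was_helpful, reason) records, as a
    procedure like the one described above might collect; scoring a reviewer
    by the share of comments rated helpful is a hypothetical rule.
    """
    helpful = defaultdict(int)
    total = defaultdict(int)
    for reviewer_id, was_helpful, _reason in feedback:
        total[reviewer_id] += 1
        helpful[reviewer_id] += int(was_helpful)
    return {reviewer_id: helpful[reviewer_id] / total[reviewer_id] for reviewer_id in total}

# Example: shortlist reviewers whose comments applicants mostly found helpful
scores = reviewer_credibility([
    ("R1", True, "clarified the methodological weakness"),
    ("R1", False, "generic remark, no actionable advice"),
    ("R2", True, "pointed out a missing control experiment"),
])
shortlist = [reviewer for reviewer, score in scores.items() if score >= 0.5]
```

Keeping the free-text reasons alongside the ratings matters: they allow funders to judge whether an “unhelpful” rating reflects a poor review or simply an unwelcome verdict.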
Liu and Rousseau (2023) proposed an approach in which a reviewer of a proposal may be allowed to join the submitting team. This possibility acts as an incentive from which all parties (submitters, reviewers, and funders) and science itself may benefit: the interaction between reviewer and proposers can become so close that, for the benefit of science, the reviewer joins the original team.
Peer review for funding decisions must navigate the choice between independent and interactive review, as well as between loosely and strictly structured procedures. Loosely organized approaches such as lotteries are sometimes used because traditional peer review has limited precision. Achieving greater precision requires a thorough understanding of the characteristics and solvability of scientific problems; however, these characteristics are often unclear, which makes it difficult to use them to guide the design of the review process.
A lottery is said not to be a good choice because projects are parts of an interconnected system and not independent units (Bedessem, 2020). However, current policy requires reviewers to judge the value of a project independently (State Council of the People’s Republic of China, 2024). Because of existing biases and the inter-rater problem, it is impossible to obtain a consensus through independent reviews alone; only through interaction or discussion can consensus be achieved. This is the major reason that almost all funding agencies adopt a funding scheme combining remote and panel reviews: in the remote stage, independent review is required; in the panel stage, interaction is required. Thus, both independent and interactive reviews take place within one review procedure.
Generally, funding agencies prefer independent reviews for typical programs but favor interactive reviews for original or groundbreaking projects. Zhao et al. (2023) describe how the NSFC organizes the peer review process for its original exploration program, which involves two stages: a pre-application and a formal application. In the pre-application stage, proposers must explain their original ideas in fewer than 2,000 Chinese characters. The NSFC arranges expert reviews to assess the originality of these ideas; during this phase, proposers need to convince two referees of the originality of their concepts. Once approved, they submit a formal proposal that goes through the standard review process. Li et al. (2024) explain how the peer review policy for the Original Exploratory Program (OEP) in China has evolved since 1987. The proper handling of appeals is a key part of this process; appeals are handled through interaction among NSFC officials, reviewers, and proposers. Some appeals are identified as non-consensus yet involve innovative projects that do get funded. In this view, the core feature of original basic research is non-consensus, so projects of higher originality should be sought among the non-consensus proposals (Li et al., 2024). In this regard, the NSFC is launching, on 7 July 2025, an experiment to select key non-consensus projects through a special procedure with a large tolerance for failure, combining referee recommendation with the NSFC’s identification of cases showing large disagreement between proposers and reviewers. This type of funding will be allocated step by step according to the needs of the investigations (Zhao et al., 2025).
On the one hand, the Regulations of the National Natural Science Foundation of China, amended on 8 November 2024, require that reviewers judge the value of projects independently and that proposers cannot appeal against the reviewers’ judgment; on the other hand, the NSFC wants to identify key projects through non-consensus between proposers and reviewers. Where will this contradiction lead peer review for funding decisions? Only time and the outcomes of these experiments will tell.
Undoubtedly, expectations of peer review, and in particular of peer review for funding decisions, have increased over time. For peer review to function well within the science system, many questions need to be answered: how should institutions, universities, government agencies, and funding organizations manage the peer review process; how should the quality and utility of individual reviews be defined; how should peer review outcomes (such as conflicting opinions) be assessed; and how should the data generated during the process be used? Solutions must be sought for a funding system that is fair to scientists, does not overburden reviewers, and leads to progress in science and society; this requires the wisdom of the entire scientific and technological community.
Peer review in a funding context is a widespread mechanism for selecting innovative funding proposals. However, peer review itself suffers from biases, leading to the inter-rater problem, which in turn results in a lack of reliability, validity, and fairness in funding decisions. There is limited evidence of peer review’s capacity to guarantee accurate and high-quality research, and actual funding decisions based on peer review can hardly be called perfect. The peer review process in a funding context must therefore itself become a topic of research and, consequently, of improvement.
On the one hand, more colleagues refuse to serve as reviewers, and those who do participate find the process increasingly time-consuming; on the other hand, success rates for funding are decreasing to discouragingly low levels. As a result, the traditional peer review model for funding decisions faces growing criticism. Should a solution be sought by reducing or eliminating the role of peer review, or by strengthening its role in the science system? Several solutions have been proposed. Some weaken the role of peer review, such as the lottery system and the crowd-based funding model, whereas others aim to strengthen it by providing more scientific evidence for funding decisions, by following a strict set of steps during the review process, and by establishing a responsible project funding system. Consequently, the peer review system for funding must become more open, transparent, interactive, and formative. New initiatives have been proposed for funding agencies. Peer review for funding decisions is pioneering the way forward amidst the debate between independent and interactive review, and between loosely and strictly structured procedures.