Strategy for NIH Funding
Strategy for Funding Decisions · How NIAID Makes Funding Decisions
On this page, we tell you how your overall impact/priority score for your R01 application is converted into a percentile, how NIH calculates percentiles, and why it uses them.
Our grant paylines—based either on overall impact/priority score or percentile—are conservative cutoff points for funding applications.
It is important to note that we set paylines conservatively and fund more applications at the end of the fiscal year when we have a clearer budget picture. Paylines also vary considerably among NIH institutes.
(This page has factual information only.)
For unsolicited R01s reviewed by the Center for Scientific Review (CSR), NIH converts your overall impact/priority score into a percentile.
A percentile ranks your application relative to the other applications reviewed by your study section at its last three meetings.
The percentile rank is roughly the percentage of applications that received a better overall impact/priority score than yours among those reviewed by the same study section during the past year.
Why does NIH use percentiles?
Percentiling counters differences in scoring behavior among study sections, such as clustering scores at the better end of the range, by ranking applications relative to others scored by the same study section.
Altogether, percentiling makes review fairer to applicants.
Percentiles are determined by matching an application's overall impact/priority score against a table of "relative rankings."
Here is how NIH arrives at percentiles.
Step 1–Following the discussion led by the primary reviewer, all reviewers rate the overall impact of an application, assigning a whole number from 1 to 9.
Step 2–These scores are averaged, rounded mathematically to one decimal place, and multiplied by 10 to create the overall impact/priority score. For example, an average of 1.34 rounds to 1.3, yielding an overall impact/priority score of 13.
Step 3–Percentiles are determined by matching an application's overall impact/priority score against a table of relative rankings containing all scores of applications assigned to a study section during the three last review cycles.
Step 4–NIH calculates percentiles using the formula Percentile = 100 × (Rank − 0.5) / N, where Rank is the application's position when all scores in the table are ordered from best to worst and N is the total number of scores. (Subtracting 0.5 is a standard mathematical adjustment used for rounding.)
These numbers are then rounded up to whole numbers ranging from 1 to 99; for example, a percentile of 10.1 becomes 11. NIH includes applications that are not discussed in the percentile calculation.
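The four steps above can be sketched in code. This is a simplified illustration, not NIH's implementation: the score table is invented, tied scores take the rank of their first occurrence, and the formula follows the 100 × (Rank − 0.5) / N form described in Step 4.

```python
import math

def impact_score(reviewer_ratings):
    """Steps 1-2: average the reviewers' 1-9 ratings, round to one
    decimal place, and multiply by 10."""
    avg = round(sum(reviewer_ratings) / len(reviewer_ratings), 1)
    return round(avg * 10)

def percentile(score, all_scores):
    """Steps 3-4: rank the score against the full three-cycle table,
    apply 100 * (rank - 0.5) / N, and round up to a whole number.
    Tied scores take the rank of their first occurrence here; NIH's
    handling of ties may differ."""
    ranked = sorted(all_scores)                # lower score = better
    rank = ranked.index(score) + 1
    return math.ceil(100 * (rank - 0.5) / len(ranked))

# Hypothetical three-cycle score table for one study section
table = [13, 15, 15, 20, 22, 25, 28, 30, 31, 35,
         38, 40, 42, 45, 48, 50, 52, 55, 60, 70]

# impact_score([1, 1, 2, 1, 2, 1]) -> 13 (average 1.33 rounds to 1.3)
# percentile(15, table) -> 8 (rank 2 of 20: ceil(100 * 1.5 / 20))
```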
Below we show hypothetical data to illustrate the two factors that can skew results and even create mathematical imprecision: score distribution and the moving three-cycle base.
We simulated scores and percentiles over three peer review cycles to illustrate how percentiles relate to scores and how clustering may work.
These data depict three scenarios representing the three study section meetings NIH normally uses to calculate percentiles.
Table 1 shows the effect of score distribution. In Cycle 1, only 2 grants got an overall impact/priority score of 15 or lower, which became a percentile of 3.
Contrast that result with the numbers in Cycles 2 and 3. The study section clustered scoring towards the lower end (remember that a lower number is a better score): 7 and 15 applications scored 15 or lower, resulting in percentiles of 8 and 13, respectively.
When overall impact/priority scores are converted to percentiles, percentiling can spread out scores across a broader range. In Cycle 3, scores of 15 and 20 became percentiles of 13 and 24.
Table 2 further shows the effect of score distribution and illustrates the impact of the moving three-review-cycle window on percentiles.
In Cycle 1, 10 applications scored 15 or below, and a score of 15 ranked at the 17th percentile. Compare that figure to Cycle 3, in which 15 applications, half again as many, had a score of 15 or better.
You would expect the larger number of applications to result in a significantly higher (worse) percentile, as seen in Table 1. Yet in Cycle 3, a score of 15 ranked at the 18th percentile, only one point higher than in Cycle 1, because it was calculated using a different cohort of applications.
Tables 1 and 2 also highlight the different scoring behaviors of study sections. Compared to the study section in Table 1, the study section in Table 2 consistently judged more applications in the top range, resulting in very different percentiles.
Percentiling spreads out scores across all possible rankings. But the more scores are bunched together, the more percentiling exaggerates their differences. Although this illustration shows scores in five-step intervals, in reality, there could be scores at each integer.
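The clustering effect can be seen in a small comparison. This is a sketch using the 100 × (rank − 0.5) / N form of the percentile formula; both score lists are invented, not from real study sections:

```python
import math

def percentile(score, base):
    """100 * (rank - 0.5) / N, rounded up; lower scores are better.
    Ties take the rank of their first occurrence (a simplification)."""
    ranked = sorted(base)
    rank = ranked.index(score) + 1
    return math.ceil(100 * (rank - 0.5) / len(ranked))

# Evenly spread scores: adjacent scores get nearby percentiles
spread = list(range(10, 91, 5))            # 10, 15, 20, ..., 90

# Clustered scores: many applications bunched at the better end
clustered = [10] * 3 + [15] * 12 + [20] * 11 + [25] * 8 + list(range(30, 91, 5))

# percentile(15, spread) -> 9,  percentile(20, spread) -> 15  (6 points apart)
# percentile(15, clustered) -> 8, percentile(20, clustered) -> 33 (25 points apart)
```

The tighter the bunching, the further percentiling pushes apart two scores that differ by a single scoring step.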
The second factor that skews results is the entry of a new cohort of applications and removal of an old one for each review cycle.
One-third of the base used to calculate percentiles turns over at each study section meeting, while scores from the two earlier meetings remain fixed in the base, contributing to a lack of mathematical precision.
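That turnover can be simulated. In this sketch (hypothetical scores, again using the 100 × (rank − 0.5) / N percentile form), the same score of 15 receives a different percentile once the oldest cycle drops out of the base:

```python
import math
from collections import deque

def percentile(score, base):
    ranked = sorted(base)                   # lower score = better
    rank = ranked.index(score) + 1
    return math.ceil(100 * (rank - 0.5) / len(ranked))

cycles = deque(maxlen=3)                    # only the three most recent meetings count
meetings = [
    [13, 15, 20, 25, 30, 40, 55],           # cycle 1
    [15, 15, 20, 22, 35, 50, 60],           # cycle 2
    [14, 15, 18, 20, 28, 45, 70],           # cycle 3
    [10, 12, 15, 24, 33, 48, 65],           # cycle 4: cycle 1 drops out
]

results = {}
for i, meeting in enumerate(meetings, 1):
    cycles.append(meeting)
    if i >= 3:                              # a full three-cycle base exists
        base = [s for cycle in cycles for s in cycle]
        results[i] = percentile(15, base)

# results -> {3: 12, 4: 17}: the same score of 15 worsens by five
# percentile points solely because the base changed
```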
For more details on scoring, read How Reviewers Score Applications in Part 5.
NIAID sets paylines conservatively to make sure we will have enough funds to pay grants throughout the year.
Each fiscal year, we set our paylines, funding cutoff points that we use to fund unsolicited applications. You can find them at NIAID Paylines.
We set paylines for every major type of grant (activity codes that comprise the largest pools of applications) by balancing the amount of funds we have to spend with an estimated number of applications we expect study sections to recommend for funding.
Note that for activity codes that result in fewer applications, the Institute prefers the flexibility of not setting a payline. Also, applications for some activity codes are only in response to requests for applications (RFAs) or program announcements with set-aside funds (PASs). Even without a payline, we usually fund applications in overall impact/priority score order.
Recognizing the diversity of our large grant portfolio, we use paylines as the fairest way to make funding decisions. A numerical value lets us cut across disciplines and fund the best science as determined by initial peer review.
At NIAID, we establish our paylines using an NIH formula and historical data.
A conservative payline also lets us meet out-year payments for existing grants as well as any new congressional mandates, for example, for biodefense or AIDS.
At year's end when we have a clearer budget picture, we award more grants that scored beyond the payline.
Paylines vary among NIH institutes, so a percentile or overall impact/priority score that is not fundable in one institute may be fundable in another. Find NIAID's budget information on Paylines and Funding.
Keep in mind that at the start of the fiscal year we usually use interim paylines. For more information on how evolving paylines may affect you, see NIAID Paylines.
Typically higher than paylines, success rates are a better indicator than paylines of the percentage of applications we are funding for each activity code, especially for R01s.
The reason: many applications are funded beyond the R01 payline, for example, through NIAID's requests for applications, selective pay, and R56-Bridge awards.
A success rate is roughly the number of applications funded by an institute divided by the number of peer-reviewed applications referred to it, excluding resubmissions that occur in the same fiscal year so that each application is counted only once.
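As a rough illustration of that arithmetic (the counts below are invented, not NIAID data):

```python
def success_rate(funded, reviewed, same_year_resubmissions):
    """Success rate: funded applications divided by reviewed
    applications, counting an application and its same-fiscal-year
    resubmission only once."""
    unique_applications = reviewed - same_year_resubmissions
    return 100 * funded / unique_applications

# Hypothetical counts for one activity code in one fiscal year
rate = success_rate(funded=250, reviewed=1300, same_year_resubmissions=50)
# rate -> 20.0 (percent), typically higher than the payline suggests
```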
To find success rate data, go to Success Rates on NIH's RePORT Web site. For NIAID information by fiscal year, see Research Project Success Rates for NIAID.
We welcome your comments, questions, or suggestions. Email email@example.com.
Last Updated December 09, 2014
Last Reviewed September 30, 2011