Strategy for NIH Funding
Strategy for Funding Decisions · How NIAID Makes Funding Decisions
Understand Paylines and Percentiles
On this page, we tell you how your overall impact/priority score for your R01 application is converted into a percentile, how NIH calculates percentiles, and why it uses them.
Our grant paylines—based either on overall impact/priority score or percentile—are conservative cutoff points for funding applications.
It is important to note that we set paylines conservatively and fund more applications at the end of the fiscal year when we have a clearer budget picture. Paylines also vary considerably among NIH institutes.
Percentiles Indicate Relative Rank
For unsolicited R01s reviewed by the Center for Scientific Review (CSR), NIH converts your overall impact/priority score into a percentile.
A percentile ranks your application relative to the other applications reviewed by your study section at its last three meetings.
 Percentiles range from 1 to 99 in whole numbers. Rounding is always up, e.g., 10.1 percentile becomes 11.
 In contrast to usual mathematical practice, a lower number indicates a better score.
The percentile rank is roughly the percentage of applications that received a better overall impact/priority score from the same study section during the past year.
Why does NIH use percentiles?
 Percentiling counters a phenomenon called "score creep" where study sections give applications increasingly better scores. As a result, scores cluster in the exceptional range, making it impossible to discriminate among applications.
 Each study section can apply the NIH review criteria differently, scoring either more harshly or more favorably.
Percentiling counters these trends by ranking applications relative to others scored by the same study section.
Altogether, percentiling makes review fairer to applicants.
How Percentiles Are Determined
Percentiles are determined by matching an application's overall impact/priority score against a table of "relative rankings."
Here is how NIH arrives at percentiles.
Step 1–Following the discussion led by the primary reviewer, all reviewers rate the overall impact of an application, assigning a whole number from 1 to 9.
Step 2–These scores are averaged, rounded mathematically to one decimal place, and multiplied by 10 to create the overall impact/priority score. E.g., a 1.34 average rounds to 1.3, yielding an overall impact/priority score of 13.
Step 3–Percentiles are determined by matching an application's overall impact/priority score against a table of relative rankings containing all scores of applications assigned to a study section during the three last review cycles.
Step 4–NIH calculates percentiles using the following formula.
Percentile = 100 × (relative rank − 0.5) / (number of applications)

(Subtracting 0.5 is a standard mathematical adjustment that centers each rank within its interval.)
These numbers are then rounded up to create a whole-number percentile ranging from 1 to 99; e.g., a percentile of 10.1 becomes 11. NIH includes not-discussed applications in the percentile calculation.
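The scoring and percentiling steps above can be sketched in Python. This is only an illustration of the published formula, not NIH's actual implementation; in particular, giving tied scores their average relative rank is an assumption, since this page does not spell out how ties are handled.

```python
import math
from statistics import mean

def priority_score(reviewer_ratings):
    """Average the 1-9 reviewer ratings, round to one decimal place,
    and multiply by 10 to get the overall impact/priority score."""
    return round(round(mean(reviewer_ratings), 1) * 10)

def percentile(all_scores, score):
    """Percentile = 100 * (relative rank - 0.5) / N, rounded up.
    Tied scores share their average rank (a simplifying assumption)."""
    ranked = sorted(all_scores)
    ranks = [i + 1 for i, s in enumerate(ranked) if s == score]
    relative_rank = mean(ranks)
    return math.ceil(100 * (relative_rank - 0.5) / len(all_scores))

# A 1.34 average rounds to 1.3 and yields a score of 13.
print(priority_score([1, 1, 2]))  # 13

# In a small cohort of four applications, the best score ranks
# at the 13th percentile: ceil(100 * (1 - 0.5) / 4) = 13.
print(percentile([10, 20, 30, 40], 10))  # 13
```

Note that rounding up means even the single best application in a cohort lands at percentile 1 or higher, never 0, which matches the stated 1-to-99 range.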
Hypothetical Percentiling Data
Below we show hypothetical data to illustrate the two factors that can skew results and even create mathematical imprecision:
 Clustered distributions of overall impact/priority scores.
 The shifting cohort of applications for each review cycle.
We simulated scores and percentiles over three peer review cycles to illustrate how percentiles relate to scores and how clustering may work.
These data depict three scenarios representing the three study section meetings NIH normally uses to calculate percentiles.
Table 1 shows the effect of score distribution. In Cycle 1, only 2 grants got an overall impact/priority score of 15 or lower, which became a percentile of 3.
Contrast that result with the numbers in Cycles 2 and 3. The study section clustered scoring towards the lower end (remember that a lower number is a better score): 7 and 15 applications scored 15 or lower, resulting in percentiles of 8 and 13, respectively.
When overall impact/priority scores are converted to percentiles, percentiling can spread out scores across a broader range. In Cycle 3, scores of 15 and 20 became percentiles of 13 and 24.
Table 1. Overall Impact/Priority Score Distribution for Study Section A

| Priority Score | Cycle 1 Grants | Cycle 1 Percentile | Cycle 2 Grants | Cycle 2 Percentile | Cycle 3 Grants | Cycle 3 Percentile |
|---|---|---|---|---|---|---|
| 10 | 1 | 2 | 2 | 3 | 5 | 4 |
| 15 | 1 | 3 | 5 | 8 | 10 | 13 |
| 20 | 5 | 12 | 5 | 16 | 10 | 24 |
| 25 | 5 | 20 | 10 | 28 | 10 | 38 |
| 30 | 5 | 28 | 15 | 45 | 10 | 55 |
| 40+ | 43 | 62 | 23 | 73 | 15 | 76 |

Table 2 further shows the effect of score distribution and illustrates the impact of the moving three-review-cycle window on percentiles.
In Cycle 1, 10 applications scored 15 or below, and a score of 15 ranked at the 17th percentile. Compare that figure to Cycle 3, in which 15 applications, half again as many, had a score of 15 or better.
You would expect the larger number of applications to result in a significantly higher (worse) percentile, as seen in Table 1. Yet in Cycle 3, a score of 15 ranked at the 18th percentile, only one point higher than in Cycle 1, because it was calculated using a different cohort of applications.
Tables 1 and 2 also highlight the different scoring behaviors of study sections. Compared to the study section in Table 1, the study section in Table 2 consistently judged more applications in the top range, resulting in very different percentiles.
Table 2. Overall Impact/Priority Score Distribution for Study Section B

| Priority Score | Cycle 1 Grants | Cycle 1 Percentile | Cycle 2 Grants | Cycle 2 Percentile | Cycle 3 Grants | Cycle 3 Percentile |
|---|---|---|---|---|---|---|
| 10 | 5 | 8 | 2 | 6 | 5 | 7 |
| 15 | 5 | 17 | 5 | 14 | 10 | 18 |
| 20 | 10 | 33 | 8 | 29 | 15 | 36 |
| 25 | 10 | 50 | 10 | 46 | 5 | 53 |
| 30 | 10 | 67 | 15 | 67 | 5 | 69 |
| 40+ | 20 | 83 | 20 | 83 | 10 | 83 |

Percentiling spreads out scores across all possible rankings. But the more scores are bunched together, the more percentiling exaggerates their differences. Although this illustration shows scores in five-point intervals, in reality there could be scores at each integer.
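To make the clustering effect concrete, here is a small hypothetical sketch. The percentile function mirrors the formula given earlier on this page; the two ten-application cohorts are invented, and giving tied scores their average rank is a simplifying assumption.

```python
import math
from statistics import mean

def percentile(all_scores, score):
    """100 * (relative rank - 0.5) / N, rounded up; tied scores
    share their average rank (a simplifying assumption)."""
    ranked = sorted(all_scores)
    ranks = [i + 1 for i, s in enumerate(ranked) if s == score]
    return math.ceil(100 * (mean(ranks) - 0.5) / len(all_scores))

# Spread-out cohort: scores 10 and 20 are adjacent ranks,
# so their percentiles differ by only 10 points.
spread = [10, 20, 30, 40, 50, 60, 70, 80, 90, 95]

# Clustered cohort: every other application sits between 10 and 20,
# so the same 10-point score gap spans 90 percentile points.
clustered = [10, 12, 13, 14, 15, 16, 17, 18, 19, 20]

print(percentile(spread, 10), percentile(spread, 20))        # 5 15
print(percentile(clustered, 10), percentile(clustered, 20))  # 5 95
```

The identical score gap (10 versus 20) translates into a 10-point percentile difference in the spread-out cohort but a 90-point difference in the clustered one, which is the exaggeration described above.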
The second factor that skews results is the entry of a new cohort of applications and removal of an old one for each review cycle.
One-third of the base used to calculate percentiles turns over at each study section meeting, while the percentiles assigned at the two earlier meetings stay fixed, contributing to a lack of mathematical precision.
For more details on scoring, read How Reviewers Score Applications in Part 5.
Paylines Are a Conservative Funding Cutoff Point
Each fiscal year, we set our paylines, funding cutoff points that we use to fund unsolicited applications. You can find them at NIAID Paylines.
We set paylines for every major type of grant (activity codes that comprise the largest pools of applications) by balancing the amount of funds we have to spend with an estimated number of applications we expect study sections to recommend for funding.
Note that for activity codes that attract fewer applications, the Institute prefers the flexibility of not setting a payline. Also, applications for some activity codes come only in response to requests for applications (RFAs) or program announcements with set-aside funds (PASs). Even without a payline, we usually fund applications in overall impact/priority score order.
Recognizing the diversity of our large grant portfolio, we use paylines as the fairest way to make funding decisions. A numerical value lets us cut across disciplines and fund the best science as determined by initial peer review.
At NIAID, we establish our paylines using an NIH formula and historical data including:
 Number of applications reviewed by NIAIDrelevant study sections.
 Amount of grant money in the budget.
 Average grant costs.
We set our paylines conservatively to make sure we will have enough funds to pay grants throughout the year.
A conservative payline also lets us meet out-year payments for existing grants as well as any new congressional mandates, for example, for biodefense or AIDS.
At year's end when we have a clearer budget picture, we award more grants that scored beyond the payline.
Paylines vary among NIH institutes, so a percentile or overall impact/priority score that is not fundable in one institute may be fundable in another. Find NIAID's budget information on Paylines and Funding.
Keep in mind that at the start of the fiscal year we usually use interim paylines, which may change as the budget picture becomes clearer.
Success Rates Indicate Funding Levels
Typically higher than paylines, success rates are a better indicator than paylines of the percentage of applications we are funding for each activity code, especially for R01s.
The reason: many applications are funded beyond the R01 payline, for example, through NIAID's requests for applications, selective pay, and R56-Bridge awards.
A success rate is roughly the number of applications funded by an institute divided by the number of peer reviewed applications referred to it (excluding resubmissions that occur in the same fiscal year—each application is counted only once).
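As a rough sketch of that calculation, consider the following. The data structure and the dedup-by-application rule here are illustrative assumptions, not NIH's actual method of tabulation.

```python
def success_rate(funded_ids, reviewed_submissions):
    """Roughly: applications funded / applications reviewed, counting
    each application only once even if it was resubmitted within the
    same fiscal year (the dedup rule here is an assumption)."""
    # Each submission is (application_id, fiscal_year); a set collapses
    # same-year resubmissions of the same application.
    unique_reviewed = {(app_id, year) for app_id, year in reviewed_submissions}
    return len(set(funded_ids)) / len(unique_reviewed)

# Hypothetical: 3 awards out of 10 distinct reviewed applications,
# one of which (A1) was resubmitted in the same fiscal year.
reviewed = [(f"A{i}", 2024) for i in range(10)] + [("A1", 2024)]
print(round(success_rate({"A1", "A4", "A7"}, reviewed), 2))  # 0.3
```

Because resubmissions are collapsed, a high volume of same-year resubmissions does not deflate the reported success rate.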
To find success rate data, go to Success Rates on NIH's RePORT Web site. For NIAID information by fiscal year, see Research Project Success Rates for NIAID.
More Resources
We welcome your comments, questions, or suggestions. Email deaweb@niaid.nih.gov.