Rock Talk: NIH Extramural News
While NIH policies focus on early stage investigators, we also recognize that it is in our interest to make sure that we continue to support outstanding scientists at all stages of their careers. Many of us have heard mid-career investigators express concerns about difficulties staying funded. In a 2016 blog post we looked at data to answer the frequent question, “Is it more difficult to renew a grant than to get one in the first place?” We found that new investigators going for their first competitive renewal had lower success rates than established investigators. More recently, my colleagues in OER’s Statistical Analysis and Reporting Branch and the National Heart, Lung, and Blood Institute approached the concerns of mid-career investigators in a different way – by looking at the association of funding with age. Today I’d like to highlight some of the NIH-wide findings, recently published in the PLOS ONE article, “Shifting Demographics among Research Project Grant Awardees at the National Heart, Lung, and Blood Institute (NHLBI)”.
Using age as a proxy for career stage, the authors analyzed funding outcomes for three groups: principal investigators (PIs) aged 24–40, 41–55 (the mid-career group), and 56 and above. The figure below shows the proportion of research project grant awardees in each of these three groups. The proportion of NIH investigators falling into the 41–55 age group declined from 60% (1998) to 50% (2014).
Interestingly, regardless of age, applicants have an approximately equal chance of having a new or renewal application funded.
What, then, might contribute to the decline in the proportion of mid-career NIH-supported investigators seen in the earlier figure? The authors propose two factors: holding multiple grants and average RPG award funding.
The authors argue that having multiple grants may confer an “enhanced survival benefit”, as PIs with multiple grants have a salary-support buffer that enables them to remain in the academic research system. If an investigator holds zero or one grant, an application failure could well mean laboratory closure, whereas an investigator who holds multiple grants can keep the laboratory open. Moving from younger to mid-career to older investigators, the average number of RPG awards per awardee increased from 1.28 to 1.49 to 1.54. Consistent with this, the amount of total RPG funding per awardee (looking at direct costs, specifically) is highest for PIs 56 and over:
The funding spread is further enhanced by the distribution of certain types of research programs, such as P01 awards, which support multi-project research efforts. The figure below shows the age group distribution of P01 funding (direct costs only) from 1998-2014. As noted by the authors, by 2014, NIH PIs age 56 and over, who represented just 34% of the total NIH RPG awardee population, received 70% of competing P01 funding.
In their discussion, the authors suggest that their analyses should stimulate alternate explanations about why funding is being increasingly distributed to well-established investigators. They write, “For instance, a widely held belief within the academic research community is that the scientific workforce is aging because more established investigators are simply better scientists. In this belief we are all ‘Darwinists’, in that, during stressful times our first presumption is that the best survive and the merely good fall away. But what if that is not the full situation?” Of note, two recent papers in Science (here and here) present evidence that scientific impact does not necessarily increase with experience; the policy implication is that it may make more sense to maximize stable funding to meritorious scientists throughout the course of their careers.
I encourage you to take a look at the full paper, which contributes to our ongoing discussion of the age of the biomedical research workforce, and contributes to past, present, and future studies of how we can sustain the careers of those we fund as trainees and early-stage investigators.
Do you remember walking into the person’s office down the hall from you when you needed to ask a question, instead of “popping” them an email, instant message, or text? There’s no disputing that the digital age has its advantages – making information sharing faster, cheaper, and more convenient, and allowing us to communicate locally and abroad in seconds. But in this fast-paced world of instant communication – the internet, email, and all of our social media choices – sometimes we forget how valuable face-to-face interactions can be.
That is exactly one of the reasons I love the NIH Regional Seminars on Grant Funding and Program Administration. The seminars give me the opportunity to join over 60 of my fellow NIH and HHS faculty in sharing our knowledge and perspectives with attendees who are eager to learn how to navigate NIH, keep up with the latest NIH initiatives, and understand how NIH and HHS policies affect their role in working on NIH grants. The seminars cover the basics that can help you understand how to find funding, write a grant application, manage a grant award, and comply with policies. But they also offer sessions that are more advanced, including subjects you would see here on my blog. Some of those hot topic discussions include upcoming changes in how we will be supporting and providing oversight of clinical trials, as well as diversity in the biomedical research workforce. There are career planning sessions where we highlight topics related to getting your first NIH award, and administrative topics such as how to manage international collaborations.
Perhaps even more valuable than formal presentations, in my mind, are the opportunities these events provide you and our faculty to interact – to meet, learn from, and share with one another. Throughout the seminars, we offer opportunities to meet individually with our faculty to make connections, ask questions, and share perspectives.
Details on the NIH Regional Seminar in New Orleans, Louisiana (May 3-5) can be found on our website, and registration is open now. If the spring seminar location or dates aren’t ideal for you, then please consider our second seminar of 2017 in Baltimore, Maryland (October 25-27).
I look forward to seeing and meeting face-to-face with some of you there!
“My first submission got an overall impact score of 30. Is that good enough? What’s the likelihood I’ll eventually get this funded?”, or, “My first submission was not even discussed. Now what? Does anyone with an undiscussed grant bother to resubmit? And what’s the likelihood I’ll eventually get this funded?”
In a past blog we provided some general advice and data to help you consider these types of questions, and obviously the answers depend on specifics — but even so, based on your feedback and comments we thought it would be informative to offer high-level descriptive data on resubmission and award rates according to the first-time score, that is, the overall impact score on the A0 submission.
Here we describe the outcomes of 83,722 unsolicited A0 R01 applications submitted in fiscal years 2012 through 2016. Of these, 69,714 (or 83%) were “Type 1” (de novo) applications, while 14,008 (or 17%) were “Type 2” (or competing renewal) applications.
Let’s begin with looking at award rates: as a reminder, award rates are the total number of awards divided by the total number of applications. Figure 1 shows the award rate of these A0 applications broken out by type 1 (de novo) vs type 2 (competing renewals). (If you’re interested in looking at new and competing renewals in aggregate, for this and the following figures, these are shown in the Excel file we’ve posted to the RePORT website.)
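To make the definition concrete, here is a minimal sketch of the award-rate arithmetic in Python; the counts are hypothetical and for illustration only, not the actual data behind Figure 1.

```python
def award_rate(awards, applications):
    """Award rate: total awards divided by total applications, as a percentage."""
    return 100.0 * awards / applications

# Hypothetical counts, for illustration only (not the figures behind Figure 1).
print(round(award_rate(150, 1000), 1))  # 15.0
```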
Now, let’s look at the resubmission rates for the unfunded A0 applications, binned by overall impact score and broken out by de novo or competing renewal (type 1 versus type 2). As might be expected, we see a strong gradient: the better the A0 overall impact score, the more likely applicants were to resubmit. Resubmission rates declined from 80-90% for applications with overall impact scores of 10-30 to just under 20% for Type 1 applications that were not discussed, and just under 50% for Type 2 applications that were not discussed. For any given A0 overall impact score, we were more likely to see resubmissions of Type 2 applications, with the difference between Type 1 and Type 2 resubmission rates most striking for non-discussed A0s.
Now let’s look at the outcomes of the unfunded applications on their first resubmission (A1). Figure 3 shows the award rates for A1s according to the A0 overall impact score. Not surprisingly we see a similar gradient – the better the A0 overall impact score, the more likely the revision was awarded. For A0 applications that were not discussed, the A1 award rate was between 12% and 22% — quite low, but not zero. For any given A0 overall impact score, A1 award rates are higher for Type 2 applications.
Finally in Figure 4 we move to eventual award rates – taking into account awards at the A0 or A1 stage. Applications with an A0 overall impact score of 10-30 have an 80-90% chance of eventually being funded. In contrast, applications not discussed at the A0 stage have less than a 10% chance of being funded.
We present these outcomes to show a high-level picture of applicant behavior and award outcomes. Nonetheless, as we have discussed before, we urge you to take advantage of extensive available information on our web pages and to feel free to contact your program officials for individual-level advice.
I am most grateful to my colleagues in the OER Statistical Analysis and Reporting Branch for helping put these data together.
Many thanks for your terrific questions and comments to last month’s post, Research Commitment Index: A New Tool for Describing Grant Support. I’d like to use this opportunity to address a couple of key points brought up by a number of commenters; in later blogs, we’ll focus on other suggestions.
The two points I’d like to address here are:
- Why use log-transformed values when plotting output (annual weighted relative citation ratio, or annual RCR) against input (annual research commitment index, or annual RCI).
- What is meant by diminishing returns.
We use log-transformed values because scientific productivity measures follow a highly skewed, log-normal distribution. This is well described in the literature, and log-transformed plots are therefore the norm (see here, here, and here for examples).
Figures 1 and 2 show the highly skewed distributions of annual RCI and annual weighted RCR in our sample of over 70,000 unique investigators who received at least one NIH research project grant (RPG) between 1995 and 2014.
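The rationale for the log transform can be sketched with simulated data (not the NIH sample): draws from a log-normal distribution are strongly right-skewed, with the mean well above the median, while their logarithms are roughly symmetric.

```python
import math
import random
import statistics

random.seed(0)
# Simulated productivity-like values (not NIH data): log-normal, highly right-skewed.
sample = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]

mean, median = statistics.mean(sample), statistics.median(sample)
print(mean > median)  # True: the long right tail pulls the mean well above the median

# Log-transforming recovers a symmetric (normal) distribution,
# where the mean and median nearly coincide.
logs = [math.log(x) for x in sample]
print(abs(statistics.mean(logs) - statistics.median(logs)) < 0.1)  # True
```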
When we and others refer to “diminishing returns,” what we mean is that we see diminishing marginal returns. Marginal returns are the incremental returns associated with incremental increases in input. Mathematically, we talk about the slope (or, more precisely, the first derivative) of the production plot that relates annual RCR to annual RCI.
Figure 3 is the log-log plot; it is the same as the prior blog’s figure 5, except that the axis labels show log values. I’ve added dotted tangent lines that illustrate how the slope decreases at higher values of annual RCI.
Another way to visualize this is to look directly at marginal productivity – at how changes in RCR compare with changes in RCI – in other words, how the instantaneous slopes shown in Figure 3 (aka the first derivative) change as RCI increases. Figure 4 plots the first derivative of the association of log annual RCR with log annual RCI against log annual RCI. As annual RCI increases, marginal productivity decreases – this is what is meant by diminishing returns.
Figure 5 shows a non-transformed plot relating annual RCR to annual RCI. It’s technically incorrect – since both annual RCR and annual RCI follow highly skewed, log-normal distributions. Nonetheless, the dotted tangent lines show that the slope (marginal productivity) decreases with increasing RCI, again consistent with the phenomenon of diminishing marginal returns.
The phenomenon of diminishing returns is one that is well known across many fields of human endeavor. It’s important to recognize that diminishing returns does not mean negative returns. If we, as a funding agency, choose to increase funding to a laboratory, there is a high likelihood that the increased funding will lead to increased productivity. But the incremental increase in productivity may be less than the incremental increase in support; if baseline funding is already high, the incremental returns may be less than if baseline funding is lower. Alberts and colleagues pointed this out in their essay. Others from Canada and Germany have put forth similar arguments: funding agencies might maximize their impact by funding a larger, and more diverse, group of investigators with the limited resources available.
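A minimal numeric sketch of the distinction between diminishing and negative returns, using an assumed concave power-law production function (illustrative only – not the curve fitted to the NIH data):

```python
def output(rci):
    # Assumed concave production function (illustrative; not the fitted NIH curve).
    return rci ** 0.6

def marginal_return(rci, h=1e-6):
    # Finite-difference approximation to the slope (first derivative).
    return (output(rci + h) - output(rci)) / h

# Marginal returns at roughly one, two, and three R01s' worth of commitment.
slopes = [marginal_return(x) for x in (7, 14, 21)]
assert slopes[0] > slopes[1] > slopes[2]  # diminishing marginal returns
assert all(s > 0 for s in slopes)         # but returns never turn negative
```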
Again, many thanks for your terrific comments. We look forward to addressing other points (including basic and clinical science and changes over time) in future posts.
Over the past few days, we released our annual web reports, success rates and NIH Data Book with updated numbers for fiscal year 2016. Overall, we see steady increases. In addition to looking back over the numbers we typically highlight in this post, we want to point out several new research project grant (RPG)-specific activity codes used to support extramural research. FY 2016 saw the launch of some new activity code uses, such as the Phase 1 Exploratory/Developmental Grant (R61, in lieu of the R21), of which 14 new projects were funded. Large-scale RPGs with complex structures, like the RM1, increased substantially from 2015 (when we first began to fund RM1s), from slightly over $4 million to over $15 million in grant funding. These activity codes, as well as those more familiar to you such as the R21, collectively supported a variety of scientific areas, such as improving outcomes in cancer research, pilot studies for Alzheimer’s research, genomic research centers, and clinical studies of mental disorders.
Over the past year, NIH grants supported almost 2,400 research organizations, including institutions of higher education, independent hospitals, and research institutes. We received 54,220 competing research project grant applications in fiscal year 2016, a steady increase. Of these, 30,106 were applications for R01-equivalent grants (as a reminder, R01-equivalents are mostly R01s, but also include activity codes for similar independent RPG programs, such as the R37 MERIT award). Organizations saw increased support for RPGs in 2016, totaling $17,137,754,907 across competing and noncompeting grants, and the average award size continued to increase, reaching $499,221, a historic high for both competing and non-competing awards.
The success rate for competing FY 2016 RPG applications was 19.1%, compared to 18.3% in FY 2015. The 2016 success rate for competing R01-equivalent applications was also slightly higher than last year (19.9% compared with 18.9% in 2015). Success rates nonetheless remain far below the 30% levels we saw 15-20 years ago, during the NIH doubling; the low success rates reflect the hypercompetitive environment we continue to face.
I’ve included a highlight of some additional numbers below from the 2016 fiscal year as well as the two prior fiscal years.
| | 2014 | 2015 | 2016 |
| --- | --- | --- | --- |
| Research Project Grants | | | |
| Number of research project grant (RPG) applications | 51,073 | 52,190 | 54,220 |
| Number of new or renewal (competing) RPG awards | 9,241 | 9,540 | 10,372 |
| Success rate of RPG applications | 18.1% | 18.3% | 19.1% |
| Average size of RPGs | $472,827 | $477,786 | $499,221 |
| Total NIH funding that went to RPGs (both competing and noncompeting) | $15,635,912,476 | $15,862,012,059 | $17,137,754,907 |
| R01-equivalents | | | |
| Number of R01-equivalent grant applications | 27,502 | 28,970 | 30,106 |
| Number of new or renewal (competing) R01-equivalent awards | 5,163 | 5,467 | 6,010 |
| Success rates for R01-equivalent applications | 18.8% | 18.9% | 19.96% |
| Average size of R01-equivalent awards | $427,083 | $435,525 | $458,287 |
| Total NIH funding that went to R01-equivalents (both competing and non-competing) | $10,238,888,890 | $10,279,687,172 | $11,077,251,191 |
On this blog we previously discussed ways to measure the value returned from research funding. The “PQRST” approach (for Productivity, Quality, Reproducibility, Sharing, and Translation) starts with productivity, which the authors propose measuring with indicators such as the proportion of published scientific work resulting from a research project and the number of highly cited works within a research field.
But these factors cannot be considered in isolation. Productivity, most broadly defined, is the measure of output considered in relation to several measures of inputs. What other inputs might we consider? Some reports have focused on money (total NIH funding received), others on personnel. And all found evidence of diminishing returns with increasing input: among NIGMS grantees receiving grant dollars, among Canadian researchers receiving additional grant dollars, and among UK biologists overseeing more personnel working in their laboratories.
It might be tempting to focus on money, but as some thought leaders have noted, differing areas of research inherently incur differing levels of cost. Clinical trials, epidemiological cohort studies, and research involving large animal models are, by their very nature, expensive. If we were to focus solely on money, we might inadvertently underestimate the value of certain highly worthwhile investments.
We could instead focus on number of grants – does an investigator hold one grant, or two grants, or more? One recent report noted that more established NIH-supported investigators tend to hold a greater number of grants. But this measure is problematic, because not all grants are the same. There are differences between R01s, R03s, R21s, and P01s that go beyond the average amount of dollars each type of award usually receives.
Several of my colleagues and I, led by NIGMS director Jon Lorsch – chair of an NIH Working Group on Policies for Efficient and Stable Funding – conceived of a “Research Commitment Index,” or “RCI.” We focus on the grant activity code (R01, R21, P01, etc.) and ask ourselves about the kind of personal commitment it entails for the investigator(s). We start with the most common type of award, the R01, and assign it an RCI value of 7 points. Then, in consultation with our NIH colleagues, we assigned RCI values to other activity codes: fewer points for R03 and R21 grants, more points for P01 grants.
Table 1 shows the RCI point values for a PI per activity code and whether the grant has one or multiple PIs.
Table 1:

| Activity code | Single PI point assignment | Multiple PI point assignment |
| --- | --- | --- |
| P50, P41, U54, UM1, UM2 | 11 | 10 |
| Subprojects under multi-component awards | 6 | 6 |
| R01, R33, R35, R37, R56, RC4, RF1, RL1, P01, P42, RM1, UC4, UF1, UH3, U01, U19, DP1, DP2, DP3, DP4 | 7 | 6 |
| R00, R21, R34, R55, RC1, RC2, RL2, RL9, UG3, UH2, U34, DP5 | 5 | 4 |
| R03, R24, P30, UC7 | 4 | 3 |
| R25, T32, T35, T15 | 2 | 1 |
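Read as a lookup, the point assignments can be encoded directly; the sketch below covers a subset of Table 1's rows and scores a hypothetical portfolio (the function and variable names are ours, not NIH's).

```python
# RCI points per activity code, a subset of Table 1: (single-PI, multiple-PI).
RCI_POINTS = {
    "P50": (11, 10), "U54": (11, 10),
    "R01": (7, 6), "P01": (7, 6), "U01": (7, 6),
    "R21": (5, 4), "R34": (5, 4),
    "R03": (4, 3), "P30": (4, 3),
    "R25": (2, 1), "T32": (2, 1),
}

def rci(awards):
    """Total RCI for a list of (activity_code, is_multi_pi) awards."""
    return sum(RCI_POINTS[code][1 if multi else 0] for code, multi in awards)

# Hypothetical portfolio: one single-PI R01 plus one multiple-PI R21.
print(rci([("R01", False), ("R21", True)]))  # 11
```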
Figure 1 shows, as a histogram, the FY 2015 distribution of RCI among NIH-supported principal investigators. The most common value is 7 (corresponding to one R01), followed by 6 (corresponding to one multi-PI R01). There are smaller peaks around 14 (corresponding to two R01s) and 21 (corresponding to three R01s).
Figure 2 uses a box-plot format to show the same data, with the mean indicated by the larger dot, and the median indicated by the horizontal line. The mean of 10.26 is higher than the median of 7, reflecting a skewed distribution.
From 1990 through 2015 the median value of RCI remained unchanged at 7 – the equivalent of one R01. But, as shown in Figure 3, the mean value changed – increasing dramatically as the NIH budget began to increase just before the time of the NIH doubling.
Figure 4 shows the association of RCI and the age of PIs; the curves are spline smoothers. In 1990, a PI would typically have an RCI of slightly over 8 (equivalent to slightly more than one R01) irrespective of age. In 2015, grant support, as measured by RCI, increased with age.
We now turn to the association of input, as measured by the RCI, with output, as measured by the weighted Relative Citation Ratio (RCR). We focus on 71,493 unique principal investigators who received NIH research project grant (RPG) funding between 1996 and 2014. We focus on RPGs since these are the types of grants that would be expected to yield publications and because the principal investigators of other types of grants (e.g. centers) won’t necessarily be an author on all of the papers that come out of a center. For each NIH RPG PI, we calculate their total RCI point values for each year, and divide by the total number of years of support. Thus, if a PI held one R01 for 5 years, their RPG RCI per year would be 7 [(7 points * 5) / (5 years)]. If a PI held two R01s for 5 years (say 2000-2004) and during the next two years (say 2005 and 2006) held one R21, their RPG RCI per year would be 11.43 [((14 points * 5) + (5 points * 2)) / (7 years)].
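The worked examples above can be sketched as a small calculation (the function name is ours, not NIH's): sum each year's RCI points and divide by the number of years of support.

```python
def annual_rci(yearly_points):
    """Average RCI per funded year: total RCI points divided by years of support."""
    return sum(yearly_points) / len(yearly_points)

# One R01 (7 points) held for 5 years:
print(round(annual_rci([7] * 5), 2))             # 7.0
# Two R01s (14 points) for 5 years, then one R21 (5 points) for 2 years:
print(round(annual_rci([14] * 5 + [5] * 2), 2))  # 11.43
```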
Figure 5 shows the association of grant support, as measured by RPG RCI per year, with productivity, as assessed by the weighted Relative Citation Ratio per year. The curve is a spline smoother. Consistent with prior reports, we see strong evidence of diminishing returns.
A limitation of our analysis is that we focus solely on NIH funding. As a sensitivity test, we analyzed data from the Howard Hughes Medical Institute (HHMI) website and identified 328 unique investigators who received both NIH RPG funding and HHMI funding between 1996 and 2014. Because these investigators received both NIH grants and HHMI support (a significant amount of long-term, person-based funding), they would be expected to be highly productive, reflecting the selectivity of earning support from both sources. As expected, HHMI investigators had more NIH funding (measured as total RCI points, annual RCI, and number of years with NIH funding) and were more productive (more NIH-funded publications, higher weighted RCR, higher annual RCR, and higher mean RCR).
Figure 6 shows annual weighted RCR by annual RCI, stratified by whether the PI also received HHMI funding. As expected, HHMI investigators have higher annual weighted RCR for any given RCI, but we see the same pattern of diminishing returns.
Putting these observations together we can say:
- We have constructed a measure of grant support, which we call the “Research Commitment Index,” that goes beyond simple measures of funding and numbers of grants. Focusing on funding amount alone is problematic because it may lead us to underestimate the productivity of certain types of worthwhile research that are inherently more expensive; focusing on grant numbers alone is problematic because different grant activities entail different levels of intellectual commitment.
- The RCI is distributed in a skewed manner, but it wasn’t always so. The degree of skewness (as reflected in the difference between mean and median values) increased substantially in the 1990s, coincident with the NIH budget doubling.
- Grant support, as assessed by the RCI, increases with age, and this association is stronger now than it was 25 years ago.
- If we use the RCI as a measure of grant support and intellectual commitment, we again see strong evidence of diminishing returns: as grant support (or commitment) increases, productivity increases, but to a lesser degree.
- These findings, along with those of others, suggest that it might be possible for NIH to fund more investigators with a fixed sum of money and without hurting overall productivity.
At this point, we see the Research Commitment Index as a work in progress and, like the Relative Citation Ratio, as a potentially useful research tool to help us better understand, in a data-driven way, how well the NIH funding process works. We look forward to hearing your thoughts as we seek to assure that the NIH will excel as a science funding agency that manages by results.
I am grateful to my colleagues in the OER Statistical Analysis and Reporting Branch, Cindy Danielson and Brian Haugen in NIH OER, and my colleagues on the NIH Working Group on Policies to Promote Efficiency and Stability of Funding, for their help with these analyses.
In September Dr. Carrie Wolinetz and I blogged about our policy reforms to build a more robust clinical trials enterprise through greater stewardship and transparency at each phase of the clinical trial journey from conception to sharing of results. We discussed how these efforts promise to improve the quality and efficiency of clinical trials, translating into more innovative and robust clinical trial design, and accelerated discoveries that will advance human health.
Over the past months we have continued to partner with the community to work through the implementation of these new policies, developing responses to frequently asked questions and even reconsidering the timing of our single IRB policy to give our grantees time to work through how to operationalize the change.
- Good Clinical Practice (GCP) Training. Please remember that as of January 1, 2017 NIH expects investigators involved in NIH-supported clinical trials, and staff who design, oversee, manage, or conduct clinical trials, to receive training in GCP. NIH does not expect training to be completed by this date; rather, as long as steps are being taken to meet the training expectation, the training can be completed after the effective date. Note that NIH does not specify a particular GCP training; there are many free and fee-based courses available. If you are interested, free courses are offered by NIAID and NIDA, or, if you are looking for one geared to social and behavioral research, you might be interested in the training offered by NCATS. If you have considerable clinical trial experience, many GCP training courses offer an optional pre-test. A high enough score on the pre-test allows you to immediately earn a certificate of completion. Additional information may be found in the new GCP training FAQs.
- Enhancing Clinical Trial Registration and Summary Results Information. Also remember that for all grant applications and contract proposals submitted on or after January 18, 2017, NIH expects that investigators conducting clinical trials (funded in whole or in part by the NIH) will ensure that these trials are registered at ClinicalTrials.gov within 21 days of first-patient enrollment and that the results information from these trials is submitted to ClinicalTrials.gov within one year of trial completion. NIH’s policy complements a new federal regulation to improve the accessibility of information on clinical trial availability and on the outcomes and results of completed trials. Today, we published a new set of FAQs on this topic, to help you.
- Use of a Single Institutional Review Board (sIRB) for Multi-site Studies. We have been gratified to hear that institutions have been working through the issues related to implementing the sIRB policy for NIH funded multi-site studies. We recently issued a notice extending the effective date of the sIRB policy from May 25, 2017 to September 25, 2017 to ensure institutions have enough time to plan. We also published sIRB implementation FAQs to address questions you may have, and will continue to update and add to this new resource.
Work continues on updating a clinical trial protocol template, on developing ways to capture clinical trial information in the most useful way in the application, on developing funding opportunity announcement language, and more. We’ll be sure to keep you updated every step of the way.
Five major blog topic areas, and links to related Open Mike blog posts:
- Applicant activity, behavior, and outcomes
- Peer review
- Basic science
- Biomedical research workforce and training
- Scientific rigor, transparency, and research impact
As the year 2016 ends, my first full year in my new role here at NIH, I’d like to reflect on some of the topics covered here on Open Mike. Thanks to our NIH Regional Seminars, I have had the pleasure of hearing feedback from some of you in person, and I am also greatly appreciative of our virtual interactions, through the thoughtful comments posted by blog readers in this space.
Our blog opened on October 19, 2015, when I noted that NIH is an extraordinary success story; even skeptics identify NIH as a government program that works. But at the same time, I also noted that all is not well with the biomedical research enterprise. In many respects, the 50+ blogs that followed have dug deeper into our anxieties and challenges.
The sidebar highlights five major themes arising over the past year or so, and blogs related to those categories. To get a sense of community interest, we have also compiled some reader statistics. Further below, Table 1 shows which blogs, as of December 27, received the most page views, and Table 2 shows which blogs received the most comments.
These themes, your viewership, and your comments reflect realities and concerns voiced within the scientific community. For example, as I have previously discussed on the blog, concerns include the hypercompetition described by Kimble et al., or the misaligned incentives and unintended consequences created by a hypercompetitive climate, as described by Alberts et al. and others.
Thinking more broadly, our blog themes also reflect the realities and concerns of the public as a whole: despite all of the scientific advances that have improved human health thus far, the research enterprise is not producing the “cures” the public yearns for, and at the same time, some science is not conducted with proper rigor and transparency. Questions we must face include: What underlies the “Eroom’s Law” phenomenon where over the last six decades, fewer new pharmaceutical treatments are being produced relative to R&D expenditures? What are the costs, and root causes of irreproducibility in preclinical research, which some suggest occurs in over half of published preclinical studies? What is the evidence that we are adopting best practices for preclinical animal studies, particularly in terms of study design, reporting and publishing guidelines?
As noted above, NIH is a success story, a government program that works. Our scientific discoveries capture public interest; we are an essential contributor to major health advances; and we create positive impacts beyond the bench. At the same time, we are scientists, and we want to examine our work and culture with the same thoughtfulness and curiosity with which we explore our gene, protein or disease of interest. In doing so, we reveal insights that help make our research enterprise even greater.
This is why NIH is actively examining and addressing these challenges and concerns as best we can, as reflected in the past year’s posts. Several posts described current and future efforts related to scientific rigor. We’ve also talked about clinical trials – one of the most visible components of NIH, where research intersects with the general public through enrollment of volunteer participants. In two blogs and a JAMA article published this fall, we addressed shortcomings and challenges throughout the lifespan of an NIH-supported clinical trial, and the corresponding NIH efforts which, in combination, intend to ensure rigor and efficiency in the US clinical trial enterprise. Core objectives of NIH’s 2016-2020 strategic plan are to “enhance scientific stewardship” and “excel as a federal science agency by managing for results.” The topics discussed on the Open Mike blog over the past year demonstrate our commitment to furthering these goals in the upcoming years.
Of course we cannot do this alone, and we are grateful for partnerships beyond NIH to make our goals a reality. Funders, professional societies, journal editors, and patient groups are engaged in activities to raise the rigor, quality, transparency, and progress of science. Others are working on better ways to understand how the scientific process works (or doesn’t) and to measure its outcomes. The 21st Century Cures Act has provisions that promise to raise the success of NIH-funded research even further.
Let me take this opportunity to thank you for your interest and feedback and to wish you, your colleagues, and your families all the best for a happy and healthy New Year. I also want to thank my wonderful colleagues at NIH (in the Office of Extramural Research, in the Office of the Director, and in our 27 institutes and centers) for their invaluable help as stewards of NIH’s extramural program. Their work – not just on the initiatives described above, but on the day-to-day operations that are extremely important, yet less highly visible – makes NIH’s world-class, robust, and fruitful research program a reality.

Table 1: Top “Open Mike” posts by page view, through Dec. 27, 2016

1. Welcome to the Open Mike Blog (31,244 page views)
2. Authentication of Key Biological and/or Chemical Resources in NIH Grant Applications (31,144)
3. How New US Overtime Provisions Will Affect Postdoctoral Researchers (29,010)
4. Citations per Dollar as a Measure of Productivity (19,608)
5. Scientific Premise in NIH Grant Applications (14,053)
6. The Predictive Nature of Criterion Scores on Impact Score and Funding Outcomes (13,486)
7. How Many Researchers? (13,417)
8. Outcomes for R01 “Virtual A2s” (13,322)
9. Are Attempts at Renewal Successful? (12,959)
10. Updates on Addressing Rigor in Your NIH Applications (12,436)

Table 2: Most commented “Open Mike” posts, through Dec. 27, 2016

1. How New US Overtime Provisions Will Affect Postdoctoral Researchers (58 comments)
2. Perspectives on Peer Review at the NIH (48)
3. Citations per Dollar as a Measure of Productivity (46)
4. Are Attempts at Renewal Successful? (42)
5. How Many Researchers? (36)
6. Outcomes for R01 “Virtual A2s” (23)
7. Publication Impact of NIH-funded Research – A First Look (20)
8. Grant Renewal Success Rates: Then and Now (18)
9. NIH’s Commitment to Basic Science (18)
10. The Predictive Nature of Criterion Scores on Impact Score and Funding Outcomes (16)
An investigator’s long-term success depends not only on securing funding, but on maintaining a stable funding stream over time. One way to assure continued funding is to submit a competing renewal application. However, as we noted earlier this year, while new investigators were almost as successful as experienced investigators in obtaining new (type 1) R01s, the difference between new investigator and experienced investigator success rates widens when looking at competing renewals (type 2s), and success rates of new investigators’ first renewals were lower than those of experienced investigators. In addition, we know that since the end of NIH’s budget doubling in 2003, success rates for competing renewals of research project grants overall have decreased.
To further understand trends in success rates for R01 competing renewals (“type 2s”), I’d like to share some additional analyses in which we look at characteristics of type 2 R01 applications, and the association of their scores for individual review criteria (“criterion scores”) with overall impact score and funding outcomes.
You might recall our previous blog post where we described the association of criterion scores with overall impact score among a sample of over 123,000 competing R01 applications NIH received over 4 years. My colleagues published their findings, which demonstrated that the strongest correlates, by far, of overall impact score were approach and, to a lesser extent, significance criterion scores. Here we follow up on that analysis, this time focusing on outcomes for only the subset of R01 applications that were competing renewals.
Figure 1 shows box plot distributions for criterion scores and overall impact scores. Consistent with what we saw for all applications, the approach criterion score most closely approximates overall impact scores.
Figure 2 shows a heat map of Spearman correlation coefficients for the different criterion scores. Once again we see that approach scores were highly correlated with overall impact scores (r=0.83), while other criterion scores had weaker correlations.
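The Spearman correlation analysis behind a heat map like Figure 2 can be sketched in a few lines. The scores below are simulated for illustration only (they are not NIH data); the construction simply makes overall impact track approach more tightly than significance, mirroring the pattern reported above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Simulated criterion scores on the NIH 1-9 scale (1 = best, 9 = worst)
approach = rng.integers(1, 10, n)
# Overall impact constructed to track approach closely; significance
# tracks it more loosely -- illustrative noise levels, not fitted values
impact = np.clip(approach + rng.normal(0, 1.2, n), 1, 9).round()
significance = np.clip(approach + rng.normal(0, 2.5, n), 1, 9).round()

scores = pd.DataFrame({"approach": approach,
                       "significance": significance,
                       "impact": impact})

# Pairwise Spearman (rank) correlation matrix -- the input to a heat map
corr = scores.corr(method="spearman")
print(corr.round(2))
```

Spearman (rank-based) correlation is a natural choice here because review scores are ordinal rather than truly continuous.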
Now let’s dive a little deeper into the variables associated with the success of a type 2 R01 application. We next performed a series of random-forest multivariable regressions to identify the most important correlates of four selected outcomes on the path from application to award: discussion in a study section; approach score; overall impact score; and finally, of course, funding. We have previously used this approach to assess factors associated with NIH funding and research outcomes.
We first looked at correlates of discussion in a peer review study section. Figure 3 shows variable importance in a 100-tree forest; the model explained 50% of the variance. The strongest correlates by far were approach score and significance score. Personal and organizational characteristics were not materially correlated with discussion.
Figure 4 shows the random-forest correlates of approach score; the model explained 63% of the variance. The strongest correlates were significance score, innovation score, and investigator score. When we removed the other criterion scores, the model (left with only personal and organizational characteristics) explained just ~3% of the variance.
Figure 5 shows the random-forest correlates of overall impact score among discussed applications; the model explained 73% of the variance. The strongest correlate, by far, was approach score.
Figure 6 shows random-forest correlates of funding among discussed applications; the model explained 61% of the variance. As might be expected, overall impact score was the strongest correlate.
Figure 7 shows random-forest correlates of funding among discussed applications when overall impact score is left out of the model; this model explained 41% of the variance. Without impact score, approach score and, to a lesser extent, significance score were the strongest correlates of funding.
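A random-forest variable-importance analysis of the kind described above can be sketched as follows. Everything here is simulated and illustrative: the feature names (including the applicant characteristic "pi_age") and the coefficients generating the outcome are assumptions chosen to reproduce the qualitative finding that approach dominates while personal characteristics carry little weight, not NIH data or the authors' actual model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000

# Simulated criterion scores (1 = best, 9 = worst) plus one illustrative
# applicant characteristic with no built-in effect on the outcome
X = pd.DataFrame({
    "approach":     rng.integers(1, 10, n),
    "significance": rng.integers(1, 10, n),
    "innovation":   rng.integers(1, 10, n),
    "pi_age":       rng.integers(30, 70, n),
})

# Overall impact driven mainly by approach, less by significance
impact = (0.8 * X["approach"] + 0.3 * X["significance"]
          + rng.normal(0, 1.0, n))

# 100-tree forest, mirroring the model size mentioned for Figure 3
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X, impact)

# Impurity-based variable importances, as plotted in Figures 3-7
importance = pd.Series(rf.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).round(3))
```

With data generated this way, approach dominates the importance ranking and the age variable contributes essentially nothing, echoing the pattern in the figures.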
Putting these findings together, we find that among type 2 competing renewal R01 applications, the strongest correlates of success at peer review and of eventual funding are peer review impressions of approach and significance. Personal characteristics (like age, gender, race, prior training) were not materially correlated with success. As we noted before, we think it is helpful for R01 applicants to know that when trying to renew their ongoing projects, the description of experimental approach is the most important predictor of success.
I’d like to thank the authors of the criterion score paper for their help with this analysis, along with my colleagues in the Statistical Analysis and Reporting Branch of the Office of Extramural Research.
Yesterday, NIH published a guide notice establishing stipend levels for postdoctoral trainees and fellows supported by Kirschstein-NRSA awards in fiscal year (FY) 2017. The 2017 increase in NRSA postdoctoral stipend levels reflects our continued recognition of the importance of postdoctoral researchers’ work to the NIH, AHRQ, and HRSA missions. This increase also builds on earlier increases that followed the recommendations of the Advisory Committee to the NIH Director (ACD) working group on the biomedical research workforce. Their 2012 report called for “the current stipends for NIH-supported postdoctoral fellows to be adjusted to levels that better reflect their years of training,” and subsequent NRSA stipend increases, including these FY17 NRSA postdoctoral stipend adjustments, follow the spirit of these recommendations.
The NRSA postdoctoral training-related expenses, institutional allowances, and tuition and fees categories remain unchanged, as described in further detail in the Guide notice (NOT-OD-17-003).
You may have been following news of the 21st Century Cures Act, a landmark piece of legislation with provisions for healthcare, medicine, and research. Republican and Democratic lawmakers supported this bill through its development and eventual passage, and yesterday, President Obama signed the bill into law.
The Act establishes a multitude of important changes to our nation’s approach to supporting and funding health care, medical interventions, and research. Readers of this blog may be particularly interested in the many changes directly relevant to NIH’s mission. A New England Journal of Medicine Perspective essay by NIH Director Francis Collins and NIH Deputy Director Kathy Hudson highlights those changes, and I encourage you to read it. Drs. Collins and Hudson draw attention to support for certain ongoing high-priority initiatives, enhancement of the biomedical research workforce, improved clinical research, better privacy protection for patients who participate in clinical research, greater transparency in science, and reduced red tape.
- High-priority initiatives: The Act includes support for major ongoing NIH scientific initiatives, such as BRAIN, the Precision Medicine Initiative (“All of Us”), and the Cancer Moonshot.
- Biomedical research workforce: A number of provisions focus on early career researchers, who continue to be the subject of much interest. Studies carried out by the National Academies committee referenced in the law will look at factors within NIH – and beyond – that impact the future workforce. Other provisions will enable NIH to develop and promote policies that will attract and sustain support for diverse groups of outstanding young and new investigators.
- Clinical research, transparency, and privacy: The Act contains measures to assess, report, and improve inclusion of key demographic groups, groups that reflect diversity of sex, age, and minority status. NIH is encouraged to further efforts in understanding health disparities between different demographic groups. Other measures enhance the impact of “big data” through data sharing, while also protecting private information of research volunteers. For example, certificates of confidentiality – formerly provided upon request to researchers collecting sensitive information about research participants – will now be provided to all NIH-funded scientists, with strong protections against involuntary disclosure.
- Red tape: The Act exempts NIH-supported or NIH-conducted research from the “ironically titled” Paperwork Reduction Act, making it possible to launch projects more quickly and without fulfilling paperwork requirements that have rarely yielded substantive change. The Act also strikes barriers that have made it difficult for NIH extramural staff to engage in outreach efforts to the research community through attendance at and participation in scientific conferences.
We are greatly appreciative of the hard work that went into making this bill become law. The consideration of these biomedical research topics in the scope of the 21st Century Cures Act is a huge vote of confidence in what we as a nation can accomplish, and improve, through supporting a robust and dynamic scientific enterprise.