Rock Talk: NIH Extramural News

NIH Extramural Nexus

Certificates of Confidentiality for NIH Grants

Fri, 04/28/2017 - 08:29

Earlier this year I wrote a post about the 21st Century Cures Act and its changes that directly affect the NIH. One part of this new legislation contains provisions to improve clinical research and privacy through certificates of confidentiality.

Currently, certificates of confidentiality (or “CoCs”) are provided upon request to researchers collecting sensitive information about research participants. Soon, CoCs will be automatically provided for NIH-supported research, as set forth in the 21st Century Cures Act.

CoCs are important both to the researchers conducting the study and to the patient volunteers who make the research possible through their participation. CoCs protect researchers and institutions from being compelled to disclose information that would identify their research participants. They also provide research participants with strong protections against involuntary disclosure of their sensitive health information.

NIH-funded research has evolved since CoCs were first introduced in the 1970s. It is now more common to have projects that involve large-scale data sets and genomic information, and likewise, many thought leaders have sought to have the CoC process provide privacy protections more broadly.

We will soon be publishing an NIH Guide notice announcing how and when NIH will begin including certificates of confidentiality in the terms and conditions of award. By automatically providing CoCs as part of the NIH award process, we can provide an additional measure of protection to research participants through a streamlined process that adds no new burden for researchers. Stay tuned to the NIH Guide for Grants and Contracts for more detailed information.

Categories: NIH-Funding

Applications, Resubmissions, and the Relative Citation Ratio

Tue, 04/25/2017 - 16:02

Measuring the impact of NIH grants is an important input in our stewardship of research funding. One metric we can use to look at impact, discussed previously on this blog, is the relative citation ratio (or RCR). This measure – which NIH has made freely available through the iCite tool – aims to go further than just raw numbers of published research findings or citations, by quantifying the impact and influence of a research article both within the context of its research field and benchmarked against publications resulting from NIH R01 awards.
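To make the metric concrete, here is a minimal sketch of the idea behind the RCR, in Python. It is an illustration only: the real iCite algorithm derives the field's expected citation rate from the article's co-citation network and calibrates it against NIH R01-funded publications, whereas here the benchmark is simply a supplied number, and the example values are hypothetical.

```python
def relative_citation_ratio(article_citations_per_year: float,
                            expected_citations_per_year: float) -> float:
    """Core idea of the RCR: an article's citation rate divided by the
    citation rate expected for its field, with the field expectation
    calibrated so the average NIH R01-funded paper scores 1.0.
    (Simplified sketch; iCite derives the denominator from the
    article's co-citation network.)"""
    return article_citations_per_year / expected_citations_per_year

# Hypothetical article: cited 12 times/year in a field whose
# R01-calibrated expectation is 8 citations/year.
print(relative_citation_ratio(12.0, 8.0))  # 1.5
```

On this scale, a value of 1.0 corresponds to the citation influence of the average NIH R01-funded paper in the same field; the hypothetical article above is about 50% more influential than that benchmark.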

In light of our more recent posts on applications and resubmissions, we’d like to go a step further by looking at long-term bibliometric outcomes as a function of submission number. In other words, are there any observable trends in the impact of publications resulting from an NIH grant funded as an A0, versus those funded as an A1 or A2? And does that answer change when we take into account how much funding each grant received?

First, let’s briefly review long-term historical data on R01-equivalent applications and resubmissions.

Figure 1 shows the proportions of over 82,000 Type 1 R01-equivalent awards by resubmission status. We see dramatic shifts: 20 years ago, and during the doubling, the majority of awards came from A0 applications. By the time of the payline crash (~2006), most awards came from A1 and A2 applications. In 2016, several years after A2s were eliminated, half of awards came from A0 applications and half from A1 applications.

Figure 1

Figure 2 shows award rates. Over the years, resubmissions consistently do better; in 2016, A1 submissions were three times more likely to be funded than A0s.

Figure 2

Now we’ll “switch gears” and look at long-term grant bibliometric productivity as associated with resubmission status. We’ll focus on 22,312 Type 1 R01-equivalent awards first issued between 1998 and 2003: this was a time when funds were flush (due to the NIH budget doubling) and substantial numbers of awards were made as A0s (N=11,466, or 51%), A1s (N=8,014, or 36%), and A2s (N=2,832, or 13%). By looking at grants that were first awarded over 14 years ago, we’ve allowed all projects plenty of time to generate papers that then had time to receive citations.

Table 1 shows grant characteristics according to resubmission status at the time of award. Characteristics were generally similar except that a smaller proportion of A0 awards involved human subjects.

Table 1

                    A0 (N=11,466)   A1 (N=8,014)    A2 (N=2,832)
Percentile          15 (7-21)       14 (8-21)       14 (8-20)
Human study         34%             42%             42%
Animal study        50%             50%             51%
Total costs ($M)    2.2 (1.4-3.7)   2.0 (1.4-3.4)   1.9 (1.4-3.1)
Duration (years)    5 (4-10)        5 (4-9)         5 (4-6)

Continuous variables are shown as median (25th–75th percentile), while categorical variables are shown as percentages.

Table 2 shows selected bibliometric outcomes – total number of publications, number of publications adjusted for acknowledgement of multiple grants (as described before), weighted relative citation ratio (RCR), weighted RCR per million dollars of funding, and mean RCR; a sketch of how these metrics fit together follows the table. Figures 3, 4, and 5 show box plots for weighted RCR, weighted RCR per million dollars of funding, and mean RCR, with Y-axes log-transformed given the highly skewed distributions. We see a modest gradient by which productivity is slightly higher for grants awarded at the A0 stage than for grants awarded on A1 or A2 resubmissions.

Table 2

                        A0 (N=11,466)     A1 (N=8,014)      A2 (N=2,832)
Papers                  10 (4-21)         9 (4-19)          9 (4-17)
Papers adjusted*        5.1 (2.1-11.5)    4.9 (2.0-10.5)    4.8 (2.0-9.9)
Weighted RCR*           6.3 (1.8-17.3)    5.7 (1.7-15.2)    5.2 (1.5-13.6)
Weighted RCR*/$Million  2.93 (1.00-6.55)  2.66 (0.92-6.09)  2.60 (0.88-5.94)
Mean RCR                1.29 (0.76-2.04)  1.22 (0.74-1.91)  1.16 (0.68-1.83)

*Accounting for papers that acknowledge multiple grants
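For readers who want to see how the starred metrics in Table 2 fit together, here is a hedged sketch in Python. The splitting rule – dividing each paper's credit evenly across all grants it acknowledges – is our reading of the adjustment described above, and the function name, input format, and numbers are illustrative.

```python
def weighted_rcr_per_million(papers: list[tuple[float, int]],
                             total_costs_dollars: float) -> float:
    """Sketch of the Table 2 metrics: each paper's RCR is divided by the
    number of grants the paper acknowledges (the * adjustment), summed
    into a weighted RCR for the grant, then normalized by the grant's
    total costs in millions of dollars.
    papers: [(paper_rcr, n_acknowledged_grants), ...] -- illustrative format."""
    weighted_rcr = sum(rcr / n_grants for rcr, n_grants in papers)
    return weighted_rcr / (total_costs_dollars / 1_000_000)

# Hypothetical grant: three papers, one of which also acknowledges a
# second grant, and $2.2M in total costs.
papers = [(1.3, 1), (2.0, 2), (0.8, 1)]
print(round(weighted_rcr_per_million(papers, 2_200_000), 2))  # 3.1 / 2.2 ≈ 1.41
```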

Figure 3

Figure 4

Figure 5

In summary, over the past 20 years we have seen marked oscillations in application and resubmission status, reflecting changes in policy (e.g. the end of A3s in 1997, the end of A2s in 2011, and permission for “virtual A2s” in 2014) and changes in budget (e.g. the doubling from 1998-2003, stagnation in the years following, and an increase in 2016). In 2016, about three-quarters of applications were A0s and one-quarter were A1s; half of awards stemmed from A0 applications, while half stemmed from A1 applications. We see no evidence of improvements in bibliometric productivity among grants that were awarded after resubmission; if anything, there’s a modest gradient of higher productivity for grants that were funded on the first try.

Categories: NIH-Funding

A Reminder of Your Roles as Applicants and Reviewers in Maintaining the Confidentiality of Peer Review

Fri, 04/07/2017 - 11:25

Dr. Richard Nakamura is director of the NIH Center for Scientific Review

Imagine this: you’re a reviewer on an NIH study section, and you receive a greeting card from the Principal Investigator (PI) on an application you are reviewing. A note written inside the card asks that you look favorably upon the application; in return, the PI would put in a good word with his friend serving on your promotion committee. Do you accept the offer, or just ignore it? Or do you report it?

Or this: a reviewer on an NIH study section finds that one of his assigned applications contains an extensive statistical analysis that he does not quite understand. So he emails the application to his collaborator at another university and asks her to explain it to him.

Or what about an investigator who submits an appeal of the review outcome, citing a particular reviewer as having told him that another reviewer on the study section gave the application a critical review and unfavorable score in retaliation for an unfavorable manuscript review?

Or maybe several days after the initial peer review of your application, you receive a phone call from a colleague you haven’t spoken to in quite a while. The colleague is excited about a new technique you developed and wishes to collaborate. You realize the only place you’ve disclosed this new technique is in your recently reviewed NIH grant application. What do you do?

Scenarios like these are thankfully few and far between. Given the size of NIH’s peer review operations, the rarity of such scenarios is a testament to all you do in supporting the integrity of peer review, and the public trust in science. Nevertheless, reminders are helpful, and it’s important to be prepared and understand your role in upholding the integrity of NIH peer review, just in case you are ever put in a situation like the ones described here.

While professional interactions between applicants and reviewers can continue while an application is undergoing peer review, discussions or exchanges that involve the review of that application are not allowed. As an applicant, you should not contact reviewers on the study section evaluating your application to request or provide information about your application or its review, no matter how “trivial” the piece of information may seem.  As a reviewer, you should not disclose contents of applications, critiques, or scores. Reviewers should also never reveal review meeting discussions or associate a specific reviewer with an individual review.

Why are these responsibilities important? Because supporting the public trust in science takes the support of the entire research community. Attempts to influence the outcome of the peer review process through inappropriate or unethical means result in needless expenditure of government funds and resources, and erode public trust in science. In addition, NIH may defer an application for peer review or withdraw the application if it determines that a fair review is not feasible because of action(s) compromising the peer review process. Depending on the specific circumstances, the NIH may take additional steps to ensure the integrity of the peer review process, including but not limited to: notifying or requesting information from the institution of the applicant or reviewer, pursuing a referral for government-wide suspension or debarment, or notifying the NIH Office of Management Assessment.

Your responsibility doesn’t end there. All participants in the application and review process – including investigators named on an NIH grant application, officials at institutions applying for NIH support, and reviewers – need to report potential breaches of peer review integrity. Immediately report any peer-review integrity concerns to your Scientific Review Officer. For peer review activities within the Center for Scientific Review, you can also send an email message to csrrio@mail.nih.gov. If you need to report an incident to someone outside of CSR, you may email the NIH Review Policy Officer. We also provide additional resources on our Integrity and Confidentiality in NIH Peer Review page, and encourage you to share this resource, and this blog post, with your peers, colleagues, and trainees.

Categories: NIH-Funding

Following Up On Interim Research Products

Tue, 03/28/2017 - 14:43

The role of preprints – complete and public draft manuscripts that have not gone through the formal peer review, editing, or journal publishing process – continues to be a hot topic in the biological and medical sciences. In January, three major biomedical research funders – HHMI, the MRC, and the Wellcome Trust – changed their policies to allow preprints to be cited in their progress reports and applications.

Thinking about preprints also raises questions about the broader class of interim research products, and the role they should play in NIH processes. Other interim products include preregistered protocols or research methods, which publicly declare key elements of a research project in advance. While, under current policy, NIH does not restrict items cited in the research plan of an application, applicants cannot claim preprints in biosketches or progress reports.

So, in October, we issued a call for comments to get a fuller understanding of how the NIH-supported research community uses and thinks about interim research products. Today I’d like to follow up with what we’ve learned from your input, and the policy changes this feedback suggests.

We received 351 responses, the majority (79%) submitted by scientists/authors. Twenty-two professional societies representing groups of scientists also submitted responses. Of the respondents who commented on how the use of preprints and interim research products might impact the advancement of science, the majority were supportive, and some predicted or noted specific benefits, such as improving scientific rigor, increasing collaboration, and accelerating the dissemination of research findings. (See Figure 1.)

Figure 1

When asked about the peer review impact of citing interim products in NIH applications, the majority of respondents predicted positive impacts. Specific benefits noted included speeding the dissemination of science, helping junior investigators, and providing authors with the chance to incorporate feedback into their drafts and even form new collaborations.

Figure 2

We also received some concerns about these materials not being peer-reviewed, and about whether any potential benefit they may offer to the review process would be offset by added burden to reviewers and applicants. However, the overall response about review was favorable. Respondents felt reviewers should be able to tell the difference between a final and an interim product, and could draw their own conclusions about the validity of the information. Again, it’s worth noting that these findings inform a potential increase in the use of interim products in review; we already place no restrictions on what can be cited in the reference section of a research plan.

Based on this general feedback and many other thoughtful suggestions, we developed guidance giving NIH applicants the option, for applications submitted for due dates of May 25 and beyond, to cite interim research products in applications. As described in the NIH Guide Notice issued Friday (NOT-OD-17-050), citations of interim research products in biosketches should include the object type (e.g. preprint), a digital object identifier (DOI), and information about the document version. This guidance is also incorporated into the NIH application instructions, which were updated just last week. We also offer FAQs.

Example preprint citation: Bar DZ, Atkatsh K, Tavarez U, Erdos MR, Gruenbaum Y, Collins FS. Biotinylation by antibody recognition- A novel method for proximity labeling. BioRxiv 069187 [Preprint]. 2016 [cited 2017 Jan 12]. Available from: https://doi.org/10.1101/069187.

The Guide Notice also outlines NIH’s expectations for what qualifies as a preprint, and suggests best practices to the many preprint repositories, including: open metadata; machine accessibility; transparent policies about plagiarism and other integrity issues; and an archival plan for content, versions and links to the published version.

For renewal applications submitted for the May 25, 2017 due date and thereafter, awardees can also claim these products on the progress report publication list (an attachment required specifically in renewal applications). Awardees can also report these products on their research performance progress reports (RPPRs) as of May 25, 2017, and link them to their award in their My Bibliography account.

On behalf of NIH, I’d like to thank all of you who took the time to submit comments and share insightful and thoughtful viewpoints and experiences with us. There is a growing recognition that interim research products could speed the dissemination of science and enhance rigor.

We see preprints and other interim products complementing the peer-reviewed literature. Our goal with this Guide notice is to offer clear guidance and suggested standards for those in the research community who are already using, or considering the use of, preprints and interim research products. Some scientific research communities may be more ready than others to use preprints – for example, there continue to be discussions and concerns specific to clinical research. We appreciate that different biomedical research disciplines are likely to adopt interim research products at varying paces; at the same time, with our new guidelines, we aim to make this option as viable as possible for all members of our community.

Categories: NIH-Funding

Outcomes of Amended (“A1”) Applications

Thu, 03/23/2017 - 17:27

In a previous blog, we described the outcomes of grant applications according to the initial peer review score. Some of you have wondered about the peer review scores of amended (“A1”) applications. More specifically, some of you have asked about amended applications getting worse scores than first applications; some of you have experienced amended applications not even being discussed after the first application received a priority score and percentile ranking.

To better understand what’s happening, we looked at 15,009 investigator-initiated R01 submissions: all initial submissions came to NIH in fiscal years 2014, 2015, or 2016, and all were followed by an amended (“A1”) application. Among the 15,009 initial applications, 11,635 (78%) were de novo (“Type 1”) applications, 8,303 (55%) had modular budgets, 2,668 (18%) had multiple PIs, 3,917 (26%) involved new investigators, 5,405 (36%) involved human subjects, and 9,205 (60%) involved animal models.

Now the review outcomes: among the 15,009 initial applications, 10,196 (68%) were discussed by the peer review study section. Figure 1 shows the likelihood that the amended application was discussed according to what happened to the initial application. For the 10,196 submissions where the initial application was discussed, 8,843 (87%) of the amended applications were discussed. In contrast, for the 4,813 submissions where the initial application was not discussed, only 2,350 (49%) of the amended applications were discussed.

Figure 1

Figure 2 shows the same data, but broken down according to whether the submission was a de novo application (“Type 1”) or a competing renewal (“Type 2”). The patterns are similar.

Figure 2

Table 1 breaks down amended applications – discussed versus not discussed – by the impact score of the original application. Well over 90% of amended applications were discussed when the original application’s impact score was 39 or better.

Table 1:

Impact Score Group   Amended Application Discussed   Amended Application Not Discussed   Total
10-29                759 (97%)                       23 (3%)                             782
30-39                3,779 (94%)                     241 (6%)                            4,020
40-49                3,116 (84%)                     588 (16%)                           3,704
50 and over          1,189 (70%)                     501 (30%)                           1,690
Total                8,843 (87%)                     1,353 (13%)                         10,196

We’ll now shift focus to those submissions in which both the initial and amended applications were discussed and received a percentile ranking. Figure 3 shows box plots of the improvement of percentile ranking among de novo and competing renewal submissions. Note that a positive number means the amended application did better. Over 75% of amended applications received better scores the second time around.

Figure 3

What are the correlates of the degree of improvement? In a random forest regression, the strongest predictor, by far, was the initial percentile ranking; all other candidate predictors – de novo versus competing renewal status, fiscal year, modular budget, human and/or animal study, multi-PI status, and new investigator status – contributed little.
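For the methodologically curious, the analysis described above can be sketched as follows with scikit-learn. This is not our actual code or data: the file name and column names are hypothetical placeholders for the application-level records.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical input: one row per A0/A1 pair with both percentile rankings.
df = pd.read_csv("a0_a1_pairs.csv")

predictors = ["initial_percentile", "competing_renewal", "fiscal_year",
              "modular_budget", "human_study", "animal_study",
              "multi_pi", "new_investigator"]
X = df[predictors]
y = df["percentile_improvement"]  # positive = the A1 scored better

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# Feature importances show which predictors drive the fit; in the analysis
# described above, the initial percentile ranking dominated.
for name, imp in sorted(zip(predictors, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```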

Figure 4 shows the improvement in percentile ranking according to the initial percentile ranking, broken out by de novo versus competing renewal status. Not surprisingly, the applications with the highest (worst) initial percentile rankings improved the most – they had more room to move! Figure 5 shows similar data, except stratified by whether or not the initial application included a modular budget.

Figure 4

Figure 5

These findings suggest that there is something to the impression that amended applications do not necessarily fare better in peer review – but worse outcomes are much more the exception than the norm. Close to 90% of applications that are discussed on the first go-round are discussed again when amended. And for those applications that receive percentile rankings on both tries, it is more common for the percentile ranking to improve.

Categories: NIH-Funding

Mid-career Investigators and Shifting Demographics of NIH Grant Recipients

Mon, 03/06/2017 - 17:48

While NIH policies focus on early stage investigators, we also recognize that it is in our interest to make sure that we continue to support outstanding scientists at all stages of their careers. Many of us have heard mid-career investigators express concerns about difficulties staying funded. In a 2016 blog post we looked at data to answer the frequent question, “Is it more difficult to renew a grant than to get one in the first place?” We found that new investigators going for their first competitive renewal had lower success rates than established investigators. More recently, my colleagues in OER’s Statistical Analysis and Reporting Branch and the National Heart, Lung, and Blood Institute approached the concerns of mid-career investigators in a different way – by looking at the association of funding with age. Today I’d like to highlight some of the NIH-wide findings, recently published in the PLOS ONE article, “Shifting Demographics among Research Project Grant Awardees at the National Heart, Lung, and Blood Institute (NHLBI)”.

Using age as a proxy for career stage, the authors analyzed funding outcomes for three groups: principal investigators (PIs) aged 24-40, 41-55 (the mid-career group), and 56 and above. The figure below shows the proportion of research project grant awardees in each of these three groups. The proportion of NIH investigators falling into the 41-55 age group declined from 60% (1998) to 50% (2014).

All figures from: Charette M, Oh Y, Maric-Bilkan C, Scott L, Wu C, Eblen M et al. Shifting Demographics among Research Project Grant Awardees at the National Heart, Lung, and Blood Institute (NHLBI). PLOS ONE 2016. (CC-BY)

Interestingly, regardless of age, applicants have an approximately equal chance of having a new or renewal application funded.

What, then, might contribute to the decline in the proportion of mid-career NIH-supported investigators seen in the earlier figure? The authors propose two factors: holding multiple grants, and average RPG award funding.

The authors argue that having multiple grants may confer an “enhanced survival benefit”, as PIs with multiple grants have a salary-support buffer that enables them to remain in the academic research system. If an investigator holds zero or one grant, an application failure could well mean laboratory closure, whereas an investigator who holds multiple grants can keep the laboratory open. Moving from younger to mid-career to older investigators, the average number of RPG awards per awardee increased from 1.28 to 1.49 to 1.54. Consistent with this, the amount of total RPG funding per awardee (looking at direct costs, specifically) is highest for PIs 56 and over:

The funding spread is further enhanced by the distribution of certain types of research programs, such as P01 awards, which support multi-project research programs. The figure below shows the age group distribution of P01 funding (direct costs only) from 1998-2014. As noted by the authors, by 2014, NIH PIs age 56 and over, who represent just 34% of the total NIH RPG awardee population, received 70% of competing P01 funding.

In their discussion, the authors suggest that their analyses should stimulate alternate explanations about why funding is increasingly distributed to well-established investigators. They write, “For instance, a widely held belief within the academic research community is that the scientific workforce is aging because more established investigators are simply better scientists. In this belief we are all ‘Darwinists’, in that, during stressful times our first presumption is that the best survive and the merely good fall away. But what if that is not the full situation?” Of note, two recent papers in Science (here and here) present evidence that scientific impact does not necessarily increase with experience; the policy implication is that it may make more sense to maximize stable funding to meritorious scientists throughout the course of their careers.

I encourage you to take a look at the full paper, which adds to our ongoing discussion of the age of the biomedical research workforce, and to past, present, and future studies of how we can sustain the careers of those we fund as trainees and early-stage investigators.

Categories: NIH-Funding

Meet NIH & HHS in New Orleans for the NIH Regional Seminar, May 3-5!

Fri, 03/03/2017 - 12:21

Do you remember walking into the office of the person down the hall when you needed to ask a question, instead of “popping” them an email, instant message, or text? There’s no disputing that the digital age has its advantages – making information sharing faster, cheaper, and more convenient, and allowing us to communicate locally and abroad in seconds. But in this fast-paced world of instant communication – the internet, email, and all of our social media choices – sometimes we forget how valuable face-to-face interactions can be.

That is exactly one of the reasons I love the NIH Regional Seminars on Grant Funding and Program Administration. The seminars give me the opportunity to join over 60 of my fellow NIH and HHS faculty in sharing our knowledge and perspectives with attendees who are eager to learn how to navigate NIH, keep up with the latest NIH initiatives, and understand how NIH and HHS policies affect their role in working on NIH grants. The seminars cover the basics that can help you understand how to find funding, write a grant application, manage a grant award, and comply with policies. But they also offer sessions that are more advanced, including subjects you would see here on my blog. Some of those hot topic discussions include upcoming changes in how we will be supporting and providing oversight of clinical trials, as well as diversity in the biomedical research workforce. There are career planning sessions where we highlight topics related to getting your first NIH award, and administrative topics such as how to manage international collaborations.

Perhaps even more valuable than the formal presentations, in my mind, are the opportunities these events provide for you and our faculty to interact – to meet, learn, and share with one another. Throughout the seminars, we offer opportunities to meet individually with our faculty to make connections, ask questions, and share perspectives.

Details on the NIH Regional Seminar in New Orleans, Louisiana (May 3-5) can be found on our website, and registration is open now. If the spring seminar location or dates aren’t ideal for you, then please consider our second seminar of 2017 in Baltimore, Maryland (October 25-27).

I look forward to seeing and meeting face-to-face with some of you there!

Categories: NIH-Funding

Resubmissions Revisited: Funded Resubmission Applications and Their Initial Peer Review Scores

Fri, 02/17/2017 - 14:47

“My first submission got an overall impact score of 30. Is that good enough? What’s the likelihood I’ll eventually get this funded?”, or, “My first submission was not even discussed. Now what? Does anyone with an undiscussed grant bother to resubmit? And what’s the likelihood I’ll eventually get this funded?”

In a past blog we provided some general advice and data to help you consider these types of questions, and obviously the answers depend on specifics — but even so, based on your feedback and comments we thought it would be informative to offer high-level descriptive data on resubmission and award rates according to the first-time score, that is, the overall impact score on the A0 submission.

Here we describe the outcomes of 83,722 unsolicited A0 R01 applications submitted in fiscal years 2012 through 2016. Of these, 69,714 (or 83%) were “Type 1” (de novo) applications, while 14,008 (or 17%) were “Type 2” (or competing renewal) applications.

Let’s begin by looking at award rates. As a reminder, award rates are the total number of awards divided by the total number of applications. Figure 1 shows the award rate of these A0 applications, broken out by Type 1 (de novo) versus Type 2 (competing renewal). (If you’re interested in looking at new and competing renewals in aggregate, for this and the following figures, these are shown in the Excel file we’ve posted to the RePORT website.)
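As a quick illustration of the arithmetic (with made-up numbers, not values from our figures):

```python
def award_rate(n_awards: int, n_applications: int) -> float:
    """Award rate as defined above: total awards / total applications."""
    return n_awards / n_applications

# Hypothetical bin: 1,000 A0 applications, 850 of them funded.
print(f"{award_rate(850, 1_000):.1%}")  # 85.0%
```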

Figure 1

Now, let’s look at the resubmission rates for the unfunded A0 applications, binned by overall impact score and broken out by de novo or competing renewal (Type 1 versus Type 2). As might be expected, we see a strong gradient: the better their overall impact score, the more likely applicants were to resubmit. Resubmission rates declined from 80-90% for applications with overall impact scores of 10-30, to just under 20% for Type 1 applications that were not discussed, and just under 50% for Type 2 applications that were not discussed. For any given A0 overall impact score, we were more likely to see resubmissions with Type 2 applications, the difference between Type 1 and Type 2 resubmission rates being most striking for non-discussed A0s.

Figure 2

Now let’s look at the outcomes of the unfunded applications on their first resubmission (A1). Figure 3 shows the award rates for A1s according to the A0 overall impact score. Not surprisingly, we see a similar gradient – the better the A0 overall impact score, the more likely the revision was awarded. For A0 applications that were not discussed, the A1 award rate was between 12% and 22% – quite low, but not zero. For any given A0 overall impact score, A1 award rates are higher for Type 2 applications.

Figure 3

Finally, in Figure 4, we move to eventual award rates – taking into account awards at either the A0 or A1 stage. Applications with an A0 overall impact score of 10-30 have an 80-90% chance of eventually being funded. In contrast, applications not discussed at the A0 stage have less than a 10% chance of being funded.

Figure 4

We present these outcomes to show a high-level picture of applicant behavior and award outcomes. Nonetheless, as we have discussed before, we urge you to take advantage of the extensive information available on our web pages, and to feel free to contact your program officials for individual-level advice.

I am most grateful to my colleagues in the OER Statistical Analysis and Reporting Branch for helping put these data together.


Categories: NIH-Funding

Following up on the Research Commitment Index as a Tool to Describe Grant Support

Wed, 02/15/2017 - 15:20

Many thanks for your terrific questions and comments to last month’s post, Research Commitment Index: A New Tool for Describing Grant Support. I’d like to use this opportunity to address a couple of key points brought up by a number of commenters; in later blogs, we’ll focus on other suggestions.

The two points I’d like to address here are:

  • Why do we use log-transformed values when plotting output (annual weighted relative citation ratio, or annual RCR) against input (annual research commitment index, or annual RCI)?
  • What is meant by “diminishing returns”?

We use log-transformed values because scientific productivity measures follow a highly skewed, log-normal distribution. This is well described in the literature, and log-transformed plots are therefore the norm (see here, here, and here for examples).
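A quick simulation shows why the log transform is natural for this kind of data (the parameters here are arbitrary stand-ins, not fitted to our sample):

```python
import numpy as np

rng = np.random.default_rng(0)
# Log-normal draws as a stand-in for a skewed productivity measure.
sample = rng.lognormal(mean=2.0, sigma=0.8, size=100_000)

# On the raw scale the mean sits well above the median (long right tail)...
print(np.mean(sample), np.median(sample))   # mean ~10.2, median ~7.4

# ...while after log transformation the distribution is roughly symmetric.
logged = np.log(sample)
print(np.mean(logged), np.median(logged))   # both ~2.0
```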

Figures 1 and 2 show the highly skewed distributions of annual RCI and annual weighted RCR in our sample of over 70,000 unique investigators who received at least one NIH research project grant (RPG) between 1995 and 2014.

Figure 1

Figure 2

When we and others refer to “diminishing returns,” what we mean is diminishing marginal returns. Marginal returns are the incremental returns associated with incremental increases in input. Mathematically, we are talking about the slope (or more precisely, the first derivative) of the production plot that relates annual RCR to annual RCI.
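Stated compactly, writing y for annual weighted RCR and x for annual RCI (a restatement of the definitions above, not a new result):

```latex
% Marginal productivity is the slope (first derivative) of the production curve:
\mathrm{MP}(x) = \frac{dy}{dx}
% "Diminishing marginal returns" means output still rises with input,
% but at a decreasing rate:
\frac{dy}{dx} > 0 \quad \text{while} \quad \frac{d^{2}y}{dx^{2}} < 0
```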

Figure 3 is the log-log plot; it is the same as the prior blog’s figure 5, except that the axis labels show log values. I’ve added dotted tangent lines that illustrate how the slope decreases at higher values of annual RCI.

Figure 3

Another way to visualize this is to look directly at marginal productivity – at how RCR changes with respect to changes in RCI – in other words, how the instantaneous slopes shown in Figure 3 (the first derivative) change as RCI increases. Figure 4 plots the first derivative of the association of log annual RCR with log annual RCI against log annual RCI. As annual RCI increases, the marginal productivity decreases – this is what is meant by diminishing returns.

Figure 4

Figure 5 shows a non-transformed plot relating annual RCR to annual RCI. It’s technically incorrect – since both annual RCR and annual RCI follow highly skewed, log-normal distributions – but the dotted tangent lines show that the slope (marginal productivity) decreases with increasing RCI, again consistent with the phenomenon of diminishing marginal returns.

Figure 5

The phenomenon of diminishing returns is one that is well known across many fields of human endeavor. It’s important to recognize that diminishing returns does not mean negative returns. If we, as a funding agency, choose to increase funding to a laboratory, there is a high likelihood that the increased funding will lead to increased productivity. But the incremental increase in productivity may be less than the incremental increase in support; if baseline funding is already high, the incremental returns may be less than if baseline funding were lower. Alberts and colleagues pointed this out in their essay. Others from Canada and Germany have put forth similar arguments: funding agencies might maximize their impact by funding a larger, and more diverse, group of investigators with the limited resources available.

Again, many thanks for your terrific comments. We look forward to addressing other points (including basic and clinical science and changes over time) in future posts.

Categories: NIH-Funding

FY2016 By The Numbers

Fri, 02/03/2017 - 16:28

Over the past few days, we released our annual web reports, success rates, and NIH Data Book with updated numbers for fiscal year 2016. Overall, we see steady increases. In addition to looking back over the numbers we typically highlight in this post, we want to point out several new research project grant (RPG)-specific activity codes used to support extramural research. FY 2016 saw the launch of some new activity code uses, such as the Phase 1 Exploratory/Developmental Grant (R61 – in lieu of the R21), of which 14 new projects were funded. Funding for large-scale RPGs with complex structures, like the RM1, increased substantially from 2015 (when we first began to fund RM1s), from slightly over $4 million to over $15 million. These activity codes, as well as those more familiar to you such as the R21, collectively supported a variety of specific scientific areas, such as improving outcomes in cancer research, pilot studies for Alzheimer’s research, genomic research centers, and clinical studies of mental disorders.

Over the past year, NIH grants supported almost 2,400 research organizations, including institutions of higher education, independent hospitals, and research institutes. We received 54,220 competing research project grant applications in fiscal year 2016, a steady increase. Of these, 30,106 were applications for R01-equivalent grants (as a reminder, R01-equivalents are mostly R01s, but also include activity codes for similar independent RPG programs such as the R37 MERIT award). Total support for RPGs, competing and noncompeting, rose to $17,137,754,907 in 2016, and the average size of awards continued to increase, reaching $499,221 – a historical high for both competing and non-competing awards.

The success rate for competing FY 2016 RPG applications was 19.1%, compared to 18.3% in FY 2015. The 2016 success rate for competing R01-equivalent applications was also slightly higher than last year (19.96% compared with 18.9% in 2015). Success rates remain far below the 30% levels we saw 15-20 years ago, during the NIH doubling; the low success rates reflect the hypercompetitive environment we continue to face.
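The headline rates follow directly from the counts in the table below; here is a quick check in Python. (Note that NIH's official success-rate computation also corrects for resubmissions of the same project within a fiscal year, a refinement this sketch ignores.)

```python
def success_rate(competing_awards: int, applications_reviewed: int) -> float:
    """First-approximation success rate: competing awards made in a fiscal
    year divided by applications reviewed in that year."""
    return competing_awards / applications_reviewed

# FY 2016 counts from the table below:
print(f"RPGs:            {success_rate(10_372, 54_220):.1%}")  # 19.1%
print(f"R01-equivalents: {success_rate(6_010, 30_106):.2%}")   # 19.96%
```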

I’ve included a highlight of some additional numbers below from the 2016 fiscal year as well the two prior fiscal years.


                                                                    2014             2015             2016
Research Project Grants
Number of research project grant (RPG) applications                 51,073           52,190           54,220
Number of new or renewal (competing) RPG awards                     9,241            9,540            10,372
Success rate of RPG applications                                    18.1%            18.3%            19.1%
Average size of RPGs                                                $472,827         $477,786         $499,221
Total NIH funding to RPGs (competing and noncompeting)              $15,635,912,476  $15,862,012,059  $17,137,754,907

R01-equivalents
Number of R01-equivalent grant applications                         27,502           28,970           30,106
Number of new or renewal (competing) R01-equivalent awards          5,163            5,467            6,010
Success rates for R01-equivalent applications                       18.8%            18.9%            19.96%
Average size of R01-equivalent awards                               $427,083         $435,525         $458,287
Total NIH funding to R01-equivalents (competing and non-competing)  $10,238,888,890  $10,279,687,172  $11,077,251,191
Categories: NIH-Funding

Research Commitment Index: A New Tool for Describing Grant Support

Thu, 01/26/2017 - 16:59

On this blog we previously discussed ways to measure the value returned from research funding. The “PQRST” approach (for Productivity, Quality, Reproducibility, Sharing, and Translation) starts with productivity, which the authors propose assessing with measures such as the proportion of published scientific work resulting from a research project, and highly cited works within a research field.

But these factors cannot be considered in isolation. Productivity, most broadly defined, is a measure of output considered in relation to measures of input. What other inputs might we consider? Some reports have focused on money (total NIH funding received), others on personnel. And all found evidence of diminishing returns with increasing input: among NIGMS grantees receiving grant dollars, among Canadian researchers receiving additional grant dollars, and among UK biologists overseeing more personnel working in their laboratories.

It might be tempting to focus on money, but as some thought leaders have noted, differing areas of research inherently incur differing levels of cost. Clinical trials, epidemiological cohort studies, and research involving large animal models are, by their very nature, expensive. If we were to focus solely on money, we might inadvertently underestimate the value of certain highly worthwhile investments.

We could instead focus on the number of grants – does an investigator hold one grant, or two, or more? One recent report noted that more established NIH-supported investigators tend to hold a greater number of grants. But this measure is problematic, because not all grants are the same. There are differences between R01s, R03s, R21s, and P01s that go beyond the average dollar amount each type of award receives.

Several of my colleagues and I, led by NIGMS director Jon Lorsch – chair of an NIH Working Group on Policies for Efficient and Stable Funding – conceived of a “Research Commitment Index,” or “RCI.” We focus on the grant activity code (R01, R21, P01, etc.) and ask ourselves what kind of personal commitment it entails for the investigator(s). We start with the most common type of award, the R01, and assign it an RCI value of 7 points. Then, in consultation with our NIH colleagues, we assigned RCI values to other activity codes: fewer points for R03 and R21 grants, more points for P01 grants.

Table 1 shows the RCI point values assigned per activity code, for grants with a single PI and for grants with multiple PIs.

Table 1:

Activity Code                                                     Single PI points   Multiple PI points
P50, P41, U54, UM1, UM2                                           11                 10
Subprojects under multi-component awards                          6                  6
R01, R33, R35, R37, R56, RC4, RF1, RL1, P01, P42, RM1,
UC4, UF1, UH3, U01, U19, DP1, DP2, DP3, DP4                       7                  6
R00, R21, R34, R55, RC1, RC2, RL2, RL9, UG3, UH2, U34, DP5        5                  4
R03, R24, P30, UC7                                                4                  3
R25, T32, T35, T15                                                2                  1
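In code, Table 1 amounts to a lookup table plus a sum. The sketch below uses the point values above; the function name and input format are illustrative, and subprojects under multi-component awards (6 points regardless of PI count) are omitted from the dictionary for simplicity.

```python
# RCI point values from Table 1: activity code -> (single-PI, multi-PI) points.
RCI_POINTS = {
    **dict.fromkeys(["P50", "P41", "U54", "UM1", "UM2"], (11, 10)),
    **dict.fromkeys(["R01", "R33", "R35", "R37", "R56", "RC4", "RF1", "RL1",
                     "P01", "P42", "RM1", "UC4", "UF1", "UH3", "U01", "U19",
                     "DP1", "DP2", "DP3", "DP4"], (7, 6)),
    **dict.fromkeys(["R00", "R21", "R34", "R55", "RC1", "RC2", "RL2", "RL9",
                     "UG3", "UH2", "U34", "DP5"], (5, 4)),
    **dict.fromkeys(["R03", "R24", "P30", "UC7"], (4, 3)),
    **dict.fromkeys(["R25", "T32", "T35", "T15"], (2, 1)),
}

def rci(grants: list[tuple[str, bool]]) -> int:
    """Total RCI for one PI: sum point values over all held grants.
    Each grant is given as (activity_code, is_multi_pi)."""
    return sum(RCI_POINTS[code][is_multi] for code, is_multi in grants)

# A PI holding one single-PI R01 and one multi-PI R21: 7 + 4 = 11 points.
print(rci([("R01", False), ("R21", True)]))  # 11
```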

Figure 1 shows, as a histogram, the FY 2015 distribution of RCI among NIH-supported principal investigators. The most common value is 7 (corresponding to one R01), followed by 6 (corresponding to one multi-PI R01). There are smaller peaks around 14 (corresponding to two R01s) and 21 (corresponding to three R01s).

Figure 1:

Figure 2 uses a box-plot format to show the same data, with the mean indicated by the larger dot, and the median indicated by the horizontal line. The mean of 10.26 is higher than the median of 7, reflecting a skewed distribution.

Figure 2:

From 1990 through 2015 the median value of RCI remained unchanged at 7 – the equivalent of one R01. But, as shown in Figure 3, the mean value changed – increasing dramatically as the NIH budget began to increase just before the time of the NIH doubling.

Figure 3:

Figure 4 shows the association of RCI and the age of PIs; the curves are spline smoothers. In 1990, a PI would typically have an RCI of slightly over 8 (equivalent to slightly more than one R01) irrespective of age. In 2015, grant support, as measured by RCI, increased with age.

Figure 4:

We now turn to the association of input, as measured by the RCI, with output, as measured by the weighted Relative Citation Ratio (RCR). We focus on 71,493 unique principal investigators who received NIH research project grant (RPG) funding between 1996 and 2014. We focus on RPGs since these are the types of grants that would be expected to yield publications, and because the principal investigators of other types of grants (e.g. centers) won’t necessarily be authors on all of the papers that come out of a center. For each NIH RPG PI, we calculate their total RCI point values for each year, and divide by the total number of years of support. Thus, if a PI held one R01 for 5 years, their RPG RCI per year would be 7 ((7 points * 5 years) / 5 years). If a PI held two R01s for 5 years (say 2000-2004) and during the next two years (say 2005 and 2006) held one R21, their RPG RCI per year would be 11.43 (((14 points * 5 years) + (5 points * 2 years)) / 7 years).
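The two worked examples above, expressed as a small helper function (a sketch; the data format is illustrative):

```python
def annual_rci(points_by_year: dict[int, int]) -> float:
    """RPG RCI per year: total RCI points accumulated over all funded
    years, divided by the number of years with any RPG support."""
    return sum(points_by_year.values()) / len(points_by_year)

# One single-PI R01 (7 points/year) held 2000-2004:
print(annual_rci({year: 7 for year in range(2000, 2005)}))   # 35/5 = 7.0

# Two R01s (14 points/year) in 2000-2004, then one R21 (5 points/year)
# in 2005-2006:
holdings = {**{y: 14 for y in range(2000, 2005)},
            **{y: 5 for y in (2005, 2006)}}
print(round(annual_rci(holdings), 2))                        # 80/7 ≈ 11.43
```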

Figure 5 shows the association of grant support, as measured by RPG RCI per year, with productivity, as assessed by the weighted Relative Citation Ratio per year. The curve is a spline smoother. Consistent with prior reports, we see strong evidence of diminishing returns.

Figure 5:

A limitation of our analysis is that we focus solely on NIH funding. As a sensitivity test, we analyzed data from the Howard Hughes Medical Institute (HHMI) website and identified 328 unique investigators who received both NIH RPG funding and HHMI funding between 1996 and 2014. Given that these 328 investigators received both NIH grants and HHMI support (which is a significant amount of long-term, person-based funding), they would be expected to be highly productive, given the added selectivity of receiving support from both NIH and HHMI. As expected, HHMI investigators had more NIH funding (measured as total RCI points, annual RCI, and number of years with NIH funding) and were more productive (more NIH-funded publications, higher weighted RCR, higher annual RCR, and higher mean RCR).

Figure 6 shows annual weighted RCR by annual RCI, stratified by whether the PI also received HHMI funding.  As expected, HHMI investigators have higher annual weighted RCR for any given RCI, but we see the same pattern of diminishing returns.

Figure 6:

Putting these observations together we can say:

  • We have constructed a measure of grant support, which we call the “Research Commitment Index,” that goes beyond simple measures of funding and numbers of grants. Focusing on funding amount alone is problematic because it may lead us to underestimate the productivity of certain types of worthwhile research that are inherently more expensive; focusing on grant numbers alone is problematic because different grant activities entail different levels of intellectual commitment.
  • The RCI is distributed in a skewed manner, but it wasn’t always so. The degree of skewness (as reflected in the difference between mean and median values) increased substantially in the 1990s, coincident with the NIH budget doubling.
  • Grant support, as assessed by the RCI, increases with age, and this association is stronger now than it was 25 years ago.
  • If we use the RCI as a measure of grant support and intellectual commitment, we again see strong evidence of diminishing returns: as grant support (or commitment) increases, productivity increases, but to a lesser degree.
  • These findings, along with those of others, suggest that it might be possible for NIH to fund more investigators with a fixed sum of money and without hurting overall productivity.

At this point, we see the Research Commitment Index as a work in progress and, like the Relative Citation Ratio, as a potentially useful research tool to help us better understand, in a data-driven way, how well the NIH funding process works. We look forward to hearing your thoughts as we seek to assure that the NIH will excel as a science funding agency that manages by results.

I am grateful to my colleagues in the OER Statistical Analysis and Reporting Branch, Cindy Danielson and Brian Haugen in NIH OER, and my colleagues on the NIH Working Group on Policies to Promote Efficiency and Stability of Funding, for their help with these analyses.

Categories: NIH-Funding