Rock Talk: NIH Extramural News

NIH Extramural Nexus

Perspectives on Evidence-based Funding

Fri, 06/23/2017 - 16:17

At the NIH Regional Seminar this past May, I had the pleasure of giving the keynote talk and presenting different perspectives on how NIH can further the impact of our research funding. Some of the topics I presented in this talk will be familiar to frequent Open Mike blog readers – our concerns about the hypercompetitive nature of applying for NIH support, for example. Others we haven’t discussed in depth here yet – such as how we might measure the contributions of NIH-supported research to treating diseases. My staff recorded this talk and has made it available to you on the NIH Grants YouTube channel. If you’re interested in the topics covered here on the blog (which I hope you are, since you are reading this now!) then you may be interested in this talk.

 

Categories: NIH-Funding

NIH’s Next Generation Researchers Initiative

Fri, 06/16/2017 - 12:22

At the Advisory Committee to the Director meeting last week, NIH Principal Deputy Director Dr. Larry Tabak presented a new NIH initiative to strengthen the biomedical workforce. This presentation followed extensive discussions with stakeholders here on this blog, at stakeholder meetings, and at NIH advisory council meetings over the last month. We heard unequivocal endorsements for supporting early-career and mid-career researchers given the hypercompetitive funding environment — a challenge we have addressed many times in our blog posts. However, many voiced concerns about our taking a formulaic approach to capping grant funding and called on us to be more direct in enabling greater support for the next generation of biomedical researchers.

For this reason, we have shifted our approach to a focused initiative to support early- and mid-career investigators. As described in a June 8 NIH Director’s statement, and in recognition of the call for such action in the 21st Century Cures Act, we are naming this effort the Next Generation Researchers Initiative. We will take a multi-pronged approach to increase the number of NIH-funded early-stage and mid-career investigators and stabilize the career trajectory of scientists. We describe these approaches on a new web page that we will continue to update. Our activities address both research workforce stability, and evaluation of our investments in research. In brief, NIH will:

  • commit substantial funds from NIH’s base budget, beginning this year with about $210 million, and ramping to approximately $1.1 billion per year after five years (pending availability of funds) to support additional meritorious early-stage investigators and mid-career investigators
  • create a central inventory and track the impact of NIH institute and center funding decisions for early- and mid-career investigators with fundable scores to ensure this new strategy is effectively implemented in all areas of research
  • place greater emphasis on current NIH funding mechanisms aimed at early- and mid-career investigators
  • aim to fund most early-career investigators with R01 equivalent applications that score in the top 25th percentile
  • encourage multiple approaches to develop and test metrics that can be used to evaluate the effectiveness of our research portfolio, and assess the impact of NIH grant support on scientific progress, to ensure the best return on investment

Applicants do not need to do anything special to be eligible for this funding consideration. Beginning this fiscal year, the NIH institute or center (IC) that would fund the grant will give your application special consideration for support if you are:

  • an early-stage investigator (within 10 years of completing your terminal research degree or medical residency, without a prior substantial independent NIH research award) who receives a score in the top 25th percentile (or an impact score of 35 if the application is not percentiled)
  • a mid-career investigator (within 10 years of receiving your first NIH R01 equivalent award) who scores in the top 25th percentile, and who either:
    • is at risk of losing all support, or
    • is a particularly promising investigator currently supported by a single ongoing award (i.e., NIH will prioritize funding an additional concurrent research project grant award)
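For illustration, the eligibility criteria above can be sketched as a pair of predicate functions. This is an unofficial sketch: the function names and argument structure are invented for this example, and only the thresholds stated above (the 10-year windows, the top 25th percentile, the impact score of 35) come from the text.

```python
def early_stage_eligible(years_since_degree, percentile=None, impact_score=None,
                         has_prior_substantial_award=False):
    """Early-stage investigator: within 10 years of the terminal research
    degree or medical residency, no prior substantial independent NIH award,
    and a top-25th-percentile score (or an impact score of 35 or better
    when the application is not percentiled)."""
    if years_since_degree > 10 or has_prior_substantial_award:
        return False
    if percentile is not None:
        return percentile <= 25
    return impact_score is not None and impact_score <= 35

def mid_career_eligible(years_since_first_r01, percentile,
                        at_risk_of_losing_all_support, single_ongoing_award):
    """Mid-career investigator: within 10 years of the first NIH R01
    equivalent award, a top-25th-percentile score, and either at risk of
    losing all support or supported by only a single ongoing award."""
    return (years_since_first_r01 <= 10 and percentile <= 25
            and (at_risk_of_losing_all_support or single_ongoing_award))
```

Actual funding decisions rest with the ICs, so these predicates describe consideration for support, not a guarantee of an award.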

NIH ICs make funding decisions to support their mission, and this plan provides flexibility in how ICs will meet the NIH-wide goal of supporting highly scoring early-stage and mid-career researchers. Each IC will make its decisions about how it will prioritize funding to support this initiative.

As further details are announced, we will be updating the Next Generation Researchers Initiative web page with this information. In the meantime, we encourage you to read the NIH Director’s statement, and look at the Advisory Committee to the Director presentation and webcast recording.

We appreciate your feedback in addressing the very important issue of stabilizing the biomedical research workforce. Your comments to this blog (or via email, if preferred) are welcome. With the continued input from individuals at every career stage, as well as research institutions and other stakeholders, we can work together to make changes that ensure the long-term stability and strength of the U.S. biomedical research enterprise, and that advance science to improve health for all.

Categories: NIH-Funding

Getting to Know Federal Funders and their Research Interests

Tue, 06/06/2017 - 15:29

Working with NIH applicants and awardees as an extramural program division director, I often shared the NIH RePORTER resource as a tool for exploring the research topics NIH supports. Learning what projects we support, through a robust database of historical and newly funded projects (updated weekly), provides researchers valuable insight as they consider developing their own research programs and applications for funding.

Another valuable tool with which you might be familiar is Federal RePORTER, which expands the RePORTER concept to support searching over 800,000 projects across 17 Federal research agencies, with trans-agency data updated annually. Because Federal RePORTER recently received an update introducing new functions and additional agency data, we’d like to highlight some of the ways it helps the public and scientific researchers alike understand the government’s research portfolio and trace its impact through published articles and patents.

Figure 1

Search or browse data across agencies: Federal RePORTER is designed for ease of use. The homepage offers quick search tools for the most commonly used fields, or you can skip the search and use the interactive bar charts and maps on the home page to quickly drill down to projects funded by a certain agency or occurring in a particular state. We’ve also added easy-to-follow walkthroughs as “Guided Tour” links on the home page, advanced search page, and results page. From your search results, you can refine the list through links on the sidebar, or read more about individual projects (including a description, and details on the investigator, research organization, and funder).

Figure 2

Figure 3

Figure 4

Explore search results even further: As with NIH RePORTER, you can export the results for further exploration and analysis, or use the built-in “Charts”, “Map”, or “Topics” tools from the sidebar to learn more about the projects, as in the examples shown below. For example, you can summarize the projects by agency, state, or fiscal year (Figure 2), or map where the research is taking place (Figure 3). You can also explore groups of scientific topics within your search results (for example, a search for “lead” and “drinking water” returns groups of projects covering “ground water”, “early life”, “arsenic exposure”, and more). From there, you can drill down into subgroups to generate lists of projects in that group (Figure 4).

Identify research outcomes: Federal RePORTER aims to link Federal funding to the outcomes of research including publications and patents. Using agency-supplied information, the public can trace the impact of the funding by seeing what academic publications and patents cited the project funding.

With growing resources for identifying agency-supported publications, future plans include expanded coverage of these two important measures of research impact.

These are just a few of the excellent Federal RePORTER features that can help you find collaborators, get to know the research interests of federal science-funding agencies, understand your institution’s sources of support, and prepare your applications and research plan equipped with additional knowledge. We are grateful to all of the federal agencies and offices that provide data and support to Federal RePORTER and make this resource possible.  These new functions, additional agency data, and modernized user interface make it easier for you – and all stakeholders in the U.S. scientific enterprise – to learn about the Federal science and engineering portfolio.

Categories: NIH-Funding

Following Up on Your Feedback on How to Strengthen the Biomedical Research Workforce

Mon, 06/05/2017 - 13:07

We appreciate the many thoughtful comments posted to the blog about working together to improve NIH funding support for early- and mid-career investigators to stabilize the biomedical workforce and research enterprise using a measure called the Grant Support Index (GSI). Some clear themes have emerged, including:

  • Possible unintentional adverse consequences
  • Possible deleterious effects on collaborative research
  • If/how institutional training grants should factor into the GSI
  • Other ways to support a larger number of scientists
  • Other approaches to measure PI effort
  • Discussion of the GSI values (point scale)
  • Having us look internally at NIH’s intramural program

Based on community feedback from the blog, council meetings, and other discussions with stakeholders, we have made changes to the planned policy to include additional measures beyond GSI to strengthen NIH funding support for early- and mid-career investigators. We will also provide greater flexibility in the use of GSI as a measure for guiding NIH funding decisions, and will make other changes to be sure that this approach does not discourage collaboration and training. These updates will be presented at the June meeting of the NIH Advisory Committee to the Director. We encourage you to tune in via NIH videocast to the presentation on Thursday, June 8.

To provide us with additional feedback, please post comments to this blog or send an email to PublicInput@od.nih.gov.

Categories: NIH-Funding

Implementing Limits on Grant Support to Strengthen the Biomedical Research Workforce

Tue, 05/02/2017 - 15:15

NIH realizes that, as stewards of the American investment in biomedical sciences, we must do all we can to protect the future of the biomedical research enterprise, taking additional measures regardless of our budget situation. In the opening pages of this blog, we noted that our increasingly hypercompetitive system is threatening the future of biomedical research and of the hundreds of thousands of scientists who we look to for discovering tomorrow’s cures. This is a strange irony, given that the last 25-50 years have been times of extraordinary discovery and progress in basic, translational, and applied science. Death rates from cardiovascular disease have plummeted, and death rates from cancer are falling steadily. Scientists have a much deeper understanding of human biology to the point where this knowledge can drive the design of drugs and biologics. Big data and high-throughput technologies now enable rapid development and testing of hypotheses that previously would have taken years. The successes are myriad. But so are the problems, problems so real that some have gone so far as to write, “It is time to confront the dangers at hand and rethink some fundamental features of the US biomedical research system.”

In these pages and elsewhere we have seen:

  • Concerns that the core problems besetting biomedical research are “too many researchers vying for too few dollars, [and] too many postdocs competing for too few faculty positions”
  • Data showing that since the NIH doubling ended, the number of scientists seeking NIH funding has increased at a much higher rate than the number of scientists NIH funds
  • Concerns that we are not paying enough attention to the number of investigators we support, as, given the unpredictable nature of science, we are more likely to generate transformational discoveries by funding more laboratories and research groups
  • Data showing that the supply of scientists continues to outstrip demand
  • Data showing that a relatively small proportion of scientists are receiving a large proportion of available funds; other data showing that there may be an increasing concentration of funds going to relatively few institutions
  • Data and models showing that the scientific workforce is aging at a much more rapid rate than the general workforce, leading to concerns that promising younger and mid-career investigators risk being crowded out
  • Data suggesting that scientific productivity tracks only weakly with funding; larger scientific groups or greater degrees of funding may not generate as much additional scientific output as expected due to the impact of diminishing returns
  • Data and concerns suggesting that the group of scientists who are most seriously affected by all these trends are young faculty, who see the need for funding as the biggest threat to their long-term success

These many concerns have drawn much attention – and perhaps of greatest concern is the wellbeing of the next generation of scientists. The US Congress, in the 21st Century Cures Act, has called on NIH to develop and promote policies that will attract and sustain support for diverse groups of outstanding young and new investigators.

What can be done to relieve the pressures of hypercompetition, to offer a brighter future for today’s early and mid-career investigators, and to minimize the impact of diminishing returns? How can we increase the number of independent early career scientists and stabilize the career trajectories of those who do high quality work? While we have seen some success with the implementation of our Early Stage Investigator policy, the concerns persist.

Going back to the opening pages of this blog, we noted a few of the recommendations that others have put forth: these included capping support for individual laboratories, capping salary support, supporting programs as opposed to projects, supporting more staff scientists, raising post-doc salaries, training scientists for non-academic careers, and assuring more efficient funding of expensive core facilities.

Shortly after we released that post, FASEB, which represents 30 scientific societies with over 125,000 members, released a detailed report with extensive recommendations on how to sustain discovery in a strained biomedical research system. These recommendations included a call on research sponsors to “monitor the amount of funding going to a single individual or research group to ensure a broader distribution of research funding.” The report goes on to say, “Limiting the amount of funding awarded to any individual scientist or laboratory would enable more people to be actively engaged in research. With more ‘hands at the bench,’ the number of ideas would increase, and this could expedite progress in many areas of science.” A possible cap could be $1 million of RPG funding – such a cap could potentially free up enough NIH money to fund an additional 2,000 scientists. Kimble et al. wrote, though, that any “redistribution” of funds from well-funded to unfunded (or nearly unfunded) investigators “will be painful, especially for established senior investigators, but necessary to support the next generation and cutting edge research.”

We and others have noted, however, that focusing on money alone as a measure of support may be problematic, as different areas of research entail different levels of expense: it is inherently more expensive to run clinical trial networks and large-animal research facilities. Therefore, we developed and described a different measure of grant support, which we called the “Research Commitment Index,” and will now refer to as the “Grant Support Index.” This is effectively a modified grant count, one that allows for different cost scales across diverse types of research, while at the same time accounting for the differing levels of intellectual and leadership commitment entailed by various NIH grant mechanisms. Consistent with what others have found with money or with personnel, we find that increasing levels of the Grant Support Index are associated with diminishing incremental returns.

Again — What can be done to relieve the pressures of hypercompetition, to offer a brighter future for today’s younger and mid-career investigators, and to minimize the impact of diminishing returns? How can we increase the number of early career funded scientists and stabilize the career trajectories for those who do high quality work?

NIH Director Francis Collins has announced that NIH is proposing several steps to set us on a stronger, more stable path, and to assure that NIH is maximizing the impact of the public dollars we spend. We will continue our work in monitoring, on a trans-agency level, the number and characteristics of the researchers we support with the idea that by doing so we can broaden and diversify the enterprise. We will take additional efforts to identify funding for even more early stage investigators who submit meritorious applications. When necessary, we will encourage use of bridge funding to offer additional stability and chances for obtaining an award.

To improve opportunities for early-established and mid-career investigators, we will take special steps to identify meritorious applicants who are only one grant away from losing all funding. Prioritizing these applicants for funding consideration may alleviate the squeeze being felt by mid-career investigators.

And we will monitor, on a trans-agency basis, investigators’ Grant Support Index, with the idea that over time and in close consultation with the extramural research community, we will phase in a resetting of expectations for total support provided to any one investigator. We plan to implement a Grant Support Index cap of 21 points, essentially the equivalent of 3 single-PI R01 grants. Over the next few weeks to months, we will meet with NIH Advisory Councils and other stakeholder groups to explore how best to phase in and implement this cap – so that formal assessment of grant support can be used to best inform, on a trans-NIH basis, our funding decisions.
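As a rough illustration, the cap treats the GSI as a modified grant count. In the sketch below, the 7-point value for a single-PI R01 is implied by the statement that 21 points is essentially three single-PI R01s; the other point values are made-up placeholders, not NIH’s actual scale, which is still being discussed with stakeholders.

```python
# Illustrative point values; only the single-PI R01 value is implied by
# the stated 21-point cap (= 3 single-PI R01s). The rest are placeholders.
ILLUSTRATIVE_POINTS = {
    "R01_single_pi": 7,   # implied by the 21-point cap
    "R01_multi_pi": 6,    # placeholder: multi-PI awards share leadership
    "R21": 5,             # placeholder
}

GSI_CAP = 21

def grant_support_index(awards):
    """Sum point values across an investigator's active awards."""
    return sum(ILLUSTRATIVE_POINTS[a] for a in awards)

def over_cap(awards):
    """True when the investigator's GSI exceeds the proposed cap."""
    return grant_support_index(awards) > GSI_CAP
```

Under these assumptions, three single-PI R01s sit exactly at the cap, and any additional award would exceed it.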

In our conversations over the next weeks and months, we will need to consider carefully a number of issues and details. How should the Grant Support Index be calibrated? Should we assign more or fewer points to certain grant mechanisms? What headline metrics should we follow? How will we know whether we are achieving the desired effects of funding more early career investigators and stabilizing their trajectories? How do we assure that we don’t inflict unintended harms on scientific progress, on the productivity of highly productive consortia, or on the stability of the research ecosystem? When would it be appropriate to allow exceptions to caps on individual researchers? How will decisions on exceptions be made? And what analogous steps should be taken with NIH’s Intramural Research Program?

We recognize the serious challenges and dangers that we face given increasing degrees of hypercompetition and scientific complexity. Yet, NIH is committed to assuring the robustness and stability of the next generation of investigators. We have a responsibility to the public to assure that we are optimizing the use of our limited resources to obtain the maximum impact possible. We look forward to working with you to figure out how best we can use the tools available to us to “bend the curves,” including resetting expectations on support provided to any one research group. Finally, we will monitor and track data to assess progress, make corrections as needed, and mitigate unintended consequences.

Categories: NIH-Funding

Certificates of Confidentiality for NIH Grants

Fri, 04/28/2017 - 08:29

Earlier this year I wrote a post about the 21st Century Cures Act and its changes that directly affect the NIH. One part of this new legislation contains provisions to improve clinical research and privacy through certificates of confidentiality.

Currently, certificates of confidentiality (or “CoCs”) are provided upon request to researchers collecting sensitive information about research participants. Soon, CoCs will be automatically provided for NIH-supported research, as set forth in the 21st Century Cures Act.

CoCs are important to both the researchers conducting the study and the patient volunteers who make the research possible through their participation. CoCs protect researchers and institutions from being compelled to disclose information that would identify their research participants. They also provide research participants with strong protections against involuntary disclosure of their sensitive health information.

NIH-funded research has evolved since CoCs were first introduced in the 1970s. It is now more common to have projects that involve large-scale data sets and genomic information, and accordingly, many thought leaders have sought to have the CoC process provide privacy protections more broadly.

We will soon be publishing an NIH Guide notice announcing how and when NIH will begin including certificates of confidentiality in the terms and conditions of award. By automatically providing CoCs as part of the NIH award process, we can give research participants an additional measure of protection through a streamlined process that adds no burden for researchers. Stay tuned to the NIH Guide for Grants and Contracts for more detailed information.

Categories: NIH-Funding

Applications, Resubmissions, and the Relative Citation Ratio

Tue, 04/25/2017 - 16:02

Measuring the impact of NIH grants is an important input in our stewardship of research funding. One metric we can use to look at impact, discussed previously on this blog, is the relative citation ratio (or RCR). This measure – which NIH has made freely available through the iCite tool – aims to go further than just raw numbers of published research findings or citations, by quantifying the impact and influence of a research article both within the context of its research field and benchmarked against publications resulting from NIH R01 awards.

In light of our more recent posts on applications and resubmissions, we’d like to go a step further by looking at long-term bibliometric outcomes as a function of submission number. In other words, are there any observable trends in the impact of publications resulting from an NIH grant funded as an A0, versus those funded as an A1 or A2? And does that answer change when we take into account how much funding each grant received?

First, let’s briefly review long-term historical data on R01-equivalent applications and resubmissions.

Figure 1 shows the proportions of over 82,000 Type 1 R01-equivalent awards by resubmission status. We see dramatic shifts: 20 years ago, and during the doubling, the majority of awards came from A0 applications. By the time of the payline crash (~2006), most awards came from A1 and A2 applications. In 2016, several years after A2s were eliminated, half of awards came from A0 applications and half from A1 applications.

Figure 1

Figure 2 shows award rates. Over the years, resubmissions consistently do better; in 2016, A1 submissions were three times more likely to be funded than A0s.

Figure 2

Now we’ll “switch gears,” and look at long-term grant bibliometric productivity as associated with resubmission status. We’ll focus on 22,312 Type 1 R01-equivalent awards first issued between 1998 and 2003: this was a time when funds were flush (due to the NIH budget doubling) and substantial numbers of awards were given as A0s (N=11,466, or 51%), A1s (N=8,014, or 36%), and A2s (N=2,832, or 13%). By looking at grants that were first awarded over 14 years ago, we’ve allowed all projects plenty of time to generate papers that then had time to receive citations.

Table 1 shows grant characteristics according to resubmission status at the time of award. Characteristics were generally similar except that a smaller proportion of A0 awards involved human subjects.

Table 1

                      A0 (N=11,466)   A1 (N=8,014)    A2 (N=2,832)
Percentile            15 (7-21)       14 (8-21)       14 (8-20)
Human study           34%             42%             42%
Animal study          50%             50%             51%
Total costs ($M)      2.2 (1.4-3.7)   2.0 (1.4-3.4)   1.9 (1.4-3.1)
Duration (years)      5 (4-10)        5 (4-9)         5 (4-6)

Continuous variables are shown as median (25th percentile – 75th percentile), while categorical variables are shown as percent.

Table 2 shows selected bibliometric outcomes – total number of publications, number of publications adjusted for acknowledgement of multiple grants (as described before), weighted relative citation ratio (RCR), weighted relative citation ratio per million dollars of funding, and mean relative citation ratio. Figures 3, 4, and 5 show box plots for weighted RCR, weighted RCR per million dollars of funding, and mean RCR, with Y-axes log-transformed given highly skewed distributions. We see a modest gradient by which productivity is slightly higher for grants awarded at the A0 stage than for grants awarded on A1 or A2 resubmissions.

Table 2

                        A0 (N=11,466)      A1 (N=8,014)       A2 (N=2,832)
Papers                  10 (4-21)          9 (4-19)           9 (4-17)
Papers adjusted*        5.1 (2.1-11.5)     4.9 (2.0-10.5)     4.8 (2.0-9.9)
Weighted RCR*           6.3 (1.8-17.3)     5.7 (1.7-15.2)     5.2 (1.5-13.6)
Weighted RCR*/$million  2.93 (1.00-6.55)   2.66 (0.92-6.09)   2.60 (0.88-5.94)
Mean RCR                1.29 (0.76-2.04)   1.22 (0.74-1.91)   1.16 (0.68-1.83)

*Accounting for papers that acknowledge multiple grants
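For concreteness, the per-dollar metric in Table 2 is simply a grant’s weighted RCR divided by its total costs expressed in millions of dollars. A minimal sketch follows; the function name is ours, the sample inputs are the A0 medians from Tables 1 and 2, and a ratio of medians need not equal the median ratio reported in the table.

```python
def weighted_rcr_per_million(weighted_rcr, total_costs_dollars):
    """Weighted RCR normalized by total grant costs in millions of dollars."""
    return weighted_rcr / (total_costs_dollars / 1_000_000)

# A0 medians: weighted RCR of 6.3 on $2.2M total costs -> about 2.86
example = weighted_rcr_per_million(6.3, 2_200_000)
```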

Figure 3

Figure 4

Figure 5

In summary, over the past 20 years we have seen marked oscillations in application and resubmission status, reflecting changes in policy (e.g. the end of A3s in 1997, the end of A2s in 2011, permission for “virtual A2s” in 2014) and changes in budget (e.g. the doubling from 1998-2003, stagnation in the years following, an increase in 2016). In 2016, about three-quarters of applications were A0s and one-quarter were A1s; half of awards stemmed from A0 applications, while half stemmed from A1 applications. We see no evidence of improvements in bibliometric productivity among grants that were awarded after resubmission; if anything, there is a modest gradient of higher productivity for grants that were funded on the first try.

Categories: NIH-Funding

A Reminder of Your Roles as Applicants and Reviewers in Maintaining the Confidentiality of Peer Review

Fri, 04/07/2017 - 11:25

Dr. Richard Nakamura is director of the NIH Center for Scientific Review

Imagine this: you’re a reviewer on an NIH study section, and receive a greeting card from the Principal Investigator (PI) on an application you are reviewing. A note written inside the card asks that you look favorably upon the application, and in return, the PI would put in a good word with his friend serving on your promotion committee. Do you accept the offer, or just ignore it? Or, do you report it?

Or this: a reviewer on an NIH study section finds that one of his assigned applications contains an extensive statistical analysis that he does not quite understand. So he emails the application to his collaborator at another university and asks her to explain it to him.

Or what about an investigator who submits an appeal of the outcome of review, citing a particular reviewer as having told him that another reviewer on the study section gave a critical review and unfavorable score to the application out of retaliation for an unfavorable manuscript review?

Or maybe several days after the initial peer review of your application, you receive a phone call from a colleague you haven’t spoken to in quite a while. The colleague is excited about a new technique you developed and wishes to collaborate. You realize the only place you’ve disclosed this new technique is in your recently reviewed NIH grant application. What do you do?

Scenarios like these are thankfully few and far between. Given the size of NIH’s peer review operations, the rarity of such scenarios is a testament to all you do in supporting the integrity of peer review, and the public trust in science. Nevertheless, reminders are helpful, and it’s important to be prepared and understand your role in upholding the integrity of NIH peer review, just in case you are ever put in a situation like the ones described here.

While professional interactions between applicants and reviewers can continue while an application is undergoing peer review, discussions or exchanges that involve the review of that application are not allowed. As an applicant, you should not contact reviewers on the study section evaluating your application to request or provide information about your application or its review, no matter how “trivial” the piece of information may seem.  As a reviewer, you should not disclose contents of applications, critiques, or scores. Reviewers should also never reveal review meeting discussions or associate a specific reviewer with an individual review.

Why are these responsibilities important? Because supporting the public trust in science takes the support of the entire research community. Attempts to influence the outcome of the peer review process through inappropriate or unethical means result in needless expenditure of government funds and resources, and erode public trust in science. In addition, NIH may defer an application for peer review or withdraw the application if it determines that a fair review is not feasible because of action(s) compromising the peer review process. Depending on the specific circumstances, the NIH may take additional steps to ensure the integrity of the peer review process, including but not limited to: notifying or requesting information from the institution of the applicant or reviewer, pursuing a referral for government-wide suspension or debarment, or notifying the NIH Office of Management Assessment.

Your responsibility doesn’t end there.  All participants in the application and review process, including investigators named on an NIH grant application, officials at institutions applying for NIH support, and reviewers need to report potential breaches of peer review integrity. Immediately report any peer-review integrity concerns to your Scientific Review Officer. For peer review activities within the Center for Scientific Review, you can also send an email message to csrrio@mail.nih.gov. If you need to report an incident to someone outside of CSR, you may email the NIH Review Policy Officer. We also provide additional resources on our Integrity and Confidentiality in NIH Peer Review page, and encourage you to share this resource, and this blog post, with your peers, colleagues, and trainees.

Categories: NIH-Funding

Following Up On Interim Research Products

Tue, 03/28/2017 - 14:43

The role of preprints – complete and public draft manuscripts that have not gone through the formal peer review, editing, or journal publishing process – continues to be a hot topic in the biological and medical sciences. In January, three major biomedical research funders – HHMI, the MRC, and the Wellcome Trust – changed their policies to allow preprints to be cited in their progress reports and applications.

Thinking about preprints also raises questions about the broader class of interim research products, and the role they should play in NIH processes. Other interim products include preregistration of protocols or research methods, which publicly declare key elements of a research project in advance. While, under current policy, NIH does not restrict items cited in the research plan of an application, applicants cannot claim preprints in biosketches or progress reports.

So, in October, we issued a call for comments to get a fuller understanding of how the NIH-supported research community uses and thinks about interim research products. Today I’d like to follow up with what we’ve learned from your input, and the policy changes this feedback suggests.

We received 351 responses, the majority (79%) submitted by scientists/authors. Twenty-two professional societies representing groups of scientists also submitted responses. Of the respondents who commented on how the use of preprints and interim research products might impact the advancement of science, the majority were supportive, and some predicted or noted specific benefits, such as improving scientific rigor, increasing collaboration, and accelerating the dissemination of research findings. (See Figure 1.)

Figure 1

When asked about the peer review impact of citing interim products in NIH applications, the majority of respondents predict positive impacts. Some specific benefits noted include speeding the dissemination of science, helping junior investigators, and providing authors with the chance to incorporate feedback into their drafts and even form new collaborations.

Figure 2

We also received some concerns about these materials not being peer reviewed, and about whether any benefit they may offer to the review process would be offset by added burden on reviewers and applicants. However, the overall response about review was favorable. Respondents felt reviewers should be able to tell the difference between a final and an interim product and could draw their own conclusions about the validity of the information. Again, it’s worth noting that these findings inform a potential increase in the use of interim products in review; we already place no restrictions on what can be cited in the reference section of a research plan.

Based on this general feedback and many other thoughtful suggestions, we developed guidance giving NIH applicants the option, for applications submitted for due dates of May 25, 2017 and beyond, to cite interim research products in applications. As described in the NIH Guide Notice issued Friday (NOT-OD-17-050), citations of interim research products in biosketches should identify the object type (e.g., preprint), include a digital object identifier (DOI), and note the document version. This guidance is also incorporated into the NIH application instructions, which were updated last week. We also offer FAQs.

Example preprint citation: Bar DZ, Atkatsh K, Tavarez U, Erdos MR, Gruenbaum Y, Collins FS. Biotinylation by antibody recognition- A novel method for proximity labeling. BioRxiv 069187 [Preprint]. 2016 [cited 2017 Jan 12]. Available from: https://doi.org/10.1101/069187.
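For those scripting their biosketch preparation, the citation above can be assembled from the three elements the Guide Notice asks for (object type, DOI, and version/date information). The helper below is purely illustrative; the function name and fields are our own invention, not part of any NIH tool.

```python
# Hypothetical helper that assembles a preprint citation containing the
# elements NOT-OD-17-050 asks for: the object type (e.g. [Preprint]),
# a DOI, and version/date information.
def format_preprint_citation(authors, title, repository, accession,
                             year, cited, doi):
    """Return a citation string in the style of the example above."""
    return (f"{authors}. {title}. {repository} {accession} [Preprint]. "
            f"{year} [cited {cited}]. Available from: https://doi.org/{doi}.")

print(format_preprint_citation(
    "Bar DZ, Atkatsh K, Tavarez U, Erdos MR, Gruenbaum Y, Collins FS",
    "Biotinylation by antibody recognition- A novel method for proximity labeling",
    "BioRxiv", "069187", 2016, "2017 Jan 12", "10.1101/069187"))
```

Whatever tooling you use, the key point is that reviewers can immediately see the object type, locate the exact version, and resolve the DOI.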

The Guide Notice also outlines NIH’s expectations for what qualifies as a preprint, and suggests best practices to the many preprint repositories, including: open metadata; machine accessibility; transparent policies about plagiarism and other integrity issues; and an archival plan for content, versions and links to the published version.

For renewal applications submitted for the May 25, 2017 due date and thereafter, awardees can also claim these products on the progress report publication list (an attachment required specifically in renewal applications). Awardees can also report these products on their Research Performance Progress Reports (RPPRs) as of May 25, 2017, and link them to their award in their My Bibliography account.

On behalf of NIH, I’d like to thank all of you who took the time to submit comments and share insightful and thoughtful viewpoints and experiences with us. There is a growing recognition that interim research products could speed the dissemination of science and enhance rigor.

We see preprints and other interim products complementing the peer-reviewed literature. Our goal with this Guide Notice is to offer clear guidance and suggested standards for those in the research community who are already using, or considering the use of, preprints and interim research products. Some scientific research communities may be more ready than others to use preprints; for example, there continue to be discussions and concerns specific to clinical research. We appreciate that different biomedical research disciplines are likely to adopt interim research products at varying paces; at the same time, with our new guidelines, we aim to make this option as viable as possible for all members of our community.

Categories: NIH-Funding

Outcomes of Amended (“A1”) Applications

Thu, 03/23/2017 - 17:27

In a previous blog, we described the outcomes of grant applications according to the initial peer review score. Some of you have wondered about the peer review scores of amended (“A1”) applications. More specifically, some of you have asked about amended applications receiving worse scores than the first applications, and some have experienced amended applications not even being discussed after the first application received a priority score and percentile ranking.

To better understand what’s happening, we looked at 15,009 investigator-initiated R01 submissions: all initial submissions came to NIH in fiscal years 2014, 2015, or 2016, and all were followed by an amended (“A1”) application. Among the 15,009 initial applications, 11,635 (78%) were de novo (“Type 1”) applications, 8,303 (55%) had modular budgets, 2,668 (18%) had multiple PIs, 3,917 (26%) involved new investigators, 5,405 (36%) involved human subjects, and 9,205 (60%) involved animal models.

Now the review outcomes: among the 15,009 initial applications, 10,196 (68%) were discussed by the peer review study section. Figure 1 shows the likelihood that the amended application was discussed according to what happened to the initial application. For the 10,196 submissions where the initial application was discussed, 8,843 (87%) of the amended applications were discussed. In contrast, for the 4,813 submissions where the initial application was not discussed, only 2,350 (49%) of the amended applications were discussed.

 

Figure 1

 

Figure 2 shows the same data, but broken down according to whether the submission was a de novo application (“Type 1”) or a competing renewal (“Type 2”). The patterns are similar.

 

Figure 2

Table 1 provides a breakdown of discussed amended applications, binned according to the impact score of the original application. Well over 90% of amended applications whose original application scored 39 or better were discussed.

 

Table 1:

Impact Score Group | Amended Application Discussed | Amended Application Not Discussed | Total
10-29              | 759 (97%)                     | 23 (3%)                           | 782
30-39              | 3,779 (94%)                   | 241 (6%)                          | 4,020
40-49              | 3,116 (84%)                   | 588 (16%)                         | 3,704
50 and over        | 1,189 (70%)                   | 501 (30%)                         | 1,690
Total              | 8,843 (87%)                   | 1,353 (13%)                       | 10,196
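The percentages in Table 1 follow directly from the raw counts. As a quick check, this short script (our own, not part of the NIH analysis) recomputes the discussion rate for each impact score bin and overall:

```python
# Recompute Table 1's discussion rates from the raw counts.
# Counts are taken directly from the table: (discussed, not discussed).
bins = {
    "10-29": (759, 23),
    "30-39": (3779, 241),
    "40-49": (3116, 588),
    "50 and over": (1189, 501),
}

for label, (discussed, not_discussed) in bins.items():
    total = discussed + not_discussed
    rate = 100 * discussed / total
    print(f"{label}: {rate:.0f}% of {total} amended applications discussed")

# Overall rate across all bins (matches the 87% in the Total row).
total_discussed = sum(d for d, _ in bins.values())
total_all = sum(d + n for d, n in bins.values())
print(f"Total: {100 * total_discussed / total_all:.0f}% of {total_all}")
```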

 

We’ll now shift focus to those submissions in which both the initial and amended applications were discussed and received a percentile ranking. Figure 3 shows box plots of the improvement of percentile ranking among de novo and competing renewal submissions. Note that a positive number means the amended application did better. Over 75% of amended applications received better scores the second time around.

Figure 3

What are the correlates of the degree of improvement? In a random forest regression, the strongest predictor, by far, was the initial percentile ranking; all other candidate predictors (de novo versus competing renewal, fiscal year, modular budget, human and/or animal study, multi-PI status, and new investigator status) were minor.
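To give a feel for how a random forest ranks predictors, the sketch below measures, for each candidate predictor, the variance reduction from a single regression-tree split, the quantity random forests aggregate across many trees. It runs on invented data (the relationships and coefficients are ours, not NIH's), so it illustrates only the method, not the actual results.

```python
# Illustrative sketch only: NIH's analysis used a random forest regression
# on real application data. Here we mimic the core idea (rank predictors
# by variance explained at a split) on synthetic, invented data.
import random

random.seed(0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_explained(feature, target, threshold):
    """Variance reduction from one split on `feature` at `threshold`,
    i.e. what a single regression-tree node measures."""
    left = [t for f, t in zip(feature, target) if f <= threshold]
    right = [t for f, t in zip(feature, target) if f > threshold]
    if not left or not right:
        return 0.0
    weighted = (len(left) * variance(left)
                + len(right) * variance(right)) / len(target)
    return variance(target) - weighted

# Synthetic submissions: improvement driven mostly by the initial
# percentile, barely by modular-budget status (both effects invented).
n = 2000
initial_pct = [random.uniform(1, 80) for _ in range(n)]
modular = [random.random() < 0.55 for _ in range(n)]
improvement = [0.5 * p + (1 if m else 0) + random.gauss(0, 5)
               for p, m in zip(initial_pct, modular)]

gain_pct = variance_explained(initial_pct, improvement, 40)
gain_mod = variance_explained([1.0 if m else 0.0 for m in modular],
                              improvement, 0.5)
print(gain_pct > gain_mod)  # initial percentile dominates
```

A full random forest averages such splits over many bootstrapped trees, but the ranking logic is the same: a predictor that explains more variance dominates the importance scores.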

Figure 4 shows the association of percentile ranking improvement with initial percentile ranking, broken out by de novo versus competing renewal status. Not surprisingly, the applications with the highest (worst) initial percentile rankings improved the most: they had more room to move. Figure 5 shows similar data, stratified by whether or not the initial application included a modular budget.

Figure 4

Figure 5

These findings suggest that there is something to the impression that amended applications do not necessarily fare better in peer review, but worse outcomes are much more the exception than the norm. Close to 90% of applications that are discussed on the first go-round are discussed when amended. And for applications that receive percentile rankings on both tries, it is more common for the percentile ranking to improve.

Categories: NIH-Funding

Mid-career Investigators and Shifting Demographics of NIH Grant Recipients

Mon, 03/06/2017 - 17:48

While NIH policies focus on early stage investigators, we also recognize that it is in our interest to make sure we continue to support outstanding scientists at all stages of their careers. Many of us have heard mid-career investigators express concerns about difficulties staying funded. In a 2016 blog post we looked at data to answer the frequent question, “Is it more difficult to renew a grant than to get one in the first place?” We found that new investigators going for their first competitive renewal had lower success rates than established investigators. More recently, my colleagues in OER’s Statistical Analysis and Reporting Branch and the National Heart, Lung, and Blood Institute approached the concerns of mid-career investigators in a different way: by looking at the association of funding with age. Today I’d like to highlight some of the NIH-wide findings, recently published in the PLOS ONE article, “Shifting Demographics among Research Project Grant Awardees at the National Heart, Lung, and Blood Institute (NHLBI)”.

Using age as a proxy for career stage, the authors analyzed funding outcomes for three groups of principal investigators (PIs): those aged 24-40, 41-55 (the mid-career group), and 56 and above. The figure below shows the proportion of research project grant awardees in each of these three groups. The proportion of NIH investigators falling into the 41-55 age group declined from 60% in 1998 to 50% in 2014.

All figures from: Charette M, Oh Y, Maric-Bilkan C, Scott L, Wu C, Eblen M et al. Shifting Demographics among Research Project Grant Awardees at the National Heart, Lung, and Blood Institute (NHLBI). PLOS ONE 2016. (CC-BY)

Interestingly, regardless of age, applicants have an approximately equal chance of having a new or renewal application funded.

What, then, might contribute to the decline in the proportion of mid-career NIH-supported investigators seen in the earlier figure? The authors propose two factors: holding multiple grants and average RPG award funding.

The authors argue that having multiple grants may confer an “enhanced survival benefit,” as PIs with multiple grants have a salary-support buffer that enables them to remain in the academic research system. If an investigator holds zero or one grant, an application failure could well mean laboratory closure, whereas an investigator who holds multiple grants can keep the laboratory open. Moving from younger to mid-career to older investigators, the average number of RPG awards per awardee increased from 1.28 to 1.49 to 1.54. Consistent with this, the amount of total RPG funding per awardee (looking at direct costs, specifically) is highest for PIs 56 and over:

The funding spread is further widened by the distribution of certain types of research programs, such as P01 awards, which support multi-project research programs. The figure below shows the age group distribution of P01 funding (direct costs only) from 1998-2014. As the authors note, by 2014, NIH PIs age 56 and over, who represented just 34% of the total NIH RPG awardee population, received 70% of competing P01 funding.

In their discussion, the authors suggest that their analyses should stimulate alternative explanations for why funding is increasingly distributed to well-established investigators. They write, “For instance, a widely held belief within the academic research community is that the scientific workforce is aging because more established investigators are simply better scientists. In this belief we are all ‘Darwinists’, in that, during stressful times our first presumption is that the best survive and the merely good fall away. But what if that is not the full situation?” Of note, two recent papers in Science (here and here) present evidence that scientific impact does not necessarily increase with experience; the policy implication is that it may make more sense to maximize stable funding for meritorious scientists throughout the course of their careers.

I encourage you to take a look at the full paper, which contributes to our ongoing discussion of the age of the biomedical research workforce, and to past, present, and future studies of how we can sustain the careers of those we fund as trainees and early-stage investigators.

Categories: NIH-Funding