Reform(atting) the Canadian Institutes of Health Research – a living autopsy

Last year, the Canadian Institutes of Health Research (CIHR), the primary federal funding agency for health research in Canada, embarked on a bold and wide-ranging series of reforms that change virtually every aspect of how health research funding is applied for, evaluated and distributed. On July 15, 2015, the results of the first major competition under the new system were released, as were the first casualties. Judging from the social media firestorm that followed, we felt that Canada’s research community might want to know more. So I asked Jim Woodgett, Director of Research at the Lunenfeld-Tanenbaum Research Institute, to help us out. He kindly obliged by writing the following very informative guest blog. Jim was one of the initial F-scheme awardees, so this is far from the rant of a bitter applicant. The simple fact is that nearly 300 accomplished Canadian researchers (those who submitted to Stage 2 but were not funded) cannot expect to receive funding for the next 12 months, at a minimum. As any responsible research leader in this country should, he worries about both the short- and long-term impact of the funding reforms on Canada’s health research community.

A guest blog by Jim Woodgett, Director of Research, Lunenfeld-Tanenbaum Research Institute

This essay describes the background, the changes, the issues and some possible ideas for improvement. It is based on publicly available information from CIHR as well as personal experience and correspondence with many applicants throughout the lengthy process.

Burning Platforms: Globally, public funding of science has not recovered well from the recession of 2008/9 but was showing signs of sickness prior to that setback. In most developed countries, both governmental and non-governmental funding agencies have flat-lined scientific research budgets since the halcyon days of the late 1990s/early 2000s. That period saw many organizations, like the National Institutes of Health (NIH) in the US, double their science budgets. In Canada, CIHR emerged from the ashes of the Medical Research Council and saw its budget increase 3-fold between 1999 and 2006. An expanded cohort of highly trained and accomplished graduate students and postdoctoral fellows developed out of the increased activities of that period, fueling growth of laboratories around the world. New research facilities were built or expanded.

After 2006, increases in agency budgets began to slow and fall behind inflation. In some countries, including Canada, research funding was increasingly diverted to more applied aspects of research, at the expense of basic/discovery science. As a consequence of these pressures, success rates of grant competitions steadily fell which promoted increases in the numbers of grant applications – causing yet further downward pressure on success rates. Over time, scientists significantly increased the amount of time they spent preparing applications and reviewing them, and some labs began to be shuttered.

End of the line, all change: By 2012, the status quo had become untenable and funding agencies scrambled to adapt to the new economic realities and depressed scientific climate. The three main federal granting agencies in Canada, the so-called tri-councils (CIHR, NSERC and SSHRC), have each made significant recent process changes, but CIHR has been the most ambitious. A program of comprehensive reforms was announced by CIHR in early 2012 and included radical changes to the types of funding programs and the manner in which they were adjudicated. Most notably, CIHR proposed to shift from conventional face-to-face reviews, which are clustered by expertise, to a system of virtual review.

Virtual Review: The most worrying aspects of the proposed changes revolved around the switch from face-to-face to virtual peer review. The intent by CIHR was to improve peer review consistency by having 5 reviews per application. In the prior system, a grant application would be read and evaluated by two primary reviewers and a secondary reader (each reviewer would be assigned ~8 applications). Panels of 8-15 reviewers with interests focused in specific areas of research (e.g. cell physiology, cardiology, neuroscience, etc.) would meet in an Ottawa hotel for 2-3 days to discuss 20-70 applications. If the preliminary scores were high enough, there would be discussion of strengths and weaknesses, a consensus score would be reached and the other panel members would record their own scores. The averaged score would then be calculated by the agency and used to rank within the panel. In the most recent competition (transitional Open Operating Grant Program; tOOGP), the success rate was 14.2%, so for a panel of 50 applications, ~7 would be approved for funding. The difference between a funded and an unfunded grant could be 0.01 – especially as scores tend to cluster around the anticipated funding line.
Virtual review increases the number of reviewers assigned to each grant and also does away with the panel structure. This was desired by CIHR as panels can develop their own “cliques” and many panel mandates (the range of expertise they were intended to cover) did not fit well with applications, especially those that crossed disciplines. To overcome these issues, a global ranking system was developed. Reviewers are assigned a pile of grants based on matching their expertise with keywords in the applications. Each reviewer is assigned 15 or so grants and asked to rank them within their own pile. This yields a set of 5 rank fractions for each grant. If Reviewer 1 has 16 applications and ranks Application A 2nd in her pile, the fraction is 2/16. The fractions are arithmetically combined and, because there are 5, the variance between them can be calculated. This allows all applications to be ranked relative to one another, irrespective of research area. Reviewers do not physically meet but are expected to upload their ranking data on-line and to participate in asynchronous chats to try to reach consensus.
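
For illustration, here is a minimal sketch of how this rank-fraction consolidation might work. CIHR has not published the exact formula for combining the fractions, so the simple mean used below is an assumption, and the reviewer data are invented:

```python
from statistics import mean, stdev

def rank_fraction(rank: int, pile_size: int) -> float:
    """Convert a reviewer's within-pile rank to a fraction (e.g. 2nd of 16 -> 2/16)."""
    return rank / pile_size

# Hypothetical Application A: (rank, pile size) from each of its 5 reviewers.
reviews = [(2, 16), (5, 15), (3, 14), (9, 16), (4, 13)]
fractions = [rank_fraction(r, n) for r, n in reviews]

# Consolidated score (lower is better); assumed here to be the simple mean.
# The spread across the 5 fractions is what CIHR reported alongside each
# consolidated ranking.
consolidated = mean(fractions)
spread = stdev(fractions)

print(f"fractions:    {[round(f, 3) for f in fractions]}")
print(f"consolidated: {consolidated:.3f}")
print(f"spread (SD):  {spread:.3f}")
```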

The agency also proposed to replace traditional open operating grants with two largely independent funding streams, termed Foundation (F-scheme) and Project (P-scheme). The former is a three-stage, applicant-focussed scheme designed to support a program of research rather than specific projects. The idea was to increase the flexibility of research; it was also expected to reduce application pressure, as awardees would be funded for 5-7 years – and be excluded from applying to the separate P-scheme competition (an investigator can hold only one F-scheme award). P-scheme awards were designed for smaller budgets and terms of 1-5 years. Multiple P-scheme awards can be held by an investigator or by a team.

Pushback: When announced, these changes were met by many researchers with a fair degree of scepticism and concern, expressed at town halls and through letter campaigns to CIHR. Others welcomed the changes. CIHR asked for feedback and formally responded to the collective concerns in the Fall of 2012, but the fundamental principles of the reforms remained unchanged from the initial proposal.

A Reforms Advisory Working Group was set up by CIHR, comprising ten researchers across all four pillars. I was a member, and the group provided input over the next 18 months. As more details about the processes, and virtual review in particular, became apparent, the Working Group wrote to CIHR Science Council in June 2013 to communicate its joint concerns. These included:

  1. Undue reliance on modeling scenarios based on untested assumptions, as well as concern for early and mid-career investigators.
  2. Doubts about the execution of virtual review while abandoning face-to-face reviews.
  3. Concerns over recruitment of a sufficiently qualified College of Reviewers.
  4. The conformist effect of structured applications and reviews – required to increase efficiency of the review process, given that many more reviews are needed than under the prior system.
  5. Poor definition of the 3rd stage of the F-scheme review and of the “Grey Zone”.
  6. The reduced number of grant application opportunities during the transition from old to new systems – including the cancellation of two competitions in order to provide enough funding for the first F-scheme.
  7. The increased risk associated with changing so many aspects of the funding process at once – without any additional funding to smooth the switch.

Following a rapid and rather dismissive reply, the Working Group was soon disbanded and the first F-scheme competition was launched in June of 2014. While hindsight is 20/20, the problems associated with this first F-scheme were almost entirely predicted by the Working Group.

The (first) results are in as are the first casualties: At the outset, CIHR predicted that between 120 and 250 F-scheme awards would be made in the first “pilot” of this scheme, and eligibility was managed such that the competition was only open to those who held CIHR funding but had an existing grant expiring in FY2015, to those who had never held CIHR funding, and to those who were within the first five years of holding funding at the time of registration (defined as Early Career Investigators, ECIs). There were 1375 applications to the initial phase of this F-scheme, known as Stage 1. This stage largely comprised a curriculum vitae (in the form of a bespoke “Common CV”) as the adjudication criteria were evidence of leadership, significance of prior contributions and productivity. On December 10, 2014, the results of Stage 1 were released.

Along with a consolidated ranking (which combined the rankings of the 5 reviewers), the standard deviations of these rankings were reported. Of the 1366 Stage 1 applicants who successfully submitted, 467 (34%) were approved to go on to Stage 2. Of the 559 ECIs who applied to Stage 1, 87 (15%) were invited to Stage 2. Notably, the degree of variance among the rankings appeared very high (average 20%, maximum 45%), as anticipated by many, including the Working Group.

The culling of ECIs was inevitable given the emphasis on qualities that young investigators simply would not have had time to demonstrate, even though reviewers were instructed to take career stage into account. ECIs were directly compared with mid- and later-career scientists. Indeed, mid-career applicants also fared poorly. This may relate to the managed intake, which required existing grantees to have at least one expiring grant and so was biased towards investigators with multiple grants, who tend to be more senior. In other words, this first competition was likely atypically competitive compared to later competitions. A letter from several Stage 1 reviewers emphasized the problems encountered in this phase and made several constructive suggestions.

Stage 2 and structural damage: Of the 467 applicants invited to apply to Stage 2, 455 actually did. The drop-off was likely related to uncertainty associated with the pilot F-scheme process, as this was the decision point for entering the last open operating grant competition to be conducted using face-to-face panels (this transitional OOGP was run at the same time as the F-scheme).

Those who decided to submit to Stage 2 were not eligible to submit to the other open grant competition. Notably, this exclusion has been lifted for applicants to the 2nd F-scheme pilot, who will be allowed to submit to the P-scheme in Spring 2016. CIHR also refined the anticipated number of F-scheme awards to between 150 and 210 at this point. Stage 2 was focused on the scientific program and comprised 5 character-limited (including spaces) sections: Concept; Approach; Expertise; Mentorship; Environment.

Personally, I found this forcing of structure to be restrictive. CIHR maintains this structuring is necessary for efficiency of review, and the reviewers specifically graded each of these sections (with a supremely Canadian scale of – highest to lowest – O++, O+, O, E++, E+, E, G, F, P). These scores were weighted and used to produce a relative ranking, as in Stage 1. This approach is fine if the goal is to minimize the investment required by reviewers and to increase the operational efficiency of CIHR.
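
To make the mechanics concrete, here is a sketch of how per-section grades on that scale could be folded into a single number for ranking. Both the numeric values assigned to the grades and the section weights below are invented for illustration; CIHR’s actual weightings were not published:

```python
# Map the 9-point grade scale (highest to lowest) onto numbers.
# A linear 9..1 assignment is assumed purely for illustration.
SCALE = ["O++", "O+", "O", "E++", "E+", "E", "G", "F", "P"]
GRADE_VALUES = {grade: 9 - i for i, grade in enumerate(SCALE)}

# Hypothetical section weights (the real weights are not public).
SECTION_WEIGHTS = {
    "Concept": 0.25, "Approach": 0.35, "Expertise": 0.20,
    "Mentorship": 0.10, "Environment": 0.10,
}

def weighted_score(grades: dict) -> float:
    """Combine per-section grades into one score used for relative ranking."""
    return sum(SECTION_WEIGHTS[s] * GRADE_VALUES[g] for s, g in grades.items())

example = {"Concept": "O+", "Approach": "E++", "Expertise": "O",
           "Mentorship": "E", "Environment": "G"}
print(f"weighted score: {weighted_score(example):.2f} out of 9.00")
```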

However, such strictures likely come at the expense of scientific creativity. As it stands, the applications for CIHR funding cover the gamut of health research from basic biology to health systems and epidemiology. Requiring each application to be framed in a specific way distorts how science is described, forces a way of thinking and quite possibly leads to conformism that discriminates against those at the margins of thought from whom the most impact often derives.

While fairness and efficiency are clearly important in adjudicating science, these should never interfere with the primary mission of supporting ideas with the potential for the greatest effect. Artificial constraint of how research may be described endangers the very diversity of thinking that is the hallmark of great scientists.

Another casualty is the loss of feedback to applicants. In the new system, CIHR requires only that reviewer comments justify the ranking; they are not intended to provide constructive critique. By contrast, face-to-face panels provided valuable experience and learning opportunities for younger investigators by exposing them to the grant applications and review critiques of others, not to mention the less tangible benefits of socializing with peers and exchanging ideas.

This loss of feedback and interaction is a deleterious change that handicaps the less experienced. There were opportunities for on-line discussion among Stage 2 reviewers (as in Stage 1) in order to highlight discrepancies in scoring, although the degree of interaction was reportedly very limited, consistent with the high variance between the scores reported by individual reviewers.

Stage 2 results were released on July 14th, 2015. Of 445 applications, 189 were invited to Stage 3. The remaining 256 were sent reviews and rankings; among this cohort were many highly accomplished investigators, now facing at least a 12-month hiatus in a large chunk of their funding.

The Stage 3 review was conducted by a single, multidisciplinary panel. The top 75 ranked applicants from Stage 2 were denoted the “Green Zone” and were not discussed further. Each member of the Stage 3 panel was tasked with evaluating about 16 applicants who fell into the “Grey Zone” – that is, the applicants ranked ~76 to 189 – and was provided with the application materials and reviews from Stages 1 and 2. This final stage was designed to identify anomalies in scoring and to provide sober second thought by a different set of minds. Ultimately, 150 applications were approved for funding, but a further hurdle first had to be jumped.


Show me the money! Applications to Stage 2 included a budget request. F-scheme budget calculations were to be based on existing levels of open operating grant funding from CIHR. For applicants with no prior funding from the agency, information on their other grants could be submitted. Requests for funding above these baselines were allowed but had to be justified within a few character-limited lines. CIHR communicated to Stage 3 applicants that 90% of the budget requests were significantly above the expected amounts and that, if approved as requested, the available funds would cover only ~54 grants. Applicants were supplied with CIHR’s own base budget calculation and asked to justify any increase over this.

The announcement of the results of the F-scheme was timed to coincide with those of the tOOGP on July 15, although a technical glitch inadvertently released the results of the latter competition two days early (the profound impact of this on the nerves of the applicants should not be underestimated!).

Any funding agency that has a set budget, supports multi-year funding and runs annual competitions only has a fraction of its budget available for new competitions. Given the cancellation of a previous OOGP competition, as well as the one normally timed for the Fall of 2015, the funds available for the two competitions announced on July 15th were ~$600 million over the following 7 years (the longest tenure of an F-scheme award). Existing, non-expired CIHR funding of F-scheme awardees was rolled into their future F-scheme funding – in essence extending these grants to the 5 or 7 year term. Likewise, unfunded F-scheme applicants who had CIHR grants that were not due to expire in FY2015 kept those awards.
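
To see why only a fraction of the budget is free each year, consider a toy steady-state calculation (the numbers are purely illustrative, not CIHR’s actual figures):

```python
# Toy model: a flat annual budget where every award pays out for 5 years.
# Illustrative numbers only -- not CIHR's actual budget.
ANNUAL_BUDGET = 1000.0  # $M per year
TERM = 5                # years each grant is paid

# At steady state, this year's payments come from awards made this year and
# in the previous TERM - 1 years. If the agency commits the same amount N to
# new awards every year, then TERM * N = ANNUAL_BUDGET.
new_money = ANNUAL_BUDGET / TERM
print(f"free for new competitions: ${new_money:.0f}M per year "
      f"({new_money / ANNUAL_BUDGET:.0%} of the budget)")
# Cancelling a competition frees its would-be commitments for later years,
# which is how funds were pooled for the first F-scheme.
```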

Going forward, CIHR has said it expects to support ~114 new F-scheme grants per year at steady state. The number of expected P-scheme grants is unclear but, given that these average 25% of the value of an F-scheme grant, will likely be 200-250 per competition, with two competitions per year. Notably, the number of open operating grants awarded in 2015 was 533, versus 800 in previous years. In future years, which will have 2 P-scheme as well as an F-scheme competition, this is likely to rise to ~600. The overall success rate of the F-scheme competition was 10.9% and that of the most recent OOGP was 14.8%.

Lessons and challenges: The impact of the CIHR reforms on researchers has been substantial. Setting aside for a moment the significant concerns about the effectiveness of the virtual review ranking process and the utility of the new scheme design, the simple fact is that nearly 300 accomplished Canadian researchers (those who submitted to Stage 2 but were not funded) cannot expect to receive funding for 12 months, at minimum. This huge opportunity cost is due to the need for CIHR to cancel two competitions during the transitional process.

For the next F-scheme competition, applicants will be eligible to apply for the Spring P-scheme which removes an important opportunity exclusion. It will be interesting to see how many applicants there are to this competition, given the previous experience. Notably, the review criteria have not substantially changed and this must be a disincentive to early career scientists.

There is also concern that the F-scheme will further the Matthew effect (rich get richer) and barriers to entry will increase over time. This is because demonstrable track record is needed and those with multiple grants benefit from higher eligible base budgets. For younger investigators, the fear is that more limited P-scheme opportunities (and intrinsically smaller budgets) will make it ever harder to accumulate funding to make F-scheme funding worthwhile, since the downside of this scheme is lock-in at the approved funding for 5-7 years.

CIHR should consider separate envelopes or evaluation criteria that target career stage. Indeed, mid-career investigators also appear to not have done well in the first competition. Are the review criteria weighted too much to seniority?

Change is always difficult and CIHR will no doubt improve processes over time but there remain deep concerns that the approach taken to these reforms has fundamental flaws. In my view these are:

  1. Multiple changes were made at once. This is not only terrible science, it precludes any meaningful assessment of the relative performance of the new system compared with the old. We will not know the outcome on quality of science for many years, by which time there will be no going back. Various aspects of the changes were piloted but these were “live”, with real scientific livelihoods at stake. As an experiment, it’s doubtful the reforms would ever have passed scientific or ethical review!
  2. Changes were initiated without any additional funding for transition. This resulted in lost opportunities. Does CIHR intend to support fewer researchers? If so, this was not communicated. Could other new initiatives have been put on hold during the transition to free up bridge funds?
  3. The biggest impact of the reforms is on the confidence of early to mid-career investigators, many of whom must be seriously evaluating whether they wish to continue their careers in Canada under these circumstances. Do they have a place in the F-scheme? Can they realise their potential with only P-scheme awards? The grass is always greener elsewhere but the funding restraints facing CIHR are similar in many other jurisdictions. However, we should all be worried when many of our most talented people, in whom we have already invested, are second-guessing their future.
  4. While CIHR has made much instructional information available, critically including the reviewing parameters, the research community is dependent on CIHR’s release of data to evaluate the processes. Release of de-identified information for third party evaluation is essential for researchers to assess the programs and to identify errors and poor behaviours. As many scientists know, there have been heated exchanges at town halls with senior CIHR staff that have eroded confidence in the information being shared. From these meetings, it is clear that Canadian researchers have not given up on face-to-face reviewing, which the NIH, for example, still regards as the gold standard. Other funding agencies are presumably following the developments at CIHR with interest as they face similar challenges. We have to hope they will benefit from the painful lessons learned here before they follow in our footsteps.


46 Responses to Reform(atting) the Canadian Institutes of Health Research – a living autopsy

  1. Frank Beier says:

    Just got around to reading this now – a very well-written essay, Jim! While we all know where you stand (and most of us have very similar views), I think this is quite objective and provides a fair overview of the history of all of this. Let’s all try to keep the constructive criticism up – CIHR has listened to some of it and implemented a few changes (e.g. lifting the restrictions on the number of applications in a year, allowing simultaneous application to F and P at Stage 2 in the next round, etc.). Let’s hope they will continue to listen and implement more changes.

    • Ray Truant says:

      I’m not certain that CIHR listened to criticisms before the new scheme was implemented. Part of the point of the funding system reform was to lower the administrative and reviewer burden, which arguably was not sustainable, but exactly who is now going to review all these multiple Project grants in the upcoming competitions? I’ve seen the old process as a panel member (CP and CBMD) and the new process as a Foundation reviewer, and while any application now sees 5+ sets of eyes, the quality of reviews has plummeted without the responsibility of having to justify a ranking in person to colleagues. Peer review panels are a lot of work and stressful.
      Also, the interaction, education and inter-disciplinary collaboration that resulted from peer review panels are now completely gone. I sit on NIH study sections now because I will learn more science and have more valuable scientific interactions in two days than in any week at a conference. The value of scientific peer review panels in the development of a scientist (because we never stop developing) was never considered.
      As a mid-career scientist in Canada, after the first results have come back I can only conclude the Foundation program is best for senior scientists late in careers or new scientists.
      Thanks for your efforts on this Jim.

      • jimwoodgett says:

        The concerns about a distinct loss of mentorship and eye-to-eye validation/peer pressure in the shift from a face-to-face to a virtual system were repeatedly made to CIHR and were largely ignored. There is a distinct distancing of the research community from CIHR due to these changes, compounded by the contraction of the Institute Advisory Boards. I’d estimate that around 800-1000 researchers who’d normally have at least biannual direct interactions in person with CIHR have been lost in this new regime. This cannot be good for the agency as it increasingly acts in isolation from its research community. There is also a clear reduction in the quality of the review process. This is not to say that there are not good reviewers, but that it is easier to get away with a minimal job. We are all busy and have competing demands, but how can you actually tell whether a reviewer has spent 2 days or 30 minutes on a review? This, I think, will devalue CIHR in the eyes of not only the Canadian research community but also peer funding agencies here and around the world, as the quality of its review process is the most important facet of a funding agency’s existence.

  2. Eric Arts says:

    Any comments I make will sound like sour grapes considering I was ranked 69th, should have been in the “Green Zone” by all measures from CIHR, and was not funded. I came back to Canada and don’t want to be perceived as an opportunist. The “transparency” in Stage 3 is non-existent and I have a general sense of malaise going forward, with conspiracy theories swirling around my head.

    All that being said, there are so many outstanding scientists who got knocked out at all three stages, and many feel a sense of injustice and helplessness. The restrictions on applying for CIHR funding through the F process have killed a generation of our best and most talented “young” senior researchers. Everyone is afraid to discuss ageism, but are we considering the trajectory of careers, rather than funding the end of careers, when comparing CVs (which is essentially the review process here)?

  3. Joel Katz says:

    My situation isn’t as bad as that of Eric Arts. I was ranked 148, so it is perhaps not surprising that I was not among the 150 applications that were funded. What disturbs me is the total absence of transparency in Stage 3. I was not provided with any new reviews from Stage 3, no new ranking was given, and there weren’t any SO notes. Just the following statement:

    “Considering the rating and ranking of your application relative to the other applications reviewed, the Final Assessment Stage committee agreed that your application was not competitive in its current form and therefore did not proceed with an extended discussion of the application. As a result, no Scientific Officer notes were generated; however, the reviewers’ reports from the Stage 2 are available on ResearchNet. Please refer to the Notice of Decision for more information regarding the ranking of your application.”

    When I referred to the Notice of Decision, it said I was ranked 148 out of the 189 going into Stage 3 and was not among the 150 that were funded. The Stage 2 reviews help me to understand why I made it into Stage 3, but not why my application wasn’t viewed as competitive by the Stage 3 reviewers.

    The total lack of transparency in the Stage 3 adjudication process was mirrored in the survey CIHR circulated on July 20, requesting feedback from Stage 2 applicants, 189 of whom made it into Stage 3; there was not one question about Stage 3.

    Maybe this is sour grapes but it’s also intended to help me combat the sense of injustice and helplessness that Eric Arts describes.

  4. jimwoodgett says:

    We know about the 150 who were approved. We will not know about the ~1,200 who were not, unless they speak up. Yes, it might sound like sour grapes but constructive criticism is always valuable. Moreover, this and the next F-scheme competitions are pilots. The more information that is out there about the processes and results, the more informed everyone will be. Monday is the deadline for the 2nd F-scheme pilot registration and yet there are so many outstanding questions. I am hoping participants in the Stage 2 and Stage 3 review processes will speak up (as did the selection of Stage 1 reviewers linked to in the essay) to provide non-confidential insights into their experience.

    • In the spirit of this thread: I applied as an early career researcher and didn’t make it through the first stage. I’m not sure that I should have, but it’s very hard to know. Most comments from the reviewers were positive and perfectly reasonable. In a panel situation, I would be happy to trust that I was not competitive relative to others at my stage of career. However, despite similar comments, the reviewers’ rankings were highly divergent, meaning that they were all following different rules in translating comments to “score” to rank.

  5. One of my concerns is that this new college of reviewers system has no mechanism to ensure balance across fields the way that panels did. Panels were all guaranteed nearly the same funding rate. This new system could quickly result in a regression to a much narrower group of fields, especially if a small group of stage 3 reviewers change the ranks. There is an inevitable bias to promote things you understand the significance of.

    An interesting exercise would be to take the list of 150 successful F-scheme grants and match them to the most likely panel. There would be some big winners and losers. VVP, for instance, got a single award. As someone new, I’m not sufficiently familiar with the panels to do this.

    • jimwoodgett says:

      One of the rationales used by CIHR to justify the ranking scheme was that panels became inward-looking and that many applications did not fit into their mandates (it is true that a lot of grants got passed between panels looking for a “home”). Those that were not directly within the comfort zone of a panel tended not to do as well. Of course, the mandates could have been broadened and many other aspects of face-to-face review could have been adapted to improve what we had. CIHR might also argue that their ranking system was designed to better reflect applicant subject diversity, since reviewers in Stages 1/2 would be selected based on keyword matching (although it’s not clear to me that this occurred or was highly developed). Hence, if 7% of applications related to mechanisms of diabetes, they’d be matched to the same proportion of reviewers. The problem is that the College of Reviewers is nascent and a lot of reviewers declared an inability to review some of their assigned applications. I agree that Stage 3 could be problematic as it cannot possibly be representative and might introduce familiarity bias. Indeed, I thought its role was only to apply some level of insight into scoring discrepancies and to relate these to the reviews to identify poor or fatal-flaw outlier reviews. Your proposed analysis of mapping to panels is a good idea. If only CIHR released the metadata they require applicants to include (e.g. declared areas of research and categories).

      • That sounds like a very reasonable critique of the panel system. As you suggest, the key will be to identify the inherent weaknesses of the college system and find ways to mitigate them. The loss of face to face discussion is very concerning. At the least, there must be some way to ensure reviewers actually participate in discussion and justification of their rankings in the online forum.

        This doesn’t ensure that a broad range of fields is funded, however. Maybe with a more populated College of Reviewers this will be less of a problem, but I can also imagine scenarios where it gets worse. Fields that lose out will need to be vocal, and fields that win perhaps gracious.

  6. Martin Beaulieu says:

    Hi Jim, this is a great overview of the issue.

    (As a disclosure: I am still funded and I was not eligible to apply to the F-scheme.
    I also sat on several peer review committees and participated in the review of Stage 1.
    Yet, I find the results and the implementation of the reform process very worrisome.)

    If I may add, another part of the problem is the uncertainty, should I say improvisation, around the implementation of the P-scheme. For example, we still do not know what grant applications will look like or how they will be reviewed.

    Actually, there seems to be a disconnect between the discourse that was used to sell the new system and the reality of the new system.
    As it stands, this is quite worrisome, especially for early-career independent scientists (you know, these youngsters who are pushing 40!). Actually, the new system will stunt growth at the early career stage.

    Here are two scenarios:
    1- A new investigator may get a five-year grant that is equivalent in budget to a large CIHR grant in the current system. This investigator then cannot apply for anything else and may face a funding gap at the critical time of getting tenure.

    2- A new investigator will be stuck in a hyper-competitive application system (annual success rate of 13-14% for a single competition). This will be for grants that may be shorter in duration and possibly carry smaller annual budgets, since 2/3 of operating grant funds will go to the F-scheme.

    This system may never allow this hypothetical investigator to reach the Foundation scheme. In comparison, the NIH now has special opportunities for young investigators and three rounds of R01 competitions per year.

    One has to wonder what such a system will do to the attractiveness of Canada for new research talent. This is not good. Canada has become more attractive over the past years; we may lose that very fast with the new system. In fact, this is very counterproductive when considering the major investment that was made in the Vanier fellowships to encourage talented young scientists during training.

    The same stands for more established investigators, as they will have to gamble between the F-scheme and staying in the fray of an under-funded normal project competition. In any case, that means having to fight harder and write more grants to get less money.

    This is the opposite of the arguments that were used to sell this new system.
    What we are getting is more grant writing, more peer review work and less money for less research.

    And these are just a few of several issues. …

    • jimwoodgett says:

      Yes, there are increasing dilemmas for applicants and how this pans out over the next couple of years is anyone’s guess (including, based on their projected success rates to date, CIHR’s). The one offset is that the barrier to applying for both F- and P-schemes simultaneously (for those eligible for the 2nd F-scheme pilot) has been removed. But, as you point out, that will only add to the reviewer burden and application pressure. And the issue of aspiration to the F-scheme is very real. What is the path if P-scheme funding is increasingly difficult to obtain?

      Re: pilot project scheme, there is some information here:
      http://www.cihr-irsc.gc.ca/e/49051.html

      There are missing details such as, er, the actual application forms, but I guess we are 6 months away from the deadline. There are worrying elements of this program, even at this stage. It basically comprises Stage 2 and Stage 3 of the F-scheme. The P-scheme appears to have rolled in several prior programs, as it includes commercialization and KT streams that previously had separate internal envelopes. Perhaps P stands for Pandora? As others have pointed out, this year’s F-scheme awarded $410 million, coupled with $251 million for the tOOGP. The $410 million includes previously awarded funding that is rolled in – but this wasn’t broken out. Next year, there is $500 million to fund the second pilot F-scheme plus 2 P-schemes. I’d estimate the existing roll-in funds for F-scheme awardees will be less than this year’s. CIHR estimates 300-500 awards. Noting that the number of awards this year (389 tOOGP and 150 F-scheme) was at the very low end of CIHR estimates, and that the agency has previously projected around 114 F-scheme awards at steady state, there will be 180-380 P-schemes (90 to 190 per competition). This should worry everyone. There will be around 800 grants ending in FY2016 (representing 550-600 PIs), not to mention applications from many more who don’t have a grant ending or who were not funded in this year’s competitions.

      • jimwoodgett says:

        Sorry, I made a calculation error. CIHR has stated there will be 300-500 P-scheme awards per year, so I shouldn’t have subtracted ~120 F-schemes. Still, given past history, it looks more likely that the lower end will come to pass – i.e. 150 grants per competition. That’s a big drop from 400 per competition for the past decade or so, and these will include KT and POP-like grants.


  7. John Bergeron says:

    I was a round 2 reviewer. Here is part of what I wrote to my host university CIHR rep to transmit to CIHR.
    John Bergeron

    “I had 13 in my cohort, for each of which I wrote detailed critiques for each of the 5 parts, each split into separate sections for strengths and weaknesses. In addition, I wrote detailed critiques on the additional budget section (the most important), in which several (most?) colleagues did not put in even one word except to indicate the amount to be cut.
    CIHR has indicated it will go by the rankings and not the scores.
    However, not only did my 13 grants come from 3 different chairs and therefore 3 different virtual grant panels, the number of reviews for each grant also differed. One had only 3 reviews and one only 4; the rest did have 5.
    The rankings from the different reviewers were wildly different, with an enormous variation that I have not calculated, but it may be worse than for round one for these folks.
    Here is the problem, however: different reviewers had different numbers of grants to review.
    I was the only one with 13; some reviewers had 7 in their cohort to review, some had 8, some had 9, etc.
    Of course there are always normalization factors but this is getting really complicated.
    All of this would have been attenuated by face-to-face peer review. It would not have eliminated the worst of our reviewer colleagues (I would fire them) but it would have led to a thrashing out of the differences if the Chairs wished.”

  8. mossyfiber says:

    Reblogged this on thefutureofcanadianscience and commented:
    Very thoughtful overview of CIHR reforms from Jim Woodgett.

  9. Eric Arts says:

    Back again with comments. I want to comment on the remarkable screw-ups in regards to the release of information, and the lack thereof. Does everyone know that CIHR accidentally released the scoring from Stage 2 to the university research offices prior to Stage 3 and then asked them not to report the results to the universities’ applicants? Similar to releasing the tOOGP results two days early. I have yet to receive an email from CIHR with my Stage 3 results. I only found out by searching extensively on ResearchNet on the 15th. My opening letter stated “Your recent application to 2014 Foundation Scheme live pilot competition, entitled: “xx”, “ has been considered by the Canadian Institutes of Health Research (CIHR). Unfortunately, your application was not approved for funding.” and that was about it. Funny how my project title was supposedly “xx” and not the actual title “HIV transmission and pathogenesis”. I received nothing like Joel Katz got in his letter. No explanation, no description of the Stage 3 process, and no information that my application was even discussed in Stage 3. They did invite me back to the second pilot. Also, most of you received a letter and instructions to revise your budget after Stage 2. I have never received support from CIHR and my letter indicated that no budget modifications were necessary. I thought this was good news but now I have a strong impression that I was kicked out of the funding range before Stage 3 even happened.

    Here is my current conspiracy theory and, without any explanation from CIHR, I am going with it. My budget (admittedly high) was a problem for CIHR because, unlike most applicants with ongoing CIHR funding, I have no CIHR funds to provide back to CIHR to support the Foundation funds. I clearly asked senior officials of the CIHR Foundation competition after Stages 1 and 2 whether budget requests (after revisions) would impact the funding order. On two occasions they said no. In fact, I will find the video from our university where an official is quoted as saying that rank order out of Stage 2, and specifically the Green Zone, would not be affected by budget. The rank would be the rank and if they had to fund less, so be it. In addition, the official stated that many asked for ridiculous amounts so budget revisions would be requested, but this again would not change the consolidated rank order. So what happened to my 69th rank? With such an opaque Stage 3 process and without a grievance process to insist on release of Stage 3 information, I again ask: what is the point of me resubmitting for the second pilot? By the way, I could use my previous funding from NIH (for which I calculated a floating average over the last five years) to justify my budget request. I of course did not include the indirects, only included grants where I was PI, subtracted all the subcontracts, and subtracted the portion of my grant that paid my salary. Finally, I reduced this number by ~15%.

    So, a final question: what happens next year? I will have survived on a reasonable NIH grant for one more year. I could not apply for CIHR funds, and it will have been two years since I came back to Canada before I even have a chance of funding. That’s my problem and I knew what I was getting into. I thought I would have a competitive Foundation application and I did.

    I am relaying my sob story for one reason! THIS COULD HAPPEN TO ANY OF YOU SO FIGHT NOW TO OPEN UP THE PROCESS AND MAKE THE NECESSARY CHANGES!!!!

    • jimwoodgett says:

      CIHR has repeatedly said that the transition phase competitions are regarded as pilots, that they are subject to refinement and that rules, etc. may change. Yet critical information is missing for applicants to assess whether to apply. It’s fairly obvious that there is a significant loss of confidence in the processes at CIHR as well as in its commitment to excellence. The recent competition results will be announced (to great fanfare no doubt) next Tuesday in Calgary. There will be handshakes and smiles. I hope as many people as possible take the opportunity to ask the right questions.

  10. Hi Jim,
    We’ve touched base a bit on Twitter, but this blog post allows me a little more room to give my perspective as a New Investigator (Early Career) applicant to the Foundation scheme pilot. Hopefully my unique perspective will be helpful to others. I was one of the few ECIs who were funded in the Foundation competition (yay)! Unfortunately, I received 15% of the budget I applied for (boo)!

    I’m a pediatric gastroenterology health services researcher with an interest in IBD and the methods used to conduct research using health administrative data. I was in year 5 of my faculty position, with a small CIHR open operating grant expiring in September 2015. So, I had the unique opportunity to apply as an ECI. In addition to my CIHR funding (small open operating grant for methods-based research), I had funding in the past five years from foundations, the American College of Gastroenterology, internal competitions, and am PI (not principal applicant) on a large national cohort study (in charge of health services and quality improvement).

    I knew about the ‘baseline funding calculation’ thing early in the competition, but CIHR staff reassured ECIs that they would be allowed to apply for amounts higher than the baseline funding calculation. My first 3-4 years as faculty were spent doing Ontario-based IBD and pediatrics research using health administrative data. However, in the past 1-2 years, a bunch of researchers have joined forces to conduct national distributed-network research using provincial administrative data. This network has received peer-reviewed funding from foundations, the Ontario government, and other sources. I am leading this group and am PI on all these grants. Therefore, my research is expanding beyond Ontario-only work to national work – a critical progression of my career.

    I was fortunate enough to get past both stage 1 and 2 of the Foundation competition (which was heavily stacked against ECIs). There was never any comment about my budget request, and reviewers consistently said that the application represented the next logical step in my career. I was consistently reassured by CIHR staff that I should ask for as much funding as required to run my proposed research program for the next 5 years.

    Then I got the Stage 3 acceptance letter asking me to reduce my ‘ask’, and stating my baseline funding calculation (which was about 10% of my ‘ask’). I started to get worried. I re-wrote my budget justification carefully, emphasizing my leadership role in the CIHR directed grant for the national network, as well as all other sources of funding. I even reduced the budget of my grant.

    In the end, I was fortunate to receive funding, but I received 15% of my revised budget. I proposed 5-6 streams of research to fund our national work over the next 5 years, including linkage of clinical data from the network grant to administrative data. The reviewers were supportive. They did not criticize the budget. The amount provided by CIHR administrators will allow me to run 1 project, and not more.

    Obviously, I’m extremely grateful to be funded in this environment. However, I’m concerned about the following:

    1. The deck is clearly stacked against ECIs and, even more so, mid-career investigators. There were no reserved spots at each stage for each career stage, leading to a culling of ECIs (and likely mid-career scientists as well) in Stages 1 and 2.

    2. The variation in reviews was huge. For one section of my grant, I received grades ranging from O++ to F! This indicates inadequate guidance by CIHR of both applicants and reviewers.

    3. If CIHR asked for a budget reflective of what my research program would cost for 5 years, and reviewers granted funding based on my proposal, who exactly is qualified to cut my budget and plans by 85%? CIHR administrators? This is very concerning for the future of science in Canada.

    4. Although CIHR dropped my budget by 85%, I’m now stuck with the Foundation grant for 5 years, and locked out of the Project Scheme. I hope they change this.

    5. Assuming that I will be applying for renewal in 5 years, will this ridiculous ‘baseline funding calculation’ continue? How will my research career survive if they only allow the same amount or slightly more? The ‘baseline funding calculation’ implies that CIHR wants research in Canada to remain stagnant. Expansion of exciting research programs, and new collaborations are essentially vetoed.

    Again, I stress that I’m very grateful for CIHR funding in this environment. I obviously won’t turn it down. However, I am hoping that the implementation will improve for future competitions, and perhaps CIHR will reconsider some of their rules in the Foundation Scheme.

    • jimwoodgett says:

      Every ECI should read your comment Eric.

    • jimwoodgett says:

      I also think that your original and even revised budget requests were likely a surprise for CIHR. This is probably due to the lack of clarity in the budget instructions – especially for ECIs. In return for giving up the ability to apply for any other funding from CIHR* for 5 years, your new award is around $90,000 per year. From your comment, it seems your base budget calculation (existing open funding) was ~$60,000/year. You applied for an ambitious program with 5 strands of research. The message here is that the F-scheme is not a vehicle to significantly ramp up funding based on the applicant’s ambition. That’s a severe message to send to ECIs. In addition, the scientific reviewers were not instructed to score based on budget. Hence, they evaluated the scientific feasibility of your proposal. The budget appears to have been assessed by a mix of scientists and KPMG accountants under the shadow of major budgetary constraints. In the second F-scheme pilot, those invited to Stage 2 will be provided with their calculated base budget. That may give many pause for thought. Why would any ECI restrict their scientific program for 5 years to essentially what they had previously?

      *The 5-7 year lock will hit some researchers far more than others. For some, there are significant other granting sources (e.g. cancer, CV or diabetes research). For many others, CIHR is the only source of funding; this is also true for some basic scientists working in areas relating to cancer, diabetes, etc. as the charities become more translational/patient-oriented, and for people working in provinces where the charitable sector is limited.

  11. Pingback: Reblog of Reform(atting) CIHR | SPATIALDETERMINANTS

  12. Jean-Pierre Julien says:

    Dear Jim, this is an excellent essay. Did you know about the existence of a stage 4 in the Foundation competition? On 15 July I was informed through ResearchNet that my Foundation grant had been approved and that my application was not discussed at Stage 3 given that it was highly ranked. This was great news! However, the process was not over, as there is a serious problem with CIHR in communicating the results. I have not yet received (12 days later) my final decision letter from CIHR with the amount and rank. It is also very annoying that my name is not yet on the list of awardees on the CIHR web site. It seems that I am not alone in this situation since only 139 out of 150 are listed. Somehow I am stalled in a kind of “pending decision process” (a kind of stage 4) due to the involvement of controlled drugs and substances (even though my research does not involve such drugs). I contacted my university (Laval University) and I received confirmation of acceptance of my grant as well as the amount. Nevertheless, after 13 months of a stressful review process, I would like to obtain my official decision letter with score and amount and to see my name on the list of awardees. I have sent several email messages and made phone calls to various persons at CIHR but have had no reply so far. While all the CIHR folks seem to be on vacation, I keep receiving emails about CIHR surveys to fill out! This is ridiculous.

    • jimwoodgett says:

      Two of ours are also pending. This isn’t a Stage 4 in a true sense since these are post-facto approvals. They are usually regulatory, relating to controlled drugs, human stem cells, etc. Those approvals have to wait for standing committees (e.g. the Stem Cell Oversight Committee) and are the same for any grant. Frustrating, but this is not a yes/no decision; rather, if concerns are raised, your institution must address them. I do know of a couple of people who had their award pending due to non-completion of end-of-grant reports (promptly completed soon after!).

  13. Thanks for this summary, Jim. Like others above, I was in Stage 3 but not funded (ECI, rank in Stage 2 was 171.) Disappointing news for me and my lab this year, but it’s life in research. Nobody is entitled to a grant, though the overall lack of options for Canadian researchers is alarming and I think a merit-based payline would probably be somewhere around 25-30%, not 11-14%. Unfortunately, that requires more funds. (As you noted, this is not only a Canadian thing. I am co-I on a recently-funded NIH R21 for which the payline was 7th percentile. We just barely squeaked through at 6th percentile, which is mind-boggling.)

    Absent a significant influx of funding, my main suggestions for improvement in the short term, which I have also shared in my survey response to CIHR, are to (1) increase transparency and (2) improve reviews.

    Rejection is part of our job as academics and I can deal with that. However, the lower the payline, the more important it is to be transparent about the process and provide the most useful feedback possible. For any applicant, but especially for junior people, constructive criticism is essential for improving our science and our presentation of it. I had been in my first faculty position for less than 2 years when the 2014-2015 F-Scheme pilot started. This competition provided my first and only feedback from CIHR reviewers, who seem to be a little different from other reviewers I’ve encountered. I did get some useful feedback, which I appreciated (I’ve already implemented two pieces of career advice from two of my reviewers) and most reviewer comments I received in both stages were, in my opinion, fair assessments. However, I also got some odd statements in both stages and some comments were so short and lacking in specifics that they are difficult to interpret.

    Like Joel Katz, I would especially value some transparency re: Stage 3. If my application was discussed at the panel, it would be great to have a summary statement, and if it wasn’t, it would be useful to know if one or two people put me in their ‘yes’ pile or if it was a total zero and I should rethink my approach if I apply again. I’ve emailed to ask if CIHR will be providing any Stage 3 feedback, but I am not holding my breath.

    I agree that the budget evaluations were difficult to understand. My baseline was calculated as about half of my current operating funds. I don’t know how they arrived at that number. More transparency around that process would be useful, especially for those of us who hold US funds. It would be helpful to see CIHR’s calculations. I subtracted indirects and followed what I thought were the guidelines (e.g., I excluded funding from sources like CFI, even though F-scheme allows for equipment in the budget) but I still didn’t arrive at the number they got. I would like to see more explicit budget feedback earlier in the process, e.g., once applications are accepted to Stage 2.

  14. jimwoodgett says:

    It is essential, going forward, that people know how decisions are made. It serves no one to enter a competition in which they have no reasonable chance due to considerations they don’t understand (I’m thinking of 98% of ECIs – I cannot believe this cohort of people, recruited under extremely intense pressure, is any less competitive than any other!). Equally, it serves no one that an applicant can be effectively removed from competition by a reviewer who has interpreted the process differently. Everyone is learning, and it is obvious that the first competitions will have errors at various levels, but it is critical that both reviewers and applicants are able to learn from and correct mistakes, improve, etc. I particularly worry that the F-scheme and P-scheme do not allow for rebuttal or proper feedback. These are constructive and help everyone improve. Instead, each application will be an island.

  15. John Bergeron says:

    With respect to the young investigators, my own experience as a round 2 reviewer suggests that genuine discoveries by our young talent go unrecognized by some reviewers. This is not surprising since it is the younger colleagues who use new methods and new thinking to generate and test a new idea. This happened to one such PI whose round 2 grant application was among my 13 grants to be reviewed. The young PI had made a genuine discovery as a postdoc with a first-author paper and followed it up in several independent publications as PI. The other reviewers knew nothing of the work or its importance. This is fine, and usually a face-to-face meeting at the panel would resolve this. However, without face-to-face meetings, the only mechanism was the online discussion. I cut and pasted a paragraph from each of two published review articles that acknowledged the importance of this discovery. Responses from the other reviewers were minimal, except for a comment from one expressing some annoyance at my “lecturing” them about this discovery. In the end, no rankings or ratings changed as I recall, with the final comment from a detractor who questioned the number of publications from this young PI. No matter that the publications from the independent work done by the PI were in outstanding journals and that the issue of productivity was, I thought, already addressed by the round 1 evaluation; the PI was not served well by this exercise. A further complication is that my 13 grants to review had 3 different chairs, which I assume meant 3 different panels. Ten of my 13 grants were in the outstanding category (as expected, since these are our elite) but this is not considered by a reliance only on rankings, which inevitably must sink several of the PIs whose outstanding grants were assigned to me.

  16. Kelly McNagny says:

    Not to sound like a spoiled kid, but I wish we would also pay some attention to the folks in the transitional Open Operating Grant Program (tOOGP). They, too, were BADLY beaten up by the cancellation of the fall open competition.

    One could argue that these investigators were FAR MORE vulnerable to a cancelled competition than the majority of Foundation applicants:

    First, people with multiple grants would be more likely to be in the “eligible zone” for the Foundation scheme than those with one or two grants. While I’m extremely sympathetic to those leading investigators who failed to obtain a Foundation grant, they likely have more wherewithal to weather the storm. For those in the Open Grant competition, loss of their one grant (or one of two grants) for a year is likely to mean the end of a laboratory, or a severe loss of skilled personnel that will take years to recover from.

    Devastating is the term I would use.

    • jimwoodgett says:

      Hi Kelly, although I focused on the F-scheme, I too recognize the major impact on those who had no option but to trundle up to the depleted tOOGP pool. The reforms have affected a large number of health researchers, and we haven’t a clue how the P-scheme will pan out. We can be fairly sure of low success rates, increased application pressure and across-the-board cuts: unsuccessful F-scheme applicants can apply, multiple P-scheme applications are allowed, and a raft of PIs has been left without funding thanks to the low tOOGP success rate and the dearth of competitions – especially since 103 of the funded tOOGP applicants (touted among the 500 new grants) hold only 1-year bridge grants. Also, quite a few people who applied to the 1st F-scheme have only one grant and are in the same predicament as their counterparts who were not funded in the tOOGP.

      There have been congratulatory tweets from most universities (retweeted by CIHR) about these competitions, yet I wonder how many scientists in these universities are left without the support needed to maintain their labs. I’ve heard of NIH program officers scrambling to find base support for Canadians who are key members of consortia and team grants. These investigators are also less willing to speak up, as they feel they’d be written off as whiners or bad losers. I would hope those lucky enough to be funded in these rounds will speak up for their colleagues.

      Yup, devastating is an apropos word for the situation.

  17. I received a response from CIHR regarding how the budget was calculated. I’ve removed the CIHR representative’s name to protect privacy, but here is what they said:

    ————
    All 150 Foundation grantees were, as a first step, allocated their CIHR-established baseline budget amount (this includes a 2% annual inflationary increase). The remaining funds available for this competition were then applied to fund the requested and soundly justified increases to the fullest extent possible while remaining within the limited budget parameters. For this competition, applicants with justifiable increases received approximately 30% of this increase. You did receive the 30% increase, applied to the difference between the committee-recommended amount and your CIHR-calculated baseline.

    The intent of the Foundation Scheme is not to fund an entire program of research but rather to ensure that successful applicants have a stable source of CIHR funding supporting their program, likely as part of a cohort of grants from different institutions/agencies. Applicants receiving a Foundation grant are still expected to seek funding from other sources as they have always done in the past. CIHR expects only that the funds awarded are used appropriately in the context of the program of research.
    ————

    Frustrating, but at least it sheds some light on how the budgetary amounts were derived.
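
    If I’m reading that correctly, the arithmetic appears to be (my own reading of CIHR’s wording, with purely hypothetical numbers for illustration):

    award ≈ CIHR-calculated baseline + 0.30 × (committee-recommended amount − baseline)

    For example, a hypothetical applicant with a $200,000/yr baseline whose committee recommended $300,000/yr would receive roughly $200,000 + 0.30 × $100,000 = $230,000/yr – much closer to the baseline than to the committee’s recommendation.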

  18. Petronela Ancuta says:

    I am happy this blog exists. I agree: the lack of face-to-face discussion in the committees favors the existence of “bad reviewers” who do not invest the time and effort required to perform accurate reviews and who score either high or low with no appropriate analysis/justification. The online discussions should be replaced by conference calls so that serious professional discussion of applications can take place. Since when do we no longer need to argue? Only by discussing applications directly, together, can a committee complete an appropriate review. CIHR should require solid justification for rankings and should reject meaningless notes in the reports. Despite all these limits, I can say that for the applications ranked at the top there was generally consensus in the committee I belonged to. There was, however, a tendency to readily rank outstanding senior or young researchers at the top, while I felt it was more difficult to place mid-career scientists at the top because they were compared to senior colleagues with long track records.

    • jimwoodgett says:

      The mid-career investigators, it may be argued, fared as poorly in the initial competition as the early career investigators. As you allude to, this is likely because of direct comparisons with more senior investigators. I note that the Common CV has some new changes for the F-scheme (as of last week) whereby only the past 7 years’ worth of trainees and papers can be listed (this is undocumented, but you’ll get errors if other dates are included). However, this change also removes a lot of information, given that most trainees within that time frame are unlikely to have fulfilled their career aspirations yet.

  19. Z Jia says:

    CIHR stipulates that a “co-applicant” on a CIHR grant ending in Sept. 2016 is not eligible for the F-scheme and that only a “co-principal applicant” is eligible. However, CIHR operating grants did not have a “co-principal applicant” option. I am very confused. If something did not exist, how could it be used as a criterion? Does anybody have insights?

    • jimwoodgett says:

      I think there was the possibility of a co-Principal Applicant (as opposed to a co-applicant) – at least in some types of competition. You can justifiably ask what the meaningful difference between the categories was, as there was always a nominated PI who held full administrative responsibility (and divided up the funding among the others as they saw fit).

      But the real issue here is that CIHR has been limiting application pressure on the F-scheme through the eligibility process. They wished to restrict who could apply to the first two “pilot” competitions, and any mechanism for doing that had to be arbitrary at some level.

  20. Kelly McNagny says:

    Really appreciate this blog and the analyses people are providing… it’s encouraging to see the passion. If there are any concerted lobbying efforts underway, I would be happy to participate.

    Foundation Reviews:
    I agree that the virtual committees are quite flawed: I found tremendous variability in the Foundation committees depending on the Chair. Some demanded discussion and singled out reviewers to provide additional justification for scores, while others were completely silent. I also found that in the absence of face-to-face reviews, the reviewers were far less thorough. The pressure of having to face one’s colleagues is a strong motivator for solid reviews, and this is gone with electronic review.
    With regard to virtual review committees, I found only one positive aspect: the ability to reflect on comments and come back with a better argument. In the past, I have seen in camera discussions and scores on a grant swayed by a vociferous reviewer whom I later found was providing biased information. The one (and, in my opinion, ONLY) benefit of electronic review is that it provides an opportunity to go back and fact-check the other reviewers.

    Evaluating Foundation Grants:
    I found these extremely difficult to review. In an eleven-page scientific grant, it is easy to see the logic and thought process of a plan, and that makes it simple to judge the quality. This is truncated in the Foundation Scheme, and I found it incredibly difficult to judge quality without seeing a detailed, logical plan.

    CIHR aside, here are some other troubling trends I’m seeing in Canadian science, likely a “ripple effect” from CIHR:

    – The strain that the CIHR shortfall is putting on charity foundations is palpable. When the Cancer Society writes to applicants encouraging them to re-read the call and “pull” non-relevant LOIs because of an unusual increase in LOIs, I think it’s clear that people are desperate. Likewise, I’ve had people contacting me to be a co-applicant on Heart and Stroke and CBCF grants within minutes of the announcements being posted online… that has never happened before. It seems everyone is hungry for an alternate source of funding to keep their programs alive.

    – Last and most concerning, when giving public talks on stem cells and emerging therapies, I’m hearing from an increasing number of wealthy Canadians that they are going overseas for treatment because they feel they get better access to cutting-edge experimental therapies in other jurisdictions. This last one is a huge concern: not only are Canadians spending their dollars overseas on questionable therapies, they are taking their “advocacy” with them (the people who can afford these therapies tend to be Canadians who would normally be strong lobbyists for CIHR and government-sponsored research – when we lose them, we are really in trouble).

    Again, I’m happy to participate in any lobbying efforts underway to change these trends.

    • jimwoodgett says:

      Good points, Kelly. Indeed, the virtual review system does have some good elements, and you’d think that having 5 instead of 2 reviewers poring over an application would lead to more robust evaluation. Sadly, this isn’t the case, for a number of reasons, including that reviewer effort can be much more variable when reviewers do not have to defend the applicant or their comments if they don’t wish to. The virtual chair might play a bigger role in cajoling a reviewer to contribute, but the additional workload of 15 or so grants, coupled with the structured nature of the review, has likely led some reviewers to be far more superficial (and to get away with it), and the asynchronicity, virtuality and distance of the process make the chair’s job difficult. Moreover, it is a reviewer’s rankings that actually matter. The individual section scores are unlikely to fall on the same curve for every reviewer – one reviewer’s 4.2 may mean the same as another’s 3.6 – so only the relative ranking counts, and that information is not reported to the applicant.

      There are continuing efforts to try to mitigate the damage; as a start, I’d suggest you work with your UD as well as your colleagues.

  21. Thank you, Jim, for this great analysis of the situation, and all participants for insightful and constructive suggestions. I am a mid-career investigator (6 years in), previously funded by CIHR for 4 years, and I now belong to the pool of “devastated” PIs because I did not renew my CIHR grant in the last tOOGP competition. I was eligible for the F-Scheme but knew that I did not fit the bill. My CIHR grant was my main source of funding (I also obtained small grants from diverse sources), and you can easily imagine how I feel now with the 2016 pilot P-Scheme coming – uncharted territory… While I am in crisis-management mode, I wanted to share my opinions and concerns on the P-Scheme and beyond.
    Reviewing: The weaknesses raised during the F-Scheme review process will be much the same for the P-Scheme unless CIHR decides to make major changes. The P-Scheme will include 2 stages with 5 different reviewers per stage, selected from the College of Reviewers based on “keywords”, and off-line discussion of applications between reviewers – so a high SD in scores and rankings is to be expected.
    Similar to the F-Scheme, the research proposal will be really short: 4 pages (11 pages were allowed in the previous system). First, how do we write a project in 4 pages? What kind of detail and preliminary data are expected? Most ECIs and several MCIs have no experience writing this type of proposal, which means you had better have great, experienced mentors and an institution to help you with the writing. Second, are reviewers able to fairly and accurately evaluate a 4-page proposal? CIHR should provide reviewers with very clear guidelines on how to review these applications; otherwise, it is going to be a continuing disaster. Third, reviewers will be invited based on expertise and matching “keywords”. How accurate and efficient will that be in getting the best and most appropriate reviewers?
    This question is related to a recent ode to reviewers I wrote on Twitter https://twitter.com/jwoodgett/status/619278788229136385
    I would like to add two other points:
    1- Grant proposals should not be reviewed like articles. What is the point of trashing a grant over methodological/technical details? Since there will be no room for details in a 4-page proposal, are we going to face criticism of the overall methodology based on a reviewer’s knowledge and beliefs, e.g. “oh, you’d better use CRISPR than the Cre-lox strategy (or vice versa)”? This point is directly in line with another important issue: the fact that CIHR (and other agencies) funds safe and predictable science. There is so much emphasis on preliminary data and feasibility that CIHR ends up funding science that has already been done and is ready to publish! Most concerning, reviewers are now conditioned to give high marks to this type of predictable research. This approach is a killer of originality and creativity.
    2- The quality of reviews and reviewers should be evaluated by CIHR and/or the chair to weed out lazy and incompetent reviewers. I think it is easy to guess who spent 30 minutes versus 2 days on an application. I am not saying all reviewers are doing a bad job – on the contrary – but we have all experienced, at least once, the kind of crazy and inappropriate comments that kill an application. Several journals/editors already rate article reviews, and thus the corresponding reviewers. Why not implement something similar for proposal review? In the long term, that may help reduce the SD in rankings and improve the overall quality of reviews. Finally, some recognition (by CIHR, universities and/or institutions) for the “good reviewers” would help reach this goal.
    Budget: For those of us who did not renew our CIHR grants last July, how are we supposed to survive and at the same time come back with a better, stronger P-Scheme application when recruitment has to be put on ice and experiments/animal colonies stopped (let’s not even talk about articles and revisions in the pipeline)? Given the numbers and stats out there, 2016 is going to be worse than 2015 for project applications, even with two P-Scheme competitions. Many foundations and associations are overloaded with applications, which will lead to anticipated, historically low funding rates. Why was this not foreseen by CIHR, given that the September 2015 competition was cancelled? Why was no plan B put together to avoid such a loss of productivity, time, effort and highly qualified personnel – some kind of emergency fund to keep PIs afloat and science running? Even $20–30k/year would have made a huge difference. It was true before, but now more than ever, given the overall funding climate: it is impossible to run and sustain a lab in Canada with only one CIHR grant. What if I and many other PIs don’t get funded in March 2016? Is this a purge of the system? I would love to see numbers on how much investment (recruitment, start-up packages, CFI, university support and salaries, grants from different sources, trainees) will be lost.
    Science policy: Where is science in Canada heading? What do we want for the future? Short-sighted translational science and applied research that will go nowhere without strong fundamental research? Where are science and R&D in the current political and societal debate? Reformatting CIHR is one thing, but surely the biggest challenge is to put science and research back on the Canadian agenda. The survey results are clear: the public is largely in favour of research and science. Surprisingly, the scientific community in Canada seems poorly organized and poorly unified on these matters. Maybe it is time to put on our lab coats and raise our voices in Ottawa?

  22. Jennifer Estall says:

    As much as making ourselves more visible would help a lot (as is evident from the huge impact of the student strikes in Quebec), the government understands dollars and cents. If we could hire (or beg) a management consulting firm to calculate a number for the projected losses you mention, Thierry (in past investment, training, brain drain out of Canada, etc.), maybe we could show them what is really at stake. They would then see how much they are actually losing by not investing in research and by continuing to cut the CIHR budget. We need to speak their language.

    • Jean Martin Beaulieu says:

      Quebec scientists also did a great job in rolling back cuts at FRQ-S after the student strikes. Yet under-the-radar cuts are harder to expose (in the case of the Quebec students, the cuts came as reduced tax credits!).

      I am afraid that in the current social and political climate, it may take more than a single study.

      Maybe it is time for scientists to form a lobby.

  23. Thierry Alquier @AlquierThierry says:

    I totally agree. First, Canadian universities and institutions should know how much was invested in recruitment over the past 5 years, including start-up packages, CFI Leaders funds, salary awards, etc. It is somewhat surprising these numbers are not out there, given what is going on with funding and the decreasing number of students enrolling in biomedical science programs. They could help us establish a rough estimate, but I guess we have to ask for them. Second, the most difficult task is to estimate/anticipate the impact of the CIHR reforms and decreasing budget on job and lab losses. I’m not even sure anybody knows (even CIHR!) how many PIs and labs are funded by CIHR. But at least Jim, Michael and others have done the math for 2016, taking into account 1 F- and 2 P-Scheme competitions and the dedicated budget. Conclusion: it is going to be really tough, especially for ECIs and MCIs (see, for example, https://twitter.com/MHendr1cks/status/634831075714441216 ), but how many PIs/groups will be in unsustainable situations? Impossible to know if they don’t speak out.
    That brings me to my last point. There was a protest in Ottawa recently, organized (from what I understood) by government scientists: see https://twitter.com/E4Dca/status/641336536815042560 and https://evidencefordemocracy.ca/en/sciencepledge

    I don’t get why this was not communicated to Canadian universities to rally academic scientists. We definitely have work to do to unite and form a lobby. The tools are out there, free and easy to use.
    If you look at the latest survey by the Canadian Association for Neuroscience on the CIHR reforms (http://can-acn.org/cihr-reforms-questionnaire-results), the opinions are clear (103 participants): there is a consensus – a core for a lobby.

