As promised, here is a first installment of several posts I am preparing about the Discovery Grant program after my public debate with Isabelle Blain, NSERC’s VP for Research Grants & Scholarships. This first one will give the background behind the changes, and some of their effects on the community at large. The next post will deal with a comparison between the old and new systems for evaluating NSERC’s Discovery Grant applications, while in the third post, I will discuss the “Stockholm Syndrome” among panelists.
First, some background according to Isabelle: “NSERC undertook a major review of its Discovery Grants program in 2007. The two review committees found the program was highly effective in meeting its goals, but did recommend ways it could be enhanced. These focused on areas such as the peer review process, grant selection committee structure and funding levels”.
“With respect to peer review, the key recommendation was to separate the process of assessing scientific or engineering merit from assigning funding.
In doing so, two principles were fundamental” (according to NSERC’s interpretation of the recommendations):
- First, that the level of a grant should be commensurate with scientific or engineering merit,
- Second, that within a given discipline group, proposals with similar scientific merit should have similar grant levels regardless of the applicant’s granting history with NSERC.
It is worth noting here that NSERC cherry-picked what suited their agenda from the recommendations, since the international panel also found that the program was exceptionally effective in international comparison and in “maintaining a diversified base of high quality research capability in Canadian universities.”
But what was NSERC’s agenda? My guess is that first they wanted a review system that could address the challenges of the growing inter-disciplinarity in research projects. Fair enough.
The second point on their agenda was the relatively high success rate showing on their books (after all, no one counts those who have given up on the system altogether). Indeed, NSERC was under all kinds of pressure about that (CIHR? NSF? Lynch? Reformists?).
Besides providing a clear and definite endorsement of the DG program, the panel seems to have anticipated NSERC’s anxiety about the high-looking success rates, and proceeded to warn the agency that “any significant intentional reduction in the DGP success rate…would result in reduced research support in the smaller provinces and in small institutions.”
I should add that the other (internal) program review, also initiated by NSERC in 2007, concluded that DG program funding was insufficient to match the increasing number of university researchers hired over the past decade, while the average Discovery Grant had not kept pace with inflation. This point has never been addressed by the NSERC leadership, which is currently busy shoring up its contribution to the so-called “Innovation agenda”.
Blain agrees: “The total number of Discovery Grants held at a given time is another important statistic to keep in mind. This figure rose steadily for much of the past decade, from 7,886 in 2001 to a high of 10,340 in 2008. It currently stands at 9,948, still well above historic averages”.
We believe it, but it would be nice to compare these numbers with the evolution of the budget available for basic research. It turns out that the CAUT was right: NSERC has indeed announced plans to cut $14.5 million from the Discovery Grant program over the next three years. Indeed, NSERC states in its 2010–2011 Report on Plans and Priorities (you need to scroll down a bit) that it intends to reduce funding for basic research by a further 3.6 per cent, from $364.9 million in 2009–2010 to $351.9 million by 2012–2013.
Meanwhile, the “innovation” budget, which supports commercialization initiatives and university-industry partnerships, will be increasing again over the same period. Check this, where NSERC’s president never addresses the future of basic research in her portfolio.
What makes matters worse is that they have now found a way to start changing the culture even within the DG program. How? 69% of the current Discovery Accelerator Supplements, into which the new increase for the DG program is supposedly going, are now targeted at areas in line with “someone’s priorities”.
NSERC went ahead and changed the review system, the net result of which seems to be a significant drop in the success rate, an increase in the number of appeals, a non-trivial number of demoralized young researchers, and a sour mood at many smaller institutions that feel disenfranchised by the system.
Isabelle argues that these statements are wrong, and she has numbers to prove it. She is definitely welcome to share them with us on this blog. She states, “this (new system) has created a more dynamic funding system”. I will argue in the next post that this has created a chaotic and volatile review system.
But for now, let’s review the impact of these changes on the ground. The program’s success rate is on a downward spiral: from 71 per cent in 2008 to 64 per cent in 2009, and falling to a record low of 58 per cent this year.
But Isabelle claims “In 2010, 72 per cent of applicants holding a grant at the time of application were successful in obtaining a grant. Those who did not have a current grant had a lower level of success”.
So the granting history is relevant after all.
Moreover, the number of appeals rose from 122 in 2008 to 223 in 2009 and 224 in 2010. The number of successful appeals, however, remains in the 20s, with the exception of 2009, a transition year that left NSERC scrambling. NSERC’s website states that the success rate of appeals is less than 25%. It is actually half that.
Isabelle claims that these numbers are relatively small. Isn’t it ironic that those who have given up on the system (and stopped appealing) are now indirectly contributing to its validation? Besides, I don’t consider a doubling in the number of appeals negligible by any measure.
Isabelle has numbers suggesting that smaller institutions were not affected by this new system. This runs contrary to what we have been hearing about the mood at these institutions across Canada. I know that several members of NSERC’s own council are well aware of the situation and eager to fix it. I will leave it at that.
Finally, and unless exceptions were made, how can the new system not affect the success rate at smaller institutions, when HQP (the supervision of Highly Qualified Personnel) accounts for one third of the evaluation “mark” (with a potential role in ruling out any funding; see next post), and when most smaller institutions do not even have a graduate program (let alone one substantial enough for several faculty to contribute to the training)?
I don’t get it Isabelle. Please explain.
The comments from the international review panel also included: “The average DGP award is modest, at a little more than $30,000 (in 2006-07), and has been declining recently in real terms largely as a result of the applicant pressure generated by the wave of faculty renewal in universities across Canada. In the opinion of the Committee, the erosion of the value of the average grant is a more serious problem than the 70% success rate.” Like all organizations, NSERC operates in the context of a defined budget and must choose the most effective way to allocate funding. NSERC has chosen to maintain individual Discovery Grants at levels that can support quality research programs, which limits the total number of grants that can be issued in a given year.
Both the increased number of applicants to the DG Program and the globalization of science and technology have resulted in raising the “bar of excellence.” It is now imperative that Canada be a strong contributor on the world scale, meeting world level standards of excellence.