16 out of the 28 members of Evaluation Group 1508 (for Mathematics and Statistics) wrote the following letter to NSERC’s President Suzanne Fortier to “draw (her) attention to the distressing results of the 2011 Discovery Grants Program”. It is a strong letter that reflects the anguish of a distinguished group of individuals who dedicated much time and effort to ensure a fair granting process, only to see their meticulous work grossly distorted by a flawed system.
The letter must certainly be music to the ears of the dozens of mathematicians who have been feeling betrayed by the system. It confirms and explains the anomalies that were reported about the latest competition, and will surely force NSERC to address the issues. It also explains the impossible choices that the two mathematicians on the EG executive had to face.
The letter “urges (NSERC) to take rapid corrective action so that the 2012 competition will not suffer from the same problems”, but unfortunately doesn’t suggest any remedy for this year’s applicants.
The fact that the statisticians on the panel did not sign the letter, in addition to the discrepancies in bin values, will surely undermine the “forced marriage” between them and the mathematics community that NSERC has imposed. The mandate of the “Long Range Plan” will have to be revised and restructured. The absence of the Group Chair from this heartfelt, honest and constructive evaluation of a process that she had supposedly overseen will not go unnoticed.
May 19, 2011
Dr. Suzanne Fortier, President, NSERC
Dear Dr. Fortier,
We would like to draw your attention to the distressing results of the 2011 Discovery Grants Program in Mathematics, a competition in which all the signatories of this letter played a role, as members of the Evaluation Group for Mathematics and Statistics (1508). We have strong reasons to believe that the outcome of the competition in 2011 is not consistent with NSERC’s clearly stated goal to have a “process that is responsive to merit and is objective, transparent and fair”, as quoted from your recent presentation of the competition summary.
We detail below each aspect that seems seriously problematic in the hope and expectation that NSERC will rapidly correct the situation for the applicants from the recent competition of 2011 or, at the very least, ensure that these issues are adequately resolved in time for next year’s competition.
Lack of fairness: The members of the Evaluation Group (hereafter, the EG) have been deeply concerned by the fact that there are huge funding differences for comparable files in the 2010 and 2011 competitions. Indeed, funding for the same bin in 2010 and 2011 dropped by as much as 40% ($12,000, in this particular case), even though we, as members of the EG, agree that rating standards for the two successive years were not significantly different. Researchers expect to be judged on the merit of their file when compared to the whole community, not only to the applicants in the same competition. Variations such as those observed between 2010 and 2011 are highly unfair.
Responsiveness to merit: In addition to the bin funding discrepancies between 2010 and 2011, we have observed that funding differences between successive bins (from $1,000 between bins I and J, to $9,000 between bins D and E) are extraordinarily large and appear, frankly, arbitrary from the perspective of merit. Researchers with remarkable achievements and outstanding potential are provided with insufficient funds to develop and carry out their research projects as a consequence of this extremely unbalanced funding model. As committee members, we were asked to use the full scale in our ratings, and we did so, but with the expectation that scientific merit would ultimately be rewarded with proportional grant amounts. The bin values selected by NSERC distort the rating scale in such a drastic fashion that the lengthy and rigorous evaluation process that took place was rendered almost meaningless.
Transparency: The 2011 competition and its aftermath also raise questions about the transparency of the present system, in which critical funding choices are exclusively the purview of the Executive committee of the EG and NSERC staff, and we feel that Isabelle Blain’s letter to the applicants hardly addresses any of the concerns expressed by the EG members and the community.
The Executive of our EG was placed in the very difficult situation of having to choose between two unpalatable and unfair scenarios: a huge drop in success rates from one year to the next (from 62% to under 45%), or a drastic reduction of the values of the mid-range bins. Throughout this process, as well as after it, confidentiality rules prevented them from consulting the rest of the EG or explaining the whole situation to it in any detail.
One example of how the general lack of openness affected the process and its aftermath is the decision made by the Executive and NSERC staff to set different funding levels per bin for mathematics and for statistics, even though the two subjects share the same EG. The difference in funding levels was swiftly explained in Ms. Blain’s letter by claiming a different interpretation of the merit criteria. However, no comparison of files with the same rankings in mathematics and in statistics was carried out during or after the competition to justify such a broad claim; in fact, bins cannot even be opened after the end of the competition. After noticing a markedly different bin distribution in mathematics and statistics, the Executive decided to split the budgets between the two groups, a decision justified by the fact that, historically, the two subjects have been funded separately and that the respective subgroups of the EG also worked in complete separation during this evaluation exercise. As a consequence, the bin values in statistics became higher than those in mathematics and reached levels comparable to those of 2010. Since the confidentiality principle did not allow the Executive to explain the full details of the various scenarios to the EG, let alone consult its members on them, considerable unease resulted within the EG and beyond.
Another example of lack of transparency is apparent in the fact that Ms. Blain’s letter hardly makes any attempt to justify grant funding differences between 2010 and 2011. Her letter indicates that one of the priorities in the current year was to fund bin J. This is commendable and in keeping with the wishes of our community, as we understand them. But it does not explain why this led to hugely different results from 2010, when the same criteria were applied and the same bin J was funded, with a similar overall success rate. We are distressed that her letter did not refer to any of the other variables entirely in NSERC’s hands and involved in this outcome: first and foremost the budget, the values of the lowest funded and the top bins, and the gradations between funding amounts for different bins.
As members of the EG, we were involved in a process that is of great importance to our scientific community and, more generally, to the advancement of research and innovation in Canada. We have made considerable efforts to implement the new grant distribution system in ways consistent with the highest scientific standards, in a context made particularly difficult by the chronic and increasingly severe underfunding of the mathematical and statistical sciences when compared with other NSERC-supported disciplines. We established our ratings with the best of our scientific expertise and naturally expected this to be reflected accurately in terms of budgetary allocations. This was far from being the case. As reviewers, we were highly impressed with the quality of the 2011 applicants. Canada, through NSERC, has supported many of them in past years and can take justifiable pride in their achievements. Moreover, the Discovery Grants Program is essential for the progress of science in Canada and, for the vast majority of applicants reviewed by the EG, is the only available source of significant research funding. Unfortunately, the 2011 Discovery Grants Program in Mathematics failed many of the most promising of these applicants. As a consequence, our confidence in the program, as currently administered, is regrettably shaken. We urge you to take rapid corrective action so that the 2012 competition will not suffer from the same problems.
Mike Bennett (UBC), Nantel Bergeron (York), Lia Bronsard (McMaster), Thomas Brüstle (Sherbrooke and Bishop’s), Olivier Collin (UQAM), Benoit Collins (Ottawa), Octav Cornea (Montreal), Alan Dow (North Carolina, Charlotte), Hermann Eberl (Guelph), Christopher Godsil (Waterloo), Eyal Goren (McGill), Robert McCann (Toronto), Matthias Neufang (Carleton and Fields), John Stockie (Simon Fraser), Holger Teismann (Acadia), Xiaoqiang Zhao (Memorial University)
cc: Ms. Isabelle Blain, Vice-President, Research Grants & Scholarships; Ms. Madeleine Bastien, Team Leader, NSERC Evaluation Group 1508; Mathematics-NSERC Liaison Committee.
Now *this* is radioactive! The panelists have seen, first-hand, the deliberations which went on, and the EG chairs have signed this. For a rank-and-file mathematician, here’s the true shock:
‘We are distressed that her letter did not refer to any of the other variables entirely in NSERC’s hands and involved in this outcome: first and foremost the budget, the values of the lowest funded and the top bins, and the gradations between funding amounts for different bins.’
It sounds like the EG did their best within the constraints given, and the issue we are collectively most upset about – the underfunding of our discipline at all levels – was not something within their ability to fix. It would take the wisdom of Solomon, indeed, to decide between funding our future (young researchers, who will likely be found in the lower bins) and established researchers (middle to top bins). I propose a single-issue letter on behalf of the community: a request to increase the mathematical sciences DG pool by at least 35% of its current allocation, together with a list of compelling reasons.
I have read this article and the comments with great interest, and though I am a PI who competes in the NSERC DG program, I do not do so in mathematics and statistics. I agree with many of the concerns in the letter and, though I cannot prove it, I assume many of the same issues apply to all the fields that have been combined in the new evaluation system. My concern is with the idea of a single-issue letter suggesting a 35% increase to the mathematical sciences DG pool. We all realize that funding sources are limited and in some ways shrinking; yet if you asked applicants in all the different pools, they would all say they are grossly underfunded and would welcome a 35% increase in their pool’s funding. The problem is: where would these additional funds come from? Clearly, the only source is another group that likely also feels significantly underfunded. I think a more constructive approach is to target the issues that can be fixed: inequalities within the group sections (i.e., math vs. stats) and transparency. Further, we must accept that our success and grant amounts are determined by whom we compete against. A given bin in a given year will not receive the same amount of money if, in year X, far more people submit grants or more people submit better grants. It is the nature of the “game”.
It seems that the NSERC motivation for separating Mathematics and Statistics in 2011 (which was “exceptional”) was as follows:
One can calculate the portion of the EG’s budget attributable to statistics and the portion attributable to mathematics (these shares are generated by a formula). Apparently, using common bin values for both disciplines would have produced a substantial net flow of money from statistics to mathematics. Separating the budgets prevented that.
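To make the mechanism concrete, here is a minimal sketch of how common bin values can move money between two disciplines that share a budget. All numbers below (bin counts, bin values, the 50% formula share) are invented for illustration; they are not NSERC’s actual figures.

```python
# A toy illustration of the budget-split mechanism described above.
# All bin distributions, bin values, and shares are invented for
# illustration; they are NOT NSERC's actual figures.

# Invented number of funded applicants per bin for each discipline.
math_bins  = {"D": 5,  "E": 10, "F": 20, "G": 25}
stats_bins = {"D": 10, "E": 15, "F": 10, "G": 5}

# Invented common dollar value per bin (one scale shared by both groups).
bin_value = {"D": 40_000, "E": 31_000, "F": 25_000, "G": 20_000}

def payout(bins):
    """Total dollars a discipline draws under the common bin values."""
    return sum(n * bin_value[b] for b, n in bins.items())

math_paid  = payout(math_bins)    # dollars mathematics actually draws
stats_paid = payout(stats_bins)   # dollars statistics actually draws
total = math_paid + stats_paid

# Suppose the formula assigns mathematics 50% of the joint budget.
math_share = 0.50 * total

# With common bin values and different bin distributions, one group can
# draw more than its formula-derived share: a net flow between budgets.
net_flow_to_math = math_paid - math_share
print(f"net flow from statistics to mathematics: ${net_flow_to_math:,.0f}")
```

Splitting the budgets amounts to fixing each discipline’s total at its formula share and solving for separate bin values, which is consistent with the letter’s observation that bin values in statistics then rose to near-2010 levels.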
What I have a hard time understanding is how this is justified given the findings of the international report. There it was clearly argued that at an individual level, the system should disrupt inertial effects between researchers. This decision, however, seems to have been deliberately imposed to guarantee inertial effects between disciplines. This appears to contradict the spirit (and in fact the letter) of the findings of the International Report…
valiant effort but the letter presumes that nserc can be influenced by evidence, logic, and reason. i suggest that nserc’s policy changes and apparent shortcomings are entirely intentional.
in addition, i would argue that the government and nserc are free to set policies as they like, however bad they may be. that nserc’s policies contradict the spirit and letter of its own international review–evidence and facts be damned–is but an inconvenience. just another example of “decision-based evidence making”, though it does raise the question of why nserc spent money on an international review only to ignore the spirit and letter of its key findings.
imho, the only way that nserc can be successfully challenged is through a complaint to the auditor general. since the auditor general CANNOT critique government policy, a successful complaint would require clearly demonstrating that nserc has failed to follow or ignored its own rules, of which there appear to be numerous examples on this site and elsewhere. any takers???
minor correction to the above post.
i meant to say that a successful complaint would need to demonstrate FINANCIAL MISMANAGEMENT on the part of nserc through the failure to follow its own rules in the discovery grant adjudication process, of which there appear to be numerous examples on this site and elsewhere.