You know there is a serious problem when the members of NSERC's Evaluation Groups (EGs) are the first to cry foul, announcing that they are shocked, surprised and offended by the results of NSERC's latest Discovery Grant competition – the one they just finished running. They agreed to play the game and played it to the best of their abilities, yet they do not want to own the outcome. They may have good reasons for that. But what role did the EG executive committees, NSERC staff and management, and the various internal and external consultative panels play in this mess, and where does the responsibility lie?
Disclosure: What follows are not direct quotes from specific individuals, but paraphrases of my own understanding of various discussions.
1. Panelist A: I am not responsible for the case of applicant Y because I was not one of the five sub-panelists responsible for her “binning”.
The main expert in the field was on the sub-panel for applicant X, but not on the sub-panel for applicant Y. He is therefore not responsible for the outcome that assigned three times more to X than Y, though he is well aware that most of the publications of applicants X and Y are joint, that Y’s HQP are … etc. Remember the Reid-Fraser case?
2. Panelist B: I was indeed on the binning crew assigned to case Z. I was surprised (and outraged) by the grant assigned to him, since it does not at all reflect the real intentions of the sub-panel while we were voting on his file. I am not responsible because I was out of the loop when the EG executive and the NSERC staff decided on the funding formula for each bin.
I thought that a bin of OVV – i.e., Outstanding (excellence of researcher), Very Strong (quality of the proposal), Very Strong (HQP) – would get him at least 30K, not the 18K he received, which is a big drop from his previous 45K grant. As far as I am concerned, the funding-to-binning map has totally distorted our rankings:
10K for SSS, 11K for SSV, 15K for VVV, 18K for OVV, 26K for OOV, and 47K for EEO.
That is totally crazy. One slightly off vote by a single sub-panelist can drop an applicant one bin – say, from OOV to OVV, an 8K difference per year – and cost that applicant 40K over the 5-year grant.
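To make the arithmetic behind that complaint concrete, here is a minimal sketch using the bin amounts quoted above (the dictionary and the helper function are illustrative, not anything NSERC publishes):

```python
# Funding-to-binning map quoted above: annual grant per bin, in $K.
BIN_TO_GRANT = {
    "SSS": 10, "SSV": 11, "VVV": 15,
    "OVV": 18, "OOV": 26, "EEO": 47,
}

def five_year_cost_of_slip(higher_bin: str, lower_bin: str) -> int:
    """Total funding lost over a 5-year Discovery Grant when a vote
    drops an applicant from higher_bin to lower_bin."""
    return 5 * (BIN_TO_GRANT[higher_bin] - BIN_TO_GRANT[lower_bin])

print(five_year_cost_of_slip("OOV", "OVV"))  # 40 ($K) -- the gap behind the "40K" claim
print(five_year_cost_of_slip("OVV", "VVV"))  # 15 ($K) -- a slip one bin the other way
```

Note how uneven the steps are: adjacent bins differ by as little as 1K per year (SSS to SSV) and as much as 21K per year (OOV to EEO), which is exactly the distortion the panelist is describing.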
3. The Evaluation Group: NSERC Discovery Accelerator Supplements are highly competitive awards, which provide "substantial and timely resources to outstanding researchers who have a well-established research program and who are at a key point in their careers at which they can make, or capitalize on, a significant breakthrough". How is it, then, that those who eventually received these Accelerator Supplements fared so badly in the regular Discovery Grants competition that your EG ran?
We are not responsible. The discrepancy comes from the fact that the EG executive first filters our recommendations, and then another interdisciplinary panel (probably consisting of the Group Chairs) makes the final selection. At the end of the process, there is little correlation left between the EG top choices and the lucky recipients.
4. The EG Executive: We are unhappy because the rules of the game and the budgetary pressures forced us to adopt a skewed funding-to-binning map. We are not responsible because we were out of the loop when the NSERC staff assigned us a budget for the competition. We are outraged that the average grant in our discipline is 20K less than in the next closest discipline.
We had many people in the top 3 bins, which consumed a good chunk of the budget, since their old (and relatively large) grants are protected by the new system. On the other hand, the community wanted us to fund bins all the way down to SSS, i.e., the Strong, Strong, Strong bin, which includes a large number of applicants. NSERC staff insisted on keeping the minimal grant above 10K, and so little money was left for researchers in the middle range. That's why so many senior researchers from the University of Toronto dropped from the 40Ks to 18K (awfully close to the minimal grant!).
5. NSERC Management: We are not responsible for the "historical" allocations of funding between the disciplines. It is true that the last competitive re-allocation exercise occurred in 2002, and we are not telling you whether management has made any re-allocations since. We did, however, ask the Council of Canadian Academies "to examine the international practices and supporting evidence used to assess performance of research in the natural sciences and engineering disciplines." Any resulting budget re-allocation will probably not take effect before the 2013 competition.
We are not responsible either for the flaws of the new binning system, because it was essentially constructed in response to the recommendation of the International Review Panel, which had asked that we “separate the process of assessing scientific or engineering merit from assigning funding.” We deduced two fundamental principles from that review:
- that the level of a grant should be commensurate with scientific or engineering merit.
- that within a given discipline group, proposals with similar scientific merit should have similar grant levels regardless of the applicant’s granting history with NSERC.
Hence, the “binning system”!
6. The International Review Panel: We are not responsible, since we told NSERC that the Discovery Grant program was exceptionally effective by international standards and in "maintaining a diversified base of high quality research capability in Canadian universities."
None of that “binning” business!
7. The Council of Canadian Academies: We will not be responsible for what NSERC does with our upcoming report. Actually, we were asked neither to review the funding envelopes nor to re-allocate funding between the various disciplines in the Discovery Grants program. Our mandate as stated by the minister is only "to examine the international practices and supporting evidence used to assess performance of research in the natural sciences and engineering disciplines". We suspect that NSERC's management will do the re-allocations, according to their own interpretation of our findings.
The plane is on fire. Is there a pilot in the cockpit?
According to Isabelle Blain’s email after the 2011 results were published, “One of the EGs (Chemistry) recommended a higher quality cut-off and a reduced number of grants to protect the purchasing power for the most highly rated applicants.”
As someone who was cut off cold by the Chemistry EG this year, I would like to know what gives them the right to make such a decision. I thought their mandate was to rank proposals, but this sounds like a broad policy decision to me – and perhaps a self-serving one, given that EG members tend to be high-profile members of the community. Where is the accountability for such decisions?