NSERC should stick with linear thinking…and its own principles

Dozens of mathematical and statistical scientists are currently signing an open letter to Tony Clement (Minister of Industry) and Suzanne Fortier (NSERC’s President). You can sign it here if you wish. No, this petition is not about the long-form census, but about NSERC’s recent changes to the evaluation process in the Discovery Grants Program and the anomalous results they are causing, especially in the latest competition. What is remarkable about this open letter is that even members of the Evaluation Committee, those who were supposed to be accountable for these results, are also signing it.

Petitions about disappointing outcomes in research grant competitions are not unusual. Just look at the latest petition from CIHR researchers. The novelty here is that the Evaluation Committee members who supposedly arrived at these results by following NSERC’s new scheme are saying that the funding outcome has totally distorted their evaluations. This is serious business.

A previous blog post explains the issue in some detail. The Evaluation Group was given the responsibility of evaluating research proposals and classifying applicants into bins. After it finished its work, NSERC staff informed the EG Executive Committee (3 co-chairs and 1 group leader) of the available budget and recommended/negotiated the amounts assigned to each bin. The rest of the Evaluation Group was not privy to this discussion (its members were involved in the 2009 competition, but not since!), and that is where the serious problems start.

Here, again, is the “bin to dollar” distribution map for the last two years.


The map for 2011 is acutely non-linear: the increments between consecutive bins are 1K, 2K, 2K, 3K, 8K, 9K, 6K, 6K, and 5K.
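To make the nonlinearity concrete, here is a minimal Python sketch using the 2011 increments quoted above. (The identification of the E-to-D boundary with the 9K step is my assumption, inferred from the 45K-over-5-years figure discussed in this post.)

```python
# The 2011 bin-to-bin increments quoted above, in $K per year,
# from the lowest bin boundary to the highest.
increments_k = [1, 2, 2, 3, 8, 9, 6, 6, 5]

# A linear map would have equal increments; these range widely.
print(min(increments_k), max(increments_k))  # -> 1 9

# If the E-to-D boundary is the 9K step (an assumption), a one-point
# swing there costs 9K per year over a 5-year grant:
print(9 * 5)  # -> 45 (in $K over 5 years)
```

The spread between the smallest step (1K) and the largest (9K) is what makes a single point of difference matter so much in the middle of the scale, and so little at the edges.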

One of the problems with such an exaggeratedly nonlinear map is that the panelists cannot recognize their own rankings. Let’s count the ways:

First, there is good reason to believe that when the panelists decided to give an applicant an O-V-V (Outstanding, Very Strong, Very Strong), they had no idea that the candidate was going to end up with an 18K grant, which is below the average and awfully close to a minimal grant. A returning panelist would have assumed it was going to be around 30K, just as in last year’s competition.

Second, when members of the Evaluation Group were painstakingly debating whether someone should have 14 points (i.e., bin E) and not 15 points (bin D), they were not aware that a one-point difference in their decision would cost the applicant 45K over 5 years.

When the “bin to dollar” map distorts the incremental steps between bins this much, the panelists cannot get a handle on what their rankings mean. It’s like looking at your own painting through a highly distorting lens: you can hardly recognize it. This is a non-linear effect.

Then came the following email from an outstanding, and very classy, mathematician.

“While I’ve benefited hugely from the preservation of grants of people rated ‘E’, this really does make a nonsense of the whole new system. With ‘EVV’, I get a grant over 45,000 because that was my old grant, but a younger rising star with EVV would get 26,000. So the new system is now far more sticky than the old system.

I think my case is rather outrageous actually. I give you permission to use it, but please remove identifying information.”

So you see the irony. The new granting system ends up amplifying and institutionalizing all the flaws it was supposed to fix.

More conservative: NSERC staff changed the process because they wanted to shake up the system and make it more dynamic, so as to allow younger stars to move up the granting ladder more quickly than before. Yet they introduced a rule that “protects the purchasing power” of people who had at least an “E” (for Exceptional) on the “Excellence of Researcher” category, which means that these applicants keep their old grants, which are usually much larger than the current value of their bin (in this case, 45+K instead of 26K).

Unstable: Here is yet another indication of how unstable the system now is. Senior mathematicians at Toronto came down from 42K to 18K because they were in the OVV bin. Had they been in the neighboring EVV bin, they would have kept their old grants. In other words, a single vote by one person on one criterion changed an E (Exceptional) into an O (Outstanding) and cost them (42 − 18) × 5 = 120K. That’s a travesty of an evaluation.

Unfair: Moreover, those who got OOV and OOO (i.e., scores of 14 and 15 points), and hence landed in the same or higher bins than EVV (14 points), got 26K and 35K respectively, much lower than the protected EVV grants at 45+K.

Now NSERC claims (I kid you not!) that the following fundamental principle is at the basis of its new evaluation system.

“Within a given discipline group, proposals with similar scientific merit should have similar grant levels regardless of the applicant’s granting history with NSERC.”

This entry was posted in Op-eds.
