It is reported that in a still embargoed presentation to the 2011 Canadian Mathematical Society meeting in Edmonton, NSERC’s President, Suzanne Fortier, cited *“grade inflation”* as one of the factors behind the disastrous collapse of grant levels in mathematics in the 2011 Discovery Grant competition. Fortier’s statement led UVic Professor Anthony Quas to do what any good scientist would in such circumstances: check the facts, let the data speak, and extract conclusions. This is one in an upcoming series of “You are not alone” posts.

**Grade Inflation in Discovery Grant Competitions – by Anthony Quas**

Each year in the NSERC Discovery Grant competition, there is a fixed pot of funds to be allocated by each Evaluation Group (determined by a mechanism that is not widely understood). Under the current (post-2009) grant evaluation system, this zero-sum aspect is hidden from panelists as they rank proposals: proposals are ranked one at a time, and never compared against one another. Further, there is little that is absolute in the NSERC descriptions of the merit indicators — try comparing the criteria for a *strong* HQP rating with those for a *very strong* rating, for example.

Accordingly, there is a danger of *grade inflation*, by which the standards for, say, achieving a very strong HQP rating might loosen year-on-year. The previous system (not without faults of its own) behaved much better in this regard: each panelist was given an approximate “budget” and asked to recommend a distribution of it amongst those applications for which they were the first reader. The fixed size of the pot was thus clearly evident to panelists as they made their recommendations.

Grade inflation was one of the factors blamed by Suzanne Fortier in her presentation to the 2011 CMS meeting for the disastrous collapse of grant levels in mathematics in the 2011 Discovery Grant competition.

Studying the data, I concluded that this was indeed a real problem. To measure grade inflation from one year to the next, I asked: what is the ratio of the cost of funding (or not) the average 2011 application, according to its bin and *using 2010 bin values*, to the cost of funding the average 2010 application?

More specifically, I computed the bin distribution of applications (all applications – not just those that were funded) in both years and took the ratio of the weighted average grant sizes *using 2010 bin values* under the two distributions. A ratio of 1 suggests that this year’s distribution is comparable to last year’s. A ratio above 1 suggests grade inflation. In the graphs below, I’ve expressed grade inflations as percentages.
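The computation above can be sketched in a few lines of code. The bin labels, dollar values, and application counts below are purely hypothetical illustrations, not actual NSERC data; only the ratio construction follows the description in the text.

```python
# Sketch of the grade-inflation ratio: cost both years' bin
# distributions at the SAME 2010 bin values, so any change in the
# ratio reflects a shift in the distribution alone.

def weighted_avg_grant(bin_counts, bin_values):
    """Average grant size per application: each bin's dollar value
    weighted by the fraction of applications falling in that bin."""
    total = sum(bin_counts.values())
    return sum(bin_counts[b] * bin_values[b] for b in bin_counts) / total

# Dollar value attached to each bin in 2010 (hypothetical).
values_2010 = {"A": 60000, "B": 45000, "C": 30000, "D": 15000, "E": 0}

# Distribution of ALL applications across bins, funded or not
# (hypothetical counts; 2011 is shifted toward the higher bins).
apps_2010 = {"A": 10, "B": 25, "C": 40, "D": 20, "E": 5}
apps_2011 = {"A": 15, "B": 30, "C": 35, "D": 15, "E": 5}

ratio = (weighted_avg_grant(apps_2011, values_2010)
         / weighted_avg_grant(apps_2010, values_2010))
inflation_pct = (ratio - 1) * 100
print(f"grade inflation: {inflation_pct:+.1f}%")
```

With these made-up numbers the 2011 distribution costs more at 2010 values, so the ratio exceeds 1 and the method reports inflation; a ratio below 1 would report deflation.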

This indicates considerable grade instability across evaluation groups (over 40% of the groups experienced grade inflation or deflation exceeding 10% between 2010 and 2011), with substantial grade inflation in 2011 in geosciences and in both the mathematics and statistics halves of EG1508, and substantial grade deflation in several of the engineering evaluation groups.

Needless to say, this instability leads to considerable uncertainty year-on-year in the research climate, undermining efforts to consolidate and improve training of graduate students and other highly qualified personnel.

Nice analysis, Anthony. I think ‘grade instability’ is a much better description than grade ‘inflation/deflation’; many aspects of the new evaluation system seem to interact to create considerable instability. One hears of applicants who submitted nearly the same proposal in two successive years with quite dissimilar results.