We are not used to seeing the normally confident scientists at the University of Toronto so agitated and angry at NSERC. Some of their mathematicians are screaming from the rooftops that the “NSERC Peer Review System is Broken for Mathematics”. We don’t have the full picture of this year’s Discovery Grants competition yet, but from what we know so far, they may have very good reasons for doing so.

NSERC is apparently not in a position right now to make public this year’s results. We are told that the government is preventing them from doing so, because of the mandatory blackout on government funding announcements during an election period. This is strange, as one would think that such an embargo applies only to new federal spending, and not to the results of a competition within a predetermined budget for a supposedly independent and relatively autonomous research council.

One should however take into consideration that since the Conservatives came to power, the Minister of Industry and the Minister of Science and Technology have started a new tradition of announcing the results of various NSERC competitions themselves. Still, these were not announcements of new government funding, and it would have been perfectly fine if they had been made by NSERC officials, at least this time around.

That said, individual applicants have already received their notifications of decision, which is why one can already hear that the outcry is louder and more widespread than usual, at least within the mathematical sciences. What do we know so far, and what can we make of the few things we have learned?

Well, with the help of several colleagues, we managed to collect information from various departments across the country, such as UBC, UVic, SFU, Calgary, U.Alberta, McGill, UToronto, York, Waterloo, McMaster, Dalhousie, and Mount Allison, which represents about 1/3 of the 2011 data. The universities in Quebec (McGill aside) have, however, been surprisingly quiet and non-responsive.

Using these results, we first reconstructed a pretty good approximation of the ‘bin to funding map’, which assigns a dollar value to each bin for the 2011 mathematics competition. Here is how it compares to last year’s.

One can already see a major problem. If you were in Bin F in 2010, you got 12K more than if you were in the same bin in 2011. This translates into a 60K differential, over the 5-year duration of the grant, between two applicants who have been rated as equal.

Someone has forgotten to read page 21 of the Peer Review Manual: *“Bin levels, budget permitting in a given competition year, are expected to be in a similar range from year to year.”* It did say, however, “budget permitting”. So let’s continue our investigation.

How many grants do we have in each bin? Here’s what the NSERC awards search engine says the distribution was for individual discovery grants in “mathematical sciences” in the 2010 competition:

12,000 – 55 grants (Bin J)

15,000 – 37 grants (Bin I)

20,000 – 27 grants (Bin H)

24,000 – 22 grants (Bin G)

30,000 – 11 grants (Bin F)

35,000 – 5 grants (Bin E)

40,000 – 6 grants (Bin D)

7 grants in Bins B and C (2 at 44,000, 3 at 48,000, one at 55,000, and another at 60,000).

Plus eight grants at the following non-standard amounts: 7,880 – 10,000 – 10,970 – 11,000 – 18,000 – 21,000 – 23,300 – 26,000.
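For what it’s worth, the distribution above can be tallied directly; here is a quick sketch (amounts and counts copied verbatim from the list):

```python
# Tally of the 2010 "mathematical sciences" Discovery Grants distribution
# quoted above (annual amounts in dollars, per NSERC's awards search engine).
binned = {12000: 55, 15000: 37, 20000: 27, 24000: 22,
          30000: 11, 35000: 5, 40000: 6,
          44000: 2, 48000: 3, 55000: 1, 60000: 1}
nonstandard = [7880, 10000, 10970, 11000, 18000, 21000, 23300, 26000]

total_grants = sum(binned.values()) + len(nonstandard)
total_dollars = sum(amount * n for amount, n in binned.items()) + sum(nonstandard)
print(total_grants, total_dollars)  # 178 grants, $3,503,150 per year
```

So the listed 2010 competition results amount to 178 grants worth about $3.5M per year, a useful baseline for the comparisons below.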

We don’t yet know the number of grants in each bin for 2011, but we did infer three important pieces of information from the partial data we have, though one of them still needs to be confirmed.

The first piece of information is that there was a comparably large number (55-60) of grants in Bin I (i.e., the Strong, Strong, Strong bin). However, the value of this bin is 10K this year (as opposed to 12K last year). This means that this boundary condition cannot be held responsible for the shortage of funds in the mid-range bins, which caused so many senior researchers at the University of Toronto to drop from the 40Ks to 18K.

On the other hand, one can see from the collected sample that there were more people this year in the top 3 bins, A, B, and C. For example, 2010 saw 7 people in the top 3 bins, while the 1/3 of the 2011 data known to us already shows 8. Moreover, the top 6 bins had 29 grants in total in 2010, and our 1/3 of the 2011 data already shows 29 as well. Was this year’s cohort particularly strong, or was this a case of “bin inflation”?

Remember that –as Isabelle Blain says– the “purchasing power” of the people in the top 3 bins is protected, which means they can keep their old grants; these are more often than not (alas, not in my case) much larger than the current values of Bins A, B, and C. These bins ended up consuming a good chunk of the budget. The question is whether this fact alone could account for the shortage of funds in the mid-range bins.

This leads us to probably the most important –but so far least substantiated– piece of information. Was there a large difference between the total value of returning grants and the budget made available for this year’s competition? In other words, did NSERC skim from this year’s budget for Math?

Well, a friend did some basic statistical analysis on the available partial data for 2011, which showed a 7% drop in funding. Using NSERC’s 2010 data, he drew random samples of 1/3 of the grants and found only a 13% probability that such a sample’s total would fall 7% or more below 1/3 of the overall total (i.e., only a 13% probability that the apparent 7% drop in funding was simply noise). Of course, our sample is not random, and the 2011 distribution is different from 2010’s. Still, this suggests it is quite likely that there was less money in the pot this year.
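The friend’s exact data and method are not public, so the following is only a sketch of that kind of resampling check, using the 2010 amounts listed above as a stand-in population; the function name and trial count are my own choices:

```python
import random

# Illustrative resampling check (the actual analysis is not public):
# using the 2010 grant amounts quoted above as the population, estimate
# how often a random 1/3-sized sample totals 7% or more below its
# proportional share of the overall budget.
grants = ([12000] * 55 + [15000] * 37 + [20000] * 27 + [24000] * 22 +
          [30000] * 11 + [35000] * 5 + [40000] * 6 +
          [44000] * 2 + [48000] * 3 + [55000, 60000] +
          [7880, 10000, 10970, 11000, 18000, 21000, 23300, 26000])

def prob_sample_low(pop, frac=1/3, drop=0.07, trials=20000, seed=0):
    """Probability that a random sample of size frac*len(pop) sums to
    at least `drop` below its proportional share of sum(pop)."""
    rng = random.Random(seed)
    k = round(len(pop) * frac)
    threshold = (1 - drop) * (k / len(pop)) * sum(pop)
    hits = sum(sum(rng.sample(pop, k)) <= threshold for _ in range(trials))
    return hits / trials

print(f"chance the 7% shortfall is pure noise: ~{prob_sample_low(grants):.0%}")
```

The exact probability this produces depends on the population and sampling assumptions; the point here is the method, not the precise figure.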

The last piece of information is that there were hardly any statisticians in the top 3 bins, which –unlike in previous years– forced a substantial discrepancy between the values of the lower bins in math and in stats, with our colleagues in statistics enjoying a much higher dollar value for the same bin/standing. So much for Blain’s stated fundamental principles.

In any case, the most self-evident truth is that Mathematics has historically been grossly underfunded by NSERC, compared to other disciplines, while at the same time the level of mathematical talent in Canada has increased dramatically in the last 10-15 years. One of the hidden benefits of the new “binning system” is that it simply makes this truth even more evident.

> NSERC is apparently not in a position right now to make public this year’s results. We are told that the government is preventing them from doing so, because of the mandatory blackout on government funding announcements during an election period.

–Wow.

What is the historical precedent for information embargoes on the release of data during Canadian elections? Is this standard practice or new?

It is clear why government announcements for new federal expenditures are forbidden during election periods.

The question here is whether this also applies to announcements of results of a peer-review based competition that allocates a previously announced budget item.

This year’s competition statistics came up on NSERC’s website:

http://www.nserc-crsng.gc.ca/_doc/Funding-Financement/DGStat-SDStat_eng.pdf

All their statistics are in terms of percentages, though.
