Why is the 2011 data on NSERC’s Discovery Grants so radioactive?

Two months after the end of the 2011 competition for Discovery Grants, NSERC has yet to release the full data on its outcome. The reason given is that there is a government-wide hold on releasing data and making announcements. However, colleagues who have “mined” the limited data available have already observed three troubling policy decisions that are shaking the very foundation of NSERC’s new principles at Discovery, and leading to a generalized loss of confidence in NSERC’s new ways among the research community. It may be time to address issues of accountability, especially if NSERC-selected executive committees, which are neither recommended nor vetted by their research communities, are being led to make decisions with enormous consequences.

This gag order by government may come as a blessing to NSERC’s officials and some of their Evaluation Group (EG) executive committees. However, a tightly knit and organized research community, elementary linear algebra, a yardstick, and the impressionistic graphs released by NSERC already give a good idea of the competition’s outcome in at least one evaluation group (EG 1508 for Math/Stats). The findings explain the anomalies in the results, and clearly indicate the extent to which NSERC has violated its own principles.

NSERC has already notified applicants of the decisions regarding their proposals. This is of course very important, but it is not useful for gauging how an individual’s result fits into the overall competition. NSERC then released the following data, in which graphs show the percentages (and not the actual numbers) of people falling into the various bins.

But why didn’t NSERC’s released data include the dollar value for each bin in a given discipline? After all, releasing such a parameter could hardly be seen as divulging the results of a competition before a cabinet minister gets to do so.

Could it be that the release of such data would lead to the disclosure of the following table for Group 1508 (Mathematics & Statistics), which would then display the major breakdown, at several levels, in NSERC’s adherence to its own principles?

As was mentioned in previous posts, the table shows the discrepancy between the bin values in 2010 and 2011, which already contradicts the following principle stated on page 21 of the Peer Review Manual: “Bin levels, budget permitting in a given competition year, are expected to be in a similar range from year to year.” It did say, however, “budget permitting”. So what was the budget for this year’s competition?

Members of the Math/NSERC liaison committee managed to “mine” the limited data available (see the computation below), and figured that the most optimistic scenario for the budget assigned to Math/Stats in 2011 was $3,334,615. This indicates the following evolution of the Math/Stats budget over the last five years:

$3,895,750   (2007)
$3,834,833   (2008)
$3,581,450   (2009)
$3,538,427   (2010)
$3,334,615   (2011)

In other words, the budget has decreased by about 14.4% over the five years 2007 – 2011.
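
For what it’s worth, here is a quick check of that figure (a sketch in Python, not from the committee; the 2011 number is of course their estimate, not an official one):

    # Five-year evolution of the Math/Stats budget, as quoted above.
    budgets = {2007: 3_895_750, 2008: 3_834_833, 2009: 3_581_450,
               2010: 3_538_427, 2011: 3_334_615}
    drop = (budgets[2007] - budgets[2011]) / budgets[2007]
    print(f"{drop:.1%}")  # -> 14.4%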

Another way of looking at budget variations is to compare the budget assigned by NSERC to the 2011 Math/Stats competition with the total of all returning grants, i.e., those expiring on March 31, 2011. According to the search engine, this total is around $3,828,803, which is almost $500,000 (precisely, $494,188) more than the budget available for this year’s competition. What is behind this re-allocation?

Even more troubling is the appearance of two different dollar values for each bin within the same evaluation group. For reasons that are yet to be explained, and by decisions whose origin is yet to be confirmed, it turns out that this year there are two classes of citizens in Group 1508, contradicting another NSERC principle:

“Within a given discipline group, proposals with similar scientific merit should have similar grant levels regardless of the applicant’s granting history with NSERC.”

Now why did I call them Group A and Group B, when the intention was to distinguish the mathematicians from the statisticians? Well, because no one has figured out why, for example, Martin Barlow and Gail Ivanoff are considered statisticians at NSERC, while Ed Perkins and Jeremy Quastel are seen as mathematicians. In other words, where do you fit an applicant who works on probability theory? OK, OK, one can toss a coin.

The assignment of two different dollar values for each bin in a specific group is extremely troubling, considering the precedent it sets. This act essentially torpedoes a key principle at the very foundation of the new system adopted by NSERC, and forces us to ask the following questions.

  • Is Group 1508 the only group that emerged with two distinct values for each bin? Does this open the possibility that future competitions could assign different values to bins in pure and applied mathematics?
  • Should all probability theorists be allowed to choose the subgroup where the pasture is greener?
  • Is this creating a precedent whereby, next year, cell biologists could claim that their bins should be more valuable than those of plant biologists?
  • Should this precedent become a permanent fixture of the Math/Stats relationship, and what is the effect of this decision on the workings of the “Long Range Plan” for mathematics and statistics?
  • Should EG Executives be empowered to make such decisions?

We are now faced with a historical precedent, where a bin in Statistics has been deemed more deserving of funds than its counterpart in Mathematics. It is also highly probable that the decision will be justified by a perceived differential in the cost of research between the two groups.

Whether the decision is justified or not, one needs to ask whether such a precedent, as well as the policy of depleting the middle bins in mathematics, should have been established by a group in which Mathematics is represented by two NSERC-selected junior researchers who were neither recommended nor vetted by the Canadian mathematical community.

&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

For fun: Here is the computation of the Math/NSERC Liaison committee and the partial data on which their estimates are based:

A key assumption is that the number of proposals placed into the top bins is #(A+B+C) = 10; this gives the most optimistic estimate for the budget. See below for the effect of other possibilities for the top three bins.

By counting screen millimeters on the bar graph on page 12 of the NSERC data release (and using #(A+B+C) = 10), one can estimate the number of successful grant applications at 179 out of 274, an overall success rate of 65%. The proportions of successful Early Career (EC) and Established Researcher (ER) proposals can also be measured, and one finds that there should be 21 EC and 158 ER successful proposals. The success rate for EC proposals is 69%, while NSERC divided the success rate for established researchers into two categories: E (returning grant holders) and O (other).

Assume that O is the category of established researchers who did not hold a grant coming into the competition. The set E had a stated success rate of 81%, while the set O had a success rate of 38%.

Since #E and #O are not given in the NSERC data, one has to solve for E and O using E + O = T := total ER applications, together with the requirement that the stated success rates combine to give the overall 65%. That is,

0.81 E + 0.38 O = 0.65 T.

To find T, the total number of established-researcher proposals, one uses the on-screen ruler again and finds T = 245. The system above then gives about 156 in E, of which 126 were successful, and 88 in O, of which 33 were successful.
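
In code, the solve looks like this (my sketch, not the committee’s script; since every input is read off a bar graph, the output matches the quoted 156 and 88 only up to a few units):

    # Solve E + O = T and 0.81*E + 0.38*O = 0.65*T for the number of
    # returning grant holders (E) and other established researchers (O).
    T = 245                      # total ER applications (screen measurement)
    rate_E, rate_O = 0.81, 0.38  # success rates stated by NSERC
    rate_ER = 0.65               # overall success rate (measured)

    E = (rate_ER - rate_O) / (rate_E - rate_O) * T
    O = T - E
    print(round(E), round(O))                    # -> 154 and 91 (post: 156 and 88)
    print(round(rate_E * E), round(rate_O * O))  # -> 125 and 35 successful grants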

Therefore, always assuming that #(A+B+C) = 10, one finds the successful counts #EC = 21, #E = 126 and #O = 33, up to experimental error. This gives the budget level of $3,334,615 quoted above, using the average grant values that NSERC has stated:

E has average grant award $20,960
O has average grant award $13,600
EC has average grant award $11,368
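
Multiplying these averages by the estimated counts of successful grants gives a quick sanity check (mine, not from the post); the small gap with the quoted $3,334,615 presumably comes from the averages being rounded:

    # Budget estimate for the #(A+B+C) = 10 scenario: successful-grant
    # counts times NSERC's stated average awards.
    counts   = {"EC": 21, "E": 126, "O": 33}
    averages = {"EC": 11_368, "E": 20_960, "O": 13_600}
    budget = sum(counts[k] * averages[k] for k in counts)
    print(f"${budget:,}")  # -> $3,328,488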

Other assumptions:

If we had assumed that #(A+B+C) = 9, then we would have found #EC = 19, #E = 115 and #O = 30; and if #(A+B+C) = 8, then #EC = 17, #E = 100 and #O = 26. These assumptions would result in the following, substantially lower, estimates of the Math&Stats DG 2011 budget:

$3,034,392 (9 people in bins A+B+C)
$2,651,016 (8 people in bins A+B+C)
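
The same multiplication, run over all three scenarios (again a sketch; the small gaps with the dollar figures above come from the rounded averages):

    # Budget estimates under each assumption for #(A+B+C).
    averages = {"EC": 11_368, "E": 20_960, "O": 13_600}
    scenarios = {10: {"EC": 21, "E": 126, "O": 33},
                 9:  {"EC": 19, "E": 115, "O": 30},
                 8:  {"EC": 17, "E": 100, "O": 26}}
    for n, counts in scenarios.items():
        total = sum(counts[k] * averages[k] for k in counts)
        print(f"#(A+B+C) = {n}: ${total:,}")
    # -> $3,328,488, $3,034,392 and $2,642,856 respectively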

A final caveat: people are integers, so #(A+B+C) and #D are integers, and their ratio must be a rational number with a fairly small denominator. Measuring the bar graph of the Math&Stats results again, the ratio #D/#(A+B+C) is very close to 1/3, so #D = 3 and #(A+B+C) = 9 is probably the most sensible guess; the other choices, #(A+B+C) = 8 or 10, do not come as close. But it could be that the published NSERC data is wrong, which would make all of the above discussion completely moot.
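
A tiny check of that integrality argument (my sketch; the ~1/3 is the measured ratio from the post):

    # For each candidate #(A+B+C), find the integer #D that brings
    # #D / #(A+B+C) closest to the measured ratio of ~1/3.
    measured = 1 / 3
    for n in (8, 9, 10):
        d = round(measured * n)  # best integer #D for this n
        print(n, d, round(abs(d / n - measured), 3))
    # -> only n = 9 (with d = 3) hits 1/3 on the nose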
