Five years ago, NSERC’s officials rightly challenged the status quo by upholding “that the level of a grant should be commensurate with merit, regardless of the applicant’s granting history with NSERC.” They don’t, however, seem to be in any hurry to tackle the equally important but thornier issue of doing away with the blatant historical biases in the funding records of the various disciplines within the Discovery Grants Program. It is, of course, a tougher task to openly recognize and redress historic aberrations such as the one that keeps average grants in Mathematics and Statistics at a fraction of those in Computer Science, and at almost half the average across all fields ($16,816 versus $33,691 in 2011). It takes more guts to do so, but NSERC is determined to take up the challenge. It is now consulting on criteria for a new re-allocation process and wants you to be involved. So does “Piece of Mind”, as the stakes are high.
Almost a decade after the last community-driven re-allocation exercise, NSERC is still searching for an open and transparent way to recalibrate funding among the various disciplines within its Discovery Grants Program. This doesn’t mean that the NSERC folks haven’t done any re-allocation since. They surely have. The problem is that no one knew how, how often, or by how much.
Indeed, the 2009 restructuring and streamlining of the 23 Grant Selection Committees (GSCs) into 12 Evaluation Groups was a golden opportunity to reshuffle budgets under the radar. Note also that the days when GSCs knew their annual competition budgets (which simply consisted of all returning grants plus an across-the-board variation) are long gone. Members of the Evaluation Groups are now carefully “shielded” from knowing their budgets, under the cover of “separating the process of assessing scientific or engineering merit from assigning funding.” The Math revolt of 2011 was essentially triggered by the perception that the Math & Stats budget for that year had been cut by $800K.
So we are not talking here about initiating or restoring re-allocation exercises, but about devising clearer and more transparent processes for carrying them out, hopefully in the near future. It is therefore important to know what is at stake, and where the process currently stands.
As one might expect from a bureaucratically entrenched agency, NSERC had intended a few years ago to build a system of bibliometric indicators that would perform automatic reallocations between disciplines, driven by some magically computed number meant to capture the strength of a discipline. More recently, NSERC commissioned a report from the Council of Canadian Academies (CCA) “to examine the international practices and supporting evidence used to assess performance of research in the natural sciences and engineering disciplines”. Some say that the bureaucrats were hoping the CCA report would legitimize this wish and suggest a mechanism for computing such a magic number.
Fortunately, the CCA report, “Informing Research Choices: Indicators and Judgment”, released on July 5, 2012, points in the opposite direction, concluding that “… bibliometrics, and other standardized metrics, are potentially useful at the level of nationally aggregated fields, but should be used to inform rather than replace judgments by expert panels.”
As Walter Craig explained in a widely distributed message, the report confirms that “indicators” or “metrics” such as blind publication counts or citation indices, averaged or not, are in general not good measures of research quality, activity, capacity, or trends, and certainly not for individual researchers or for comparisons between groups of researchers in different scientific areas. The CCA panel concludes that consultation with expert panels, in the context of the individual disciplines, is the best way to make informed decisions. Such indicators could, however, serve as reference material in expert panel deliberations that compare a discipline (such as math) more broadly with the same discipline in other countries.
NSERC has now launched a survey to consult on these matters. The survey is, in effect, asking us whether we agree with the CCA report. Pointing to NSERC’s handling of previous international and national reports, cynics even suggest that the NSERC folks are fishing for a cacophony of contradictory comments and opinions, so as to create confusion around the report and then go their own way.
Another colleague wrote: “the online survey is probably the worst online survey I have ever seen – it is almost incomprehensible.” The survey is indeed a very long, badly designed, and confusing questionnaire, with many questions about whether various numerical indicators are valid for making decisions on Research Trends, Research Quality, and Research Capacity. Little is said of the CCA recommendation stressing that expert panels matter most.
Ironically (or coincidentally?), another CCA report appeared recently. “The State of Science and Technology in Canada” was not commissioned by NSERC, but it does conveniently make overall judgments on the quality of science and humanities in Canada, and rank the various disciplines in an international context. And guess what: it is based to a large degree on bibliometric indicators, and NSERC will likely cherish the opportunity not to ignore its findings while implementing its re-allocations.
So, what can you do?
Well, bite the bullet, find a bit of time to fill out the painfully lengthy NSERC questionnaire, and pray that you will be heard this time around.
Assessing science is hard! NSERC’s bureaucrats should know it, but then so do we!
I’m glad you wrote this. I think this is a really serious issue.
When I attempted to complete the survey (I say “attempted” because, upon submitting it, I received the message “Access Denied. You do not have permission to perform this action or access this resource”), I found that none of the options put forward would allow a radical correction for Math/Stats, which seems to have drifted into very poorly funded territory.
Maybe it is appropriate that “course correction changes” should not be too large, but that leaves open the real question of how to properly resolve the current situation.
The mathematics publishing paradigm appears to be changing so quickly that I think the community should be extremely wary of being locked into any particular metric. For two examples, see Tim Gowers’s recent blog posts at http://gowers.wordpress.com/. Furthermore, should earning a high “reputation” on sites such as MathOverflow count at all? How about blog posts? Or riled-up blog comments…