The Kafkaesque grip of bureaucrats on Canada’s peer review and granting process

The observer couldn’t believe what she was hearing as she watched one of the five-panelist subgroups in NSERC’s new conference model. “The applicant has a couple of papers in the Journal of … . Does anyone know anything about this journal?” said one. “Oh, this must be one of those obscure journals,” replied another panelist.

The observer, who is also on the panel but not in this particular subgroup, couldn’t restrain herself and asked for permission to provide the information she had: the journal in question is one of the top journals in the discipline, information that could be crucial to the case. The NSERC staffer present denied permission!

Why is an academic panelist, donating many hours of valuable time to NSERC, asking an administrator’s permission to correct an error in the discussion?

Welcome to the rapidly warping universe of our granting councils, where bureaucrats are starting to rule supreme, and where scientific evaluation is becoming an exercise in policing, legalese and loopholes.

The bureaucratic grip on the Tri-Council’s granting processes is becoming so alarming that it is time to start drawing the line. Leaving aside the myriad Tri-Council programs that are now decided solely by bureaucrats (a list of which will be provided in a future post), the bureaucrats have now homed in on what used to be the jewel in the crown of Canada’s granting system: the NSERC Discovery Grant program.

First, the bureaucrats introduced a “binning system”, built on a fragmented decision-making process: the so-called conference model. It is this fragmentation that is causing the volatility, and the lack of uniformity and consistency, that is leading senior scientists like Don Fraser to speak out.

By placing HQP supervision at the core of this binning system, the bureaucrats have effectively changed the very nature of the Discovery Grant (DG) program, from one that promotes and supports research excellence to just another of their many programs that sponsor training. A list and an evaluation of these programs will also be given in a future post.

Even the best-intentioned can’t predict all the consequences of their actions. Let’s hope the bureaucrats did not foresee the negative impact on early-career scientists, and on universities with minimal or no graduate programs. Did they not anticipate that, by adding three scores (for Research, Quality of Proposal, and HQP), a scientist at HQP-deprived Regina would have to be a much better researcher than one at the HQP-rich University of Toronto in order to get the same level of funding?
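To make the arithmetic concrete, here is a minimal sketch of such an additive scheme. Every scale, numeric value, and example below is a hypothetical illustration, not NSERC’s actual parameters. When three criterion ratings are simply summed into a bin, a weak HQP score can only be compensated by a much stronger Research rating:

```python
# Hypothetical illustration of an additive three-criterion binning scheme.
# The rating scale, numeric values, and examples are invented for this
# sketch; they are not NSERC's actual figures.

RATINGS = {
    "Insufficient": 1, "Moderate": 2, "Strong": 3,
    "Very Strong": 4, "Outstanding": 5, "Exceptional": 6,
}

def bin_score(research: str, proposal: str, hqp: str) -> int:
    """Sum the three criterion ratings into a single bin score."""
    return RATINGS[research] + RATINGS[proposal] + RATINGS[hqp]

# An applicant at an HQP-rich institution:
toronto = bin_score("Strong", "Strong", "Outstanding")   # 3 + 3 + 5 = 11

# A peer at an HQP-deprived institution must be rated far higher
# on Research just to land in the same bin:
regina = bin_score("Exceptional", "Strong", "Moderate")  # 6 + 3 + 2 = 11

assert toronto == regina  # same bin, very different research records
```

The sketch shows the structural point: under pure addition, the HQP criterion trades off one-for-one against research excellence, whatever the applicant’s local circumstances.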

After creating an absurdly rigid binning system, the bureaucrats next try to dictate the so-called binning-to-funding map, sometimes overruling the panelists’ recommendations. They aim to enforce cut-offs below which there is no funding (again with a dramatic impact on smaller institutions), and then they interfere by trying to push down the upper half, supposedly to raise standards.
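Again as an illustration only (the bin numbers, dollar figures, and cut-off below are invented, not NSERC’s), such a binning-to-funding map amounts to a lookup table with a hard floor. Whoever controls the table and the cut-off, rather than the panel, controls every award:

```python
# Hypothetical binning-to-funding map with a hard cut-off.
# All bin numbers and dollar figures are invented for this sketch.

FUNDING_MAP = {  # bin score -> annual grant (dollars)
    12: 45_000, 11: 38_000, 10: 31_000, 9: 25_000, 8: 20_000, 7: 16_000,
}

CUTOFF = 7                  # bins below this receive no funding at all
TOP_BIN = max(FUNDING_MAP)  # scores above the top bin get the top amount

def award(bin_score: int) -> int:
    """Map a panel's bin recommendation to an annual grant amount."""
    if bin_score < CUTOFF:
        return 0  # hard floor: no funding, regardless of the panel's view
    return FUNDING_MAP[min(bin_score, TOP_BIN)]

print(award(9))  # 25000
print(award(6))  # 0: just below the cut-off, the grant vanishes entirely
```

Raising the cut-off by one bin, or compressing the top rows of the table, re-prices every grant in the country without a single panelist’s input.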

But the worst part of this new era of bureaucratic rule may be the amount of “policing” of the review process that comes with it.

Indeed, listen to the tale of the absent-minded mathematician who forgot to include and discuss one of his best papers in his grant application. One frustrated panelist, who knew how good the applicant was, jumped for joy when he saw the paper mentioned by one of the external reviewers: like a lawyer seizing on a loophole to make his case, he could now use this piece of evidence in the evaluation/trial.

Further examples of excessive control include citations of particular papers, or information about specific university programs, which, unless explicitly stated in the application, cannot be used by panelists. In one instance, a trivial Google search showed that an applicant was supervising 50% of all graduate students in his small department. But since this information was neither included in the official documents nor mentioned by an external referee, and since his HQP numbers looked small compared to other applicants’, he received a low HQP rating. Kafka would have loved the absurdity of this exercise: panelists knowingly make the wrong decision, forced to do so by purely bureaucratic rules!

Alongside the new importance of the art of finding loopholes at NSERC, we now hear about the importance of mastering the correct (i.e., allowable) vocabulary in order to make a point. For example, NSERC does not allow leadership or administrative duties to be considered as a factor (for example, as a reason for a delay in research activities) in reviewing a grant applicant. The loophole here is to talk about the “impact of the applicant” on the community during his/her tenure in the job.

This system is set to repeat itself every year. The process for appointing Evaluation Group members is completely opaque, and NSERC’s bureaucratic goals and methods are at odds with obtaining qualified reviewers. In addition to issues of gender, geographic, and language balance, there is a reluctance to appoint reviewers from departments with large numbers of applicants, as this raises the number of conflicts of interest. The inevitable result is that the largest, and often most highly rated, departments in the country are under-represented on review committees.

Compare this to what happens on NSF panels, where top scientists convene and discuss the merits of each case as they see fit, with no interference whatsoever from the program officer, who happens to be a scientist seconded from his/her own institution. There is no controlling and policing of the proceedings by a bureaucrat.

We ask the bureaucrats to pull back and make way for a process governed by trust in the wisdom, integrity, and expertise of the outstanding scientists who volunteer their time and effort for a most valuable endeavour. It is time to draw a line in the sand and call on Canada’s scientists to regain control of the country’s scientific evaluation process.


13 Responses to The Kafkaesque grip of bureaucrats on Canada’s peer review and granting process

  1. Brett Eaton says:

    Speaking as a relatively new academic who holds both American and Canadian citizenship, and who had job offers in both Canada and the US, I can say that the Discovery Grant system played a major role in my choosing to come to UBC rather than a higher-ranked school in the US. My recent experience with NSERC (I renewed my grant last year) has completely changed my mind. Discovery Grants were always relatively small, but the relative stability of the program was what made it effectively the most efficient research grant program in the world, according to the review of the program in 2007 or thereabouts. The recent changes have made it difficult to capitalize on the research momentum I built up over my first five-year cycle, and I certainly don’t have confidence that funding will be stable going forward. The system needs to change. I used to feel a great sense of pride in the NSERC model, but its current behavior seems to me to be arbitrary, short-sighted and politically motivated to a degree that is frankly astounding, considering the formerly ethical, professional and independent way in which decisions used to be made. My former pride has quickly turned to disdain and disappointment. The NSERC system needs to be restored, and Discovery Grants should be made the focus of the program going forward. That will train more students and produce much more high-quality science.

  2. Anonymous says:

    I am a latecomer to academia, arriving at my first academic posting at the age of 44 after 23 years of industry experience. I have applied three times for an NSERC Discovery Grant without success (I applied this year too, but I’m not holding my breath). Unfortunately, I am in the difficult position of having a good number of papers in journals that are de rigueur in my engineering discipline, but that the Civils who review my applications have never heard of (there is no representation of my engineering discipline on any committee…). I have graduated three Master’s students and one PhD during my time here. My student evaluations are in the top 20%. So in two years, when I apply for tenure, what is the senior ARPT committee going to say? “He doesn’t have a Discovery Grant.” “By the way, he has raised over $400k in research funds, including as a co-investigator on an NCE grant… but not a Discovery Grant.” Given the decreasing NSERC funds, a record number of applicants, and a less than 30% success rate, it is deplorable that this is still a metric. Sure, when the committee members started out, the Discovery Grant success rate was over 80%, but not any more.

    We have to demand reform not only at the NSERC level but also at the university and faculty level: acknowledge that Discovery Grants are no longer a valid benchmark, and that a well-funded research activity (from industry, MITACS and other sources) satisfies the requirement for a self-sustained program, rather than making any specific grant a metric for tenure and promotion. Perhaps quality of teaching and service to the community should play more of a role. And how about some credit for bringing decades of industry experience to the benefit of our students?

  3. Carl Michal says:

    Brett Eaton’s comments express my feelings very well. I support the call for reform – perhaps heeding the review of the DG program that took place before these changes were implemented.

  4. How can we take steps to ensure that the quality and impact of peer review in grants are strengthened?

    As an example, I was looking at the Engage Grants, which aim to create industrial-academic relationships. I was shocked that there is no mention of peer review in their “Selection Criteria” at http://www.nserc-crsng.gc.ca/Professors-Professeurs/RPP-PP/Engage-Engagement_eng.asp, except for the mild requirement that the applicant’s Form 100 should have been reviewed in the last six years. There is no mention of the grant application itself being reviewed!

    Moreover, the company is not required to put any money into the pot! As someone who works regularly with established industry and a few startups, I believe this is throwing money away. We are talking about $25,000, which is more than my top statistician colleagues get for all their work in data analysis, work that we and others such as Google happily use in industry. How can an unreviewed story about a collaboration with industry, where the company is not even required to be fully engaged (and in industry we know people only get seriously engaged when their money is in the pot: Business 101, NSERC!), be given more money than the record of a top scientist in Canada? This is nothing but scandalous.

    These reckless abuses of tax money contribute neither to the quality of research nor to the competitive edge of industry.

  5. sebastian says:

    Not that the trends you discuss are not worrying, but your depiction of the NSF panels does not quite fit my (admittedly meager) experience. The panelists discuss the applications, but the NSF motto of “funding proposals, not applicants” is constantly there. In many fields outside mathematics this makes more sense. Moreover, the panel only ranks the proposals. It is the NSF administrators who later decide on budget allocation and the precise cutoff for funding.

    • Ghoussoub says:

      All agree that we should be funding proposals. But this doesn’t mean that track records, as known to and reported by qualified panelists, should not count (in addition to a serious proposal). Bear in mind also that the “NSF administrators” who decide on budget allocations are themselves seconded scientists. The main argument of my blog post is about the degree of “policing” going on. Disrespectful and unproductive!

  6. Matei Ripeanu says:

    While I agree with many of the points the initial post makes, I think the picture it presents is incomplete.

    When I arrived at UBC six years ago, the general feeling among my colleagues was that the quality of the Discovery Grant application did not really matter. In their opinion, the most important factor affecting the level of funding was the seniority of the applicant (the second most important being the number of journal publications). The statistics published by NSERC seem to support this view.

    The recent change towards a more discriminating level of funding, more closely tied to the quality of the application, is in my opinion a step in the right direction that should be acknowledged.

    • Ghoussoub says:

      You are talking at a higher level of generality/abstraction than what we are aiming/hoping for in this forum. Indeed, NSERC changed the evaluation system in response to concerns that reviewers were spending too much time debating very small changes in grant sizes, that the system was too conservative in ramping down researchers with decreasing productivity, and that it did not permit evaluation of interdisciplinary grants. One aspect of the solution was to have review panels rate the applications against stated criteria of quality, rather than decide dollar amounts for grants. Fair enough; this procedure is practised by many review panels. But the devil is in the details, and the implementation of this goal has brought with it an alarming number of consequences. Fewer reviewers assess each grant, reviewers move among panels quite frequently, and final decisions on grant amounts are not even seen by the reviewers. Bureaucrats are charged with monitoring each discussion, ostensibly in the interest of ensuring that the same standards apply to every discussion. The result is a policing of discussion that panellists find frustrating, and sometimes absurd. For a more detailed analysis see
      https://ghoussoub.wordpress.com/2010/12/09/nserc-discovery-grants-ii-on-intentions-and-consequence-old-vs-new/

  7. Anonymous says:

    While I agree it’s wise to always be vigilant – and without question I see other programs such as Engage as being fundamentally misguided and possibly corrupt, an abdication of the responsibility for reviewing research to “industry” – I would say that this piece doesn’t really jibe with my own experiences on a Discovery Grant evaluation group. There were long and impassioned discussions of how to weight the HQP component based on the program in question, with a general consensus — supported by the bureaucrats I might add, though their opinions weren’t seen as critical — that both the institution (e.g. Lakehead vs. Toronto) and even the field of study (something sexy and student-friendly versus esoteric and deeply unpopular yet also important) should be taken into account. Administrative duties were certainly taken into account as a reason for delays in research activity. We were advised by the bureaucrats to avoid bringing citation counts / impact factors into the discussion if not explicitly mentioned in the applications — given the noise and bias in those metrics — though it happened nonetheless when reviewers thought it would help shed light on the matter.
    I’d agree there are many positive tweaks that could be made to the new Discovery system, but at least in the slice I’ve directly experienced, I’m not sure it needs a complete overhaul. Far more worrying, I think, is that its budget continues to shrink in real terms, even while the number of applicants increases, with more and more money going to far less productive granting programs.

    • Ghoussoub says:

      I don’t see much disagreement. No “non-impassioned” discussion was reported. I have no problem believing there were discrepancies between Evaluation Groups (even subgroups) in how to evaluate HQP. Some of your panelists seem to have had the guts to speak up against the bureaucratic rules (citations when needed). Good for them. The main point that should definitely interest (and raise eyebrows in) other EGs that had a different experience is this: “Administrative duties were certainly taken into account as a reason for delays in research activity.” In any case, my examples were just illustrations of bureaucratic meddling with scientists’ judgments. For a detailed critique of the new binning system, take a look at https://ghoussoub.wordpress.com/2010/12/09/nserc-discovery-grants-ii-on-intentions-and-consequence-old-vs-new/

  8. Antony Hodgson says:

    I have served on the Mechanical Engineering Discovery Grant committee over the past two years and feel that there are both strengths and weaknesses to the new system. Before talking about those, I’d first like to note that it is not the conference model mentioned above that is the problem – the conference model simply refers to the fact that NSERC now allows panel members from different evaluation groups to attend evaluations in other groups in order to bring in expertise relevant to a particular application. Rather, any problems arise either from the new evaluation process or from the overly rigid rules being applied during the evaluation.

    On the positive side, the new approach has dealt with some major flaws in the old model, the most egregious of which was the de facto inertia in funding: the funding recommended for an applicant would largely be their previous level of funding, plus or minus a bit to reflect the assessed excellence of the applicant over the past five years. In practice, this led to significant disparities in funding between applicants otherwise judged to be approximately equal in overall excellence, simply because one had been ‘in the game’ longer than the other. In addition, established applicants would sometimes (or even frequently) not write particularly good research proposals, counting instead on funding inertia and research reputation to carry them through. The new system effectively reduces both of these problems, allowing promising younger researchers to build their base level of Discovery Grant funding more quickly, and requiring all applicants to take care in formulating their research proposals if they want to score well. I personally have seen several applications where the applicant was judged to be Outstanding on the attributes of researcher excellence and contributions to HQP, but only Strong (or even Moderate) on merit of the proposal; I actually think this is a good thing: researchers should be required to make a careful case for their proposed research in order to win public funding for it.

    On the negative side, I concur with many of the critiques described above by Nassif and will briefly address a couple of the issues raised in his post.

    First, the prohibition against introducing evidence which is not presented in the application probably arose from the best of intentions (namely, to ensure that all applicants are treated equally and that one does not unfairly benefit relative to others), so I have some sympathy for the notion that it should be (and now is) up to the applicant to make the case for their own ‘excellence’ through describing the impact of their contributions. However, I don’t see why evaluation panel members should not be able to introduce publicly available information into the discussion. Such information can be useful in correcting errors of fact, such as in situations similar to the one described in this post’s opening anecdote (I too witnessed a case in which a reviewer incorrectly assumed that the applicant had no particular expertise in a key area relevant to the proposal, when I knew for a fact that the applicant did have this expertise. Fortunately, I was able to find a brief mention of this expertise in the application and parlayed that into a rebuttal of the reviewer). External information can also help provide useful context for the non-experts on the evaluation panel, and sharing of this information certainly happens informally whether NSERC approves or not (e.g., we did discuss the reputation of journals or conferences, even if the applicant did not explicitly address these questions). I am comfortable with NSERC advising against introducing personal information or evaluations (e.g., “I met Dr. X at a conference and thought his ideas were fabulous”), but otherwise I think panel members should be free to introduce publicly available knowledge into the discussion. I would also encourage applicants to explicitly spell out relevant contextual information (e.g., to explain their choice of publication venues) as there is no guarantee that the panel will draw the correct conclusions if they are not pointed towards this information.

    The issues that are of greatest concern to me, however, are the policy choices implicit in the new evaluation system. Nassif mentions the shift in emphasis from research excellence towards training. I was told quite emphatically that this is completely intentional: NSERC is constituted under Industry Canada and arguably takes training as its most important mandate. If you look at the list of things it supports (http://www.nserc-crsng.gc.ca/NSERC-CRSNG/Index_eng.asp), the number one item is “The agency supports university students in their advanced studies”. It is furthermore made quite clear in the orientation sessions that training of graduate students, and doctoral students in particular, is considered more important than training of undergraduate students. In my experience, although training of undergraduates in research is commended, it will be difficult or impossible for an applicant to score above a Moderate on the training of HQP criterion without training graduate students, which makes it difficult or impossible for an applicant at a small institution to obtain funding. To my mind, this is an unwise strategy: I think we should be distributing a base level of funding to all researchers who can put together a competent research proposal, regardless of their institutional situation. It’s hard to tell where future great ideas will come from, but biology teaches us that genetic diversity is an important factor; this implies that we should fund a broad pool of thinkers, not just the largest research ‘machines’.

    The other issue that particularly disturbs me is how administrative leadership is treated. In effect, taking on such positions can doom a research career, at least from the point of view of NSERC Discovery Grants. A productive researcher who cuts their research activity in order to serve such a role can easily see their ‘excellence’ rating cut from Very Strong or Outstanding to Strong or Moderate or even Insufficient in their next application, even though the very reason they were given the position in the first place was because of their research success. The reasoning seems to be that taking on such a position was ‘voluntary’ or intentional in a way that illness or having a child is not. I find this policy reprehensible. At the very least, the evaluation panel should be asked to assess how ‘excellent’ the researcher is likely to be during the period to be covered by the grant; this would give the evaluators the freedom to assess the quality of the applicant’s previous work and to infer that if they are returning to a researcher role after a period of administrative activity, they will likely continue to produce work of this quality. Indeed, in my opinion, rather than trying to somehow multiply quantity and quality of contributions, it would be far better to ask the evaluation panel to assess the quality of the applicant’s contributions given their circumstances and to make a separate determination concerning the expected availability of the applicant for research during the proposal period (and then perhaps to adjust accordingly the funding awarded). Without such changes, we increasingly risk finding that none of our capable colleagues will be willing to step forward to take on positions of academic leadership, and that will bode ill for the future of our universities.

  9. Undisclosed says:

    I have not been on NSERC panels, but this explains a lot. Canada does not have a science minister with science as their sole portfolio. For me, the Discovery Grant system was a major reason to turn down a prestigious US offer. I am now doing well in funding, but 80% of it is foreign (and thus unsustainable, as the foreign government may soon change its mind about Canada). I am soon leaving Canada for reasons that are largely personal, but I am also delighted not to be exposed to a funding system that values commercialization above everything else.
