As I was scratching my head trying to find new criteria to add to the NSERC formula for “binning” Canadian scientists, I contemplated adding an “Author’s Citation Index”. The thought didn’t last long.
To start with, there are many of them (MathSciNet, the h-index, ISI, and soon PoP for Google Scholar), and each has been subjected to serious critiques. Then I remembered that one of my own most cited papers is far from being one of my best, at least in my own opinion, which should matter here!
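Of the indices above, the h-index at least has a crisp definition: the largest h such that the author has h papers with at least h citations each. A minimal sketch in Python (the function name and the sample citation counts are my own illustration, not data from any of these databases):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Five hypothetical papers with these citation counts:
print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have >= 4 citations
```

Note how insensitive the measure is: the paper with 10 citations could have 1000 and the h-index would not move, which is one reason a single number can misrepresent a record.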
I also recalled how a Texan friend of mine once told me that the “Functional Analysis” paper containing the Johnson-Lindenstrauss lemma is cited far more in computer science journals than in the mathematics journals surveyed for citations in the MathSciNet database. Should Google Scholar then be the new measuring stick?
It is safe to say that it is definitely not in the culture of mathematicians to consider the citation index as a good measure for evaluating fellow researchers. There must be some reasons other than those I stated above.
This is, however, not the case in other scientific disciplines. Whenever I serve on interdisciplinary panels, the latest being the selection committee for the Killam Prize of the Canada Council for the Arts, I am struck by how heavily certain subjects rely on such indices. Various citation indices were often at the heart of the evaluation of our colleagues outside the mathematical sciences, and referee reports often included citation numbers for various papers. I admit finding this helpful at times, especially when faced with files displaying 500+ publication titles, some involving more than a dozen co-authors.
In any case, the question of what a citation index means for a discipline or for a researcher cannot be that easy to answer. For example, the average citation index varies enormously across disciplines, and this too calls for an explanation. Another obvious observation is that these averages shrink rapidly over the years, regardless of discipline, as one can see in the following spreadsheet of average citation rates taken from the 1998-2008 Thomson Reuters Essential Science Indicators database. (Recent papers have simply had less time to accumulate citations.) Are we converging towards any steady state besides zero?
Mathematics is, as usual, at one end of the spectrum; the other end in this case is molecular biology. One explanation is that mathematics papers typically list few references, whereas papers in other fields cite many more publications, so each paper there collects more citations on average.
Can someone clarify these puzzles for us? I also recall there was a great deal of controversy about the popular “journal impact factor”, but that story is for another day.