Metrics such as the Impact Factor, which counts citations to a journal's articles, can help gauge the importance of academic research, but they can also oversimplify and undermine its significance. In the past few years, such measurements have been adopted as an easy way to deal with large volumes of complex data, and they have been used to “cheat” the system. (The Institute’s June issue covers this complex matter in the article, “Evaluating the Quality of Research.”) Here is a look at what’s taking place in academia.
IMPACT OF RESEARCH PAPERS
The European Association of Science Editors recommends “that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programs.” Although several academic institutions have endorsed the statement, individual articles continue to be assessed this way because it is easier to do.
So long as such inexact metrics are used, people will manipulate them. “Bibliometrics, like the impact factor in judging scientific quality, are open to gaming, and are in fact being gamed,” wrote Douglas N. Arnold, the president of the Society for Industrial and Applied Mathematics, in a bold editorial in the society’s publication [December 2009].
When it comes to research, many academics tell each other: “You cite my article and I will cite yours.” Academic departments and even the U.S. National Science Foundation have encouraged collaborative research as something positive, which it can be. However, many researchers routinely add their colleagues’ names to their papers as coauthors simply to make themselves and their departments look good.
THE TROUBLE WITH JOURNAL RATINGS
For journals as a whole, a higher impact factor means more article submissions and library subscriptions. The most respected index for journals was developed by the Institute for Scientific Information (ISI). The ISI index is composed of journals with high impact factors, where a journal’s impact factor reflects how often the articles it published over the previous two years have been cited.
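As a rough illustration (the exact formula and data sources used by indexing services are more involved), the two-year impact factor for a given year is commonly described as the citations received that year to items the journal published in the previous two years, divided by the number of citable items published in those two years. A minimal sketch, with made-up numbers:

```python
def two_year_impact_factor(citations_to_prior_two_years: int,
                           items_published_prior_two_years: int) -> float:
    """Citations received this year to articles the journal published in
    the previous two years, divided by the number of citable items it
    published in those two years."""
    return citations_to_prior_two_years / items_published_prior_two_years

# Hypothetical journal: 480 citations this year to its articles from the
# previous two years, during which it published 200 citable items.
print(two_year_impact_factor(480, 200))  # 2.4
```

The small denominator is one reason the metric is easy to move: a handful of coerced or reciprocal citations can noticeably raise a journal's score.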
However, this type of rating can also cause problems. Journals can slant their content by publishing papers authored by important people or review articles that already include a large number of citations from that journal, both of which mean a higher rating for the journal overall. An unusual way for journals to game the impact factor is by making articles that have a poor chance of being cited “uncitable.” One way to do that is by not printing their abstracts.
After an article is accepted by a journal, the author is often asked to add a few citations to research articles from that particular publication. Many of us, myself included, can attest to the practice of coercive citation. Authors now understand what is expected and load their articles with citations to the journal to which they are submitting before they are even asked.
ETHICS AND HONESTY
The IEEE Code of Ethics points out that we are to be “honest and realistic in stating claims or estimates based on available data.” So long as metrics that can be gamed are treated as scientific truths, we will all game the numbers, or those of us who refuse to will lose.
Students are bright enough to sense when we do not mean the ethics code we teach. Institutions must lead the way: they must stop using metrics that cannot be defended, and must themselves abstain from gaming the numbers before teaching others not to.
What is your take on the current methods of measuring the impact of research papers and journals? Share your thoughts in the comments section below.
S. Ratnajeevan H. Hoole is an IEEE Fellow and a professor of electrical and computer engineering at Michigan State University, in East Lansing.