Transparency Promised for Science's Most Misused and Most Vilified Metric

Thomson Reuters vows to be clearer in the future about the "impact factor," an annual ranking of more than 10,000 scientific journals

The most misused metric in science is getting a makeover — although many researchers would like it to disappear altogether.

Information firm Thomson Reuters says that it will become more transparent about how it calculates impact factors, an annual ranking of more than 10,900 scientific journals that it published on 29 July, along with the names of 39 journals that it is barring from the list.

The firm, which is headquartered in New York, is also revamping its commercial analysis product, InCites, to add metrics based on individual articles, and to allow users to make their own calculations. But critics say that more change is needed.


The impact factor was invented to help libraries decide which journals to purchase: roughly speaking, a journal with a higher impact factor attracts more citations. But it has become a seductive yardstick by which to judge the quality of researchers and their papers — angering scientists who say that they are judged by where they publish, rather than what they publish.
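The calculation at the heart of the dispute is simple in outline. A journal's impact factor for a given year is the number of citations received that year to items the journal published in the previous two years, divided by the number of "citable items" it published in those two years — and much of the controversy concerns which items Thomson Reuters counts in each part of the fraction. A minimal sketch, using invented citation counts rather than real journal data:

```python
# Sketch of the standard two-year impact-factor formula.
# The numbers below are hypothetical, for illustration only.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Citations received this year to articles from the previous two
    years, divided by the citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 2,000 citations in 2014 to articles published in 2012-13,
# from 500 citable items published in those two years:
print(impact_factor(2000, 500))  # → 4.0
```

Because the denominator counts only "citable items" (typically research articles and reviews) while the numerator counts citations to anything the journal published, small classification decisions can shift the final number — which is why editors such as Pulverer want to see exactly which items are included.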

The result is a race to get into journals with high impact factors, and almost everyone is unhappy with this situation, says Stefano Bertuzzi, executive director of the American Society for Cell Biology in Bethesda, Maryland.

Thomson Reuters says that the problem lies in how the impact factor is being used, not in the metric itself. But even librarians and journal editors are not content, because they say that the firm is not clear about how it calculates the metric. “We’re not sure how reliable their data are,” says Bernd Pulverer, chief editor of The EMBO Journal in Heidelberg, Germany, who says that he has struggled to get his scores to match the firm's.

Clearer impact?
Last year, Bertuzzi coordinated a statement signed by hundreds of research organizations and more than 11,000 scientists, the San Francisco Declaration on Research Assessment (DORA), which deplored the abuse of the impact factor and called for better ways to evaluate research. But he and Pulverer also sent a private letter to Thomson Reuters asking it to improve the way in which it calculates impact factors. That letter was never answered, they say, so on Friday 25 July they made it public on the DORA website.

Now Thomson Reuters, which says that it did answer the letter, says that it is “taking significant steps to increase transparency around the calculation of Journal Impact Factors”. For example, it will allow (paying) users to view every item included in the calculation.

It is also providing citation metrics for articles, not just journals. It should now be possible for any subscriber to calculate the impact of any collection of articles — whether for a journal, a university or a scientist — and to normalize these counts. Normalization matters because some disciplines cite more heavily than others, so it is unfair to compare biology articles directly with mathematics articles, for instance.
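The field normalization described above is usually done by dividing an article's citation count by the average for its discipline and publication year, so that scores are comparable across fields. A sketch under that assumption, with invented baseline averages (real baselines come from the citation database itself):

```python
# Illustrative field-normalized citation impact: an article's citation
# count divided by the average count for its discipline.
# These baseline averages are hypothetical, for illustration only.

FIELD_BASELINE = {"biology": 20.0, "mathematics": 4.0}

def normalized_impact(citations: int, field: str) -> float:
    """Citations relative to the field average; 1.0 means 'average
    for the discipline', regardless of how citation-heavy it is."""
    return citations / FIELD_BASELINE[field]

# A mathematics paper with 8 citations beats its field average by more
# than a biology paper with 25 citations does:
print(normalized_impact(25, "biology"))     # → 1.25
print(normalized_impact(8, "mathematics"))  # → 2.0
```

On raw counts the biology paper looks three times as influential; normalized, the mathematics paper comes out ahead — which is the point of comparing within, rather than across, disciplines.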

Is this enough to head off the criticisms levelled at Thomson Reuters? “We appreciate these new capabilities, but Thomson Reuters puts the onus on the user,” says Bertuzzi. That is a problem, he says, because researchers will still prefer an 'official' number. He wants the firm to improve its published metrics — by, for example, excluding review articles because they include many more citations than research articles.

Citation stacking
Thomson Reuters also announced that 39 journals will not receive an impact factor this year — a record number for journals barred in a single year — because of excessive self-citation or ‘citation stacking’ from papers in other journals.

One journal that has now been caught out in two successive years for citation stacking is the International Journal of Sensor Networks (IJSN). Thomson Reuters found that it was heavily cited in articles published in the Proceedings of the 2013 IEEE Consumer Communications and Networking Conference. And two articles in those proceedings that cited the IJSN heavily were co-authored by the IJSN's editor-in-chief, Yang Xiao. The IEEE (Institute of Electrical and Electronics Engineers) says that it is assessing the situation and "will take appropriate action as deemed necessary".

Xiao, a computer scientist at the University of Alabama in Tuscaloosa, had already seen the IJSN censured for the same practice last year, when Thomson Reuters found a 2011 paper in the Journal of Parallel and Distributed Computing that contained bursts of references to the IJSN. Again, Xiao had co-authored the paper. In February this year, it was retracted by its publisher, Elsevier, which said that the paper violated its policy on citation manipulation. Xiao had not responded to an e-mail by the time this article went to press.

Metric challenge
To coincide with the Thomson Reuters announcements, a group of physics journal editors also launched an attempt to ditch their journals' reliance on the impact factor altogether, in favour of their own measure based on an open citations database.

Last year, Thomson Reuters denied the Journal of Instrumentation an impact factor because it had been heavily cross-cited by electronics engineer Ryszard Romaniuk in a series of papers in SPIE Proceedings. After much argument, the journal, which is published by the London-based Institute of Physics and the Italian International School for Advanced Studies (SISSA), was reinstated. But “the delay damaged the reputation of the journal and that of its authors, not least for the lack of clarity with which the issue was handled”, says Enrico Balli, chief executive of SISSA Medialab, a non-profit company owned by the SISSA.

Instead, Balli has led the development of a parallel journal impact factor, called the Jfactor, that is based on open data collected by INSPIRE — a system of information about high-energy-physics articles and citations built by Fermilab, CERN and other labs. If physics journals adopt it, then Thomson Reuters' proprietary metric will not be needed, he notes.

Bertuzzi hopes that other metrics will gain popularity for assessing individuals. And at a DORA webpage updated on 29 July, he and others are collecting examples of good practices in research assessment that avoid impact factors altogether. "We can discuss all the metrics that you want, but ultimately, it is what is in a paper that really matters," he says.

This article is reproduced with permission and was first published on July 20, 2014.
