
Education And Debate

Fortnightly Review: How can impact factors be improved?

BMJ 1996; 313 doi: https://doi.org/10.1136/bmj.313.7054.411 (Published 17 August 1996) Cite this as: BMJ 1996;313:411
Eugene Garfield, chairman emeritus (garfield@aurora.cis.upenn.edu)
Institute for Scientific Information, 3600 Market Street, Suite 450, Philadelphia, PA 19104, USA
  • Accepted 17 May 1996

Impact factors are widely used to rank and evaluate journals. They are also often used inappropriately as surrogates in evaluation exercises. The inventor of the Science Citation Index warns against the indiscriminate use of these data. Fourteen year cumulative impact data for 10 leading medical journals provide a quantitative indicator of their long term influence. In the final analysis, impact simply reflects the ability of journals and editors to attract the best papers available.

Counting references to rank the use of scientific journals was reported as early as 1927 by Gross and Gross.1 In 1955 I suggested that reference counting could measure “impact,”2 but the term “impact factor” was not used until the 1961 Science Citation Index (SCI) was published in 1963. This led to a byproduct, the Journal Citation Reports (JCR), and a burgeoning literature using bibliometric measures. From 1975 to 1989 the JCR appeared as supplementary volumes to the annual SCI; from 1990 to 1994 it appeared on microfiche, and in 1995 a CD ROM edition was launched.

Figure 1. Large journals that publish many papers may not have as high an impact as smaller review journals

Calculation of current impact factors

The most used data in the JCR are impact factors—ratios obtained by dividing the citations a journal receives in one year by the number of papers it published in the two previous years. Thus, the 1995 impact factor counts the citations in 1995 journal issues to “items” published in 1993 and 1994. I say “items” advisedly. There are a dozen major categories of editorial matter. JCR's impact calculations are based on original research and review articles, as well as notes. Letters of the type published in the BMJ and the Lancet are not included in the publication count. The vast majority of research journals do not have such extensive correspondence sections. The effects of these differences in calculating journal impact can be considerable.3 4
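Stated as a formula (in my own notation; this merely restates the definition above):

\[
\mathrm{IF}_{1995} = \frac{C_{1995 \leftarrow 1993} + C_{1995 \leftarrow 1994}}{N_{1993} + N_{1994}}
\]

where \(C_{1995 \leftarrow y}\) is the number of citations received during 1995 by items published in year \(y\), and \(N_y\) is the number of citable items published in year \(y\). As a worked example with invented figures: a journal that published 200 citable items in 1993 and 220 in 1994, whose 1993-4 items attracted 1260 citations during 1995, would have a 1995 impact factor of 1260/420 = 3.0.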

Absolute citation counts

The ubiquitous and sometimes misplaced use of journal impact factors for evaluation has caused considerable controversy. They are probably the most widely used of all citation based measures. They were invented to permit reasonable comparison between large and small journals: absolute citation counts preferentially give the highest rank to the largest or oldest journals. For example, in 1994 articles published in the BMJ, regardless of age, were cited 37 600 times. Of these citations, 5800—about 15%—were to items published in 1992 and 1993.
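The distinction between the absolute count and the size normalised impact factor can be made concrete in a short sketch. The two citation counts below are those quoted above; the number of citable items is a hypothetical placeholder, since it is not given here:

```python
# Absolute citation counts versus the size normalised impact factor.
# The two citation counts are quoted in the text; the item count is a
# hypothetical placeholder, not a real BMJ figure.
total_cites_1994 = 37600      # 1994 citations to BMJ items of any age
cites_to_1992_93 = 5800       # 1994 citations to items published in 1992-3
citable_items_1992_93 = 1100  # hypothetical count of 1992-3 citable items

share_recent = cites_to_1992_93 / total_cites_1994            # about 0.15
impact_factor_1994 = cites_to_1992_93 / citable_items_1992_93

print(f"share of 1994 citations to recent items: {share_recent:.0%}")
print(f"1994 impact factor: {impact_factor_1994:.1f}")
```

The absolute count rewards sheer size and age of the back file; the impact factor divides it away, which is why a small journal can outrank a much larger one.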

Table 1 shows absolute citation counts for nine English language clinical journals as well as the Journal of Biological Chemistry, which is included to emphasise the difference between absolute and relative citation of large journals. Table 2 lists the same journals ranked by 1994 impact. It is important to note that, of the thousands of journals published and cited in the SCI, only 337 achieved a current impact higher than 3.0. The SCI processes over 3300 source journals and cites thousands more, but it does not include as sources hundreds of low impact applied and clinical journals. This has been a source of frustration to editors from the Third World, who often ask how they can improve impact so as to warrant inclusion.

Table 1. Absolute citation counts, 1994 and 1989

Table 2. 1994 current impact factors

Impact of review journals

It should be apparent from the two tables that the relationship between quality and citation rank is not absolute. The Journal of Biological Chemistry is one of the most cited journals in the history of science, but it is also among the largest. Publishing over 4000 research papers a year (as do other high volume journals like Physical Review) inevitably leads to considerable variation in the quality and impact of individual articles. As a consequence, while these journals publish many papers which become “citation classics,” their current impact may not be as high as that of some smaller journals, especially certain review journals. Indeed, one of the highest impact journals is Annual Review of Biochemistry, with a current impact of 42.2. But as the data for Annual Review of Medicine show, this is not an absolute rule. In general, a journal is well advised to publish authoritative review articles if it wishes to increase its impact. But with over 40 000 review articles published each year, not all can achieve high impact. Again, selection of reviews about active research fronts is important, as is their timing. Controversial topics may increase impact. A non-medical example is “cold fusion” by Fleischmann and Pons,5 which has been cited over 500 times. A recent medical example is the “Concorde study,” already cited in 150 papers.6 But hundreds of half-baked controversial ideas are essentially ignored.

Nothing will replace the judgment needed for editors to select putative citation classics and to reject trivial or outlandish papers. Nevertheless, most reputable journals have at one time or another rejected papers which proved to be blockbusters. In retrospect, we should congratulate editors who publish controversial ideas such as that of Barry Marshall concerning Helicobacter pylori and peptic ulcers.7 That paper was well on its way to citation classic status when I nominated Marshall for the John Scott Award of Philadelphia—several years before he received the Albert Lasker award this year.

Method papers

It is widely believed that method papers are cited more than the average and thus increase journal impact. How lovely it would be if every method proved to be another Lowry method,8 cited over 8000 times in 1994 and over 250 000 in its lifetime. But the fact is that method journals do not achieve extraordinary impact since the vast majority of their papers, like clinical tests, are not unusual.

An editor could select authors on the basis of past performance. By checking their citation histories, one could undoubtedly increase the probability of publishing papers with higher potential impact. Some editors do this instinctively, especially when publishing the first few issues of a new specialty journal. In most cases, the most-cited papers for newer journals appear in the first volume published.

Over the years, the increase in multiauthored papers has been apparent. This is matched by an increase in multinational and multi-institutional clinical and epidemiological studies. At the Institute for Scientific Information (ISI), unpublished studies support the notion that these papers produce greater impact. ISI's Science Watch has regularly reported on the most-cited current papers in medicine. There is fierce competition among editors to publish these “hot papers.” These undoubtedly contribute to increased current impact. But what about long term impact?

Cumulative impact factors

Table 3 shows that the 14 year (1981-94) citation impact of the articles published from 1981 to 1986 in the BMJ and the Lancet is, in general, even higher than their current impact suggests. Annals of Internal Medicine is even stronger on this measure. Certain fields or topics require more time to mature, because of delayed recognition (as with H pylori) or because of the time required to produce experimental or clinical results. It generally takes longer to achieve impact in dermatology than in molecular biology or astrophysics. Invidious comparisons between journals, even in different fields of medicine, do not take these subtleties into account.

Table 3. Cumulative impact 1981-94 for papers published from 1981 to 1986
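A minimal sketch of how such a cumulative figure could be computed, assuming the measure takes the same ratio form as the current impact factor (all citations received during the window, divided by the number of citable items published in the earlier years); the function and its inputs are illustrative, not ISI's actual procedure:

```python
# Cumulative impact, assuming the same ratio form as the current
# impact factor: citations received during 1981-94 to items published
# in 1981-86, divided by the number of those items.
def cumulative_impact(citations_1981_94, items_1981_86):
    return citations_1981_94 / items_1981_86

# Invented placeholder figures, not data from table 3:
print(cumulative_impact(84000, 6000))  # 14.0
```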

No substitute for judgment

Successful editors and publishers know that, in improving the editorial quality of journals, there is no substitute for judgment, quality, and relevance. Impact and other citation measures merely report the facts. Authors will gravitate to journals with widespread influence. Circulation alone will not increase research impact; otherwise JAMA and others would rank higher in impact, but only a fraction of doctors do research. Dissemination of research results to the press may increase general awareness.9 But that role in research is played primarily by current awareness services such as Current Contents, by contacts at meetings, by reprint exchange, and by reading primary journals. Last but not least, the way to improve impact is to insist that authors cite all of the relevant literature. Editors should avoid artificial limits on bibliographies, as long as citation is not obviously self serving.

In spite of dozens of presentations by myself and others, there continues to be a certain mystique about journal selection for Current Contents, the Science Citation Index, and the Social Sciences Citation Index. My 1990 essay in Current Contents is still sent to those making such enquiries.10

Of the 4500 journals covered by SCI and SSCI, probably 3000 can be described as biomedically related. Of these, 500 account for 50% of what is published and 75% of what is cited. Of the 3300 covered in Medline, hundreds of low impact journals are not included by ISI for similar reasons—space and economics.

Other factors in journal evaluation

New journals continue to appear each year and must be evaluated as early as possible. But even Nature, in its periodic reviews of new journals, requires the passage of time before accepting journals for review. An experienced evaluator takes into account timing, format, subject matter, past performance, and other indicators such as internationality. The first issue of many journals is full of hope, but they soon exhaust the backlog of material needed to ensure continued, timely publication. Clearly, a society publisher with years of experience does not launch a journal without a long term commitment, and its editorial standards will be well known. Inexperienced publishers often do not live up to their rosy expectations. The inclusion of abstracts and of complete author, street, and email addresses are but a few of the features factored into the judgment of minimum quality. ISI may also ask how well a particular specialty or country is represented in its coverage. And, after a few years of history, all other criteria being equal, one can look at a journal's impact.

This article has focused on journal impact factors and their role in what Stephen Lock described as “journalology.”11 As with individual authors, a variety of indicators can be used to judge journals in a current or historical sense. Impact numbers are probably less important than the rankings obtained from them; often only slight quantitative differences are involved. The literature is replete with recommendations for corrective factors that should be considered, but in the final analysis subjective peer judgment is essential.

Caution in use of impacts as surrogates

Journal impact data have been grafted on to certain large scale studies of university departments and even individuals. Sometimes a journal's impact is used as a substitute for the evaluation of recently published articles simply because it takes several years for the average article to be cited. However, a small percentage of articles will experience almost immediate and high citation. Using the journal's average citation impact instead of the actual article impact is tantamount to grading by the prestige of the journal involved. While expedient, it is dangerous. Although journal assessments are important, evaluation of faculty is a much more important exercise that affects individual careers. Impact numbers should not be used as surrogates except in unusual circumstances.12

References

1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.