Pharmacy Practice (Granada)
Online version ISSN 1886-3655; print version ISSN 1885-642X
Pharmacy Pract (Granada) vol.14 no.3 Redondela Jul./Sep. 2016
https://dx.doi.org/10.18549/PharmPract.2016.03.842
EDITORIAL
Bradford's law, the long tail principle, and transparency in Journal Impact Factor calculations
Fernando Fernandez-Llimos
Fernando Fernandez-Llimos. PhD, PharmD, MBA. Editor-in-chief, Pharmacy Practice. Institute for Medicines Research (iMed.ULisboa), Department of Social Pharmacy, Faculty of Pharmacy, Universidade de Lisboa. Lisbon (Portugal).
ABSTRACT
Beyond the commonly mentioned limitations of the Journal Impact Factor, we discuss the obsolete principle of selecting journals to create a purportedly representative sample of 'journals that matter' and the opacity surrounding the calculation and listing of Impact Factors. We use the example of Pharmacy Practice in 2015 for illustration. We hypothesize that a business-oriented system of measuring the science and quality of scholarly journals may not be the best option to avoid biases and conflicts of interest.
Key words: Journal Impact Factor; Reproducibility of Results; Selection Bias; Conflict of Interest; Periodicals as Topic; Bibliometrics.
Introduction
Every year around June, the scientific community gathers around millions of computers waiting for the release of a list. This list is a database that contains the Journal Impact Factor values of the previous year. The Impact Factor was created in the early 1950s as an instrument to evaluate "the relative importance of scientific journals".1 However, the use of this metric has been controversial since its inception. As evidence of this controversy, in 2009, the National Library of Medicine created the Medical Subject Heading (MeSH) 'Journal Impact Factor'. As of today, 2386 articles have been indexed with this MeSH term, making it one of the most popular MeSH terms.
Adjectives such as misleading2, misnamed3, flawed4, confusing5, skewed6, and many others have been ascribed to the Impact Factor. Different biases associated with the numerator7 and the denominator8 have been thoroughly described.9 For example, the two-year limitation implies that only 2 of the 27 references in this editorial would count toward the Impact Factor if Pharmacy Practice were indexed in the Science Citation Index. Several authors have also described the mechanisms used to produce fraudulent modifications of the Impact Factor.6,10 Many bibliometric indexes have been created to overcome these issues.11-14 However, the Impact Factor remains widely used by academic institutions and funding bodies to evaluate researchers' publication records.9 The term 'impactitis' has been used in several articles.15-17
As scientists, we are used to addressing uncertainty and errors, so we should not be scared of these potential biases. However, also as scientists, our main task is to unveil hidden data and produce interpretable results. Therefore, we should not accept opacity or a lack of transparency in any research method, nor should we rely on results obtained with opaque methods. The Impact Factor calculation has also been criticized as opaque.18 Impact Factor calculations are frequently irreproducible because Thomson Reuters does not provide the raw data used for them. For instance, PubMed includes 1742 records of items published in The Lancet during 2014, but the Impact Factor is calculated using only 271 items, cited 11651 times in 2015. Thomson Reuters' website acknowledges the existence of 1426 items (more than 84% of the items it recognizes for that year) in a category called "other" that does not count in the denominator of Impact Factor calculations. Including these "others" and the 1911 "others" from 2013 in the calculations would reduce The Lancet's Impact Factor from 44.002 to 6.197.
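To make this arithmetic concrete, here is a minimal sketch in Python of the recalculation just described. The 2014 item counts and the 1911 "others" from 2013 come from the text above; the total citation count and the 2013 citable-item count are back-calculated assumptions, chosen only so that the official value of 44.002 is reproduced.

# Minimal sketch of the two-year Impact Factor arithmetic for The Lancet.
total_citations = 24069   # assumed: citations in 2015 to 2013-2014 Lancet items
citable_2014 = 271        # items counted in the denominator (from the text)
citable_2013 = 276        # assumed, to reproduce the official figure
other_2014 = 1426         # "other" items excluded from the denominator
other_2013 = 1911         # idem, for 2013

official_if = total_citations / (citable_2013 + citable_2014)
inclusive_if = total_citations / (citable_2013 + citable_2014
                                  + other_2013 + other_2014)
print(round(official_if, 3))    # 44.002 with the official denominator
print(round(inclusive_if, 3))   # 6.197 once the "others" are included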
However, perhaps the scariest aspect of this opacity is journal selection. First, we should reflect on why we select at all: should we work with a sample or with the complete population? We learn in our very first statistics lesson that non-randomized sampling may result in biases. Impact Factor calculations are based on a non-randomized sample19 of journals simply because the creators of the Impact Factor convinced scientists that "a surprisingly small number of journals generate the majority of both what is cited and what is published".20 Obviously, to make such a statement, we have to rely on the difference between 'hard sciences' and 'soft sciences' or, even more offensive to disciplines such as pharmacy practice, between "Little Science and Big Science".21 The only reason why the Impact Factor is calculated from a reduced number of journals is to save resources - specifically, money. This restriction was acceptable in the 1950s, when computers were a new and expensive technology and computing Impact Factors was a highly time-consuming task. We are in the 21st century, however, and we have technological solutions such as XML. For instance, the 1924 journals currently indexed in PubMed Central22 provide all their references coded in XML, which could be used to compute Impact Factors at virtually zero cost.
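As an illustration of how cheaply such counting could be automated, the sketch below tallies two-year citations from JATS-style XML references of the kind deposited in PubMed Central. The inline snippet is a made-up example; a real pipeline would iterate over the full-text archive rather than a string.

# Counting two-year citations from JATS-style XML references (illustrative).
import xml.etree.ElementTree as ET

jats = """<article><back><ref-list>
  <ref><element-citation publication-type="journal">
    <source>Pharmacy Practice</source><year>2014</year>
  </element-citation></ref>
  <ref><element-citation publication-type="journal">
    <source>Lancet</source><year>2010</year>
  </element-citation></ref>
</ref-list></back></article>"""

def count_two_year_citations(xml_text, journal, census_year):
    # Count references to `journal` dated in the two years before `census_year`.
    window = {census_year - 1, census_year - 2}
    root = ET.fromstring(xml_text)
    hits = 0
    for citation in root.iter("element-citation"):
        source = citation.findtext("source") or ""
        year = citation.findtext("year") or ""
        if source == journal and year.isdigit() and int(year) in window:
            hits += 1
    return hits

print(count_two_year_citations(jats, "Pharmacy Practice", 2015))  # 1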
The theoretical framework commonly used to support this restrictive selection is Bradford's law of scattering.23 This law, which is actually not a law but a theorem, states that a core group of journals contains the majority of the articles on any specific subject. This could be acceptable at a broad-picture level, but recent analyses have demonstrated that it does not hold at the topic-specific level.24 These analyses reinforce the Long Tail principle, which states that in new markets, selling many small-volume items can be more profitable than selling a reduced number of high-volume items.25 Is pharmacy practice research a 'new market'? Is pharmacy practice research a 'subject' in Bradford's conceptualization? In fact, pharmacy practice research is not a "Subject Area" under Impact Factor standards but part of the 'Pharmacology and Pharmacy' Subject Area.
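For readers less familiar with Bradford's formulation, a minimal numerical sketch (all figures invented for illustration): ranking journals by their productivity on a subject and cutting the ranking into zones that each yield about the same number of articles gives zone sizes growing roughly as 1 : n : n².

# Toy illustration of Bradford's zoning; both parameters below are invented.
n = 4                     # Bradford multiplier (assumed)
core = 5                  # journals in the most productive 'core' zone (assumed)
zone_sizes = [core * n**k for k in range(3)]
print(zone_sizes)         # [5, 20, 80]: each zone yields ~1/3 of the articles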
A second justification for selecting journals for Impact Factor calculations is quality evaluation. However, two questions emerge from this idea. Is there an objective checklist of items to evaluate, with clearly stated evaluation criteria? Since 1990, the criteria have been subjective.26 Because no objective criteria exist, Thomson Reuters' editors evaluate journals in their areas of expertise. But has anyone ever seen the minutes of a meeting or a detailed results report from such an evaluation? This cannot be considered a transparent process, especially because many journals that have undergone the strict evaluation process of the National Library of Medicine and become indexed in Medline or in PubMed Central have failed the evaluation of these Thomson Reuters editors. This is the case for Pharmacy Practice. Unfortunately, these journals never receive a report outlining the reasons why they failed, so they cannot learn from the process.
However, Pharmacy Practice has recently discovered a further level of opacity in the Impact Factor calculation. After failing the Thomson Reuters journal selection process some years ago, the editorial board decided to persist in its efforts to increase the quality of the journal and to keep working to the highest possible standards. As a result of these efforts, Pharmacy Practice was twice awarded the Spanish Ministry of Science Quality Seal, and we started monitoring the citations received using Thomson Reuters' computing sources. We thought that not being included on Thomson Reuters' Master Journal List would only mean losing self-citations in the Impact Factor calculation. Last spring, we performed our calculations (data available in the Online Appendix).
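In outline, the computation followed the standard two-year definition (only the final figure is reproduced here; the underlying citation counts are in the Online Appendix):

\[
\mathrm{IF}_{2015} = \frac{\text{citations received in 2015 by items published in 2013 and 2014}}{\text{citable items published in 2013 and 2014}} = 0.754
\]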
An Impact Factor of 0.754 may not be a high Impact Factor, but it is not a bad starting point. Of the 11391 journals listed in the 2015 Journal Citation Reports, 3709 (32.6%) have an Impact Factor lower than 0.754. In comparison with Pharmacy Practice's sister journals in the Pharmacology and Pharmacy Subject Category, a score of 0.754 sits in the middle of the distribution:
• American Journal of Health-System Pharmacy (2.451)
• Annals of Pharmacotherapy (2.119)
• International Journal of Clinical Pharmacy (1.339)
• Journal of The American Pharmacists Association (1.285)
• American Journal of Pharmaceutical Education (1.196)
• Brazilian Journal of Pharmaceutical Sciences (0.485)
• European Journal of Hospital Pharmacy-Science and Practice (0.432)
• Latin American Journal of Pharmacy (0.329)
• Indian Journal of Pharmaceutical Education and Research (0.109)
Unfortunately, when the 2015 Journal Citation Reports was published, Pharmacy Practice was not on the list. We wrote an email to Thomson Reuters on June 21st providing these data and seeking to be listed, but after another evaluation process, the answer we obtained on August 31st was as follows: "Though it has improved since the 2013 evaluation in the qualities of author internationality and grant support, Pharmacy Practice is receiving too few citations from the journals we currently cover. Pharmacy Practice ranks near the bottom of our Pharmacology & Pharmacy category. We have sufficient coverage of the topic including with high quality journals from your geographic region. And we already index well-cited titles specifically on pharmacy". As stated in that email, Pharmacy Practice will not be re-evaluated for "several years". Should we give up because of the tyranny of the Impact Factor calculation processes?27
At the end of the day, more than the tyranny of the Impact Factor, we are complaining about the tyranny of a business-oriented system of measuring the science and quality of scholarly journals. Let's play fair. Let's all count and be counted. Let's use a transparent and reproducible method to measure journals' impact. We accept that Garfield's original idea1 was a good one, but let's also acknowledge that the methods used in the 1950s may be completely obsolete in the 21st century. Moreover, let's consider, in the current turbulent environment of scientific publishing with so many commercial interests involved, whether private companies are the best possible option to assess the performance of researchers and academic institutions. Easier and cheaper options exist.
Finally, before complaining about the low Impact Factor of pharmacy journals, pharmacy practice researchers should keep in mind that increasing the Impact Factor of pharmacy journals depends only on their own willingness. This means that before inquiring about the Impact Factor of a given pharmacy journal, researchers should first ask what their own contribution to that journal's Impact Factor has been. That is: how many times in the last year have I cited articles that the journal published in the previous two years? A number of research questions emerge from these considerations regarding the lack of pharmacy journals with a high Impact Factor: Do pharmacy practice researchers have a citation practice consistent with the enhancement of pharmacy practice as an area of knowledge? Do pharmacy practice researchers publish their best articles in pharmacy journals? Why do some pharmacy practice researchers consider that publishing in other areas' journals increases their visibility or their impact? Do pharmacy practice articles published in other areas' journals have a greater impact than those published in pharmacy journals? We should also invest some research effort in more methodological questions: Are pharmacy journals sufficiently represented in databases? Is the reduced number of pharmacy journals indexed in databases affecting bibliometric indexes? It seems that we have a considerable amount of work to do if we want to increase the impact of pharmacy practice research.
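As a minimal sketch of that self-audit (the reference records below are invented for illustration, not drawn from any real database or API):

# Hypothetical self-audit: how much did I contribute to a journal's Impact
# Factor numerator? All records are invented for illustration.
my_2015_references = [                 # (journal cited, year of cited item)
    ("Pharmacy Practice", 2014),
    ("Pharmacy Practice", 2011),
    ("Lancet", 2014),
]

def my_if_contribution(refs, journal, if_year):
    # Citations made in `if_year` to `journal` items from the two prior years.
    window = {if_year - 1, if_year - 2}
    return sum(1 for j, y in refs if j == journal and y in window)

print(my_if_contribution(my_2015_references, "Pharmacy Practice", 2015))  # 1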
References
1. Garfield E. Citation indexes for science; a new dimension in documentation through association of ideas. Science. 1955;122(3159):108-111.
2. Hansson S. Impact factor as a misleading tool in evaluation of medical journals. Lancet. 1995;346(8979):906.
3. Hecht F, Hecht BK, Sandberg AA. The journal "impact factor": a misnamed, misleading, misused measure. Cancer Genet Cytogenet. 1998;104(2):77-81.
4. Eston R. The impact factor: a misleading and flawed measure of research quality. J Sports Sci. 2005;23(1):1-3.
5. Quindós G. Confusing the confused: thoughts on impact factor, h(irsch) index, Q value, and other cofactors that influence the researcher's happiness. Rev Iberoam Micol. 2009;26(2):97-102. doi: 10.1016/S1130-1406(09)70018-X.
6. Mayor J. Are scientists nearsighted gamblers? The misleading nature of impact factors. Front Psychol. 2010;1:215. doi: 10.3389/fpsyg.2010.00215.
7. Chew M, Villanueva EV, Van Der Weyden MB. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994-2005) and their Editors' views. J R Soc Med. 2007;100(3):142-150.
8. McVeigh ME, Mann SJ. The journal impact factor denominator: defining citable (counted) items. JAMA. 2009;302(10):1107-1109. doi: 10.1001/jama.2009.1301.
9. Liu XL, Gai SS, Zhou J. Journal Impact Factor: Do the Numerator and Denominator Need Correction? PLoS One. 2016;11(3):e0151414. doi: 10.1371/journal.pone.0151414.
10. Frandsen TB. Journal self-citations - Analysing the JIF mechanism. J Informetrics. 2007;1(1):47-58. doi: 10.1016/j.joi.2006.09.002.
11. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569-16572.
12. Lando T, Bertoli-Barsotti L. A new bibliometric index based on the shape of the citation distribution. PLoS One. 2014;9(12):e115962. doi: 10.1371/journal.pone.0115962.
13. On impact [editorial]. Nature. 2016;535(7613):466. doi: 10.1038/535466a.
14. Van Noorden R. Metrics: A profusion of measures. Nature. 2010;465(7300):864-866. doi: 10.1038/465864a.
15. van Diest PJ, Holzel H, Burnett D, Crocker J. Impactitis: new cures for an old disease. J Clin Pathol. 2001;54(11):817-819.
16. Alfonso F, Bermejo J, Segovia J. [Impactology, impactitis, impactotherapy]. Rev Esp Cardiol. 2005;58(10):1239-1245.
17. Elsaie ML, Kammer J. Impactitis: the impact factor myth syndrome. Indian J Dermatol. 2009;54(1):83-85. doi: 10.4103/0019-5154.48998.
18. Rossner M, Van Epps H, Hill E. Show me the data. J Cell Biol. 2007;179(6):1091-1092.
19. Testa J. The Thomson Reuters journal selection process. http://wokinfo.com/essays/journal-selection-process/ (accessed Sep 10, 2016).
20. Garfield E. The significant scientific literature appears in a small core of journals. Scientist. 1996;10(17):13.
21. Price DJD. Little science, big science. New York: Columbia University Press; 1963.
22. National Library of Medicine. PubMed Central. Available at: http://www.ncbi.nlm.nih.gov/pmc/ (accessed Sep 10, 2016).
23. Bradford SC. Sources of information on specific subjects. Engineering (London). 1934;137:85-86.
24. Nash-Stewart CE, Kruesi LM, Del Mar CB. Does Bradford's Law of Scattering predict the size of the literature in Cochrane Reviews? J Med Libr Assoc. 2012;100(2):135-138. doi: 10.3163/1536-5050.100.2.013.
25. Anderson C. The Long Tail: why the future of business is selling less of more. New York: Hachette Books; 2008. ISBN 978-1401309664.
26. Garfield E. How ISI selects journals for coverage: quantitative and qualitative considerations. Scientist. 1990;13(22):185-193.
27. Colquhoun D. Challenging the tyranny of impact factors. Nature. 2003;423(6939):479.