A journal’s impact factor has for decades been used as a key indicator of the journal’s quality and prestige. Unfortunately, on closer inspection the metric falls short as a true evaluator of a journal’s quality or of the quality of its individual articles. Are there alternatives to the journal impact factor (JIF) that help researchers compare the quality of journals and decide which ones are appropriate for their submissions?
What is JIF?
JIF is a method of ranking a journal by determining how often researchers cite the papers within its pages. For a given year, it is the average number of citations received that year by the articles the journal published during the previous two years. The higher the impact factor, the more prestigious the journal, in theory at least.
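As a concrete illustration, the calculation is a simple ratio; the 2024 figures below are hypothetical, chosen only to show the arithmetic:

$$\mathrm{JIF}_{2024} = \frac{\text{citations received in 2024 to items published in 2022 and 2023}}{\text{citable items published in 2022 and 2023}} = \frac{250}{100} = 2.5$$

So a journal that published 100 citable items across 2022 and 2023, and whose articles drew 250 citations in 2024, would report a 2024 impact factor of 2.5.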
This method was originally developed to help libraries select quality journals for their collections, but today it is often used as a measure of a journal’s quality and prestige. Going one step further, researchers also use it when selecting journals to submit their papers to, because publication in a high-impact journal is viewed as more prestigious.
Why JIF is insufficient for evaluating a journal or article
Although it is widely used by publishers and editors as a metric to measure quality, JIF is not a good indicator of journal quality for the following reasons:
- It measures only one aspect of a journal’s purpose.
- It can be misleading because a few highly cited papers can skew the average.
- It erroneously implies that papers with just a few citations are unimportant.
- Controversial papers, such as those based on fraudulent data, may be highly cited, distorting the impact factor of a journal.
- Citation bias exists; for example, English-language publications tend to be favored.
- Authors may cite their own work, inflating the metric.
- It can sway researchers’ choice of where to submit their papers, which can undermine good science.
- It is often misapplied beyond journals. JIF should not substitute for solid peer review in judging the quality or importance of a scientific paper, and it should never be used to evaluate a researcher’s achievements, to judge the quality of an individual article, or to make decisions on tenure, grants, or promotions.
Where do we go from here?
Although it is still widely used as a ranking metric, JIF is garnering significant criticism from the scientific community. It is a one-dimensional measure of a journal’s importance, and researchers are calling for more accurate and fair indicators of quality.
JIF measures only one aspect of a journal’s function. A more accurate and complete set of indicators would factor in all of a journal’s functions: registering authorship, conducting peer review, and curating, disseminating, and archiving content. These indicators could take into consideration a journal’s acceptance rate, the transparency of its peer review process, and the number and diversity of its peer reviews.
A number of tools and initiatives may serve as alternative indicators of a journal’s or article’s importance. For example:
- Declaration on Research Assessment (DORA): An initiative that calls for research to be judged on scientific merit rather than by using JIFs
- Eigenfactor: A ranking based on a journal’s incoming citations, with citations from highly ranked and larger journals weighted more heavily
- Citation distributions: A simple method, proposed in a bioRxiv preprint, for generating and publishing the citation distribution underlying a JIF
- Article Influence: A measure of the average influence of each of a journal’s articles over the first five years after publication, derived from the journal’s Eigenfactor score divided by the fraction of all articles published by the journal (see the worked sketch after this list)
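To make the last two entries concrete, the Article Influence score is commonly described as the journal’s Eigenfactor score rescaled by the journal’s share of all published articles; the 0.01 normalization below follows the published Eigenfactor methodology, and the numbers are hypothetical:

$$\mathrm{AI}_j = 0.01 \times \frac{\mathrm{EF}_j}{f_j}, \qquad f_j = \frac{\text{articles published by journal } j}{\text{articles published by all journals}}$$

Under this scaling the mean Article Influence across journals is 1.0, so a hypothetical journal with an Eigenfactor score of 1.0 that publishes 0.5% of all articles would score $0.01 \times 1.0 / 0.005 = 2.0$, twice the average per-article influence.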
Although JIF is currently the most widely used indicator for evaluating journals, it is drawing heavy criticism from scientists because it can be misleading and is not a true measure of quality or importance. Over time it may be replaced by ranking methods that are better at judging work on its merits. Publishers should take note and start recognizing other metrics to gauge an article’s influence and impact.
Contact your Sheridan representative for a consultation or visit our contact page to learn how Sheridan experts can help streamline and simplify your publishing processes.