Impact Factors, Peer Review, and Elitism in Science and What We Can Do About It
By: Andrew Vigotsky

Introduction

There are a number of issues within science and academia that I believe stifle scientific progress and promote elitism. I am certainly not the first to write about many of these perverse ideologies; far ‘larger’, well-respected names in science and academia have written about them before me. Below are just a handful of articles by some big names in science.

  1. Sick of Impact Factors by Stephen Curry
  2. Open access, peer review, grants and other academic conundrums by David Colquhoun
  3. How journals like Nature, Cell and Science are damaging science by Randy Schekman
  4. The widely held notion that high-impact publications determine who gets academic jobs, grants and tenure is wrong. Stop using it as an excuse. by Michael Eisen
  5. Peer review is f***ed up – let’s fix it by Michael Eisen
  6. and many, many more

Impact Factors

For years, the impact factor, or the average (arithmetic mean) number of citations to recent (usually the preceding two years’) articles in a given journal, has erroneously been used as a surrogate measure of the quality of scientific journals, of the articles published within those journals, and, in some cases, of the authors of said articles. There are a number of issues with this logic. For one, as described by David Colquhoun, it is ‘statistically illiterate’ to use the arithmetic mean of citations to describe a journal’s impact, because article citation rates are heavily skewed: just 15% of a journal’s articles account for 50% of its citations (Seglen, 1992). The median would therefore be a far more appropriate measure. Moreover, the number of citations alone is not a measure of quality, as argued by Seglen (1992), but rather a measure of utility, and a poor one at that. In other words, even if the impact factor accurately represented the number of citations to a given article, it would not necessarily be indicative of the clinical impact, relevance, or application of that article; it is simply the number of times that article has been cited by other scientific articles.
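
To see why the mean is so misleading for skewed citation data, consider a minimal sketch in Python. The citation counts below are entirely made up, but they mimic the heavy skew described above: a handful of highly cited papers and many that are rarely cited at all.

    import statistics

    # Hypothetical citation counts for 20 articles in one journal
    # (made-up numbers; a few heavily cited papers dominate the total)
    citations = [120, 85, 40, 12, 9, 7, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0]

    print(statistics.mean(citations))    # 15.05 -> what an impact-factor-style average reports
    print(statistics.median(citations))  # 3.0   -> what the typical article actually receives

The mean is dragged upward by a few outliers, so it says little about the article you are actually reading; the median describes the typical article far better.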

A great example of how the number of times an article is cited is not at all indicative of its quality is the original, retracted study that falsely described a link between autism and MMR vaccines. According to Google Scholar, it has over 2,000 citations, yet the paper’s quality is laughable, as the data were fraudulent. Admittedly, this paper may be an outlier, but it exemplifies that higher-impact-factor journals are not perfect. The journal in which it was published, The Lancet, currently has the second-highest impact factor (NEJM is first).

It is important that we understand how journals with high impact factors screen papers. Peer-reviewers and editors are asked to evaluate not only study methodology and scientific soundness, but also ‘perceived impact’. Of course, this is subjective and allows journals to reject articles not because of the quality of the study, but because the study is not expected to make a large ‘impact’. So, although a study may be of the highest methodological quality, the authors may be forced to resubmit elsewhere, and there are a few problems with this. First, it takes time to hear back from peer-reviewers, anywhere from a few weeks to a few months. After this first round of reviews and a rejection, the authors must then go through another first round of peer review at a different journal, which increases the time until the article is published. In short, this slows scientific progress by delaying the point at which a study can be read, applied, scrutinized, built upon, or replicated. Research is a slow enough process as it is; it does not need to be made slower by a subjective measure of research popularity. This is well documented (Kravitz & Baker 2011; Nosek & Bar-Anan 2012; Statzner & Resh 2010). The implications of this lengthy process can be read about in a recent article on CRISPR/Cas9, a gene-editing tool discovered by different labs simultaneously, which describes how publishing time and scooping (when someone else publishes something you are currently working on or were planning to publish) can affect who receives recognition for new discoveries.

Importantly, there is evidence to suggest that articles published in higher-ranked journals (Brembs et al. 2013):

  • are of equal or lower scientific soundness
  • are more likely to be unreplicable
  • are more likely to be retracted, even when accounting for visibility
  • are more likely to be subject to the decline effect (wherein an initial study reports larger effects than later studies find)
  • have a higher incidence of fraud and misconduct
  • are no more likely to be novel
  • are rated by experts as more important (but only when the journal is not masked)
  • have citation rates that are only weakly correlated with journal rank, likely owing to visibility
  • are less likely to receive no future citations

Many of these tendencies may stem from the prestige associated with high impact factors and from publication bias: articles with positive, novel outcomes are more likely to be published, even though, in reality, scientific breakthroughs are terribly difficult and rare.

Perhaps one of the largest problems is that the impact factor is not objectively reproducible. It turns out that the impact factor is negotiated: journals can haggle over the denominator of the impact factor formula, i.e., how many of their published items count as ‘citable’ (Editorial 2006). This fundamental flaw can lead to differences as large as 19% (Rossner et al. 2007)!
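
To make this concrete, here is a rough sketch in Python using entirely hypothetical numbers. The numerator counts citations in a given year to articles from the two preceding years; the denominator is the number of ‘citable items’ from those years, and that denominator is the part journals can negotiate.

    # Hypothetical impact factor calculation; all numbers are made up.
    citations_to_previous_two_years = 10_000  # citations received this year
    citable_items_negotiated = 2_000          # items the journal argues should count
    citable_items_all = 2_380                 # everything actually published in those two years

    if_negotiated = citations_to_previous_two_years / citable_items_negotiated  # 5.0
    if_full_count = citations_to_previous_two_years / citable_items_all         # ~4.2

    # Shrinking the denominator inflates the impact factor; with these made-up
    # numbers the inflation is about 19%, the same order of discrepancy
    # reported by Rossner et al. (2007).
    print(round((if_negotiated - if_full_count) / if_full_count, 2))  # 0.19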

For those interested in reading more about this, I would highly suggest the phenomenal, open-access review by Brembs and colleagues.

Costs and Greed

In short, legacy publishers charge exorbitant sums of money for journal subscriptions and individual articles. These companies include, but are not limited to, Elsevier, Springer, Lippincott Williams & Wilkins, John Wiley & Sons, and Taylor & Francis. An individual article costs about $30, whereas an institutional subscription to a journal may cost as much as $40,000 per year. Such fees amount to profit margins upwards of 30% per year, which is in the same ballpark as Apple. For these reasons, back in 2012, Harvard cut back on journal subscriptions and has since encouraged its scientists to publish open access. Additionally, over 16,000 scientists have decided to boycott Elsevier, one of the big names in publishing. Moreover, despite the inflated costs of journal subscriptions and articles, the peer-reviewers and editors, the ones who ensure that articles are of acceptable quality, do not see a dime.

The late Aaron Swartz summarized these issues much more eloquently than I can in his Guerilla Open Access Manifesto. For those interested in this topic and in Aaron Swartz’s vision, I highly recommend the documentary about him.

Peer Review

Getting a paper through peer review is arguably the bane of a researcher’s existence. Just when you think you’re done with the paper, it is torn apart by peer-reviewers. While, in theory, this is a good idea and improves the quality of a paper, the peer-review process is neither evidence-based nor without its flaws.

Like the rest of us, peer-reviewers are subject to cognitive biases and ignorance. Thus, the Dunning-Kruger effect may indeed apply to peer-reviewers, which occurs when three criteria are met: “reviewers (i) can be ignorant of the subject matter concerned; (ii) are not aware of it; and (iii) act as if they are experts when in fact they often are not, thereby misleading editorial boards” (Huang 2013). When a peer-reviewer does suffer from the Dunning-Kruger effect, the peer-review process becomes that much more painstaking, as authors must unnecessarily modify, remove, or justify parts of the manuscript. Due to the interdisciplinary nature of modern research, this is becoming more and more prevalent: a paper will often have several authors with different specialties, but each peer-reviewer can only know so much. This leads to one of the main arguments put forth in an editorial by Sui Huang, namely that peer-reviewers are often not “peers” (that is, on the same level as the authors with regard to pertinent knowledge), but less than peers. By this, he means that the authors often know more than the reviewers about the very topic the reviewers must scrutinize, which makes the process frustrating and inefficient.

Richard Smith, the former editor-in-chief of the BMJ (British Medical Journal), has done a great deal of research and writing on peer review and its effectiveness. One of his studies placed nine deliberate major errors in articles sent to peer-reviewers; not one reviewer spotted all of the errors, and most spotted only about a quarter of them (Schroter et al. 2004). Clearly, peer review is not perfect. Richard Smith has since written an open-access editorial about peer review. In short, the process is not based on evidence, is subject to abuse and bias, and is in need of an overhaul. That is not to say, however, that peer review should be removed completely; it just needs to change.

Elitism

Although I’m still young and don’t yet have dozens of publications, I’ve probably done more reading on the actual publishing process than have many tenured scientists. The dogma of needing positive effects, “statistical significance” [sic], and perceived impact is, well, not helping science. Scientists are fighting to publish first and in more reputable journals, potentially at the expense of scientific quality and, even more likely, time. This process has led to egos getting in the way: everyone wants to be the one to make the big discovery and publish it in a hard-hitting journal, but this is selfish, myopic, and not what science is about. Science is the process of discovering how the natural world works, and with scientific integrity, this can be done. Instead of competing, scientists should collaborate. Don’t agree on a mechanism? Plan a multi-center, collaborative study that will get to the bottom of it, rather than publishing one from your lab and having it scrutinized post-publication. Two brains are better than one; through collaboration and outside perspectives, I truly believe that questions can be answered in a more efficient and complete manner.

What can we do about it?

Science is not about the individual performing research; it is about the knowledge base. It is of the utmost importance that all information be freely available, accessible, and open to public scrutiny. With the advent of gold open-access journals, such as PeerJ and PLoS, there is truly no reason not to publish open access and release datasets. In my experience with PeerJ, a paper can be published in a fraction of the time it takes with legacy publishers, thanks to the rapid peer-review process and the willingness to accept papers based on their scientific soundness alone rather than their perceived impact. Releasing datasets allows others to look at your data from different perspectives, rerun analyses, and perhaps use the data in ways that never occurred to the researchers who published the study. This is why public datasets, like NHANES, are so good for science: different scientists do different things with them, running different analyses and coming to different conclusions. One perspective on a dataset may not be enough to get closer to the truth, and as researchers, it is important that we realize that our perspective on our own data may be short-sighted or incomplete. Lastly, in order to avoid investigator bias, all clinical trials should be registered. I am truly blown away by how few trials in exercise science are registered, and I think it’s time we start.

Conclusion

To wrap things up, I want to stress the importance of forgetting about journal metrics and instead focusing on article quality and scientific soundness. All results matter, no matter the p-value, and should be published so long as they are scientifically sound. Moreover, the subjective importance of an article is irrelevant: every paper is a piece of the puzzle that makes up the body of literature, and no standalone paper is important in the grand scheme of things. Forget about journals, forget about impact; forget everything that isn’t scientific soundness. Remember: scientific findings are for increasing what we, the human race, know. Science is not a popularity contest, and it is not meant to feed narcissism. We’re all in this together.

References

  1. Huang S. 2013. When peers are not peers and don’t know it: The Dunning-Kruger effect and self-fulfilling prophecy in peer-review. Bioessays 35:414-416. 10.1002/bies.201200182
  2. Kravitz DJ, and Baker CI. 2011. Toward a new model of scientific publishing: discussion and a proposal. Frontiers in Computational Neuroscience 5:55. 10.3389/fncom.2011.00055
  3. Nosek BA, and Bar-Anan Y. 2012. Scientific utopia: I. Opening scientific communication. Psychological Inquiry 23:217-243.
  4. Editorial. 2006. The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Medicine 3:e291. 10.1371/journal.pmed.0030291
  5. Rossner M, Van Epps H, and Hill E. 2007. Show me the data. Journal of Cell Biology 179:1091-1092. 10.1083/jcb.200711140
  6. Schroter S, Black N, Evans S, Carpenter J, Godlee F, and Smith R. 2004. Effects of training on quality of peer review: randomised controlled trial. BMJ 328:673. 10.1136/bmj.38023.700775.AE
  7. Statzner B, and Resh VH. 2010. Negative changes in the scientific publication process in ecology: potential causes and consequences. Freshwater Biology 55:2639-2653.

About the author

Andrew Vigotsky

Formerly Bret Contreras’ intern, Andrew has a BS in Kinesiology from Arizona State University and is now a graduate student in Biomedical Engineering at Northwestern University, where he studies biomechanics and musculoskeletal modeling. In addition to these concentrations, Andrew has published on topics relating to resistance training, hypertrophy, electromyography, foam rolling and manual therapy, and clinical testing. You can follow him on Twitter or Facebook.
