Scientists are the smartest idiots I know.
If one explains the current system of peer review to a non-scientist, the response is typically, “that’s insane, I thought you guys were supposed to be smart”.
When we apply for a grant or want to publish our science, we secretly get the work reviewed by our peers, some of whom are competing with us for precious funding, or for a bizarre version of fame. Under the veil of anonymity, a reviewer can write anything, including false or incorrect statements, to justify a decision. The decision is most often "do not fund" or "reject", even when the review is based on inaccuracies, lack of expertise, or even blatant slander. There are no rules and no repercussions. For the most part, the review process has few integrity guidelines, little oversight, and no rules of ethics. It can lead to internet trolling at the level of high art. In funding decisions, these mistakes can be missed by inattentive panels, and they were certainly missed in the CIHR reform scheme before panels were re-introduced. We still have a problem of reviewers self-identifying expertise they simply do not have.
Scientists have to follow strict rules of ethics when submitting data, covering conflicts of interest, research ethics, and more. Few such rules are formally stated for the review process, and they vary widely between journals.
This system is historic, dating back to an era when biomedical research was a fraction of its current size and journal Editors were typically active scientists. The community was small. But as science rapidly expanded in the 90s, so did scientific publishing, and soon editors became professional editors, some of whom had never run a lab or research program. Then came the digital revolution: journals were no longer read on paper, and the pipeline to publish grew exponentially.
What drove the massive expansion of journals? Money. Big money. And like many historic industries, it's thriving, mostly on the back of unpaid labor.
CELL was sold to Elsevier in 1999. While the sale price was never formally revealed, it was rumored to exceed US$100M. Not one person who reviewed for this journal received a thin dime. The analogy would be hiring workers to build a road, paying them nothing, insisting the road be paved in under 14 days, then charging them to use the road. Why? For the prestige of being associated with a road that is fundamentally no different from any other road.
What CELL and Nature started mushroomed wildly over the next 20 years, with journals starting up weekly, now numbering in the thousands, all on a simple business model: hire Editors, accept submissions, get three reviews, and charge thousands of dollars per manuscript to publish. It's a money-making machine built on free labor. Why not? Scientists are idiots: they work for free, doing hard work purely out of ideology.
What happens under the veil of anonymity? Papers are trivially reviewed, either quickly dismissed or accepted without much scientific input, which has enabled a lot of fraud, or just bad science, that when revealed often prompts the question: how did the reviewers not see this? It gets worse: "knowledge leaders" in some fields can manipulate the process, holding up or blocking manuscripts while their post-docs race to reproduce the data as their own, as outlined in a public letter to the Editors of Nature in 2010. Journals return comments to authors, but many also take secret comments from reviewers directly to the Editors, without the authors' knowledge. Why? Horror stories abound of revision after revision, then a final rejection after a year or more because the Editor lost interest. Meanwhile, careers and lives stall. This is especially problematic when a field of research becomes dogmatic and truly innovative theories or approaches are presented: to accept this work means dismantling the dogma, which can mean invalidating the entire publication records of the "knowledge leaders".
These publications can make careers, and the lack of them can break careers and win or lose funding. Post-doctoral fellows get hired by institutions because they look like they can walk on water based on their CVs, only to drown within a few steps as independent investigators.
Often overheard from senior scientists at symposia: "we had a problem with reviewer #2, so I called the Editor and sorted it out". Called? How? No journal lists phone numbers for its Editors; what magic Rolodex does this involve?
We have a system in place simply because it is historic. It's not working, it's not fair, it rewards fraud, and it's bad for science. This failure needs to be addressed with a series of ethical guidelines and with transparency, because the process has been corrupted, and failure is now so common that there are entire websites dedicated to it. Suggestions:
- Editors need to be active scientists. The Journal of Biological Chemistry is an excellent example.
- Reviewers and academic editors need to be paid. The Public Library of Science (PLoS) sounds like an altruistic organization for disseminating scientific knowledge, but executive compensation there can reach $330,000-$540,000 a year. Clearly, PLoS feels expertise and talent should be rewarded, which is fair, but apparently not when it comes to the reviewers who put in hours of work on manuscripts. The same reviewers then have to pay $1500+ to publish, and the journal has decided to stop copy editing manuscripts altogether, leading to sloppy publications. The line between legitimate journals and "predatory" journals is blurring. This is not unique to PLoS: scientific publishing is a massively profitable business. The New York Times revealed a "shocking" figure of $500 in page charges at "predatory" journals. Yet many established journals charge $500-600 per color figure alone. I cannot think of another profession that requires so many years of expertise under such draconian standards yet places so little value on our time. Try getting three free hours from a lawyer, accountant, or consultant. Good luck with that, or watch out for what you get.
- Reviewers need to be scored. By both Editors and submitting authors. We were recently reviewed at a leading cell biology journal, and while the paper was not accepted for publication, we received deeply detailed, outstanding reviews from all three reviewers. Their intent was obvious: address these criticisms and this will be better work. We were also reviewed at two leading magazines recently, and what we got back were late reviews of 5-7 lines or less, with terms like "unconvincing" or simply incorrect statements, and no chance to respond (18 years in as a PI, I have still not received the magic Editorial Rolodex). Review without scientific justification. These scores should be tied to ORCID. Editors should be able to flag reviewers behaving inappropriately to their home institutions or funding agencies for scientific misconduct. Low-scoring reviewers should be asked to justify their scores to their home institutions. Scoring would then justify paying good reviewers, and insisting on sincere efforts rather than trivial reviews.
- No more gatekeeping. Famous journals do not review most of their submissions; most rejections come from the desks of non-expert, non-scientist Editors looking for name and institutional recognition and trendy buzzwords. The issue is that they simply have too many submissions if they are regarded as "high impact" journals. Yet, regardless of the science, they will not decline submissions from high-profile institutions, for fear that the institution's senior scientists will stop submitting. Compounding the problem, relatively new or lesser-known journals are doing the same thing in an attempt to boost "impact" with trendy subjects, which shows exactly what their priorities are: not science, but gaming the impact-factor metrics. This would require a new system, which brings us to the next point.
- No more direct submissions. Manuscripts should be openly submitted to free-access sites like bioRxiv, going live within hours, and journal Editors could bid to authors to send the work to review in a clearinghouse-type model. This would allow Editors to judge impact based on comments from the community when they lack direct expertise. As it stands, the current process is stochastic, and the decision to review is often based on a single opinion. It can now take months before a paper is even reviewed, as journals can sit on the decision to send it to review for a month or more (remember, they don't have to follow any rules). It can take half a day at a time just to submit a manuscript. The cycles of submission and editorial desk rejection can suck half a year out of the publication process, and this does nothing for science.
- One manuscript and reference format. One journal format. Pick one, any one. The current need for software to handle thousands of reference styles for thousands of journals is asinine. It's like trying to do science with 1000 different standards of measurement. We picked the metric system and moved on.
- Manuscript and funding agency reviews should be public, since this work is publicly funded. This lets readers know exactly how well a manuscript or grant was reviewed, whether a journal's press hype matches actual scientific opinion, and whether any obvious bias occurred in the review process. It would also help media coverage of manuscripts, since journalists rely almost entirely on PR hype.
- All reviews should be addressable by authors before a decision. This is a particular problem on grant panels that lack expertise: they can rank and score based on reviewer errors, and this cannot be addressed until the next competition. The same problem occurs with rejection after first submission at journals. There should be a brief opportunity to respond to reviews before a decision is made. The current system relies on pure chance that our work is reviewed properly. We might as well have a lottery, especially in Canada, where biomedical research grants can be reviewed, scored, and ranked by non-scientists (seriously).
- Reviewers should discuss and unmask to each other prior to a decision, after reading the responses. Some journals already unmask reviewers to each other and allow discussion (EMBO J., Current Biology, eLife...). Nothing is more discouraging than spending hours on a review to improve a manuscript, only to have another reviewer dismiss it with obviously minimal effort, comments like "unconvincing", and secret comments to the editor that I cannot see. I see no point in unblinding reviewers to authors; that would just discourage participation for fear of vindictive authors.
- Define misconduct in the scientific review process. There need to be repercussions for unethical activity.
- Have a higher bar for authorship. Many clinicians have networks that put their names on hundreds of manuscripts for zero effort on the actual work, and it's very likely they never read those manuscripts. This is simply unethical, and unfair to authors who put real effort into manuscripts. It is a real problem when funding agencies use reviewers who count papers, arriving at the conclusion that a good scientist publishes a serious paper every two to three weeks of their life.
- Keep individual manuscript metrics, ban journal impact metrics. Journal impact scores can be gamed, are gamed, and make no sense. It's like saying an individual Honda driver must be intelligent because, on average, Honda drivers have a high IQ, and thus driving a Honda makes you smarter. Using metrics like impact factor or the H-index to judge careers is lazy, incompetent administration. You drive a Honda? Hired! We denied tenure? Not my fault, they drove a Honda!
- Retracted manuscripts due to figure fraud should reveal who reviewed the manuscript. Maybe those reviewers will pay attention next time. Or maybe, if we paid them, this would happen a lot less. Very likely, if we could see the reviews of these manuscripts, we would find they were trivially reviewed.
- Canada needs an Office of Research Integrity. For a variety of reasons, fraudsters can flourish in the Canadian system, as funding agencies defer fraud investigations to home institutions, which have a perverse incentive to bury any conclusions. The US has the ORI, independent of any institution, and in some countries scientific fraud is legally regarded as fraud against the public trust, punishable by civil action or jail time. If Canadian science wants serious public support, as the Naylor report recommends, it should come with equally serious scientific integrity.