If we as a society fail to publish all competent research, we have committed irreparable world-scale malpractice.1 The moral imperative of “publish or perish” is now broad and urgent with the advent of easy and prompt publication. If we fail to publish data, the data perish; with data’s demise, people (whose clinicians should have had the advantage of knowledge in the literature) suffer and perish, along with the public investment we have made as taxpayers, donors to, participants in and fundraisers for research. Currently configured journals will perish as well, because we will have failed as guarantors of public access to the knowledge the public has earned with their tax dollars and charitable donations.
For Canada, a relatively well-resourced country with a strong, logically constructed and innovative health sciences education system, this opportunity becomes an ethical imperative. Health sciences journals are a social good. They are inextricably bound to health sciences education, and we should plan and act accordingly. What would the impact be on the peer review process if all passable Canadian medical research were published, if the publication bar were set at mere competence so that no data were wasted?
I propose a few components for such a model. Some form of prepublication peer review is our gold standard, and I have nothing better to propose than peer review. However, what if peer review were reconfigured?
What if a staff biostatistician (an essential eye for all manuscripts) and three peers (who obligated themselves to do this when they recently submitted their own paper for peer review) first determined whether an article was competent based on a checklist: have the authors properly tested the question of interest (useful metrics, adequate power, reasonable analyses); have they accurately portrayed the data; have they written reasonably clearly and discussed their findings in the context of relevant literature? If the paper checks out, it is published online. In addition, those four reviewers, perhaps with a volunteer content expert (our current “peers” in peer review), would then rank the paper Amazon-style: we could test whether papers averaging 4.9 stars from these reviewers for importance, clinical relevance and novelty eventually receive more hits, retweets and traditional citations.
What might we lose? Worst case, this could discredit a particular journal. But what could we gain? This could solve the issue of dissemination, one of the most critical, global problems in research.
Every major medical journal has been struggling to determine how to continue publishing in an era where “publication” means simply that my data, and what I negotiate with journal editors about their interpretation, have been allowed by those editors to be shown on your computer.
Even as a devoted journal editor and journal board member for 30 years, as well as a committed CMAJ Editorial Advisory Board member (for 8 years and ongoing) and first author of original research published in CMAJ, I question the importance of any individual journal’s demise.1 But our collective value as traditional publishers and editors perishes as rapidly as democratized knowledge and global self-publishing grow. Let us not allow defence of fiefdoms, commercial interests or the inertia of history to thwart our unprecedented opportunity to thrive as knowledge disseminators — and have our constituents similarly thrive.