Editorials

The case for structuring the discussion of scientific papers

BMJ 1999; 318 doi: https://doi.org/10.1136/bmj.318.7193.1224 (Published 08 May 1999) Cite this as: BMJ 1999;318:1224

Much the same as that for structuring abstracts

  1. Michael Doherty, Professor of rheumatology,
  2. Richard Smith, Editor
  1. City Hospital, Nottingham NG5 1PB
  2. BMJ

    Structure is the most difficult part of writing, no matter whether you are writing a novel, a play, a poem, a government report, or a scientific paper. If the structure is right then the rest can follow fairly easily, but no amount of clever language can compensate for a weak structure. Structure is important so that readers don't become lost. They should know where they've come from, where they are, and where they are headed. A strong structure also allows readers to know where to look for particular information and makes it more likely that all important information will be included.

    Readers of scientific papers in medical journals are used to the IMRaD structure (Introduction, Methods, Results, and Discussion)1 and either consciously or unconsciously know the function of each section. Readers have also become used to structured abstracts, which have been shown to include more important information than unstructured summaries. 2 3 Journals are now introducing specific structures for particular types of papers—such as the CONSORT structure for reporting randomised trials.4 Now we are proposing that the discussion of scientific reports should be structured—because it is often the weakest part of the paper where careful explanation gives way to polemic.5

    Old-fashioned papers often comprised small amounts of new data—perhaps a case report—with extensive discussion. The function of the discussion seemed to be to convince readers of the rightness of the authors' interpretation of data and speculation. It was not a dispassionate examination of the evidence. Times have changed, and greater emphasis has been placed on methods and results, particularly as methods have become more complicated and scientifically valid. But still we see many papers where the job of the discussion seems to be to “sell” the paper.

    Richard Horton, editor of the Lancet, and others have described how authors use rhetoric in the discussion of papers. 6 7 Authors may use extensive text without subheadings; expand reports with comment relating more to the generalities than to the specifics of the study; and introduce bias by emphasising the strengths of the study more than its weaknesses, reiterating selected results, and inflating the importance and generalisability of the findings. Commonly authors go beyond the evidence they have gathered and draw unjustified conclusions.

    Our proposal for a structured discussion is shown in the box. The discussion should begin with a restatement of the principal finding. Ideally, this should be no more than one sentence. Next should come a comprehensive examination of the strengths and weaknesses of the study, with equal emphasis given to both. Indeed, editors and readers are likely to be most interested in the weaknesses of the study: all medical studies have them. If editors and readers identify weaknesses that are not discussed then their trust in the paper may be shaken: what other weaknesses might there be that neither they nor the authors have identified?

    Suggested structure for discussion of scientific papers

    • Statement of principal findings

    • Strengths and weaknesses of the study

    • Strengths and weaknesses in relation to other studies, discussing particularly any differences in results

    • Meaning of the study: possible mechanisms and implications for clinicians or policymakers

    • Unanswered questions and future research

    The next job is to relate the study to what has gone before. The task here is not to show how your study is better than previous studies but rather to compare strengths and weaknesses. Do not hide the weaknesses of your study relative to other studies. Importantly, you should discuss why you might have reached different conclusions from others. But go easy on the speculation. If you don't know why your results are different from those of others then don't pretend you do, and you should certainly not assume that your results are right and the others wrong.

    Now you should begin the difficult task of discussing what your study might “mean.” What might be the explanation of your findings, and what might they mean for clinicians or policymakers? Here you are on dangerous ground, and most editors and readers will appreciate your being cautious and not moving beyond what is often limited evidence. Leave readers to make up their own minds on meaning: they will anyway. You might even emphasise what your evidence does not mean, holding readers back from reaching overdramatic, unjustified conclusions. Finally, you should discuss what questions remain unanswered and what further work is needed. Again, editors and readers will enjoy restraint. Indeed, this is the part of the paper where authors often run amok. There is nothing to stop you writing another piece that is all speculation, but don't corrupt your evidence with speculation.

    Other subheadings might sometimes be needed, but we think that our suggested structure should fit most studies. Although some may find uniform structuring difficult and even restrictive,8 we believe that our proposed structure should reduce overall length; prevent unjustified extrapolation and selective repetition; reduce reporting bias; and improve the overall quality of reporting. Such a supposition could readily be tested. We invite comment from authors and readers of the BMJ, and if reaction is positive then we will introduce structured discussions.
