Johnston and colleagues1 recently reported that participants understood minimal important difference (MID) poorly compared with other formats for presenting treatment effect estimates. I believe understanding would have been improved if the mathematical definition had been accompanied by a concrete example, such as "2 MID units means the effect is twice the size of what an average person would consider important."
Also, a "correct" answer meant participants agreed with the authors' value judgments about whether an effect of a given magnitude (e.g., 0.6 MID, 0.2 standardized mean difference) is trivial and probably not important, or small and probably important. Only the MID provides information about importance (≥ 1 is important, < 1 is unimportant); interpreting all the other estimates requires information and assumptions not provided. Even for the MID, when the point estimate of the population average equals 0.6 MID, the probability that the true effect is ≥ 1 MID requires Bayesian credible intervals. The probability that some participants might benefit requires knowledge of the standard deviation (SD) of the treatment responses (assuming normality). If the SD equals 0.1 MID, essentially no patients had a response of ≥ 1 MID. If the SD equals 0.2 MID, about 2.5% of patients had a response of ≥ 1 MID. Even then, whether 2.5% counts as probably not v. probably important, and whether "small" means 1 MID or 1.5 MID, is a value judgment rather than a correct or incorrect response. Similar arguments apply to the other measures.
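The tail probabilities above can be checked with a short calculation; this is an illustrative sketch under the letter's own assumptions (normally distributed responses, a mean response of 0.6 MID, and hypothetical SDs of 0.1 and 0.2 MID), not part of the original study.

```python
import math

def frac_above_mid(mean_mid: float, sd_mid: float) -> float:
    """Fraction of patients whose response is >= 1 MID, assuming
    responses are normally distributed with the given mean and SD
    (both expressed in MID units)."""
    z = (1.0 - mean_mid) / sd_mid
    # Upper-tail probability of the standard normal, computed via the
    # complementary error function so no external libraries are needed.
    return 0.5 * math.erfc(z / math.sqrt(2))

# Mean response of 0.6 MID with the two hypothetical SDs:
print(frac_above_mid(0.6, 0.1))  # ~0.00003: essentially no patients reach 1 MID
print(frac_above_mid(0.6, 0.2))  # ~0.023: roughly 2.5% of patients reach 1 MID
```

With an SD of 0.2 MID, 1 MID lies 2 SDs above the mean, so the upper tail is about 2.3%, consistent with the 2.5% figure quoted above.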
To help move the field forward, the authors might consider definitions that require less numerical literacy and that better differentiate "value judgments" from "correct responses."