Authors Should Be Accurate When Describing Study Design
To the Editor:
Kimura and colleagues,1 in presenting their study on the impact of nortriptyline on cognitive impairment after stroke, do not clearly describe the research design they used. In the paper’s title and in the text, they report the study as a randomized placebo-controlled trial, but they also describe it as a secondary analysis of data pooled from 2 previous trials—in other words, a meta-analysis. Although the paper does not directly reference the trials that were included, the details of the research grants that supported the present work match those for 2 previously published trials,2,3 as do many of the methodological details. There are discrepancies, for example, in the descriptions of the trials’ entry criteria and in the numbers of subjects included, but overall the evidence suggests that the present paper reports a meta-analysis of these 2 trials.
We need to be clear which sort of study this is for 2 reasons. First, if it is a meta-analysis, then it may produce misleading results if it is not based on all available trials or if it relies on subgroup analysis. In this case, the data on cognitive impairment were obtained as secondary outcomes in only 2 previous trials, and the present article is based on a subgroup analysis of results for some of the patients on whom these secondary outcomes were available. There is a strong possibility that such selectivity can lead to erroneous conclusions, which is not sufficiently acknowledged in the paper.
Second, Kimura and colleagues present results on the impact of nortriptyline on depression; it is not clear whether these are new data or a re-presentation of data from the previous trials. Duplicate publication makes it difficult to weigh the evidence, because double-counting causes certain results to be overrepresented, biasing any synthesis. We have recently completed a systematic review of the evidence on the effectiveness of treatments for depression after stroke4 and decided to exclude this paper (despite its title), since it represents a re-presentation of already-published data.
Duplicate publications can be hard to spot if not clearly signaled and yet can form a substantial part of the literature on some topics. To take an example from a related field: a recent systematic review of the evidence on stroke lesion location and its relation to depression5 excluded 34 reports on the basis of overt or covert duplicate publication, a number almost identical to the 35 papers eventually included in the meta-analysis.
For both the reasons illustrated by this example, it is important for authors to report the design of their research unambiguously.
Copyright © 2001 by American Heart Association
Kimura M, Robinson R, Kosier J. Treatment of cognitive impairment after poststroke depression: a double-blind treatment trial. Stroke. 2000;31:1482–1486.
House A, Hackett M, Anderson C. Effectiveness of antidepressants and psychological therapies for the prevention and treatment of depression in patients with stroke. In: Proceedings of the Royal College of Physicians of Edinburgh, Consensus Conference on Stroke Treatment and Service Delivery; November 7–8, 2000; Edinburgh, UK.
House et al make 2 major points in their letter. The first is that our study in Stroke was based on prior studies that were not referenced. This lack of reference was simply an oversight. It was clearly indicated in the paper that this was an analysis of data from 2 sites (ie, Baltimore, Md, and Iowa City, Iowa). The study sample included 18 patients from the 1984 study of Lipsey et al,R1 8 patients from the 1993 study of Robinson et al,R2 16 patients from the 2000 study of Robinson et al,R3 and 5 patients whose data had never previously been reported but who had been treated using double-blind methodology. It is not true that this represented a convenience sample. It was composed of a consecutive series of patients, drawn from all of our patients treated using double-blind methodology, who met the criteria as outlined in the Stroke publication.
The second point made by House et al was that this was a “re-presentation of already published data.” I know of no investigators who do not use large, well-studied data sets for secondary analyses of their data. The hypothesis being tested in the current study was whether remission of depression, regardless of whether the patient had been treated with placebo or nortriptyline, would be associated with a significant improvement in cognitive function. An examination of previously published data would not enable any investigator to extract this finding from our prior publications, and in fact, there were 5 patients for whom data had never previously been published.
It is true that when data sets are used for secondary analyses, it is often difficult to know which data overlap with prior publications, and this unfortunately makes meta-analyses more difficult. This problem, however, also leads to the all-too-rapid exclusion of relevant studies, as indicated in the last example by House et al. As an illustration of this point, our publication included in the meta-analysis of Carson et alR4 stated in the first line of the methods section that all patients were right-handed. These authors, however, excluded another of our studies in which patients were enrolled only if they were left-handed. Clearly, anybody carefully evaluating the data would have recognized that those populations represented 2 entirely different samples.
Lipsey JR, Robinson RG, Pearlson GD, Rao K, Price TR. Nortriptyline treatment of post-stroke depression: a double-blind study. Lancet. 1984;1:297–300.
Robinson RG, Schultz SK, Castillo C, Kopel T, Kosier T. Nortriptyline versus fluoxetine in the treatment of depression and in short-term recovery after stroke: a placebo-controlled, double-blind study. Am J Psychiatry. 2000;157:351–359.
Carson AJ, MacHale S, Allen K, Lawrie SM, Dennis M, House A, Sharpe M. Depression after stroke and lesion location: a systematic review. Lancet. 2000;356:122–126.