Prediction Models in Acute Stroke
Potential Uses and Limitations
See related article, pages 1821–1826.
Konig et al1 report an external validation of their simple prediction model in patients with acute ischemic stroke enrolled in several randomized trials participating in the VISTA collaboration. Their model includes just age and the total National Institutes of Health Stroke Scale (NIHSS) score, and predicts survival, and survival with functional recovery (Barthel Index ≥95), at 90 days. They report reasonable predictive accuracy, which is slightly improved when they “tweak” their model. This improvement is predictable, because models almost always perform best in the cohort in which they were derived and less well in independent external validation. By tweaking their model they have, in effect, derived a new model in their validation cohort. The true test is whether the tweaked model performs any better than the original in further independent cohorts.
As the authors point out, theirs is the second simple statistical model to be developed using robust techniques and externally validated in large independent cohorts. The so-called “six simple variable” (SSV) model was developed in the Oxfordshire Community Stroke Project (OCSP) and has been externally validated in other community-based,2 hospital-based,2,3 and trial-based4 cohorts. This model includes age, dependency before the stroke, and living alone before the stroke, together with the ability to lift the arms off the bed, to talk normally, and to walk independently after the stroke. It aims to predict, among other outcomes, survival free of dependency (modified Rankin Scale score <3). Unlike the Konig model, it was developed and validated in mixed cohorts of patients, some of whom had prior disability or hemorrhagic stroke. It has recently been validated in patients with hyperacute ischemic stroke, in whom it performed satisfactorily.3,5
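Both models share the same statistical skeleton: a few baseline variables combined in a logistic regression whose output is a probability of the outcome. The sketch below illustrates that mechanic for a two-variable age-plus-NIHSS model; the coefficients are invented placeholders for illustration only, not the published values from Konig et al.

```python
import math

# Illustrative sketch only: how a two-variable prognostic model combines
# predictors via logistic regression. All coefficients below are
# hypothetical placeholders, NOT the published Konig model values.
def predict_survival_probability(age, nihss,
                                 intercept=4.0, b_age=-0.04, b_nihss=-0.15):
    """Return a predicted probability of 90-day survival from age and
    total NIHSS score, by passing the linear predictor through the
    logistic function."""
    linear = intercept + b_age * age + b_nihss * nihss
    return 1.0 / (1.0 + math.exp(-linear))

# A younger patient with a milder stroke gets a higher predicted probability.
p_mild = predict_survival_probability(age=60, nihss=4)
p_severe = predict_survival_probability(age=85, nihss=20)
```

A nomogram is simply a graphical tabulation of this calculation, which is why both groups can offer their models in that form.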
So, how do these 2 models stack up against each other? On the face of it, the Konig model is simpler, with just 2 variables; however, this overlooks the fact that the NIHSS comprises 11 items, some with subitems. The Konig model is therefore simpler only in settings in which the staff have received appropriate training to use the NIHSS and in which it is collected routinely. Units that prefer an alternative stroke scale, eg, the Scandinavian Stroke Scale, may be irritated by the fact that several of the NIHSS items probably add little to the predictive accuracy of the model.
The accuracy of the 2 models is similar, with areas under the receiver operating characteristic curve of about 0.8 in external validations. The Konig models were developed and validated in cohorts that excluded patients with hemorrhagic stroke and preexisting dependency,6 and thus tended to exclude older patients; this may make them less generalizable to settings in which such patients are included. Both models have been presented in the form of nomograms to facilitate easy calculation without a programmable calculator or computer.
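The area under the receiver operating characteristic curve quoted above can be computed without drawing the curve at all: it equals the probability that a randomly chosen patient who had the outcome was assigned a higher predicted risk than a randomly chosen patient who did not. A minimal sketch of that rank-based calculation, on invented data:

```python
# Rank-based AUC (the Wilcoxon-Mann-Whitney statistic): the fraction of
# outcome/no-outcome patient pairs in which the patient with the outcome
# received the higher predicted risk. Ties count as half.
def auc(predictions, outcomes):
    pos = [p for p, y in zip(predictions, outcomes) if y == 1]
    neg = [p for p, y in zip(predictions, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented predicted probabilities of a good outcome, and observed outcomes.
preds = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
obs = [1, 1, 0, 1, 1, 0, 0, 0]
print(auc(preds, obs))  # prints 0.875
```

An AUC of 0.5 is no better than chance, 1.0 is perfect discrimination, and the 0.8 reported for both models sits in between: useful at the group level, but consistent with frequent misclassification of individuals.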
An important question is whether such predictive models are useful. They have been used to adjust for differences in casemix in stroke audit.7,8 These studies have demonstrated that most of the variation in outcomes observed between hospitals is probably due to variation in casemix rather than to quality of care. Simple models based on easily collected clinical data should be (but infrequently are!) the starting point for the myriad studies that attempt to show that an imaging technique or biomarker predicts outcome; the real question is whether the proposed predictive technique adds significantly to the clinical predictors.
Predictive models have also been used in randomized trials to ensure that treatment groups are balanced for prognosis.4 Konig et al argue that their model might be used in trials to include only patients who are likely to benefit from the treatment, such as those who “have a high chance of incomplete recovery but low probability of mortality which is usually not affected by new medical treatment options.” It does not seem sensible to exclude the patients most likely to die, because one goal must be to find treatments that save lives, such as stroke unit care and decompressive craniectomy. Moreover, including only those predicted to have the outcome of interest assumes that our treatments will be equally effective in patients with a high and a low predicted risk of that outcome; this may not be the case.
The ultimate aim of predictive models must be to enable us to predict the outcome of individual patients, which would allow us to advise patients and their families and to plan their treatment better. Although their accuracy may be similar to that of informal predictions made by clinicians,6,9 many patients will be misclassified. The numeric output of a nomogram, or of a predictive system embedded in an electronic patient record, may give a false sense of scientific validity that could be detrimental to patient care. Treatments might be withdrawn from those with a predicted poor outcome, creating self-fulfilling prophecies. The costs, in terms of worse outcomes, inefficiency, and misinformation, arising from incorrect predictions have to be balanced against any potential benefits. Before predictive systems are introduced into everyday clinical practice, their accuracy should be optimized and their impact on patient care properly assessed.
The opinions in this editorial are not necessarily those of the editors or of the American Heart Association.
1. König IR, Ziegler A, Bluhmki E, Hacke W, Bath P, Sacco RL, Diener HC, Weimar C; on behalf of the VISTA investigators. Predicting long-term outcome after acute ischemic stroke: a simple index works in patients from controlled clinical trials. Stroke. 2008; 39: 1821–1826.
2. Counsell C, Dennis M, McDowall M, Warlow C. Predicting outcome after acute and subacute stroke: development and validation of new prognostic models. Stroke. 2002; 33: 1041–1047.
3. Reid JM, Gubitz GJ, Dai D, Reidy Y, Christian C, Counsell C, Dennis M, Phillips SJ. External validation of a six simple variable model of stroke outcome and verification in hyper-acute stroke. J Neurol Neurosurg Psychiatry. 2007; 78: 1390–1391.
4. Counsell C, Dennis MS, Lewis S, Warlow C. Performance of a statistical model to predict stroke outcome in the context of a large, simple, randomized, controlled trial of feeding. Stroke. 2003; 34: 127–133.
5. Lewis S, Dennis M, Sandercock P; for the International Stroke Trial (IST3) and Stroke Complications and Outcomes Prediction Engine (SCOPE) Collaborations. Predicting outcome in hyper-acute stroke: validation of a prognostic model in the Third International Stroke Trial (IST3). J Neurol Neurosurg Psychiatry. Published online August 31, 2007. doi:10.1136/jnnp.2007.126045.
6. Weimar C, König IR, Kraywinkel K, Ziegler A, Diener HC. Age and the National Institutes of Health Stroke Scale within 6 h after onset are accurate predictors of outcome after cerebral ischemia: development and external validation of prognostic models. Stroke. 2004; 35: 158–162.
7. Weir N, Dennis M; Scottish Stroke Outcomes Group. Towards a national system for monitoring the quality of hospital-based stroke services. Stroke. 2001; 32: 1415–1421.
8. Dennis M, Flaig R, McDowall M, Bishop J, McDonald A, Kelso L. National Report on Stroke Services in Scottish Hospitals 2005/2006: Scottish Stroke Care Audit. Available at: http://www.strokeaudit.scot.nhs.uk/Downloads. Accessed March 14, 2008.
9. Counsell C, McDowall M, Dennis M. Predicting functional outcome in acute stroke: comparison of a simple six variable model with other predictive systems and informal clinical prediction. J Neurol Neurosurg Psychiatry. 2004; 75: 401–405.