Performance of a Statistical Model to Predict Stroke Outcome in the Context of a Large, Simple, Randomized, Controlled Trial of Feeding
Background and Purpose— Statistical models to predict the outcome of stroke patients have several uses. Their utility depends on their predictive accuracy in patients other than those on whom they were developed (ie, external validity). We sought to test the external validity of some recently described models in patients enrolled in the FOOD (Feed Or Ordinary Diet) trial: a large randomized trial evaluating feeding policies in patients with stroke.
Methods— The predictive variables were collected during a telephone call to randomize the patient a median of 5 days after stroke onset. Patients were followed up 6 months later to establish their survival, functional status, and residence. Charts were plotted to demonstrate the discrimination and calibration of the models.
Results— The models performed well in the first 2955 patients enrolled and followed up in the FOOD trial. The area under the receiver operating characteristic curves varied between 0.78 and 0.81 (with 0.5 indicating no discrimination and 1.0 indicating perfect discrimination). The discrimination was marginally better for patients enrolled within the first day of stroke than later. The models tended to provide rather pessimistic predictions in all groups except those predicted to have a high likelihood of surviving free of dependency.
Conclusions— As one might predict, the discriminatory power in this selected cohort of trial patients was marginally lower than in the unselected cohorts previously used to test the models’ external validity. These models provide a well-tested tool for stratification in trials, for comparing outcomes in different cohorts, and for examining the additional predictive power of novel factors.
Although predictive models, if accurate, could potentially be used to aid management of individual stroke patients, they are more often and appropriately used to predict outcomes in groups of patients.1 For example, they have been used to adjust for differences in patients’ baseline characteristics when outcomes of stroke patients treated in different hospitals are compared.2 They may also be useful in stratifying patients at baseline in randomized controlled trials to increase the likelihood of balance between the different treatment groups.
We have previously described the development and testing of simple predictive models that included just 6 factors: age, prestroke independence, living circumstances, and 3 factors reflecting stroke severity (ie, normal verbal subsection of Glasgow Coma Scale, ability to lift both arms off the bed, and ability to walk independently).3 We demonstrated their use in adjusting for case mix when using outcomes to reflect the quality of hospital care.2 These studies have indicated that the models performed well in independent cohorts. However, their testing in independent cohorts relied on the predictive variables being extracted retrospectively from existing data sets3 (where definitions may not have been applied equally) or from clinical notes.2 Until now, we have not had the opportunity to test the models in a cohort of patients in which the predictive variables have been collected prospectively. In this study we seek to show how the variables can be used to predict outcome in groups of patients and stratify stroke patients enrolled in an ongoing international multicenter randomized controlled trial: the FOOD (Feed Or Ordinary Diet) trial (www.dcn.ed.ac.uk/FOOD).4
Subjects and Methods
The FOOD trial comprises a family of 3 randomized controlled trials that share the same randomization, data collection, and follow-up systems. The aim of these trials is to compare the outcomes of hospitalized stroke patients managed with different feeding policies (Table 1).
The trial has broad eligibility criteria. Any patient who is admitted to a participating hospital with a recent (within 7 days) stroke can be enrolled if the responsible clinician is uncertain of the best feeding policy. Baseline data are collected during a randomization telephone call to the trial center, before treatment allocation. The following questions are asked at baseline: What is the patient’s age (in years)? Did the patient live alone before the stroke? Was the patient independent in everyday activities before the stroke? Can the patient talk, and is he or she orientated in time, place, and person (ie, normal Glasgow Coma Scale verbal subscore)? Can the patient lift both arms off the bed? Can the patient walk without help from another person? The yes/no answers are used to calculate the probability of a good outcome (alive and independent [modified Rankin Scale score <3] 6 months after stroke onset), which is used in a minimization procedure (a statistical method to minimize imbalance of key prognostic factors between treatment groups5) to ensure balance in predicted outcome between treatment groups. Table 2 includes this equation used for minimization as well as 2 additional prediction models to predict 6-month survival and whether the patient is alive and at home 6 months after stroke onset.6 Six months after enrollment, patients are followed up by the national coordinating center, blind to the treatment allocation and baseline data, with either a postal questionnaire or semistructured telephone interview. The follow-up aims to establish the patients’ survival, place of residence, and modified Rankin Scale score.
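The probability calculated during the randomization call can be sketched as a logistic model over the 6 baseline answers. The coefficients below are placeholders for illustration only; the actual values appear in Table 2 and are not reproduced here.

```python
import math

# Hypothetical coefficients for illustration only -- the actual values are
# given in Table 2 of the paper and are not reproduced here.
INTERCEPT = 2.0
COEFFS = {
    "age": -0.04,              # per year of age
    "lives_alone": -0.5,       # lived alone before the stroke
    "independent": 1.0,        # independent in everyday activities before stroke
    "gcs_verbal_normal": 1.2,  # normal Glasgow Coma Scale verbal subscore
    "can_lift_arms": 1.0,      # can lift both arms off the bed
    "can_walk": 1.3,           # can walk without help from another person
}

def predict_good_outcome(age, lives_alone, independent,
                         gcs_verbal_normal, can_lift_arms, can_walk):
    """Logistic prediction of a good outcome: alive and independent
    (modified Rankin Scale score < 3) 6 months after stroke onset.
    The yes/no answers are coded 1/0."""
    z = (INTERCEPT
         + COEFFS["age"] * age
         + COEFFS["lives_alone"] * lives_alone
         + COEFFS["independent"] * independent
         + COEFFS["gcs_verbal_normal"] * gcs_verbal_normal
         + COEFFS["can_lift_arms"] * can_lift_arms
         + COEFFS["can_walk"] * can_walk)
    return 1.0 / (1.0 + math.exp(-z))
```

With any plausible coefficients, more favorable answers raise the predicted probability; it is this single probability, not the 6 answers separately, that enters the minimization procedure balancing predicted outcome between treatment groups.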
We tested the performance of the models shown in Table 2 by estimating their calibration and discrimination.7 We used the coefficients exactly as they are shown in Table 2, rather than refitting the models to our data and allowing the coefficients to vary. Calibration is an assessment of whether predicted probabilities in groups of patients are too high or too low compared with the observed probabilities. We assessed this by plotting calibration graphs (observed versus predicted outcome among patients grouped by deciles of predicted probability of a good outcome). A model is well calibrated if the points on the graph follow a 45° line from the origin (ie, the predicted and observed probabilities are similar). The distance between the points and the diagonal indicates how optimistic or pessimistic the predictions are. Discrimination is an assessment of how well the model differentiates between patients who will do well and patients who will not, ie, the model should give a greater predicted probability of a patient doing well in those who actually do well than in those who actually do badly. We assessed this by calculating the area under the receiver operating characteristic (ROC) curve, which is a plot of sensitivity of predictions against 1−specificity of predictions. An area under the ROC curve (AUC) of 0.5 indicates no discrimination (ie, the line follows the 45° diagonal), and an area of 1.0 (ie, the line includes the entire area within the horizontal and vertical axes) indicates perfect discrimination. We calculated the AUC for our models applied to the whole cohort and also for subgroups defined by the time of onset of stroke symptoms to recruitment. We assessed the statistical significance of any differences between AUCs using standard errors calculated by the method described by Hanley and McNeil.8
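As a minimal sketch of how the two measures can be computed: the AUC equals the Mann-Whitney probability that a patient with a good outcome received a higher prediction than a patient with a bad outcome, its standard error follows the Hanley and McNeil formula, and calibration groups patients by ranked predicted probability. The function names and grouping details below are ours, not the paper's.

```python
from statistics import mean

def auc(predictions, outcomes):
    """Area under the ROC curve, computed as the probability that a randomly
    chosen patient with a good outcome (1) received a higher predicted
    probability than one with a bad outcome (0); ties count one half.
    This is the Mann-Whitney statistic used by Hanley and McNeil."""
    pos = [p for p, y in zip(predictions, outcomes) if y == 1]
    neg = [p for p, y in zip(predictions, outcomes) if y == 0]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

def hanley_mcneil_se(a, n_pos, n_neg):
    """Standard error of an AUC (Hanley & McNeil formula)."""
    q1 = a / (2 - a)
    q2 = 2 * a * a / (1 + a)
    var = (a * (1 - a) + (n_pos - 1) * (q1 - a * a)
           + (n_neg - 1) * (q2 - a * a)) / (n_pos * n_neg)
    return var ** 0.5

def calibration_by_tenths(predictions, outcomes, groups=10):
    """Observed vs mean predicted outcome in equal-sized groups of patients
    ranked by predicted probability (deciles when groups=10); the points
    should lie on the 45-degree line if the model is well calibrated."""
    pairs = sorted(zip(predictions, outcomes))
    size = len(pairs) // groups
    out = []
    for g in range(groups):
        chunk = pairs[g * size:(g + 1) * size] if g < groups - 1 else pairs[g * size:]
        out.append((mean(p for p, _ in chunk), mean(y for _, y in chunk)))
    return out
```

Plotting the pairs returned by `calibration_by_tenths` against the diagonal reproduces the style of calibration graph described above; points below the line indicate pessimistic predictions, points above it optimistic ones.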
Results
Between November 1996 and February 28, 2001, 112 hospitals in 16 countries had enrolled 3012 patients. Baseline data were complete for all 3012 patients, and by November 2001, data on 6-month survival, modified Rankin Scale scores, and living circumstances were available for 2955 of the total (98%). Of the 57 without outcome data available, 3 had emigrated, 3 withdrew consent, 7 were untraceable, and 44 were still being followed. These patients were younger (mean age, 66 versus 73 years) and had less severe strokes (median probability of being alive and independent at 6 months, 0.24 versus 0.14) compared with the patients who had been followed up. Of the 2955 outcome data forms, 870 (29%) were completed by the patient, 1390 (47%) by a caregiver or relative, 58 (2%) by a physician, and 5 (0.2%) by an unknown person, and 632 patients (21%) had died. The patients’ baseline characteristics are shown in Table 3. Patients were enrolled a median of 5 days (interquartile range, 2 to 8) after stroke onset and 4 days (interquartile range, 2 to 7) after hospital admission. Three hundred seventy-eight patients (13%) were enrolled on the day of onset (day 0) or the following day (day 1). The follow-up data were collected a median of 196 days after enrollment (interquartile range, 177 to 224).
The patients’ outcomes are shown in Table 3. The 6-month case fatality was 21%. At final follow-up, 37% of survivors had a modified Rankin Scale score of 0 to 2 (ie, independent in everyday activities), while the remainder were dependent. The majority (1782 [77%]) of survivors were living in their own or a relative’s home.
Figure 1 shows the ROC curve for the model predicting survival with a modified Rankin Scale score of <3, which has an AUC of 0.79. The calibration of this model is shown in Figure 2A. The AUCs for the models predicting 6-month survival and living at home were both 0.78. The calibrations for these models are shown in Figure 2B and 2C, respectively. The model predicting survival free of dependency tended to be slightly optimistic in patients predicted to have a good prognosis and slightly pessimistic in those predicted to have a poor prognosis (Figure 2A). The models predicting survival at 6 months and predicting survival and living at home were both slightly pessimistic for most levels of predicted risk (Figure 2).
We did not have enough data to assess whether the model predicting independent survival works in very acute patients (within 6 hours of onset). Its discrimination was marginally, but not significantly, better when applied only to patients randomized on day 0 or 1 (AUC=0.80) than in those enrolled after the first day (AUC=0.76; P=0.2), and in those enrolled on days 0 to 4 (AUC=0.81) than after the fourth day (AUC=0.78; P=0.1). However, this was an exploratory rather than a prespecified analysis.
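A comparison of AUCs between two subgroups can be sketched as a normal z test built on Hanley and McNeil standard errors. This treats the subgroups as independent (reasonable here, since each patient is in exactly one enrollment-time stratum); the helper names are ours, and the counts in the usage note are illustrative, not the trial's.

```python
import math

def hanley_mcneil_se(a, n_pos, n_neg):
    """Standard error of an AUC (Hanley & McNeil formula), given the AUC
    and the numbers of patients with good (n_pos) and bad (n_neg) outcomes."""
    q1 = a / (2 - a)
    q2 = 2 * a * a / (1 + a)
    var = (a * (1 - a) + (n_pos - 1) * (q1 - a * a)
           + (n_neg - 1) * (q2 - a * a)) / (n_pos * n_neg)
    return math.sqrt(var)

def compare_aucs(a1, n_pos1, n_neg1, a2, n_pos2, n_neg2):
    """Two-sided z test for the difference between AUCs estimated in two
    independent subgroups (eg, patients enrolled on day 0-1 vs later)."""
    se = math.hypot(hanley_mcneil_se(a1, n_pos1, n_neg1),
                    hanley_mcneil_se(a2, n_pos2, n_neg2))
    z = (a1 - a2) / se
    # two-sided P value from the standard normal distribution
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p
```

For example, `compare_aucs(0.80, 100, 278, 0.76, 800, 1777)` (hypothetical subgroup sizes) returns a positive z and a nonsignificant two-sided P value, in the spirit of the exploratory comparison above.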
Discussion
We have shown previously that the performance of our models is reasonable when applied to cohorts identified in community-based and hospital-based stroke registers.3 In community-based cohorts, the AUC for the model predicting survival free of dependency was 0.84, that for the model predicting survival was 0.86, and that predicting being alive and at home was 0.86. In the hospital-based register, the AUCs were 0.84, 0.86, and 0.84, respectively.6
We have now shown that these 3 models have performed almost as well when used prospectively in a large, multicenter randomized controlled trial, which included patients from 16 countries and with varied baseline characteristics. Only 6 variables were needed, and these were collected by a variety of professionals, including physicians, nurses, speech and language therapists, and dieticians, and were transmitted to our coordinating center in a few minutes by telephone. The slightly poorer discrimination in the FOOD trial was almost certainly due to the more selected nature of the patients enrolled. As one would expect in a therapeutic trial, patients with the most severe (ie, those who were likely to die early) and mildest strokes (ie, those whose symptoms were resolving rapidly) were not often enrolled, whereas such patients were included in the cohorts from observational studies used to develop and validate the models. The FOOD trial also excluded outpatients and those patients who were likely to be in the hospital only a few days. Predictive models are likely to be able to discriminate between such extreme cases more accurately than between those in the middle range of stroke severity. The differences between the cohorts are reflected in their different baseline characteristics and outcomes (Table 3).
Our predictions of survival and of survival and living at home in the FOOD trial tended to be pessimistic (Figure 2B and 2C), while predictions of survival free of dependency were pessimistic for those with more severe strokes but tended to be optimistic for those with milder strokes. If treatment in the FOOD trial was better (ie, led to better outcomes) than in the cohorts in which the models were derived (a community-based cohort in Oxfordshire, England, in the 1980s) or externally validated (community-based cohorts from Perugia, Italy, and Perth, Australia, in the late 1980s and a hospital-based cohort from Edinburgh, Scotland, in the 1990s), this might explain the pessimistic predictions. Since the FOOD trial is evaluating feeding regimens that are already in widespread use, it seems unlikely that our validation of the prognostic models will have been confounded by this particular aspect of treatment. However, unlike the older development and validation cohorts, many of the FOOD patients will have been managed on stroke units, which are known to improve outcome, and this might have led to the slightly pessimistic predictions.
It is difficult to compare the predictive accuracy of different models without performing a prospective comparative study since performance is influenced by the nature of the test cohort (see above), and often all the variables for different models have not been collected. Unfortunately, in the few previous studies that have externally validated predictive models, a variety of measures of accuracy have been used, which further limits comparison.9–11 The National Institutes of Health Stroke Scale and other commonly used scales were found to be superior to Guy’s Prognostic Score, a mathematically derived model.11 However, Lai and colleagues12 found that the Orpington Prognostic Scale was both simpler to use and more accurate than the National Institutes of Health Stroke Scale in predicting patients’ functional outcome.
We have established our models’ performance in predicting 6-month outcomes when the predictive variables are collected within days of stroke. Few of our patients were enrolled in the hyperacute phase of stroke (eg, <6 hours after onset), so further studies are needed to establish the models’ performance in that context. The 6 predictive factors are being collected, along with others, in the International Stroke Trial 3 (IST3) (www.dcn.ed.ac.uk/IST3), which is testing thrombolytic treatment in patients 0 to 6 hours after stroke; this will provide an opportunity to test performance in the hyperacute phase. When patients are being evaluated in the subacute phase, the models, which are reasonably well calibrated and discriminate well between patients who will have good and poor outcomes, offer a simple, robust alternative to a neurological scale. Additionally, data for the models can be collected easily and quickly at baseline in large randomized controlled trials in stroke patients.
The predictive models evaluated in this report have undergone far more rigorous testing than any other previously described models.13 They provide a useful tool for stratifying patients in randomized controlled trials and in adjusting for case mix.2 They also provide a valuable standard against which new models can be compared, and they can be used to assess the additional predictive information provided by, for example, brain imaging.14
FOOD Trial Coordinating Center
Dr M.S. Dennis (Principal Investigator and Grant Holder), G. Cranswick (Trial Coordinator), A. Fraser (Data Management Team), S. Grant (Data Management Team), A. Gunkel (Interim Trial Coordinator), J. Hunter (Data Management Team), Dr S. Lewis (Trial Statistician), D. Perry (Information Technology Manager), V. Soosay (Computer Programmer), A. Williamson (Trial Secretary), A. Young (Data Management Team).
Independent Data Monitoring Committee
J. Bulpitt, A. Grant (Chair), G. Murray, P. Sandercock.
Dr N. Anderson (New Zealand), Dr S. Bahar (Turkey), Dr G. Hankey (Australia), Dr S. Ricci (Italy).
G. Bathgate, C. Chalmers, G. Cranswick, Dr M.S. Dennis, B. Farrell (Grant Holder), J. Forbes (Grant Holder), Dr S. Ghosh, Dr P. Langhorne, Dr S. Lewis, J. MacIntyre, C.A. McAteer, Dr P. O’Neill, Dr J. Potter, Dr M. Roberts, C. Warlow (Grant Holder).
Chest, Heart and Stroke Scotland, Chief Scientist Office, NHS R&D HTA Program Grant, The Stroke Association.
Dr C. Counsell, Dr M.S. Dennis (Chair), Dr S. Lewis, C. Warlow.
Members of the FOOD Trial Collaboration
The number in parentheses indicates the number of patients randomized by September 30, 2001, in that center.
Royal Perth Hospital Perth (21): Dr G.J. Hankey, S. McDonald; Redcliffe Hospital Redcliffe (18): T. Bennett, Dr J. Karrasch, C. Lowe; Royal North Shore Hospital Sydney, NSW (15): E. O’Brien, F. Simpson; The Alfred Prahran (7): A. Bramley, Dr J. Frayne; New England Regional Hospital Armidale (4): Dr G. Baker, Dr G. DeGabriele, J. Kennett, Dr J. Nevin; Princess Alexandra Hospital Brisbane (4): Dr P.D. Aitken, K. Boch, Dr G. Hall.
Algemeen Ziekenhuis Sint-Jan Brugge (11): V. Schotte, C. Vandenbruaene, Dr G.T.O. Vanhooren, C. Vanmaele.
Hospital Universitario Fraga Filho Rio de Janeiro (23): C. Andre, Dr M.A.S.D. Lima, Dr M.O. Py.
Saint John Regional Hospital New Brunswick (11): S. Alward, Dr P. Bailey, P. Cook; Halifax Infirmary Halifax, Nova Scotia (3): Dr S.J. Phillips, Y. Reidy.
District Hospital Pardubice (31): Dr E. Ehler, Dr P. Geier, Dr P. Vyhnalek; City Hospital Ostrava (2): Dr C. Majvald, Dr D. Skoloudik.
HS Hvidovre Hospital Hvidovre (20): L. Bech, Dr D. Rizzi, Dr T. Soerensen; Bispebjerg Hospital Kobenhavn NV (1): Dr D. Rasmussen.
Ruttonjee Hospital Wanchai (21): Dr K.Y. Chan, Dr E.S.L. Chow, Dr C.K.L. Kng, Dr C.P. Wong.
St John’s Medical College Hospital Bangalore (167): Dr U. Devraj, Dr Manjari, Dr J.L. Pinheiro, A.K. Roy; All India Institute of Medical Sciences New Delhi (5): Dr S. Panda, Dr K. Prasad, Dr M. Tripathi.
Ospedale Beato Giacomo Villa Citta Della Pieve (PG) (50): L. Ambrosius, Dr G. Benemio, Dr M.G. Celani, C. Ottaviani, B. Randolph, Dr S. Ricci, Dr E. Righetti, A. Tufi; Ospedale Civile S Matteo Degli Infermi Spoleto (PG) (41); Ospedale di Pistoia Pistoia (25): Dr D. Sita, Dr P. Vanni, Dr G. Volpi; Ospedale “Sestilli”–INRCA Ancona (22): Dr M. Del Gobbo, Dr O. Scarpino; Morgagni-Pierantoni Hospital Forli (21): Dr G. Benati, Dr V. Pedone; Ospedale Niguarda Milano (16): Dr A. Ciccone, Dr I. Santilli, Dr R. Sterzi; Ospedale Gubbio Gubbio (15): Dr O. Cazzato, Dr P. Parise; Hospital Civile S. Vito Al Tagliamento (14): Dr A.G. Gregoris, M.L. Lorenzet, Dr M.T. Tonizzo; Ospedale Maggiore Trieste (13): Dr L. Antonutti, Dr F. Chiodo Grandi, Dr N. Koscica, Dr G. Nider; Castiglione del Lago Hospital, Perugia (11): Dr S. Cipolloni, N. Deboli, Dr C. Dembech, Dr M. Guerrieri, Dr S. Ricci, Dr E. Vignai; Ospedale Giovanni Ceccarini Riccione (11): Dr M. Cornia, Dr M.A. Passauiti; Universita di Sassari Sassari (11): L. Bianco, P. Canu, E. Mereu, Dr A. Pirisi, Dr B. Zanda, Dr M. Zuddas; Ospedale Civile Sassari (8): Dr G. Casu; Ospedale di Todi Todi (PG) (6): Dr B. Biscottini, Dr A. Boccali, Dr P. Del Sindaco, Dr T. Mazzoli; Ospedale Civile Citta di Castello (6): Dr C.S. Cenciarelli, Dr G.L. Girelli, Dr G.M. Giuglietti; Ospedale “S. Maria delle Croci” Ravenna (4): Dr G. Bianchedi, Dr G. Ciucci; Perugia University Hospital Perugia (3): Dr G.A. Aisa, Dr M.F. Fiueddio, Dr P.M.C. Polidora, Dr U. Senin; IRCCS C Mondino Pavia (1): Dr A. Cavallini, Dr S. Marcheselli, Dr G. Micieli; Ospedale S Michele Cagliari (1): Dr M. Melis; Ospedale Don Calabria Verona (1): Dr B. Rimondi, Dr P. Spagnolli; Hospital Abbadia San Salvatore Abbadia San Salvatore (1): Dr G. Campanella, Dr R. Castro, M. Fiorella, Dr A.R. Gobbini, Dr G. Parisi, Dr G. Vedouini.
University of Auckland Auckland (202): Dr N.E. Anderson, P. Bennett, Dr A.J. Charleston, Dr D.A. Spriggs; Hawkes Bay Hospital Hastings (77): Dr T. Frendin, Dr J. Gommans, L. Wall; Tauranga Hospital Tauranga (46): P. Blattmann, Dr A.M. Chancellor.
Teaching Hospital Lublin (4): Z. Stelmasiak; Hospital of a Province Siedlce (3): Dr M. Lyczywek-Zwierz, Dr A. Wlodek.
Hospital Geral de Santo Antonio Porto (17): Dr A. Bastosleite, Dr M. Correia, Dr C. Ferreira, Dr M.G. Lopes; Centro Hospitalar Coimbra Coimbra (14): Dr J.A. Grilo Goncalves; Hospital S Joao Porto (11): Dr P.M. Abren, Dr M. Carvalho, Dr R. Martins; Hospital de Santa Maria Lisbon (7): Dr P. Canhao, Dr Falcao; Dr T. Pinho e Melo, Dr A. Verdelho; Hospital Conde de Sao Bento Santo Tirso (5): Dr C. Ferreira; Hospital Distrital de Oliveira de Azemeis Oliveira de Azemeis (4): Dr E. Marques, Dr F. Pais, Dr M. Veloso.
Republic of Ireland
St Vincent’s Hospital Dublin (2): Dr M. Crowe.
Singapore General Hospital (46): C.F. Chan, Dr H-M. Chang, Dr CPL-H. Chen, H.P.A. Goh, Dr W. Luman, Dr M.C. Wong.
Istanbul Medical School Istanbul (53): Dr S. Bahar, Dr O. Coban, Dr M. Degirmencioglu, Dr M.E. Gurol, Dr Y. Krespi, Dr B. Tugcu; Bakirkoy Inme Tedavi ve Arastirma Merkezi Istanbul (32): Dr G. Bakac, Dr D. Kirbas, Dr D. Yandim Kuscu; Bakirkoy Ruh ve Sinir Hastaliklari Istanbul (1): Dr H. Acar, Dr S. Baybas, Dr S. Kabey, Dr E. Seakin, Dr B. Yalginer.
Western General Hospitals NHS Trust Edinburgh (210): Dr M.S. Dennis, Dr R. Lindley, D. Shaw, P. Taylor; University Hospital Aintree Liverpool (125): S. Evans, Dr A.K. Sharma; Ulster Hospital Belfast (111): Dr J. Finnerty, Dr M.J.P. Power; Scarborough Hospital Scarborough (110): P. Davies, Dr J. Paterson; Christchurch Hospital NHS Trust Christchurch (100): A. Graham, Dr A. Hanrahan, Dr D. Jenkinson, Dr J. Kwan, Dr S. Ragab; Brighton General Hospital Brighton (99): Dr M.J. Bradshaw, Dr M. Eddleston, Dr S. Jamil; St Thomas’s Hospital London (84): Dr A.G. Rudd, C. O’Conner; Poole Hospital Poole (69): Dr M.T.A. Villar, Dr A. Winson; Northwick Park Hospital Harrow (68): K. Butchard, Dr D.L. Cohen; Falkirk & District Royal Infirmary Falkirk (64): Dr S. Grant, N. Henderson, J. McCall; Barnsley District General Hospital Barnsley (56): Dr M.K. Al-Bazzaz, Dr A.I. Khan; Bishop Auckland General Hospital Bishop Auckland (47): Dr K.V. Baliga, Dr A.A. Mehrzad; Burnley General Hospital Burnley (43): S. Davies, Dr M.N. Goorah, A. Joshi; Wishaw District General Hospital Wishaw (40): Dr E. Forrest, Dr A. Hendry; Leeds General Infirmary Leeds (40): Dr P. Wanklyn; William Harvey Hospital Willesborough, Ashford (38): Dr D.G. Smithard; Victoria Infirmary NHS Trust Glasgow (37): Dr J. Potter, Dr M. Roberts, Dr A. Watt; Luton & Dunstable NHS Trust Luton (34): E. Hutchins, Dr K. Mylvaganam; Northampton General Hospital Northampton (33): Dr L. Brawn, J. Collier, Dr A. Gordon; Royal Victoria Hospital Belfast (31): A. Hunter, Dr M. Watt, Dr I. Wiggam; North Bristol NHS Trust (Southmead) Bristol (31): Dr T. Allain, Dr P. Easton, A. Russ; St James’s University Hospital Leeds (29): Dr J.M. Bamford, J. Hayes, E. Jackson, Dr T. Moorby; Salford Royal Hospitals Trust Salford (24): A. Betteley, Dr P. Tyrrell; Royal Liverpool & Broadgreen University Hospital Liverpool (24): H. Dickinson, H. Gardner, Dr C.I.A. Jack, K. Johnson; Peterborough District Hospital Peterborough (23): C. Gerstner, Dr S. Guptha, Dr P. Owusu-Agyei; Stirling Royal Infirmary Stirling (22): F.J. Dick, Dr D. Kennie; Stobhill NHS Trust Glasgow (21): Dr C. McAlpine, J. Rodger; Hereford General Hospital Hereford (17): Dr P.W. Overstall, M. Probert, Dr E. Wales; The Princess Royal Hospital Haywards Heath (15): M. Dormer, Dr M. Jones, R. Polley; Princess Margaret Hospital Swindon (14): Dr B. Dewan, Dr S. Kavsar, Dr H. Newton, Dr A. Paddon; King’s College Hospital London (14): Dr I. Perez; Birmingham Heartlands Hospital Birmingham (13): S. Bradley, Dr R. Shinton; Chesterfield & North Derbyshire Royal Hospital Chesterfield (13): Dr M. Cooper, Dr P. Metcalf; Perth Royal Infirmary Perth (12): Dr B. Keegan, Dr S. Johnston; Glasgow Royal Infirmary Glasgow (10): Dr P. Langhorne, M. Shields; Queen Margaret Hospital NHS Trust Dunfermline (10): Dr N. Chapman, Dr S. Pound; Royal Devon & Exeter Hospital Exeter (10): C. Fox, Dr M.A. James; St Thomas’s Hospital Stockport (10): Dr Y. Adennala, Dr M.L. Datta Chaudhuri; Glan Clwyd Hospital Bodelwyddan (10): Dr B.K. Bhowmick, I. Evans; Kingston Hospital Kingston on Thames (9): Dr C. Lee, Dr C. Rodrigues; Lagan Valley Hospital Lisburn, Belfast (9): Dr S.P. Gawley, K. Page; Bronllys Hospital Brecon (8): Dr J. Buchan, Dr A. Dunn, J. Hallam, Dr S. Manthri; Epsom General Hospital Epsom (7): A.M. Daniels, Dr G. Lim; North Middlesex Hospital London (7): Dr R.I. Luder; North Tyneside Health Care North Shields (6): Dr R. Curless; Bronglais General Hospital Ceredigion (6): L. Hudson, Dr P. Jones; Eastern General Hospital Edinburgh (5): Dr L. Morrison; Aberdeen Royal Infirmary Aberdeen (5): Dr R.S. Dijkhuizen, Dr M.J. Maclean; Gloucestershire Royal Hospital NHS Trust Gloucester (5): L.L. Bech, Dr D. Rizzi, Dr T. Sorensen; Royal Infirmary of Edinburgh Edinburgh (4): M. Brogan, Dr G. Mead; Sandwell General Hospital NHS Trust West Bromwich (3): Dr E.M. Smith; Dryburn Hospital Durham (2): J. Clark, Dr P.M. Earnshaw, Dr M. Jain; Nottingham City Hospital Nottingham (1); West Cumberland Hospital Cumbria (1): Dr E.O. Orugun, Dr N. Russell; Withington Hospital Manchester (1): P.A. O’Neill, S.J. Welsh; Singleton Hospital Swansea (1): Dr W. Harris; Hairmyres Hospital East Kilbride (1): Dr S. Marletta, Dr J. Santamaria; Colchester General Hospital Colchester (1): M.J. Keating, Dr T. Shawis.
The FOOD trial was supported by grants from the Health Technology Assessment Board of National Health Service Research and Development (UK), The Stroke Association, The Chief Scientist Office of the Scottish Executive, and Chest, Heart and Stroke Scotland.
A complete list of the members of the FOOD Trial Collaboration appears in the Appendix.
The views and opinions expressed are those of the authors and do not necessarily reflect those of the Department of Health or the funding bodies.
- Received May 14, 2002.
- Revision received July 1, 2002.
- Accepted July 26, 2002.
Wyatt JC, Altman DG. Prognostic models: clinically useful or quickly forgotten? BMJ. 1995; 311: 1539–1541.
Weir N, Dennis M, on behalf of Scottish Stroke Outcomes Group. Towards a national system for monitoring the quality of hospital-based stroke services. Stroke. 2001; 32: 1415–1421.
Counsell C, Dennis M, McDowall M, Warlow C. Predicting outcome after acute stroke: development and validation of new models. Stroke. 2002; 33: 1041–1047.
Dennis M. FOOD (Feed Or Ordinary Diet): a family of randomized trials evaluating feeding policies for patients admitted to hospital with recent stroke. Cerebrovasc Dis. 2001; 11: 32. Abstract.
Pocock SJ. Clinical Trials: A Practical Approach. Chichester, UK: John Wiley & Sons; 1983: 84.
Counsell C. The Prediction of Outcome in Patients With Acute Stroke [dissertation]. Cambridge, UK: University of Cambridge; 1998.
Adams HP Jr, Davis PH, Leira EC, Chang KC, Bendixen BH, Clarke WR, Woolson RF, Hansen MD. Baseline NIH Stroke Scale score strongly predicts outcome after stroke: a report of the Trial of Org 10172 in Acute Stroke Treatment (TOAST). Neurology. 1999; 53: 126–131.
Gladman JRF, Harwood DMJ, Barer DH. Predicting the outcome of acute stroke: prospective evaluation of five multivariate models and comparison with simple methods. J Neurol Neurosurg Psychiatry. 1992; 55: 347–351.
Muir KW, Weir CJ, Murray GD, Povey C, Lees KR. Comparison of neurological scales and scoring systems for acute stroke prognosis. Stroke. 1996; 27: 1817–1820.
Lai SM, Duncan PW, Keighley J. Prediction of functional outcome after stroke: comparison of the Orpington Prognostic Scale and the NIH Stroke Scale. Stroke. 1998; 29: 1838–1842.
Wardlaw JM, Lewis SC, Dennis MS, Counsell C, McDowall M. Is visible infarction on computed tomography associated with an adverse prognosis in acute ischemic stroke? Stroke. 1998; 29: 1315–1319.