Patient-Level and Hospital-Level Determinants of the Quality of Acute Stroke Care
A Multilevel Modeling Approach
Background and Purpose—Quality of care may be influenced by patient and hospital factors. Our goal was to use multilevel modeling to identify patient-level and hospital-level determinants of the quality of acute stroke care in a stroke registry.
Methods—During 2001 to 2002, data were collected for 4897 ischemic stroke and TIA admissions at 96 hospitals from 4 prototypes of the Paul Coverdell National Acute Stroke Registry. Duration of data collection varied between prototypes (range, 2–6 months). Compliance with 8 performance measures (recombinant tissue plasminogen activator treatment, antithrombotics <24 hours, deep venous thrombosis prophylaxis, lipid testing, dysphagia screening, discharge antithrombotics, discharge anticoagulants, smoking cessation) was summarized in a composite opportunity score defined as the proportion of all needed care given. Multilevel linear regression analyses with hospital specified as a random effect were conducted.
Results—The average hospital composite score was 0.627. Hospitals accounted for a significant amount of variability (intraclass correlation=0.18). Bed size was the only significant hospital-level variable; the mean composite score was 11% lower in small hospitals (≤145 beds) compared with large hospitals (≥500 beds). Significant patient-level variables included age, race, ambulatory status documentation, and neurologist involvement. However, these factors explained <2.0% of the variability in care at the patient level.
Conclusions—Multilevel modeling of registry data can help identify the relative importance of hospital-level and patient-level factors. Hospital-level factors accounted for 18% of total variation in the quality of care. Although the majority of variability in care occurred at the patient level, the model was able to explain only a small proportion.
The goal of the Paul Coverdell National Acute Stroke Registry is to track the delivery of care to hospitalized acute stroke patients and to guide quality improvement.1 Previous analyses have focused on patient-level determinants of care;1 however, quality of care is also determined by hospital-level and other system-level factors.2 Data collected from hospital-based quality-of-care registries are inherently hierarchical; patients and physicians are nested within hospitals, and hospitals may be nested within larger health systems. This multilevel data structure has implications regarding the most appropriate statistical analyses; for example, methods should account for hospital clustering because patients within a hospital do not represent independent observations.3 Failure to account for the multilevel nature of registry data has been shown to result in an inflated number of statistically significant associations and errors in identifying hospital-level determinants of care.4 The latter may be especially important given that interventions to improve quality often involve manipulation of hospital-level factors.
Multilevel or hierarchical modeling is an analytic technique designed for complex nested data structures.5,6 It is commonly used in educational and social research7–9 and is increasingly being used in health services research.10,11 Multilevel modeling has several advantages for the analysis of hospital-based registry data, including the ability to partition sources of variation between levels (ie, patient-level vs hospital-level), to model the interaction between variables at different levels and to provide more precise estimates of hospital-specific effects, especially when faced with small sample sizes.3,4,6
Our primary objective was to identify patient-level and hospital-level variables associated with the quality of acute stroke care. To address this objective, we applied multilevel analysis to data from a large prototype stroke registry. Specific questions of interest included: (1) how much of the variation in the quality of care is attributable to hospital-level factors; (2) what are the patient-level and hospital-level determinants of quality; and (3) are there interactions between patient-level and hospital-level determinants?
Materials and Methods
Detailed information on the design of the 4 state registry prototypes (Georgia, Massachusetts, Michigan, Ohio) has been published previously.1 Briefly, each registry developed its own sampling design to obtain a representative sample of hospitals using a combination of sampling with certainty and stratified sampling (with strata defined by hospital size or location). Overall, 98 hospitals were included: 34 from Georgia, 12 from Massachusetts, 16 from Michigan, and 36 from Ohio.
Acute Stroke Case Ascertainment and Case Definition
Acute stroke admissions were identified through either a prospective or a retrospective approach.1 In Massachusetts and Michigan, admissions were identified prospectively based on presenting clinical signs and symptoms. In Georgia and Ohio, eligible cases were identified retrospectively based on stroke discharge codes. All admissions had to present to the hospital with signs and symptoms consistent with acute stroke.1 This analysis was restricted to cases of ischemic stroke, TIA, or ischemic stroke of uncertain duration. Cases of TIA were included only if there were stroke signs and symptoms on presentation (and therefore were potentially eligible for thrombolysis intervention). The ischemic stroke of uncertain duration definition was used only at the 2 sites using prospective methods (Michigan, Massachusetts) when the duration of clinical signs (ie, <24 hours or >24 hours) was not documented.
Registry information was collected between October 2001 and November 2002. The exact duration of case ascertainment varied between sites (2 months for Ohio, 3 months for Georgia and Massachusetts, and 6 months for Michigan).1 All consecutive acute stroke admissions that occurred during these time periods were included. Human subject approval was obtained from each hospital’s Institutional Review Board before starting data collection.1 All sites collected the same set of core Paul Coverdell National Acute Stroke Registry data elements using either retrospective chart abstraction (Georgia, Ohio) or, at the 2 prospective sites (Massachusetts, Michigan), a combination of concurrent data collection and chart abstraction. Findings from audits designed to assess case ascertainment12 and data reliability13 have been published.
Information on the hospital’s capacity to provide acute stroke care was collected using a survey that included items identified by the Brain Attack Coalition.14 The 12-item survey was completed by a representative of the hospital’s stroke service and included information on availability of an acute stroke team, written treatment guidelines, intravenous thrombolysis (intravenous recombinant tissue plasminogen activator) treatment, stroke neurologists, imaging capabilities, in-hospital rehabilitation services, stroke case manager/specialist, and previous involvement in stroke quality-improvement or databanks. Information was also collected on bed size, annual stroke admissions, urban vs nonurban location, and teaching status.
Quality of Care Definitions
The following 8 performance measures selected by the Paul Coverdell National Acute Stroke Registry15 were used as quality indicators: (1) intravenous recombinant tissue plasminogen activator; (2) antithrombotic medication within 24 hours; (3) deep vein thrombosis prophylaxis; (4) dysphagia screening; (5) lipid testing; (6) discharge on antithrombotics; (7) discharge on anticoagulation; and (8) smoking cessation counseling. All measures excluded subjects who died in the hospital, who were discharged to hospice, or who left against medical advice. Cases of terminal disease and comfort measures only were also excluded from the intravenous recombinant tissue plasminogen activator, discharge antithrombotics, and discharge anticoagulation measures. Detailed inclusion criteria are listed in Table 3.
We calculated a summary composite measure of care, also referred to as an opportunity-based score,16 defined as the total number of interventions performed in each subject divided by the total number of interventions for which the subject was eligible (range, 1–8). Aggregated at the hospital level, it provides a summary of the proportion of care opportunities fulfilled by each hospital.
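As a concrete illustration, the opportunity-based score reduces to simple counting. The sketch below (Python, with a hypothetical record layout; not the registry's actual code) computes a patient-level score and its hospital-level aggregate:

```python
def opportunity_score(measures):
    """Composite opportunity score for one patient: the number of
    interventions performed divided by the number the patient was
    eligible for. `measures` is a list of (eligible, performed)
    flags, one entry per performance measure (8 in this registry)."""
    eligible = sum(1 for e, _ in measures if e)
    performed = sum(1 for e, p in measures if e and p)
    if eligible == 0:
        return None  # no care opportunities; no score is defined
    return performed / eligible

def hospital_score(patient_scores):
    """Hospital-level composite: mean of the patient-level scores."""
    scores = [s for s in patient_scores if s is not None]
    return sum(scores) / len(scores)

# A patient eligible for 5 of the 8 measures who received 3 of them:
patient = [(True, True), (True, True), (True, False), (True, True),
           (True, False), (False, False), (False, False), (False, False)]
score = opportunity_score(patient)  # 3/5 = 0.6
```

The hospital-level mean in this sketch weights every patient equally, which matches the description of aggregating individual proportions to a hospital mean.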
Of 6860 original subjects, we excluded 595 (8.7%) who died in-hospital, 187 (2.7%) with missing data on age or gender, 166 (2.4%) hospital transfers, and 210 (3.0%) for other miscellaneous reasons. Of the remaining 5702 subjects, we excluded 805 (14%) with hemorrhagic or undefined stroke, leaving 4897 acute ischemic stroke, TIA, or ischemic stroke of uncertain duration admissions from 96 hospitals.
Statistical Analysis
The composite score (expressed as a proportion) was generated for each individual patient and then aggregated at the hospital level to create mean scores. Plots were generated to confirm that the composite score was approximately normally distributed. The relationships between patient-level and hospital-level variables and the composite score were examined with multilevel linear regression7,9 using Proc Mixed in SAS version 9.1 (SAS Institute Inc, Cary, NC).17 Hospital (n=96) and state (ie, Georgia, Massachusetts, Michigan, Ohio) were tested as random effects. All hospital-level variables, ie, bed size (quartiles), urban vs rural location, teaching status, and the 12 hospital capacity survey questions, were considered in the multilevel analysis as fixed effects. Patient-level factors were chosen based on variables identified as important in a previous analysis of these data1 and included age, gender, race (black, white/other), ambulatory status at discharge (documented vs not), medical history (stroke, TIA, coronary heart disease, and atrial fibrillation), and whether a neurologist was involved in care.
The multilevel analysis was implemented in a stepwise manner.8,17 First, an unconditional means model was used to determine the significance of the 2 random-effect terms (hospital and state). The unconditional means model also provided an estimate of the intraclass correlation coefficient (ρ), which describes the proportion of the total variance that is attributable to clustering within hospital.17 Second, using a backwards elimination approach, all hospital-level variables were added to the unconditional means model as fixed effects, and nonsignificant variables were removed sequentially until only significant (ie, P<0.05) variables remained. Third, patient-level variables were added, and, using the same backwards elimination approach, a final patient-level and hospital-level fixed-effect model was identified. Higher-order polynomial terms for age were explored after age was centered at its mean (69.7 years). Finally, to determine whether there were significant cross-level interaction effects between patient-level and hospital-level variables, we tested patient-level variables as random effects.17
To confirm that the final linear model was appropriate, we examined the distribution of residuals to check that they were approximately normally distributed. We also confirmed that the predicted values from the final model were approximately normally distributed and that none was outside the range of 0.0 to 1.0. We also subjected the final model to the following sensitivity analyses: (1) to explore the impact of hospitals with small numbers of cases, we eliminated hospitals with <10 observations; (2) to determine whether patients with TIA were unduly influencing our conclusions, we excluded them and reran the models; and (3) we examined alternative modeling strategies for the composite score, including a Poisson model (in which the number of interventions received is modeled relative to the number the subject was eligible for) and an arcsine-root transformation model, which is a variance-stabilizing transformation designed to “normalize” the outcome variable.
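For reference, the arcsine-root transformation used in sensitivity analysis (3) is a standard variance-stabilizing transform for proportions; a minimal sketch (Python, standard library only):

```python
import math

def arcsine_root(p):
    """Variance-stabilizing transform for a proportion p in [0, 1];
    maps [0, 1] onto [0, pi/2] and spreads out values near 0 and 1,
    where the variance of a raw proportion shrinks."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a proportion in [0, 1]")
    return math.asin(math.sqrt(p))

# e.g. a composite score of 0.63 transforms to about 0.917 radians
transformed = arcsine_root(0.63)
```

Fitting the multilevel model on the transformed scores and checking that the coefficients' signs and significance agree with the untransformed model is the robustness check the text describes.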
Results
The hospital-level characteristics are shown in Table 1. The distribution of bed size varied by state, with Georgia having more small hospitals (<145 beds) and Massachusetts having more large hospitals (>500 beds). The majority of hospitals reported having access to written treatment guidelines, acute stroke teams, and neurologists. Patient-level characteristics are shown in Table 2. The mean age was 69.7 years. The racial distribution varied across the prototypes, with Georgia having the highest proportion of blacks (37%) and Massachusetts having the lowest proportion (9%).
The overall patient-level mean composite score was 0.68 (SD=0.12), indicating that, on average, patients received 68% of the measures they were eligible for. At the hospital level, the mean composite score was 0.63 and varied from 0.57 (Georgia) to 0.70 (Massachusetts). The mean hospital scores were also approximately normally distributed (Figure 1). The compliance data for the 8 performance measures are shown in Table 3. Overall, compliance was lowest for smoking cessation counseling (26%), recombinant tissue plasminogen activator treatment (27%), and lipid testing (41%), and it was highest for discharge antithrombotics (99%).
In a 3-level unconditional means model that included both state and hospital as random effects, the hospital term was statistically significant (P<0.0001), whereas the state term was not (P=0.14) and therefore was eliminated from further consideration. The intraclass correlation (ρ) from a 2-level unconditional means model (including only hospital as a random effect) was 0.18 (ie, 0.00743/0.04072). These data indicate that there was only a moderate degree of clustering within hospital and that, after taking into account hospital-level random effects, considerable unexplained variability in composite scores remains.
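Worked out explicitly, the intraclass correlation reported above is the ratio of the hospital-level variance component to the total variance. The sketch below reproduces the reported arithmetic (assuming, as the quoted ratio implies, that 0.04072 is the total variance from the unconditional means model):

```python
def intraclass_correlation(var_hospital, var_total):
    """ICC (rho): the proportion of total variance in the composite
    score attributable to clustering within hospital."""
    return var_hospital / var_total

# Variance components from the 2-level unconditional means model:
rho = intraclass_correlation(0.00743, 0.04072)  # ~0.18
```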
Results of the final multilevel model are shown in Table 4. Bed size was the only hospital-level variable that was significantly associated with composite score (P<0.001). Compared to the largest hospitals (>500 beds), the proportion of care opportunities fulfilled by the smallest hospitals (<145 beds) was 11% lower (P<0.0001). Age had a significant curvilinear relationship with composite score. A plot of the mean composite scores by 10-year age intervals illustrates that the mean composite care scores increased up until approximately age 60 and then declined with increasing age (Figure 2). The composite score was 4.8% lower (P<0.001) among patients whose ambulatory status was not documented compared to those whose ambulatory status was documented. Neurologist involvement in the care of patients was a significant determinant of quality; compared to patients who did not have a neurologist involved, the proportion of care opportunities fulfilled was 4.9% higher (P<0.0001) in those who did.
None of the patient-level variables had statistically significant random effects. The random effect for hospital remained significant in the final multilevel model, indicating that after accounting for patient characteristics and hospital characteristics there was still considerable unexplained variability in hospital composite scores. Compared to the unconditional means model, the variance associated with the hospital-level composite scores was reduced by 30% in the final model (ie, [0.00743−0.0052]/0.00743), almost all of which was attributable to the addition of bed size. In contrast to the findings at the hospital level, the variance associated with the patient-level composite scores was reduced by only 1.8% (ie, [0.04072−0.040]/0.04072) by the inclusion of the patient-level variables in the final model. The correlation between the observed composite scores and the model’s predicted scores was 0.232 (P<0.001), indicating that the final model had only modest predictive power.
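The variance reductions quoted above follow the usual proportional-reduction-in-variance formula (the multilevel analogue of R²); a minimal sketch reproducing both figures:

```python
def variance_explained(var_null, var_final):
    """Proportional reduction in a variance component, comparing the
    unconditional means model to the final fitted model."""
    return (var_null - var_final) / var_null

# Hospital level: (0.00743 - 0.0052) / 0.00743, ie, ~30%
hospital_pct = 100 * variance_explained(0.00743, 0.0052)
# Patient level: (0.04072 - 0.040) / 0.04072, ie, ~1.8%
patient_pct = 100 * variance_explained(0.04072, 0.040)
```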
With respect to the sensitivity analyses, after excluding 17 hospitals with <10 observations (n=76 subjects), the model results were essentially identical. Similarly, after excluding the 1182 TIA cases, the model coefficients were similar, although their statistical significances were attenuated given the smaller sample size. The magnitude and statistical significance of the model coefficients were also similar in the Poisson and arcsine-root transformation models (data not shown), indicating that our conclusions were robust when alternative statistical approaches were used.
Discussion
In this multilevel analysis of the quality of stroke care in 4 prototype stroke registries, we found that only approximately two-thirds of interventions that patients were eligible for were documented as having been provided. We found that mean composite scores varied substantially across hospitals, and much of this variation remained even after significant hospital-level and patient-level factors were identified. Bed size was the only hospital-level variable that was a significant determinant of quality of care. Hospital size has been shown to be an important predictor of quality of care for many conditions, including stroke.18 The lower quality of care in small hospitals is probably a reflection of more limited resources, such as physician and staff expertise and specialty services. Bed size, which is strongly correlated with stroke volume, has been shown to be an important predictor of short-term mortality in some,19 although not all, stroke studies.20 Our finding that smaller hospitals provide lower quality of care provides evidence to support the hypothesis that part of the widely reported “volume–outcome relationship”19 is attributable to the delivery of poorer care in smaller hospitals. Hospital size is also positively correlated with several other hospital characteristics important to stroke care, including teaching status, presence of stroke units or stroke teams, quality-improvement infrastructure, stroke pathways, and standing orders.11 Although many of these variables were associated with quality of care in unadjusted analyses, we did not find any of them to be significant determinants independent of bed size. The lack of benefit of standing orders is contrary to some,11,21 but not all,22 previous reports. 
It is possible that the lack of significant findings associated with the Brain Attack Coalition items might be because more detailed information is required to understand the particular context and application of these factors within each hospital.23 Although previous studies have found the Brain Attack Coalition items to be highly correlated and to have limited variability between hospitals,11,23 we did not find this to be the case. Most of the characteristics were present in ≈60% to 80% of hospitals, which probably reflects the fact that the prototypes included a broader range of hospitals. Overall, our results suggest that further work is required to identify the organizational, structural, and process measures that could explain the lower-quality care observed in smaller hospitals.
In this analysis, compared to patients in their early 60s, both younger and older patients had lower quality of care. The finding that older age is associated with lower quality of stroke care is not uncommon;24 however, we are not aware of previous studies showing that younger patients also have lower-quality care. The reasons for this are unknown, although we speculate that it might be because younger patients have fewer risk factors and comorbidities, or that their risk of stroke recurrence is falsely underestimated by hospitals and physicians. We found that the involvement of a neurologist was associated with higher-quality care. There have been several previous reports evaluating the impact of neurologists on the quality and efficiency of stroke care.23,25,26 A study of stroke outcomes in academic medical centers found that in-hospital mortality was lower in those centers that had a vascular neurologist.23 A study of Medicare recipients found that compared to other specialists, neurologists provided better but more expensive care to hospitalized stroke patients—neurologists ordered more MRI scans and were more likely to prescribe warfarin and to discharge patients to inpatient rehabilitation, but their patients had lower mortality 90 days after discharge.25 Similar findings of increased testing but better outcomes for stroke patients treated by neurologists were also found in a Veterans’ Administration study.26
A major rationale for using the multilevel approach in the assessment of health care quality is the fact that many policy decisions affecting the delivery of hospital-based care require an understanding of how hospital-level factors influence care and how they interact with patient-level variables. In studies that include a large number of hospitals, only multilevel modeling has the ability to determine if factors operating at the patient level are modified by group or contextual factors operating at the hospital level.5,6 Other advantages of the multilevel approach include the ability to simultaneously examine the relative contributions of individual-level and group-level variables on the outcomes of interest and to account for the nonindependence of observations (ie, clustering) within groups.6 To date, many previous analyses of stroke quality of care data have focused on patient-level factors using the generalized estimating equation (GEE) approach to account for clustering within hospitals while either ignoring or including hospital-level characteristics as fixed effects.18,23,27 The GEE model provides a different interpretation from the multilevel model; the GEE model averages across random effects, thereby providing a marginal or population average estimate, whereas the multilevel model provides estimates that are conditional on the group-level random effects.5,6 Random-effect models also make an assumption that the group-level units (ie, hospitals) represent a random sample of a larger underlying population to which the model results can be applied.5 GEE models may be more appropriate when the group random effect is known to be small or the focus is only on patient-level factors.6 To illustrate this latter point, we reran the analyses using a GEE model and found that the results for the patient-level variables were essentially identical to those of the multilevel analysis (data not shown). 
However, what the GEE analysis is unable to do is to determine if significant cross-level interaction effects are present between patient-level and hospital-level variables (none was significant in this analysis), or to quantify the magnitude of the hospital-level variation that remains in the final multilevel model.
By partitioning the variance components, the multilevel approach illustrates the relative contributions of hospital-level and patient-level factors to the variation in quality of care. In this analysis, we found that 18% of the total variance occurred at the hospital level and that almost one-third of this was explained by bed size. The final model found that the majority of variability in quality of care occurred at the patient level (ie, 82%), but that the patient-level variables in the model (ie, age, race, documentation of ambulatory status, and neurologist) only accounted for a tiny fraction of this (ie, 1.8%). These findings illustrate that there remains a large amount of unexplained variability in quality of care, the origins of which should be the focus of future studies.
There are several potential limitations associated with the data used in this report. First, although all 4 prototypes shared some common design features, their sampling designs were different.1 Although variability in the size and demographic make-up of the states justified an adaptable design, the representativeness of these prototype designs remains unknown. Second, as previously mentioned, information on the completeness and reliability of the data in this registry is limited to 2 previous studies.12,13 Clearly, data accuracy is limited by the quality and completeness of the medical records, and despite the use of standardized data definitions, variations in their interpretation undoubtedly occurred across hospitals and prototypes. Third, the measures used in this study were not formally recognized as performance measures until after these data were collected,28 and so these results represent a baseline measure of hospital performance before the instigation of any formal quality-improvement efforts. Fourth, TIA cases were limited to only those who had signs and symptoms on presentation to the hospital and therefore are not representative of all TIA cases. Fifth, because data on stroke severity, stroke location, and ischemic stroke subtype were not available, their impact on the results cannot be determined. Sixth, the use of a composite opportunity score to measure quality of care has both inherent strengths and limitations (as reviewed by Peterson et al16), and our understanding of how such measures relate to patient-oriented outcomes is limited but represents an important area of future research.28 Limitations associated with the collection of hospital-level variables have already been mentioned.
In summary, this study has shown the value of using a multilevel approach for the analysis of hospital-based stroke registry data when the focus is on understanding the relative contributions of patient-level and hospital-level factors. Although the majority of variance in quality of care occurred at the patient level, the model was only able to explain a small proportion of this. Involvement of a neurologist and data documentation were the only potentially modifiable patient-level factors that predicted quality of care. Hospital-level variability accounted for a minority of the total variation in care (18%), but bed size was found to explain almost one-third of this variation. Future analyses of stroke registries should aim to include a larger number of hospitals and stroke admissions, to use more consistent sampling designs, and to include more detailed information on both hospital-level and patient-level variables to explain a greater proportion of the variability in care.
Acknowledgments
The authors acknowledge the staff from the following hospitals who participated in the registries. In Georgia: The Paul Coverdell Georgia Stroke Registry Pilot Prototype thanks the hospitals and their staff who agreed to participate in the prototype on a confidential basis. In Massachusetts: Baystate Medical Center, Springfield; Berkshire Medical Center, Pittsfield; Beth Israel Deaconess Medical Center, Boston; Boston Medical Center, Boston; Brigham & Women’s Hospital, Boston; Caritas Carney Hospital, Dorchester; Faulkner Hospital, Jamaica Plain; Lahey Clinic Medical Center, Burlington; Martha’s Vineyard Hospital, Oak Bluffs; St. Elizabeth’s Medical Center, Brighton; and Tufts-New England Medical Center, Boston. In Michigan: Spectrum Health Systems, Grand Rapids; St. Joseph Mercy Hospital, Ann Arbor; University of Michigan Hospital, Ann Arbor; Borgess Medical Center, Kalamazoo; Sparrow Health Systems, Lansing; Ingham Regional Medical Center, Lansing; Detroit Receiving Hospital; Henry Ford Wyandotte Hospital; St. Joseph Mercy of Macomb; Northern Michigan Regional Health System, Petoskey; St. Mary’s Hospital, Saginaw; Bronson Methodist Hospital, Kalamazoo; Harper University Hospital, Detroit; Alpena General Hospital; and St. Joseph Health Systems, Tawas. In Northeast Ohio regional sites (Cleveland area): Cleveland Clinic Health System (Cleveland Clinic Foundation, Euclid Hospital, Hillcrest Hospital, Huron Hospital, South Pointe Hospital, Fairview Hospital; Lakewood Hospital; Lutheran Hospital; Marymount Hospital); MetroHealth Medical Center; Southwest General Health Center; University Hospitals of Cleveland; and University Hospitals Health System Geauga Regional Hospital. 
In Southwest Ohio regional sites (Greater Cincinnati area): Deaconess Hospital; Christ Hospital; Jewish Hospital; University Hospital; The Mercy Health Partners (Mercy Hospital Anderson, Mercy Hospital Clermont, Mercy Hospital Fairfield, Mercy Hospital Mount Airy, and Mercy Hospital Western Hills); Tri-Health (Bethesda North, Good Samaritan Hospital); and Veterans Affairs Medical Center. In other sites in Ohio: Barnesville Hospital Association, Barnesville; Cuyahoga Falls General Hospital, Cuyahoga Falls; Fairfield Medical Center, Lancaster; Genesis Healthcare System, Zanesville; Humility of Mary Health Partners, Youngstown, Warren, Boardman; Joint Township District Memorial Hospital–St. Mary’s; MedCentral Health System, Mansfield; Ohio State University Medical Center, Columbus; Pomerene Hospital, Millersburg; and Riverside Methodist Hospital, Columbus.
Sources of Funding
The Paul Coverdell National Acute Stroke Registry was funded by a cooperative agreement from the U.S. Centers for Disease Control and Prevention (CDC).
- Received July 30, 2010.
- Accepted August 12, 2010.
References
Reeves MJ, Arora S, Broderick JP, Frankel M, Heinrich JP, Hickenbottom S, Karp H, LaBresh KA, Malarcher A, Mensah G, Moomaw CJ, Schwamm L, Weiss P, Paul Coverdell Prototype Registries Writing Group. Acute stroke care in the US: results from 4 pilot prototypes of the Paul Coverdell National Acute Stroke Registry. Stroke. 2005; 36: 1232–1240.
Krumholz HM, Brindis RG, Brush JE, Cohen DJ, Epstein AJ, Furie K, Howard G, Peterson ED, Rathore SS, Smith SC Jr, Spertus JA, Wang Y, Normand SL, American Heart Association, Quality of Care and Outcome Research Interdisciplinary Writing Group, Council on Epidemiology and Prevention, Stroke Council, American College of Cardiology Foundation. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006; 113: 456–462.
Raudenbush SW, Bryk AS. Hierarchical linear models: applications and data analysis methods. Newbury Park, CA: Sage Publications; 1992.
Seltzer MH. Studying variation in program success: a multilevel modeling approach. Evaluation Review. 1994; 18: 342–361.
Snijders TAB, Bosker RJ. Multilevel analysis: an introduction to basic and advanced multilevel modeling. Thousand Oaks, CA: Sage Publications; 1999.
Gupta M, Chang WC, Van de Werf F, Granger CB, Midodzi W, Barbash G, Pehrson K, Oto A, Toutouzas P, Jansky P, Armstrong PW, ASSENT II Investigators. International differences in in-hospital revascularization and outcomes following acute myocardial infarction: a multilevel analysis of patients in ASSENT-2. Eur Heart J. 2003; 24: 1640–1650.
Hinchey JA, Shephard T, Tonn ST, Ruthazer R, Selker HP, Kent DM. Benchmarks and determinants of adherence to stroke performance measures. Stroke. 2008; 39: 1619–1620.
Centers for Disease Control. The Paul Coverdell National Acute Stroke Registry. March 13, 2008. http://www.cdc.gov/DHDSP/stroke_registry.htm. Accessed February 12, 2010.
Peterson ED, Delong ER, Masoudi FA, O'Brien SM, Peterson PN, Rumsfeld JS, Shahian DM, Shaw RE, ACCF/AHA Task Force on Performance Measures, Goff DC Jr, Grady K, Green LA, Jenkins KJ, Loth A, Radford MJ. ACCF/AHA 2010 Position Statement on Composite Measures for Healthcare Performance Assessment: A Report of the American College of Cardiology Foundation/American Heart Association Task Force on Performance Measures (Writing Committee to develop a position statement on composite measures). Circulation. 2010; 121: 1780–1791.
Singer JD. Using SAS Proc Mixed to fit multilevel models, hierarchical models, and individual growth models. J Educat Behavior Stat. 1998; 24: 323–355.
Schwamm LH, Fonarow GC, Reeves MJ, Pan W, Frankel MR, Smith EE, Ellrodt G, Cannon CP, Liang L, Peterson E, Labresh KA. Get with the Guidelines-Stroke is associated with sustained improvement in care for patients hospitalized with acute stroke or transient ischemic attack. Circulation. 2009; 119: 107–115.
Saposnik G, Baibergenova A, O'Donnell M, Hill MD, Kapral MK, Hachinski V, Stroke Outcome Research Canada (SORCan) Working Group. Hospital volume and stroke outcome: does it matter? Neurology. 2007; 69: 1142–1151.
Heuschmann PU, Kolominsky-Rabas PL, Misselwitz B, Hermanek P, Leffmann C, Janzen RW, Rother J, Buecker-Nott HJ, Berger K, German Stroke Registers Study Group. Predictors of in-hospital mortality and attributable risks of death after ischemic stroke: the German Stroke Registers Study Group. Arch Intern Med. 2004; 164: 1761–1768.
California Acute Stroke Pilot Registry Investigators. The impact of standardized stroke orders on adherence to best practices. Neurology. 2005; 65: 360–365.
Kwan J, Sandercock P. In-hospital care pathways for stroke - an updated systematic review. Stroke. 2005; 36: 1348–1349.
Gillum LA, Johnston SC. Characteristics of academic medical centers and ischemic stroke outcomes. Stroke. 2001; 32: 2137–2142.
Saposnik G, Black SE, Hakim A, Fang J, Tu JV, Kapral MK, Investigators of the Registry of the Canadian Stroke Network (RCSN), Stroke Outcomes Research Canada (SORCan) Working Group. Age disparities in stroke quality of care and delivery of health services. Stroke. 2009; 40: 3328–3335.
Mitchell JB, Ballard DJ, Whisnant JP, Ammering CJ, Samsa GP, Matchar DB. What role do neurologists play in determining the costs and outcomes of stroke patients? Stroke. 1996; 27: 1937–1943.
Goldstein LB, Matchar DB, Hoff-Lindquist J, Samsa GP, Horner RD. VA Stroke Study: neurologist care is associated with increased testing but improved outcomes. Neurology. 2003; 61: 792–796.
Mullard AJ, Reeves MJ, Jacobs BS, Kothari RU, Birbeck GL, Maddox K, Stoeckle-Roberts S, Wehner S, Paul Coverdell National Acute Stroke Registry Michigan Prototype Investigators. Lipid testing and lipid-lowering therapy in hospitalized ischemic stroke and transient ischemic attack patients: results from a statewide stroke registry. Stroke. 2006; 37: 44–49.
Reeves MJ, Parker C, Fonarow GC, Smith EE, Schwamm LH. Development of stroke performance measures: definitions, methods, and current measures. Stroke. 2010; 41: 1573–1578.