Reliability of Structured Modified Rankin Scale Assessment
To the Editor:
The modified Rankin Scale (mRS) is the most prevalent stroke outcome assessment in clinical trials, yet literature describing the properties of the scale remains limited,1 so we were pleased to see 2 papers describing clinimetric assessment of the mRS in the May issue of the Journal. Saver and colleagues describe a Rankin Focused Assessment Tool (RFAT),2 whereas Bruno’s group describes a simplified mRS questionnaire.3 The proposed benefits of structured assessment are savings in interviewers’ time and reduced interobserver variability; both points merit further discussion.
The issue of time spent conducting mRS assessment is interesting. Using data from 100 video-recorded, paired mRS interviews, we performed multivariate analysis to explore whether clinical, demographic, interview, or interviewer-specific features were associated with disagreement in mRS scoring.4 The only factor significantly associated with variability in mRS scoring was interview length. Counterintuitively, it was the longer, more detailed interviews that were associated with the greatest interobserver variation. This could suggest that there is no value in lengthy discourse with the patient and that a meaningful assessment can be made fairly promptly. Alternatively, it may suggest that some patients with more complex disability are difficult to grade despite thorough assessment. With regard to actual time saved, simplified mRS questionnaire assessments last approximately 2 minutes and the RFAT 3 to 5 minutes. In our studies, median duration of unstructured mRS assessment was 4.1 minutes (SD 2.07); thus, the benefit of any time saving with these new structured assessments is debatable.
Although attractive to the busy researcher, there are potential problems with a reductionist approach to mRS assessment. A strength of the mRS as an outcome tool is its global approach to patient ability. Properly conducted, mRS interviews score patients based on perceived functioning within the context of their own lives and as such have the potential to offer a more meaningful assessment than scales that focus solely on activities of daily living. In focusing the mRS interview on concrete ability, there is a danger that this important extra information could be lost. Saver’s group seems to have recognized this in the development of the RFAT, but we would urge caution in any attempts to further “structure” mRS assessment.
Our group has experience in standard, structured, and centralized group assessment of the mRS for clinical trial use. We are prospectively collecting data and will present these results in due course. Anecdotally, we have found that with focused assessment, patients struggle to answer categorically, and interpretation of responses without qualification increases uncertainty in scoring.
In terms of reducing observer variability, the reliability scores described for the RFAT and the simplified mRS questionnaire are encouraging compared with previous estimates.5 However, as with any novel tool, validation in independent populations is required before acceptance into routine clinical practice. We note that Wilson’s original structured interview performed well in his validation cohort,6 but results were less impressive when the structured interview was used by other groups.7 To this end, we would encourage further study of the RFAT, the simplified mRS questionnaire, and other proposed assessment aids. In particular, comparison of RFAT and simplified mRS questionnaire reliability against “standard” mRS assessment would be useful. Structuring assessment is only one approach to improving mRS reliability, and perhaps the greatest benefit will be seen in combining approaches, for example, a partially structured interview with centralized committee scoring of a video-based assessment.
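Interobserver reliability in such studies is conventionally summarized with a weighted kappa statistic, which penalizes disagreements between paired raters in proportion to their distance on the ordinal 0 to 6 scale. As a rough illustration only (the paired scores below are hypothetical and not data from any cited study), a quadratic-weighted Cohen’s kappa can be sketched as:

```python
from collections import Counter

def weighted_kappa(rater_a, rater_b, n_categories=7):
    """Quadratic-weighted Cohen's kappa for paired ordinal scores.

    For the mRS, scores run 0-6, so n_categories defaults to 7.
    Kappa = 1 - (weighted observed disagreement / weighted chance disagreement).
    """
    n = len(rater_a)
    k = n_categories
    # Observed joint distribution of paired scores
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    # Marginal score frequencies for each rater (chance-expected distribution)
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement weight
            num += w * obs[i][j]
            den += w * (marg_a[i] / n) * (marg_b[j] / n)
    return 1.0 - num / den

# Hypothetical paired mRS gradings by two observers of the same 4 patients
obs_1 = [0, 1, 2, 3]
obs_2 = [0, 1, 2, 4]  # one single-point disagreement
print(round(weighted_kappa(obs_1, obs_2), 3))
```

With quadratic weights, a single one-point disagreement on an otherwise identical set of gradings still yields a kappa close to 1, which is why adjacent-category disagreements of the kind seen in longer interviews can coexist with apparently reassuring reliability statistics.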
Sources of Funding
This article was funded departmentally. We have published on use of mRS as an outcome scale and have successfully applied for grant funding from various bodies to further explore methods for improving outcome assessment in stroke.
References
Saver JL, Filip B, Hamilton S, Yanes A, Craig S, Cho M, Conwit R, Starkman S. Improving the reliability of stroke disability grading in clinical trials and clinical practice: the Rankin Focused Assessment (RFA). Stroke. 2010; 41: 992–995.
Bruno A, Shah N, Lin C, Close B, Hess DC, Davis K, Baute V, Switzer JA, Waller JL, Nichols FT. Improving modified Rankin Scale assessment with a simplified questionnaire. Stroke. 2010; 41: 1048–1050.
Quinn TJ, Dawson J, Walters M, Lees KR. Reliability of the modified Rankin Scale: a systematic review. Stroke. 2009; 40: 3393–3395.
Wilson JTL, Harendran A, Grant M, Baird T, Schulz UGR, Muir KW, Bone I. Improving the assessment of outcomes in stroke: use of a structured interview to assign grades on the modified Rankin Scale. Stroke. 2002; 33: 2243–2246.
Quinn TJ, Dawson J, Walters MR, Lees KR. Exploring the reliability of the modified Rankin Scale. Stroke. 2009; 40: 762–766.