Response to Letter by Iosa et al Regarding Article, “Reliability and Validity of Bilateral Ankle Accelerometer Algorithms for Activity Recognition and Walking Speed After Stroke”
Thanks to Iosa et al for their comments on the application of sensors to fine-grained measurements of walking and exercise.1 I agree that the Pearson correlation coefficient is not necessarily the best method for examining agreement. Our statistician also examined the intraclass correlation coefficient, but it was not reported. The Pearson correlation demonstrated a strong linear association between walking speeds measured by a stopwatch-timed walk and those estimated by a machine-learning algorithm, over distances that were initially known, then not known, to the algorithm. The Pearson r was used in conjunction with other analyses of agreement, namely the mean absolute deviation, bias, and error SD, which were reported. The bias calculation shows that, overall, the 2 measures are similar in average magnitude, and the reported mean absolute deviation and error SD demonstrate the typical range of the observed differences. Iosa et al partially repeat these analyses with a Bland-Altman analysis, and their results with respect to variability are similar to what our study described. Thank you for 2 supporting additions: that the estimated regression coefficient is close to 1 (0.991) and that the observed differences are not correlated with the true values.
The allowable word count for the article led us to drop a discussion of your point about possible systematic under- and overestimations of walking speed by the algorithm. Your observation is correct. I was concerned that this might be occurring as the data were being collected, but I could not assess possible deviations until all subjects had been studied. Two procedural issues account for the problem noted. For the short-distance walk that served as a training tool for the Bayesian activity recognition and walking-speed estimations, the study introduced a slight error, because subjects often walked 1 to 2 steps beyond the finish line before stopping. The stopwatch stopped timing when the lead foot crossed the line, but the algorithm included the extra steps, which could have led to underestimating the speed. In retrospect, the stopwatch used was also a smartphone application that may not have been as precise as a dedicated stopwatch; a subtle timer delay sometimes occurred upon touching the screen's start button after telling the subject to begin the walk. These procedures were corrected in subsequent sensor reliability and validity studies of patients with other neurological diseases that affect mobility, which eliminated the deviations.
Regarding correlations between stork and birth data, which inherently pose a different issue than comparing 2 ways to assess walking speed, the Matthews moral may be that the ecological fallacy must be recognized before sending data to the statistician. Otherwise, the work of clinical scientists will move no closer to the truth than the words of politicians during a campaign for office. Hopefully, continued application of inexpensive sensors to monitor the types, quantity, and quality of activities in the community, enabling more ecologically sound outcome measures, will bring us closer to the truth about the real utility of our mobility and exercise interventions for patients.
Bruce H. Dobkin, MD
Department of Neurology
Geffen/University of California Los Angeles School of Medicine
Los Angeles, CA
© 2011 American Heart Association, Inc.