State-of-the-art automatic speech recognition (ASR) models like Whisper perform poorly on atypical speech, such as that produced by individuals with dysarthria. Past work on atypical speech has mostly investigated fully personalized (or idiosyncratic) models, but modeling strategies that can both generalize and handle idiosyncrasy could be more effective for capturing atypical speech. To investigate this, we compare four strategies: (a) normative models trained on typical speech (no personalization), (b) idiosyncratic models completely personalized to individuals, (c) dysarthric-normative models trained on other dysarthric speakers, and (d) dysarthric-idiosyncratic models, which combine strategies by first modeling normative patterns before adapting to individual speech. We find that the dysarthric-idiosyncratic model outperforms the idiosyncratic approach while requiring less than half as much personalized data (36.43 WER with 128 training examples vs. 36.99 with 256). Further, we found that tuning the speech encoder alone (as opposed to the LM decoder) yielded the best results, reducing word error rate from 71% to 32% on average. Our findings highlight the value of leveraging both normative (cross-speaker) and idiosyncratic (speaker-specific) patterns to improve ASR for underrepresented speech populations.
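The abstract reports results as word error rate (WER). For readers unfamiliar with the metric, a minimal sketch of the standard computation (word-level Levenshtein distance between hypothesis and reference, divided by reference length); the function name and interface here are illustrative, not from the paper:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Rolling-row Levenshtein DP over words: d[j] holds the distance
    # between the first i reference words and the first j hypothesis words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # d[i-1][0]
        d[0] = i             # deleting all i reference words
        for j, h in enumerate(hyp, 1):
            cur = d[j]       # save d[i-1][j] before overwriting
            d[j] = min(d[j] + 1,        # deletion
                       d[j - 1] + 1,    # insertion
                       prev + (r != h)) # substitution (or match)
            prev = cur
    return d[len(hyp)] / len(ref)
```

For example, `wer("the cat sat on the mat", "the cat sat on mat")` counts one deletion over six reference words. Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why heavily degraded ASR output on dysarthric speech can show error rates near or above 100%.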