Automated speech recognition (ASR) systems, which use sophisticated machine-learning algorithms to convert spoken language to text, have become increasingly widespread, powering popular virtual assistants, facilitating automated closed captioning, and enabling digital dictation platforms for health care. Over the last several years, the quality of these systems has dramatically improved, due both to advances in deep learning and to the collection of large-scale datasets used to train them. There is concern, however, that these tools do not work equally well for all subgroups of the population. Here, we examine the ability of five state-of-the-art ASR systems (developed by Amazon, Apple, Google, IBM, and Microsoft) to transcribe structured interviews conducted with 42 white speakers and 73 black speakers. In total, this corpus spans five US cities and consists of 19.8 hours of audio matched on the age and gender of the speaker. We found that all five ASR systems exhibited substantial racial disparities, with an average word error rate (WER) of 0.35 for black speakers compared with 0.19 for white speakers. We trace these disparities to the underlying acoustic models used by the ASR systems, as the race gap was equally large on a subset of identical phrases spoken by black and white individuals in our corpus. We conclude by proposing strategies, such as using more diverse training datasets that include African American Vernacular English, to reduce these performance differences and ensure speech recognition technology is inclusive.

To add more detail to the average error rates discussed above, we next consider the full distribution of error rates across our populations of white and black speakers. To do so, for each snippet we first compute the average WER across the five ASRs we consider. Fig. 2 plots the distribution of this average WER across snippets, disaggregated by race. Fig. 2 also shows the complementary cumulative distribution function (CCDF): for each value of WER on the horizontal axis, it shows the proportion of snippets having an error rate at least that large. For example, more than 20% of snippets of black speakers have an error rate of at least 0.5; in contrast, fewer than 2% of snippets of white speakers are above that threshold. Thus, if one considers a WER of 0.5 to be the bar for a useful transcription, more than 10 times as many snippets of black speakers fail to meet that standard. In this sense, the racial disparities we find are even larger than indicated by the average differences in WER alone.
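To make the headline metric concrete, below is a minimal sketch of the standard word error rate computation: the word-level edit distance (substitutions, deletions, and insertions) between a reference transcript and an ASR hypothesis, divided by the number of reference words. The paper does not publish code; the `wer` function name and whitespace tokenization here are illustrative assumptions.

```python
# Minimal sketch of word error rate (WER), assuming whitespace tokenization.
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed here via word-level Levenshtein distance with dynamic programming.

def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # match or substitution
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the quick brown fox", "the quick brown socks"))  # 0.25
```

Note that WER can exceed 1.0 when the hypothesis contains many spurious insertions, which is why per-snippet values above the 0.5 usability bar are possible and meaningful.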
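The per-snippet analysis behind Fig. 2 can likewise be sketched in two steps, under stated assumptions: average each snippet's WER over the five systems, then compute the CCDF over those averages. The function names, array layout, and toy numbers below are all hypothetical, chosen only to illustrate the procedure, not taken from the paper's data.

```python
# A minimal sketch of the per-snippet analysis: average WER per snippet,
# then the complementary cumulative distribution function (CCDF).
import numpy as np

def snippet_average(wers_by_asr: list[list[float]]) -> np.ndarray:
    """Average each snippet's WER across ASR systems (rows: systems, cols: snippets)."""
    return np.mean(np.array(wers_by_asr), axis=0)

def ccdf(values: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """CCDF: for each threshold t, the share of snippets with average WER >= t."""
    return np.array([(values >= t).mean() for t in thresholds])

# Toy example: five ASR systems, four snippets (values are made up).
avg_wers = snippet_average([
    [0.10, 0.55, 0.30, 0.70],
    [0.12, 0.60, 0.25, 0.65],
    [0.08, 0.50, 0.35, 0.75],
    [0.15, 0.58, 0.28, 0.68],
    [0.11, 0.52, 0.32, 0.72],
])
# Share of snippets at or above each bar, including the 0.5 usability bar.
print(ccdf(avg_wers, np.array([0.0, 0.5])))  # [1.0, 0.5]
```

Comparing the CCDF at the 0.5 bar across the two speaker groups yields exactly the kind of contrast reported above (over 20% of black speakers' snippets versus under 2% of white speakers').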