Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that uses eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the current study, cortical electrodes recorded activity as subjects read hundreds of sentences aloud. The electrodes monitored regions of the cortex involved in speech production, and the recorded signals were decoded into intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.