

Reverse engineering the brain computations involved in speech production and perception

Nima Mesgarani (U of Maryland/U of California, San Francisco)

CSE 520 Colloquium Joint with UW Electrical Engineering

Thursday, February 14, 2013, 3:30pm

EEB-105

Abstract

NOTE: No live broadcast, on-demand viewing, or future UWTV taping! This talk will be taped for internal use only.

The brain empowers humans and other animals with remarkable abilities to sense and perceive their acoustic environment, even in highly degraded conditions. These tasks, seemingly trivial for humans, have proven extremely difficult to model and implement in machines. One crucial limiting factor has been the need for deep interaction between two very different disciplines: neuroscience and computer engineering. In this talk, I will present results of an interdisciplinary research effort to address the following fundamental questions: 1) What computation is performed in the brain when we listen to complex sounds? 2) How can this computation be modeled and implemented in computational systems? 3) How can one build an interface to connect brain signals to machines? I will present results from recent invasive neural recordings in human auditory cortex that show a distributed representation of speech in auditory cortical areas. This representation remains unchanged even when an interfering speaker is added, as if the second voice were filtered out by the brain.

In addition, I will show how this knowledge has been successfully incorporated into novel automatic speech processing applications, which have been adopted by DARPA and other agencies for their superior performance.

Finally, I will demonstrate how speech can be read directly from the brain, which could eventually allow people who have lost the ability to speak to communicate. This integrated research approach leads to a better scientific understanding of the brain, innovative computational algorithms, and a new generation of brain-machine interfaces.