Title: Multimodal machine learning techniques for naturalistic unlabelled long-term neural, audio and video recordings

Advisors: Rajesh Rao and Bingni Brunton

Supervisory Committee: Rajesh Rao (Co-Chair), Bingni Brunton (Co-Chair), Katherine Steele (GSR, ME), and Ali Farhadi

Abstract

In today's data-rich world, the need to better capture and store vast amounts of everyday data is spurring the development of many new techniques. However, analysis methods are still often trained and tested only on small sets of well-curated, labelled samples. To truly harness the potential of the huge amounts of real-world data being recorded every second, we must develop techniques that not only interpret signals under lab-like conditions but also remain robust in natural circumstances. Data recorded from and for bioelectronic technologies, such as those used to build interfaces between brains and machines (BCIs), are especially noisy. My research develops and integrates multimodal machine learning techniques to tackle these large, noisy and unlabelled recordings. In my presentation, I will explore the connections between the fields of computer vision, speech recognition, machine learning and neural decoding. I will also summarize my previous work applying deep learning to a long-term, naturalistic multimodal dataset of neural, video and audio recordings to predict future human movements. The proposed work will both regress and detect future movements, predict natural speech, and attempt to map semantic associations in the human brain. In addition, I propose to share this rare dataset with the wider scientific community.

Place: CSE 678
When: Wednesday, October 4, 2017, 11:00 AM