In 2006, our group became one of the first to demonstrate control of a humanoid robot using a non-invasive brain-computer interface (BCI). The system consists of a robot, an electrode cap for sensing brainwaves, and a graphical user interface for controlling the robot remotely. Our original research demonstrated that the BCI can be used to command a HOAP-2 humanoid robot to select and fetch desired objects from remote locations. We have since proposed a framework for adaptive hierarchical brain-computer interfacing that allows the user to teach the robot new behaviors on the fly.
More recently, we have begun to explore probabilistic methods for co-adaptive BCIs, in which the BCI and the user interact cooperatively to solve a given task. Because the approach is general, it applies to a wide variety of noisy control problems as well as to many types of BCI applications.
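The probabilistic flavor of such co-adaptive control can be illustrated with a toy Bayesian intent-inference loop. The sketch below is purely illustrative, not the group's actual model: it assumes a small discrete set of commands and a simple uniform-noise decoder, and shows how repeated noisy observations let the system grow confident about the user's intent before the robot acts.

```python
import random

def update_belief(belief, observation, noise=0.2):
    """One Bayesian update of the belief over the user's intended command.

    belief: dict mapping each candidate command to its probability.
    observation: the decoded command, assumed (for this toy model) to match
    the true intent with probability 1 - noise, and to be any other command
    uniformly at random otherwise.
    """
    n = len(belief)
    posterior = {}
    for command, prior in belief.items():
        likelihood = (1 - noise) if command == observation else noise / (n - 1)
        posterior[command] = prior * likelihood
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}

# Simulate noisy decoded commands while the user's true intent is "fetch".
random.seed(0)
commands = ["fetch", "navigate", "idle"]
belief = {c: 1.0 / len(commands) for c in commands}
true_intent = "fetch"
for _ in range(10):
    if random.random() > 0.2:
        obs = true_intent
    else:
        obs = random.choice([c for c in commands if c != true_intent])
    belief = update_belief(belief, obs)

# In a co-adaptive system, the robot would act autonomously once this
# belief crosses a confidence threshold.
print(max(belief, key=belief.get))  # prints "fetch"
```

Here the interface, rather than executing each noisy command verbatim, accumulates evidence about intent; this is one simple way cooperative behavior between user and BCI can be framed probabilistically.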
Our research also focuses on understanding the brain through computational models and simulations. The primary goal of this group is to discover the computational principles underlying the brain's remarkable ability to learn, process, and store information. How does the brain learn efficient representations of objects and events occurring in the natural environment? What are the algorithms that allow useful sensorimotor behaviors to be learned? What computational mechanisms allow the brain to adapt to changing circumstances and remain fault-tolerant and robust?