In 2006, our group became one of the first to demonstrate control of a humanoid robot using a non-invasive brain-computer interface (BCI). The system consists of a robot, an electrode cap for sensing brainwaves, and a graphical user interface for controlling the robot remotely. Our original research demonstrated that the BCI could be used to command a HOAP-2 humanoid robot to select and fetch desired objects from remote locations. We have since proposed a framework for adaptive hierarchical brain-computer interfacing that allows the user to teach the robot new behaviors on the fly. These behaviors can later be invoked as high-level commands, relieving the user of tedious low-level control. We have demonstrated this approach on Hobbes, our PR2 robot, for remote manipulation tasks. Such a system can allow a severely paralyzed patient to use a robot as a proxy to perform a task without the need for physical movement.
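To illustrate the hierarchical idea, the sketch below shows how low-level commands chosen through a BCI menu could be recorded into a named behavior and later replayed as a single high-level command. It is a minimal, hypothetical sketch; the class and command names stand in for, and do not reproduce, the actual HOAP-2/PR2 control stack.

```python
# Minimal sketch of a hierarchical BCI command menu (hypothetical names):
# commands selected one at a time can be recorded into a named high-level
# behavior and later invoked as a single menu choice.

class HierarchicalBCIMenu:
    def __init__(self, primitives):
        # primitives: mapping from command name to a callable low-level action
        self.primitives = dict(primitives)
        self.learned = {}      # behavior name -> list of recorded commands
        self.recording = None  # (name, [commands]) while the user is teaching

    def start_teaching(self, name):
        """Begin recording a new high-level behavior."""
        self.recording = (name, [])

    def stop_teaching(self):
        """Store the recorded sequence as a new high-level command."""
        name, steps = self.recording
        self.learned[name] = steps
        self.recording = None

    def select(self, command):
        """Execute a command chosen via the BCI; record it if teaching."""
        if self.recording is not None:
            self.recording[1].append(command)
        self._execute(command)

    def _execute(self, command):
        # Learned behaviors expand recursively into primitives.
        if command in self.learned:
            for step in self.learned[command]:
                self._execute(step)
        else:
            self.primitives[command]()


# Example: teach "fetch_cup" once, then invoke it as one high-level command.
menu = HierarchicalBCIMenu({
    "go_to_table": lambda: print("navigating to table"),
    "grasp_cup":   lambda: print("grasping cup"),
    "return_home": lambda: print("returning to user"),
})
menu.start_teaching("fetch_cup")
for cmd in ["go_to_table", "grasp_cup", "return_home"]:
    menu.select(cmd)
menu.stop_teaching()
menu.select("fetch_cup")  # replays the whole recorded sequence
```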

More recently, we have begun to explore probabilistic methods for co-adaptive BCIs, in which the BCI and the user cooperate to solve a given task. Because the approach is general, it applies not only to a variety of BCI applications but also to a wide range of noisy control problems.
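To make the probabilistic idea concrete, the following sketch performs Bayesian inference over the user's intended goal from a stream of noisy decoded commands, and lets the robot act only once the belief is confident. The goal set, decoder accuracy, and confidence threshold are illustrative assumptions, not details of the published co-adaptive method.

```python
# Illustrative Bayesian intent inference under noisy control (assumed model,
# not the published co-adaptive BCI): maintain a belief over goals and update
# it with each noisy decoded command.
import numpy as np

def update_posterior(prior, observation, likelihood):
    """One Bayesian update: P(goal | obs) is proportional to P(obs | goal) * P(goal)."""
    posterior = likelihood[:, observation] * prior
    return posterior / posterior.sum()

goals = ["left cup", "right cup", "book"]

# likelihood[g, o]: probability of decoding command o when the user intends
# goal g, e.g. an 80%-accurate decoder that confuses goals uniformly otherwise.
accuracy = 0.8
likelihood = np.full((3, 3), (1 - accuracy) / 2)
np.fill_diagonal(likelihood, accuracy)

belief = np.ones(3) / 3           # uniform prior over goals
for decoded in [0, 0, 1, 0]:      # stream of noisy decoded commands
    belief = update_posterior(belief, decoded, likelihood)
    print(dict(zip(goals, belief.round(3))))

# The robot acts autonomously once the belief exceeds a confidence threshold;
# otherwise it keeps querying the user -- the cooperative loop.
if belief.max() > 0.9:
    print("executing:", goals[int(belief.argmax())])
```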