Machine-learning approaches to BCI in Tübingen
Our emphasis is on machine-learning approaches to BCI, i.e. using modern adaptive algorithms to identify automatically which features of brain signals are most informative. BCI publications frequently report good performance from such techniques after only a two- or three-hour training session with a healthy subject, but it is unusual to hear of patients achieving results so quickly. The most successful approaches to patient BCI are still those in which the user, rather than the computer, has to do most of the learning, and this usually takes many weeks. Our long-term goal is therefore to bring the benefits of a machine-learning approach to patients, and we cooperate closely with Professor Birbaumer's department in Tübingen in order to do this. We are working continuously to develop visualization techniques for the fast and flexible screening of data.

In addition to applying these techniques to patient data, we also perform experiments on healthy subjects in order to develop new approaches to BCI. These include:

- Motor imagery experiments in EEG and MEG (healthy subjects) and in ECoG (implanted patients at the Epileptology clinic, Bonn). We have demonstrated good performance in all three settings, and many users have been able to use a decision-tree speller to write by modulating their mu activity.
- The development of new paradigms, for example a system in which a binary decision can be expressed by shifting covert attention to one of two auditory stimulus streams. Future development of such systems will be important for patients for whom mu-based systems do not work and whose vision is poor.
- Visual speller experiments, in which we explore the psychophysical parameters of the stimulus display that lead to the best performance.

A common ingredient in all of our work is the use of reliable modern classification and regression techniques such as Support Vector Machines.
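As a concrete illustration of this kind of classifier, the following minimal sketch trains an SVM to separate two classes of synthetic "trials" (e.g. left- vs. right-hand motor imagery). The data, dimensions, and labels are all invented for illustration; this is not our experimental pipeline.

```python
# Hypothetical sketch: a two-class SVM on synthetic trial data.
# All numbers below are fabricated for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50
X = rng.standard_normal((n_trials, n_features))   # one feature vector per trial
y = rng.integers(0, 2, size=n_trials)             # class label per trial
X[y == 1, :10] += 2.0                             # inject an artificial class difference

clf = SVC(kernel="linear", C=1.0)
clf.fit(X[:160], y[:160])                         # train on the first 160 trials
accuracy = clf.score(X[160:], y[160:])            # evaluate on the remaining 40
```

In practice the feature vectors would come from preprocessed EEG, MEG, or ECoG recordings rather than random numbers, and performance would be estimated by proper cross-validation rather than a single split.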
The SVM is particularly well suited to BCI data because it is a so-called kernel method, whose computational complexity depends far more on the number of training trials than on the number of features used to describe each one. For brain-signal data, in which large numbers of trials are difficult to collect and each trial contains many thousands of sample points across dozens of channels, this is a great advantage.

Another common factor is the importance of feature-selection methods that isolate the relatively small proportion of useful information in the incoming data stream. In particular, we have shown that the technique of Recursive Feature Elimination, especially in combination with Independent Component Analysis, localizes the useful information in the brain very reliably.

Our planned future directions include the extension of the above work to multi-class and continuous-output settings, the application of MEG to patients in order to provide faster and more accurate screening of likely successful approaches, and the development of machine-learning techniques that are invariant with respect to session-to-session or even user-to-user differences.
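The Recursive Feature Elimination idea can be sketched as follows, using a linear SVM as the underlying feature ranker: train the classifier, discard the features with the smallest weights, and repeat until the desired number remains. The data and the positions of the informative features below are fabricated, and this sketch omits the ICA step used in our actual analyses.

```python
# Hypothetical sketch of Recursive Feature Elimination (RFE) with a
# linear SVM. Data and informative-feature positions are invented.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 40))        # 120 trials, 40 candidate features
y = rng.integers(0, 2, size=120)
X[y == 1, :3] += 3.0                      # only the first 3 features carry signal

# Repeatedly fit a linear SVM and eliminate the lowest-weight feature,
# until only n_features_to_select survive.
selector = RFE(SVC(kernel="linear"), n_features_to_select=3, step=1)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of the retained features
```

On real recordings the surviving features can then be mapped back to channels or independent components, which is what allows the useful information to be localized in the brain.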