Monday, September 17, 2012

Morning Registration
13:45 - 14:15 Klaus-Robert Müller
14:15 - 15:00 Workshop

Benjamin Blankertz
Mental State Decoding and Applications of Neurotechnology
This talk will present several novel applications of neurotechnology that exploit techniques developed in BCI research. Unlike in classical BCI research, the aim here is not to read out the intent of the user, but rather to quantify certain aspects of his or her mental state. The appeal of such neurotechnological applications is that they often have a higher sensitivity than behavioral measures, as they allow access to subconscious processes. Furthermore, neurotechnological measures are unaffected by subjective evaluation and therefore have great potential to complement conventional methods, e.g., in usability studies.

15:00 - 15:15 Coffee Break
15:15 - 16:00 Workshop

José del R. Millán
Design Principles for Neuroprosthetics
BCI holds a high, and perhaps bold, promise: human augmentation through the acquisition of new brain capabilities that will allow us to communicate and interact with our environment directly by 'thinking'. This is particularly relevant for physically disabled people, but is not limited to this population. Yet, how is it possible to fulfil this dream using a 'noisy channel' like brain signals? In this talk I will argue that, despite the limited quality of the brain signals we can monitor, truly operational brain-computer interaction is possible when the BCI is embedded in a more complex system. I will put forward four principles for designing such brain-controlled devices, which I will illustrate through working prototypes of brain-controlled robots and applications for disabled and able-bodied people alike.

16:00 - 16:45 Workshop

Dario Farina
Modulation of Cortical Excitability by Detection of Motor Intention and Artificial Afferent Feedback
Brain-computer interfaces (BCIs) have applications in function restoration, function replacement, and communication. A further potential application of BCI systems, which has been less extensively investigated, is their use for promoting plastic changes during the recovery of motor and/or sensory functions (neuromodulation). In this talk, I will focus on methods that have been developed to change cortical excitability in humans by associating motor commands with artificial afferent volleys (e.g., electrical stimulation) delivered at a precise temporal delay. In such systems, the online detection of the cortical activation associated with movement imagination (the movement-related cortical potential, MRCP) is equivalent to an asynchronous BCI system that triggers an external device. It has been shown that, using these systems, the excitability of the neural projections connecting the relevant brain areas to the target muscle is increased only when the afferent inflow arrives during the phase of highest cortical activation (measured from MRCPs). The full asynchronous BCI system with triggered electrical stimulation has also recently been tested in stroke patients, showing that it promotes plastic cortical changes in this patient population as well, with a limited number of trials. Moreover, in stroke patients, such an intervention, repeated over several days, has also been shown to improve functional recovery.
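The asynchronous triggering scheme described above can be pictured in a few lines of code. This is an illustrative sketch, not the detector used in the studies mentioned: it monitors a single (assumed low-pass filtered) EEG channel and fires a stimulation trigger when a sustained negative deflection, the hallmark of the MRCP, crosses a threshold; the window length, threshold, and refractory period are arbitrary assumptions.

```python
import numpy as np

def detect_negativity(eeg, win=50, threshold=-0.5, refractory=300):
    """Sketch of asynchronous MRCP onset detection: trigger when the
    trailing moving average of a low-pass filtered EEG channel drops
    below a negative threshold, then hold off for a refractory period
    so that one movement attempt fires one stimulus."""
    triggers, j = [], win
    while j <= len(eeg):
        if eeg[j - win:j].mean() < threshold:
            triggers.append(j)       # stimulation would be delivered here
            j += refractory          # skip ahead: no retriggering
        else:
            j += 1
    return triggers
```

In a real system the timing of the trigger relative to the peak of cortical activation is the critical quantity, since (as the abstract notes) excitability changes only occur when the afferent volley arrives during the phase of highest cortical activation.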

16:45 - 17:15 Coffee Break
17:15 - 18:15 Workshop

Christof Koch
Project MindScope: Large Scale Science for Mapping the Mind
The Allen Institute is engaged in a ten-year project to study the principles by which information is encoded, transformed and represented in the mammalian cerebral cortex and related structures. The goals of MindScope are to provide a quantitative taxonomy of cell types and their interconnections in visual cortex and associated brain regions, to observe their dynamics under physiological conditions in behaving mice, to construct cellular models of how their dynamics and function arise from the structural description, and to understand how this function relates to visual perception. This is a large-scale, in-house team effort to synthesize anatomical, physiological and theoretical knowledge into a description of the wiring scheme of the cortex, at both the structural and the functional levels. The fruits of this cerebroscope will be freely available to the public.

Co-organized by the School of Mind and Brain. This talk is free to all visitors.

18:15 - OPEN Poster Session

Tuesday, September 18, 2012

09:00 - 09:45 Workshop

Seong-Whan Lee
Spatio-Spectral Filter Optimization in a Bayesian Framework for Single-Trial EEG Classification in Brain-Computer Interface
There are two challenging problems in classifying single-trial EEG of motor imagery. One is spectral filter optimization: the frequency bands in which ERD/ERS patterns reflect activation and deactivation of rhythmic activity over motor and sensorimotor cortices are highly variable across subjects, and even across trials for the same subject. The other is spatial filter optimization: EEG electrodes measure superimposed signals originating from various sources in the brain, and the EEG signals are generally contaminated with artifacts and noise that can degrade pattern-classification performance. In this work, we propose a novel method for class-discriminative feature extraction by optimizing spatio-spectral filters in a Bayesian framework for EEG-based brain-computer interfaces. In our method, the problem of optimizing spatio-spectral filters is formulated as the estimation of a posterior probability density function (pdf). To estimate the unknown posterior pdf, about which no functional assumption is made in this paper, a particle-based approximation method is proposed that extends a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. The feasibility and effectiveness of the proposed method are demonstrated by results on public databases.
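As a rough illustration of the particle idea (not the authors' actual algorithm), the sketch below treats candidate spectral bands as particles, weights them with a simple Fisher score on band power (a stand-in for the information-theoretic observation model in the abstract), and alternates resampling with a diffusion step. On synthetic data in which class 1 carries extra 11 Hz power, the particle cloud concentrates on bands covering that frequency. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_trials, n_samp = 100, 60, 200

# synthetic single-channel trials: class 1 carries extra 11 Hz power
t = np.arange(n_samp) / fs
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_samp))
X[y == 1] += 1.5 * np.sin(2 * np.pi * 11 * t
                          + rng.uniform(0, 2 * np.pi, (n_trials // 2, 1)))

freqs = np.fft.rfftfreq(n_samp, 1 / fs)
P = np.abs(np.fft.rfft(X, axis=1)) ** 2        # per-trial power spectra

def discriminability(lo, hi):
    """Observation-model proxy: Fisher score of band power between classes."""
    bp = P[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
    m0, m1 = bp[y == 0].mean(), bp[y == 1].mean()
    return abs(m1 - m0) / (bp.std() + 1e-12)

# factored sampling with a diffusion step over (low edge, bandwidth) particles
particles = np.column_stack([rng.uniform(4, 30, 200), rng.uniform(2, 8, 200)])
for _ in range(10):
    w = np.array([discriminability(lo, lo + bw) for lo, bw in particles])
    w = np.exp(4 * w)
    w /= w.sum()
    idx = rng.choice(len(particles), len(particles), p=w)        # resample
    particles = particles[idx] + rng.normal(0, [0.5, 0.25],
                                            particles.shape)     # diffuse
    particles[:, 0] = np.clip(particles[:, 0], 1, 35)
    particles[:, 1] = np.clip(particles[:, 1], 1, 10)

est_lo, est_bw = particles.mean(axis=0)
print(est_lo, est_lo + est_bw)   # typically brackets the 11 Hz rhythm
```

The diffusion noise plays the role described in the abstract: it keeps the particle set from collapsing onto a single band, so the approximation of the posterior over filter parameters retains some spread.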

09:45 - 10:30 Workshop

Klaus-Robert Müller
Multimodal Imaging and BCI
Each method for imaging brain activity has technical or physiological limits. Thus, combinations of neuroimaging modalities that can alleviate these limitations, such as simultaneous recordings of neurophysiological and hemodynamic activity, have become increasingly popular. Multimodal imaging setups can take advantage of complementary views on neural activity and enhance our understanding of how neural information processing is reflected in each modality. However, dedicated analysis methods are needed to exploit the potential of multimodal methods. The talk will first address the multimodal data fusion problem from the machine learning point of view and introduce useful algorithms. Then I will discuss a hybrid noninvasive brain-computer interface (BCI) that combines electroencephalography (EEG) and near-infrared spectroscopy (NIRS). In particular, I will show that NIRS can be used to enhance the EEG-BCI approach. In our study, both methods were applied simultaneously in a real-time sensorimotor rhythm (SMR)-based BCI paradigm, involving executed movements as well as motor imagery. We tested how the classification of NIRS data can complement ongoing real-time EEG classification. Our results show that simultaneous measurements of NIRS and EEG can significantly improve the classification accuracy of motor imagery in over 90% of the considered subjects, increasing performance by 5% on average. However, the long time delay of the hemodynamic response may hinder an overall increase in bit rates. Furthermore, we find that EEG and NIRS complement each other in terms of information content and thus form a viable multimodal combination, suitable for BCI.
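A toy version of such multimodal fusion (an illustrative scheme, not the classifier used in the study): each modality gets its own one-dimensional linear discriminant, and the two signed classifier outputs are summed as independent pieces of evidence. On synthetic data, the fused score is typically at least as accurate as either modality alone, mirroring the complementarity claim above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, n)                 # class labels (e.g., left/right imagery)

# synthetic 1-D "features": EEG (moderately noisy) and NIRS
# (noisier, but carrying statistically independent information)
eeg = y + rng.normal(0, 1.0, n)
nirs = y + rng.normal(0, 1.2, n)

def lda_score(x, y):
    """1-D Fisher discriminant: signed distance from the class-mean midpoint."""
    m0, m1 = x[y == 0].mean(), x[y == 1].mean()
    return (x - (m0 + m1) / 2) * np.sign(m1 - m0)

s_eeg, s_nirs = lda_score(eeg, y), lda_score(nirs, y)
fused = s_eeg + s_nirs                    # sum the per-modality evidence

acc = lambda s: np.mean((s > 0) == (y == 1))
print(acc(s_eeg), acc(s_nirs), acc(fused))
```

In a real hybrid BCI the two scores would arrive on very different time scales, which is exactly the hemodynamic-delay caveat raised in the abstract.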

10:30 - 11:00 Coffee Break
11:00 - 11:45 Workshop

John-Dylan Haynes
Decoding and predicting intentions from neuroimaging signals
There has been a long debate on the existence of brain signals that precede the outcome of decisions, even before subjects believe they are consciously making up their mind. Multivariate decoding of neuroimaging signals makes it possible to extract such choice-predictive information contained in neural signals leading up to a decision. New results show that the specific outcome of free choices between different plans can be decoded from brain activity, not only after a decision has been made, but even several seconds before it is made. This suggests that a causal chain of events can occur outside subjective awareness even before a subject makes up his or her mind. Such signals may in the future be useful for brain-computer interfacing.

11:45 - 12:30 Workshop

Vadim Nikulin
Novel computational and recording techniques for studying neuronal oscillations acquired with EEG/MEG
In the first part of the talk I will present a new type of EEG electrode. Current mainstream EEG electrode setups in BCI research permit efficient recordings, but are often bulky and uncomfortable for subjects. Recently we introduced a novel type of EEG electrode designed for optimal wearing comfort. This electrode is not felt by the subject, so EEG can be recorded in the most convenient way possible. Moreover, the electrode is close to invisible to an external observer. This is important especially in situations where visible EEG electrodes could cause discomfort or attract unwanted attention, either for the person wearing them or for those observing that person.

In the second part of the talk I will present a novel algorithm for the extraction of neuronal oscillations. Neuronal oscillations have been shown to underlie various cognitive, perceptual and motor functions in the brain, and their amplitude reactivity is commonly used in BCI research. However, studying these oscillations with EEG/MEG recordings is notoriously difficult due to a massive overlap of activity from multiple sources and strong background noise. I will present a novel method for the reliable and fast extraction of neuronal oscillations from multi-channel EEG/MEG/LFP recordings. The method is based on a linear decomposition of the recordings: it maximizes the signal power at a peak frequency while simultaneously minimizing it at the neighboring, surrounding frequency bins. This procedure optimizes the signal-to-noise ratio and allows the extraction of components with a characteristic "peaky" spectral profile, which is typical of oscillatory processes. We refer to this method as spatio-spectral decomposition (SSD). Due to its high accuracy and speed, we suggest that SSD can be used as a reliable method for the extraction of neuronal oscillations from multi-channel electrophysiological recordings.
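The decomposition described in the abstract can be sketched as a generalized eigenvalue problem: estimate one covariance matrix from the signal filtered in the peak band and another from the two flanking bands, then find spatial filters maximizing the ratio of the two. This is a simplified reading of SSD; the band edges and filter order below are arbitrary choices, not values from the talk.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def ssd_filters(X, fs, peak=(9.0, 13.0), flank=(7.0, 15.0), n_comp=2):
    """Sketch of spatio-spectral decomposition (SSD).
    X: array (channels, samples). Returns spatial filters that maximize
    power in the peak band relative to the two flanking bands."""
    def bp(lo, hi):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, X, axis=1)

    Xs = bp(*peak)                                        # oscillation band
    Xn = bp(flank[0], peak[0]) + bp(peak[1], flank[1])    # flanking noise
    S = Xs @ Xs.T / Xs.shape[1]
    N = Xn @ Xn.T / Xn.shape[1]
    # generalized eigenproblem: maximize w'Sw / w'Nw
    vals, vecs = eigh(S, N + 1e-9 * np.eye(len(N)))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_comp]], vals[order[:n_comp]]
```

The eigenvalues directly quantify the "peakiness" of each component: a large value means far more power at the peak frequency than in the surrounding bins.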

12:30 - 14:00 Lunch Break
14:00 - 14:45 Workshop

Felix Bießmann
Towards an estimate of the neural information of the BOLD signal
The blood oxygen level dependent (BOLD) signal as measured by functional magnetic resonance imaging (fMRI) has become a standard marker of neural activity. The relationship between neural activity and its hemodynamic response is characterized by complex spatiotemporal dynamics. Most analyses rely on simple models of neurovascular coupling and thus underestimate the neural information in the BOLD signal. We use machine learning methods to obtain a model-free estimate of the neural information in BOLD contrast data from simultaneous recordings of high-resolution fMRI signals and intracranially measured neural band-power signals. These estimates can help to guide parameter selection in fMRI studies for optimal decoding of stimulus information and might serve as a baseline to which studies without intracranial neural measurements can be compared.

14:45 - 15:30 Workshop

Yukiyasu Kamitani
Methods for neural mind-reading
Recent advances in neural recording and analysis techniques are making it a reality to read out a person's mental state from brain activity. In my laboratory, starting from classification-based decoding of subvoxel representations, we have proposed several new approaches to the decoding of detailed mental contents. We have particularly aimed at enabling the decoder to produce numerous potential outputs, so that it can cover much of the whole mental space. In this talk, I will present how our methods have evolved from simple classification to modular, bidirectional, and other advanced models. I will also introduce a 'neural code converter' that transforms one person's brain activity into another person's, which could potentially be used for brain-to-brain communication.
15:30 - 16:00 Coffee Break
16:00 - 18:00 Poster Session

18:00 - OPEN Discussions / Press

Wednesday, September 19, 2012

09:00 - 09:45 Workshop

Gerwin Schalk
ECoG-Based Neuroscience and Neuroengineering
The intersection of signal processing/machine learning, computer science, materials engineering and neuroscience is beginning to open up exciting opportunities for important advances in systems and cognitive neuroscience and in translational neuroengineering. Our work over the past 15 years has focused on taking advantage of these opportunities. Our neuroscience research investigates the neural basis of motor, language, and cognitive function by applying computational techniques to recordings from the surface of the brain (electrocorticography, ECoG) in humans. For example, we study how local field potentials in different cortical areas prepare for and execute hand or finger movements. Our neuroengineering research takes advantage of the resulting neuroscientific understanding and aims to address particular clinical problems. This work includes statistical signal processing, machine learning, and real-time system design and implementation. For example, we have been developing a new real-time imaging technique for invasive brain surgery. In this talk, I will describe the types of signals that can be detected in ECoG and the emerging understanding of their physiological origin. I will then demonstrate that ECoG encodes detailed aspects of function, such as actual or imagined speech. Finally, I will show demonstrations of ECoG-based communication and control, and of our real-time passive functional mapping technique. Overall, this talk aims to communicate the substantial research and emerging commercial opportunities that arise from the integration of neuroscience and neuroengineering, and hopes to inspire the neurotechnology community to participate in them.

09:45 - 10:30 Workshop

Andrea Kübler
A user centred approach for bringing BCI controlled applications to end-users 
In the past 20 years, research on BCIs has increased almost exponentially. While a great deal of experimentation has been dedicated to offline analysis for improving signal detection and translation, online studies with the target population are less common. More recently, the potential user of a BCI has come into the focus of BCI development, and user-centred approaches have been adopted (Millan et al., 2010; Zickler et al., 2011). Studies with severely disabled end-users were conducted with one BCI for communication and interaction and one for entertainment. In the first study, BCI control was integrated into a commercially available assistive technology (AT), while the second used the Brain Painting application (Münßinger et al., 2010). Control of the AT was realized by means of a BCI that evokes event-related potentials with visual stimulation, commonly referred to as the P300-BCI (Farwell and Donchin, 1988). Reliability and learnability were rated high, while speed and aesthetic design were rated only moderate. Obstacles for use in daily life were (1) low speed, (2) the time needed to set up the system, (3) handling of the complicated software, and (4) the strain associated with EEG recordings (washing hair, etc.). To address speed and set-up time, we developed an optimized communication interface which allows for auto-calibration and word completion. No familiarity with technical or scientific details of the BCI is required. Once successfully calibrated, communication with the P300-BCI can be initiated with a further button press. All subjects (N=19) handled the BCI software on their own and stated that the procedure was easy to understand and that they could explain it to a third person. A text completion option significantly increased communication speed (Kaufmann et al., 2012). To further address the speed and reliability of the BCI, we changed the stimulation mode of the widely used P300 spelling matrix.
Instead of flashing the letters of the matrix, we overlaid the rows and columns with a famous face (Kaufmann et al., 2011). In healthy subjects as well as in those with severe motor impairment, spelling speed was significantly increased. Importantly, in patients who were unable to operate the traditional P300-BCI, the face stimulation led to an increase in effectiveness, with accuracies of up to 100%. Finally, we implemented BCI-controlled Brain Painting at the home of a locked-in patient diagnosed with amyotrophic lateral sclerosis who used to be a painter; another patient is currently being included. Her family members and caregivers regularly set her up for painting, and if problems occur, they can be solved by our experts via remote internet access. Satisfaction with the device strongly depended on its functioning. We conclude that milestones have been achieved in bringing BCIs to end-users. Despite severe obstacles, the user-centred iterative process between developers and end-users proved successful, and the results are powerful demonstrations that BCIs are coming of age and can face the transfer out of the lab into the end-users' home.
10:30 - 11:00 Coffee Break
11:00 - 11:45 Workshop

Thomas Wiegand
Towards Measurement of Perceived Differences Using EEG
An approach towards the direct measurement of perceived differences using electroencephalography (EEG) is presented. Subjects viewed video clips while their brain activity was registered using EEG. The presented video signals contained compressed as well as uncompressed video sequences. The distortions were introduced by a hybrid video codec. Subjects had to indicate whether or not they had perceived a quality change. In response to a quality change, a voltage change in EEG was observed for all subjects. Potentially, a neuro-technological approach to video assessment could lead to a more objective quantification of quality change detection, overcoming the limitations of subjective approaches (such as subjective bias and the requirement of an overt response). Furthermore, it allows for real-time applications wherein the brain response to a video clip is monitored while it is being viewed.
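The event-locked analysis behind such a measurement can be sketched as simple epoch averaging (illustrative code, not the authors' pipeline): cut windows around each quality-change event, subtract the pre-event baseline, and average so that the stimulus-locked voltage change survives while background EEG averages out.

```python
import numpy as np

def change_erp(eeg, events, fs, pre=0.2, post=0.6):
    """Average epochs time-locked to quality-change events and return
    the baseline-corrected evoked response (sketch).
    eeg: 1-D array; events: sample indices of quality changes."""
    p0, p1 = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[e - p0:e + p1] for e in events])
    # subtract each epoch's pre-event mean so slow drifts cancel
    epochs -= epochs[:, :p0].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0)
```

With N events, non-locked background activity shrinks by a factor of roughly the square root of N, which is what makes the small quality-change response visible against ongoing EEG.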

11:45 - 12:30 Workshop

Gernot Müller-Putz
Hybrid brain-computer interfaces: technology and current applications
Persons with movement disabilities can use a wide range of assistive devices (ADs). The set of ADs ranges from simple switches connected to a remote controller to complex sensors (e.g., mouth mouse) attached to a computer and to eye tracking systems. All of these systems work very well after being adjusted individually for each person. However, there are still situations where the systems do not work properly, e.g., when residual muscles become fatigued or users have such severe disabilities that no movement is possible. In such situations, a Brain-Computer Interface (BCI) might be the only available option, since it uses brain signals (usually the electroencephalogram, EEG) for control without requiring any movement whatsoever. A BCI could replace an existing AD. However, it would be even better to couple the BCI with the existing AD and develop a new system called a hybrid BCI (hBCI). Ideally, an hBCI should let the user extend the types of inputs available to an assistive technology, or choose not to use the BCI at all. The hBCI might decide which input channel(s) offer(s) the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, …, or could instead fuse various input channels.
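One way to picture the arbitration logic (a toy design assumed for illustration, not a system from the talk): track each input channel's recent reliability with an exponentially weighted average and route control to whichever channel is currently most reliable, so that a fatiguing muscle switch automatically hands control over to the BCI.

```python
class HybridSelector:
    """Toy hBCI input arbiter: tracks per-channel reliability with an
    exponentially weighted average and routes control to the best one."""

    def __init__(self, channels, alpha=0.1):
        self.rel = {c: 0.5 for c in channels}   # start undecided
        self.alpha = alpha

    def update(self, channel, correct):
        # exponentially weighted moving average of command correctness
        self.rel[channel] = ((1 - self.alpha) * self.rel[channel]
                             + self.alpha * float(correct))

    def active_channel(self):
        return max(self.rel, key=self.rel.get)


sel = HybridSelector(["muscle_switch", "SMR_BCI"])
for i in range(30):                     # the muscle switch works well...
    sel.update("muscle_switch", True)
    sel.update("SMR_BCI", i % 3 != 0)   # ...the BCI is right ~2/3 of the time
print(sel.active_channel())             # prints "muscle_switch"
for _ in range(30):                     # fatigue sets in
    sel.update("muscle_switch", False)
print(sel.active_channel())             # prints "SMR_BCI"
```

A fusion variant would weight and combine the channels instead of switching between them; the abstract leaves both options open.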
12:30 - 14:00 Lunch Break
14:00 - 14:45 Workshop

Bo Hong
Visual Motion Brain Computer Interface: from scalp to cortex
Visual motion is processed in the middle temporal (MT) area of visual cortex. A visual motion stimulus elicits a prominent neural response over the MT area that can be detected at both the cortical and the scalp level (as the motion VEP). Only recently has the motion VEP been introduced as a new modality for brain-computer interface (BCI) applications (Guo et al., 2008). Attended visual motion stimuli elicit a more negative N200 over parieto-occipital areas than unattended stimuli, which constitutes the basis for BCI classification. Compared to the traditional visual P300, the motion VEP may be a better candidate for building practical BCI systems due to its non-flashing nature, lower requirements on visual luminance and contrast, stable response across trials and subjects, localized spatial distribution, etc. We implemented both an N200 speller (Hong et al., 2009; Zhang et al., 2012) and a motion-VEP-based Google search BCI (Liu et al., 2010), showing the feasibility of a motion-VEP-based BCI that could be embedded into on-screen elements, such as menus, buttons and icons, for various applications. With the high signal-to-noise ratio and rich frequency content of ECoG, we further explored the possibility of implementing a cortical visual motion BCI with a single subdural electrode. Promising results shed light on a minimally invasive BCI implant over visual cortex.
14:45 - 15:30 Workshop

Motoaki Kawanabe 
Challenges towards brain-machine interfaces for supporting elderly and disabled people in daily life
Thanks to the huge efforts of researchers in this field, the basic technologies of brain-computer interfaces (BCIs) have been established, and practical applications, for instance BCI-based rehabilitation, are being developed. Building on our previous achievements with non-invasive brain-machine interfaces (BMIs) in laboratory settings, ATR-BICR has started the network BMI project in order to develop real-world BMIs for supporting elderly and disabled people in daily life. We will tackle this challenging problem through simultaneous measurement of human behavior and brain activity, and through parallel and distributed processing of large-scale data. For conducting real-world BMI experiments and acquiring daily-life brain activity, we have built a BMI smart house with various ambient sensors on the ATR premises. In this talk, I will introduce our network BMI project and show some pilot results from the first year.
15:30 - 16:00 Coffee Break
16:00 - 16:45 Workshop

Mark Cohen
Informative Brain-Mind Feature Space
A wide variety of brain-derived signals is presently available to drive brain-computer interface devices. These include the popular EEG recordings, magnetoencephalography, functional magnetic resonance imaging (fMRI), near-infrared spectroscopy and others. Each is known to be quantitatively altered by intentional mental activity and, with the power provided by statistical machine learning, each to varying degrees may be decoded for the purpose of controlling devices. I will address the reverse challenge of better understanding the operations of the brain through analysis of the control signals, and argue that careful selection of the features themselves might serve the dual purposes of improving the efficiency and accuracy of the brain-computer interface and improving our understanding of the underlying neurophysiology. The discussion will focus on the use of brain network features exposed through fMRI and on understanding the temporal dynamics of the individual features and their state transitions.

16:45 - 17:30 Workshop

Rainer Goebel
Decoding fMRI Brain Activity Patterns in Real-Time: From Basic Research to Clinical Applications
Recent progress in computer hardware and software allows sophisticated analysis of fMRI data in real time, including "brain reading" methods such as multivariate pattern analysis. Advanced online fMRI data analysis provides the basis for brain-computer interface (BCI) applications such as neurofeedback and motor-independent communication. In neurofeedback studies, subjects observe and learn to modulate their own brain activity during an ongoing fMRI measurement. Many neurofeedback studies have demonstrated that, with sufficient practice, subjects are indeed able to learn to modulate brain activity in specific brain areas or networks using mental tasks. These results are important for basic neuroscience research because they allow us to study the degree to which the brain can modulate its own activity and to potentially unravel the function of hitherto unknown brain areas. Beyond basic research, we have recently shown that fMRI neurofeedback may become a valuable therapeutic tool to help patients suffering from Parkinson's disease and mood disorders such as depression. Furthermore, we have shown that activation patterns evoked by participants can be 'decoded' and interpreted online as letters of the alphabet, offering people with severe motor impairments the possibility to 'write' letters controlled purely by mental imagery. In order to allow patients with severe motor impairments to use the developed communication tool at the bedside, we are currently transferring our approach to functional near-infrared spectroscopy (fNIRS), which, like fMRI, measures hemodynamic brain signals. Finally, we will present recent results from ultra-high-field fMRI measurements (7 Tesla scanners) that achieve sub-millimeter functional spatial resolution, allowing us to crack the representational code within specialized brain areas at the level of cortical columns and cortical layers.
These new possibilities are extremely important to advance our knowledge of brain organization but they will also enable more content-specific BCI applications.
17:30 - 18:00 Goodbye
Closing Remarks
Presentation of the "Best Poster Award".
19:00 Speakers Dinner