Comparison of sensor selection mechanisms
by Dipl.-Math. Mario Michael Krell
University of Bremen, Robotics Innovation Center, Germany
The article upon which this research summary is based was originally published in PLoS ONE, volume 8, number 7, by David Feess, Mario Michael Krell, and Jan Hendrik Metzen under the title “Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface”. For further details, we encourage you to read the original paper [1]. The pictures and some text passages were taken from this article.
There is a growing tendency to use EEG not only for clinical analysis but also for Brain-Computer Interfaces (BCIs) for disabled people, for gaming, or for supervision, e.g., during driving. Here, it is important to know how many sensors are needed and how they should be placed. This enables an easy setup of the electrode cap with reduced resources while maintaining the system’s performance. The summarized article deals with this topic and introduces and compares several methods for choosing relevant electrodes (sensors). Since the evaluation was performed on a particular dataset, the resulting ranking of sensor selection algorithms is tailored to this application. However, the approach transfers straightforwardly to other applications. The methods presented are all implemented in an open source software framework, which enables an easy transfer to other EEG data (Brain Products’ native format is supported), or potentially even to data of other sensor types.
Experimental Set Up and Data Acquisition
Our empirical evaluation was conducted on data recorded in the Labyrinth Oddball scenario, a testbed for the use of passive BCIs in robotic telemanipulation (see Fig. 1). In this scenario, participants were instructed to play a simulated ball labyrinth game, which was presented through a head-mounted display. While playing, one of two types of visual stimuli was displayed once per second with a jitter of ±100 ms. The subjects were instructed to ignore the standard stimuli and to press a button in reaction to the rare target stimuli. It is expected that the targets in such scenarios elicit an ERP called P300 [2], whereas the standards do not.
Figure 1:
Labyrinth Oddball: The subject plays a physical simulation of a ball labyrinth game. He has to respond to rare target stimuli by pressing a buzzer and ignore the more frequent standard stimuli. The insets show the shape of the stimuli, which can be distinguished by the length of the edges. The graphs to the left depict the event-related potentials (ERPs) evoked by both stimulus types at electrode Pz. Both stimuli elicit an early negative potential attributed to visual processing, but only targets evoke an additional strong, positive potential around 600 ms after the stimulus. [1]
Five subjects participated in the experiment, each carrying out two sessions on different days. A session consisted of five runs with 720 standard and 120 target stimuli per run. EEG data were recorded at 1 kHz with an actiCAP EEG system (Brain Products GmbH) from 62 channels following the extended 10–10 layout. Average analysis of the data was performed with the BrainVision Analyzer software, version 2 (Brain Products GmbH).
Question of Interest
Aside from striving for high reliability and performance of BCI systems, interest in enhancing these systems in terms of ease of use, low preparation time, high comfort, and reduced resources has recently increased. A reduction of the number of required sensors can be a significant step in this direction. To this end, the first step is to figure out which algorithms for sensor selection one can rely on. In a second step, it is important to investigate whether a high performance of a particular choice of sensors transfers to a different usage session on a different day.
Methods
Baselines
For each number of possible sensors, we generated 100 random sensor constellations as a baseline for the evaluation (see the sketch below). Furthermore, we used standard electrode positioning according to the 10–10 and 10–20 systems for 32 and 19 electrodes, respectively.
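As a minimal sketch of how such random baselines can be drawn (the function name and interface are hypothetical illustrations, not pySPACE's actual API):

```python
import random

def random_constellations(all_sensors, k, n_draws=100, seed=42):
    """Draw n_draws random subsets of k sensors as a baseline.

    all_sensors: list of channel names, e.g. ['Fz', 'Cz', 'Pz', ...]
    k:           number of sensors per constellation
    """
    rng = random.Random(seed)  # fixed seed for reproducible baselines
    return [rng.sample(all_sensors, k) for _ in range(n_draws)]
```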
Performance Ranking
As a first and perhaps most straightforward method to select sensors, we propose a procedure we call performance ranking. The aim of any sensor selection is eventually to come up with a small set of sensors for which the system’s performance is as good as possible. An intuitive iterative procedure to find such a set is therefore to use the performance for ranking as follows: starting from an initial set of sensors, find the single sensor for which the system’s performance drops as little as possible when this sensor is removed. Remove this sensor and start over, as sketched below.
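A minimal sketch of this greedy backward elimination, assuming an `evaluate` callable that stands in for a full training and testing run of the BCI system (an assumed interface, not pySPACE's actual API):

```python
def performance_ranking(sensors, evaluate):
    """Greedy backward elimination: repeatedly drop the sensor whose
    removal hurts classification performance the least.

    sensors:  list of channel names
    evaluate: callable mapping a sensor subset to a performance score
              (e.g. classification accuracy from a cross-validation run)
    Returns the sensors in removal order (least important first).
    """
    remaining = list(sensors)
    removal_order = []
    while len(remaining) > 1:
        # Each step retrains the whole system once per candidate sensor,
        # which makes this method computationally expensive.
        best = max(remaining,
                   key=lambda s: evaluate([r for r in remaining if r != s]))
        remaining.remove(best)
        removal_order.append(best)
    removal_order.extend(remaining)
    return removal_order
```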
Spatial Filter Based Ranking
Linear spatial filtering is a common step in EEG data processing, especially to reduce dimensionality and to reduce noise. Often a set of spatial filters is constructed, where only a small number of them contains the information relevant for the classification task. For this reason, we always considered only the four most important filters. Each of these filters defines weights for each sensor, which can be interpreted as a measure of the importance of a sensor. Based on these weights we constructed a ranking of the sensors and iteratively removed the sensor with the lowest ranking. As spatial filters, xDAWN [3], principal component analysis (PCA) [4], and common spatial patterns (CSP) [5] were used.
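A sketch of one elimination step, assuming the per-sensor score is the sum of absolute weights across the four retained filters (a plausible reading; the paper's exact aggregation may differ):

```python
import numpy as np

def filter_based_elimination_step(filters, sensor_names, n_filters=4):
    """Return the least important sensor as the removal candidate.

    filters:      array of shape (n_sensors, n_total_filters); column j
                  holds the sensor weights of the j-th filter, with the
                  columns ordered by filter importance.
    sensor_names: list of channel names, one per row of `filters`.
    """
    # Sum absolute weights over the n_filters most important filters.
    importance = np.abs(filters[:, :n_filters]).sum(axis=1)
    return sensor_names[int(np.argmin(importance))]
```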
Signal to Signal Plus Noise Ratio Estimate in Actual (SSNRAS) or Virtual Sensor Space (SSNRVS)
Rivet et al. [3] propose to use the Signal to Signal-Plus-Noise Ratio (SSNR) as an evaluation criterion in the context of ERP detection. Based on a mixed effects model, the SSNR is defined as the ratio of the ERP component’s energy to the recorded signal’s energy. The first evaluation criterion they propose rates the sensors in the actual sensor space. In the other case, the SSNR is calculated in a virtual sensor space that is based on the xDAWN algorithm.
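As a sketch of the criterion, in notation loosely following [3] (the selection matrix U and the exact shapes are our assumptions; see the original papers for the precise definitions): with the recordings modelled as X = DA + N, where X holds the multichannel EEG, D is a Toeplitz matrix encoding the target stimulus onsets, A the ERP responses, and N noise, the responses are estimated by least squares and the SSNR of a sensor subset selected by U reads

```latex
\hat{A} = (D^\top D)^{-1} D^\top X, \qquad
\mathrm{SSNR}(U) =
  \frac{\operatorname{Tr}\!\left(U^\top \hat{A}^\top D^\top D \hat{A}\, U\right)}
       {\operatorname{Tr}\!\left(U^\top X^\top X\, U\right)}
```

i.e., the energy of the reconstructed ERP component divided by the energy of the recorded signal on the selected sensors.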
SVM Coefficient Ranking
The final step in our BCI signal processing chain – the classification of a data segment – is performed by a linear support vector machine (SVM). Much like spatial filters, SVMs use coefficients to weight the contribution of each data feature to the classification outcome. These weights can again be interpreted as the importance of particular data features. A sensor ranking can therefore be constructed by adding the absolute values of all weights that originate from one sensor [6]. This procedure was done with the standard support vector machine (2SVM) and a sparse variant (1SVM) [7].
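A minimal sketch with scikit-learn (not the pySPACE implementation; the contiguous per-sensor feature layout is an assumption for illustration):

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_sensor_ranking(X, y, n_sensors, n_times, sparse=False):
    """Rank sensors by the summed absolute SVM weights of their features.

    X: feature matrix of shape (n_trials, n_sensors * n_times), where the
       features of each sensor occupy one contiguous block (an assumed
       layout, not necessarily the pySPACE convention).
    y: binary labels (target vs. standard).
    Returns one importance score per sensor.
    """
    if sparse:
        # 1SVM: l1-regularised variant yielding sparse weight vectors [7].
        clf = LinearSVC(penalty='l1', dual=False)
    else:
        # 2SVM: standard l2-regularised linear SVM.
        clf = LinearSVC(penalty='l2')
    clf.fit(X, y)
    weights = np.abs(clf.coef_.ravel()).reshape(n_sensors, n_times)
    return weights.sum(axis=1)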
Evaluation Schemes
For observing the behavior of sensor selection algorithms, the number of sensors was recursively reduced and the resulting classification performances were determined. In the first evaluation scheme, the data of one recording session was divided into training and testing data (intra-session). In the second case, the data of one session was used for selecting the sensors, and then the data from the other session was used to evaluate the performance with the selected sensors. This second approach simulates an actual application where one session with the full set of sensors is recorded for sensor selection, and the resulting selection is then used in further sessions (inter-session).
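A minimal sketch of the two schemes, where the `select`, `evaluate`, and `split` callables are placeholders for full pySPACE processing flows (assumed interfaces, not actual API):

```python
def intra_session(session, select, evaluate, split):
    """Intra-session: select sensors on the training part of one session
    and test on the held-out part of the same session."""
    train, test = split(session)
    sensors = select(train)
    return evaluate(train, test, sensors)

def inter_session(session_a, session_b, select, evaluate):
    """Inter-session: select sensors on one session and evaluate on the
    other, simulating a later recording day with the reduced set."""
    sensors = select(session_a)
    return evaluate(session_a, session_b, sensors)
```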
Framework: pySPACE
All computations were performed with pySPACE, an open source signal processing and classification environment written in Python (http://pyspace.github.io/pyspace/). This framework is easy to use via simple text configuration files. Furthermore, it supports parallel execution, which was essential for the evaluation, especially for the baseline algorithms.
Results
The results are depicted in Fig. 2 and Fig. 3. The SSNR approach in the virtual sensor space clearly outperforms the other algorithms, but the SVM, xDAWN, and the performance based rankings still perform better than random choice, especially for smaller numbers of sensors. The ranking of the methods is very similar in the intra- and inter-session evaluations. In the inter-session case, however, all methods show a reduced performance, which might result from small differences in the exact placement of the sensors on the scalp, or from overfitting of the selection algorithm.
Figure 2: Intra-session evaluation of the classification performance versus the number of EEG electrodes for different sensor selection approaches. The horizontal line All is a reference showing the performance using all available 62 electrodes. The grey patches correspond to histograms of performances of 100 randomly sampled electrode constellations. The curves depict the mean classification performance over all subjects and cross validation splits. The results for 1–10 sensors are shown separately in the inset. [1]
Figure 3: Inter-session evaluation of the classification performance versus the number of EEG electrodes for different sensor selection approaches. For more details, please see Figure 2. [1]
Conclusion
We could show that one recording session suffices to choose an appropriate sensor selection algorithm. However, a second session is necessary in order to estimate the performance of a smaller set of sensors in further application sessions. For our P300 data, the SSNR approach is superior to all other methods. This result is a bit surprising. One might have expected the performance based ranking to yield the best performing sensor sets, as the criterion for the sensor ranking is identical to the criterion for the ranking of the algorithms (i.e., classification performance). It will be interesting to see how the presented methods perform relative to each other in other BCI contexts. Future steps would be to extend the number of available sensor selection algorithms in pySPACE, and to analyze to what extent the selection of sensors is stable across sessions or even subjects.
References
[1] Feess D, Krell MM, Metzen JH (2013) Comparison of sensor selection mechanisms for an ERP-based brain-computer interface. PLoS ONE 8: e67543.
[2] Courchesne E, Hillyard SA, Courchesne RY (1977) P3 waves to the discrimination of targets in homogeneous and heterogeneous stimulus sequences. Psychophysiology 14: 590–7.
[3] Rivet B, Cecotti H, Maby E, Mattout J (2012) Impact of spatial filters during sensor selection in a visual P300 brain-computer interface. Brain Topogr 25: 55–63.
[4] Abdi H, Williams LJ (2010) Principal component analysis. Wiley Interdiscip Rev Comput Stat 2: 433–59.
[5] Blankertz B, Tomioka R, Lemm S, Kawanabe M, Müller KR (2008) Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Process Mag 25: 41–56.
[6] Lal TN, Schröder M, Hinterberger T, Weston J, Bogdan M, et al. (2004) Support vector channel selection in BCI. IEEE Trans Biomed Eng 51: 1003–10.
[7] Bradley PS, Mangasarian OL (1998) Feature selection via concave minimization and support vector machines. In: Proc. Int. Conf. Mach. Learn. pp. 82–90.