# How to decide on the right EEG spectral analysis method in BrainVision Analyzer

*by Dr. Yin Fen Low, Scientific Consultant (Brain Products)*

Spectral analysis entails looking into the oscillatory characteristics of a signal, and it has gained increasing attention in various EEG research fields. However, there is a multitude of spectral analysis methods. BrainVision Analyzer offers four modules that implement some widely used methods, namely **FFT** (Fast Fourier Transform), **Wavelets**, **Complex Demodulation** and **ERS/ERD** (Event-Related Synchronization/Desynchronization). In this article, we intend to provide practical guidance for applying these dedicated transformations to your EEG data. Concerns such as when spectral analysis is required, how to decide which method to apply and which important parameters should be considered in each spectral analysis method will be addressed.

For theoretical background on each spectral analysis method, please check our webinar recordings here.

## Overview

## When is spectral analysis required?

Generally, whether to perform spectral analysis depends mainly on your analysis aims or research hypotheses.

## Which spectral method to select?

To find out the answer to this question, it is necessary to ask yourself: **Do I only want to look at the frequency of the signal or do I also need the time information simultaneously?**

#### Only frequency information is needed

FFT would be the best option, since it transforms time-domain EEG data into the frequency domain. It tells you how much of a frequency is contained in the total period of observation. This means that the outcome of FFT, termed a spectrum, has no time but only frequency information. Commonly, FFT is performed on data collected in blocked designs. For instance, the aim could be to compare the induced frequency modulation in resting blocks versus active blocks, task A versus task B, or before versus after medication. Usually, the data is split into short epochs or segments, each segment goes through frequency decomposition, and the obtained spectra of all segments per condition are averaged to characterize the frequency activity within the blocks. Averaging the spectra eliminates non-task-related variability and yields a better signal-to-noise ratio.
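The segment-and-average workflow can be sketched in a few lines of NumPy (a generic illustration of the approach, not Analyzer's internal implementation; signal, sampling rate and segment length are made up):

```python
import numpy as np

def averaged_amplitude_spectrum(data, fs, seg_len_s=2.0):
    """Split a continuous signal into segments, FFT each, average the spectra."""
    n = int(fs * seg_len_s)                 # samples per segment
    n_segs = len(data) // n
    spectra = []
    for i in range(n_segs):
        seg = data[i * n:(i + 1) * n]
        amp = np.abs(np.fft.rfft(seg)) / n  # amplitude spectrum of one segment
        spectra.append(amp)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)  # frequency axis in Hz
    return freqs, np.mean(spectra, axis=0)  # average across segments

# Example: 10 s of a 10 Hz sinusoid plus noise, sampled at 256 Hz
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
freqs, spec = averaged_amplitude_spectrum(signal, fs)
print(freqs[np.argmax(spec)])  # the averaged spectrum peaks at 10 Hz
```

Note how the averaged spectrum shows a clear peak at the oscillation frequency while the noise floor is flattened out, which is exactly the signal-to-noise benefit described above.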

#### Both frequency and time information are needed

The so-called time-frequency analysis methods would be the right choice. In Analyzer, you have three transformations to select from: **Wavelets**, **Complex Demodulation** and **ERS/ERD**. In principle, they serve the same purpose, which is enabling you to investigate the spectral dynamics of a signal across time. They are commonly applied in event-related designs in the same way as ERPs.

If time-frequency analysis is your final goal, to further narrow down the method selection, the next question to be answered is: **Do I already have an idea which frequency range to investigate, or do I rather want to explore a wider range of frequencies?**

**ERS/ERD** as well as **Complex Demodulation** are suitable methods for investigating the spectral dynamics of EEG data across time within a specific frequency band (i.e., a narrow band). Thus, a priori information regarding the frequency band of interest is required. With this information, the common approach is to isolate the desired frequency band of the EEG data (e.g., the alpha band) by filtering prior to applying **ERS/ERD** or **Complex Demodulation**.

On the other hand, **Wavelets** are used when you want to examine the spectral dynamics across multiple frequency bands. Depending on the sampling rate (Fs) used during data recording, it is possible to generate a time-frequency plot (commonly represented as a heatmap) that covers activities from 0 Hz up to the Nyquist frequency (i.e., half of the sampling rate).

Figure 1 illustrates a decision tree that helps in identifying the optimal spectral analysis method.

*Figure 1: The decision flowchart for identifying the right spectral analysis method based on your research requirements.*

## What to consider in your spectral analysis?

Having decided on the spectral analysis method, you are just one step away from generating the results by using the transformations implemented in Analyzer. All of them can be found in the **Frequency and Component Analysis** group under the *Transformations* tab of the ribbon menu. Nevertheless, to ensure that the analysis is performed correctly, some factors should be taken into consideration so that the analysis results are interpretable.

#### Segment length

The length of the data segment plays an important role when conducting spectral analysis: an improper segment length can jeopardize the outcome. Therefore, it is worthwhile to think about this before the analysis; in some cases, parameters might already need to be adjusted when designing the experimental paradigm. Essentially, spectral analysis is mostly performed on shorter data segments rather than on long continuous data.

**FFT**

One crucial assumption for FFT is that the data should be stationary or at least quasi-stationary, in the sense that the spectral properties vary slowly with respect to the main frequency. Depending on which frequencies you are interested in, in theory the segment should be long enough to include at least one cycle of the lowest frequency of interest. For example, if 8 Hz is the lowest frequency of interest, the segment should be at least 125 milliseconds long (i.e., period = 1/frequency). In practice, however, it is recommended to cover several cycles of the signal to ensure that the full frequency information can be extracted; the lower the frequency, the longer the required data segment. In sum, segment lengths of one or two seconds are suitable for studying frequencies from delta to beta or higher. Meanwhile, please keep in mind that much longer segments should be avoided, as the signal might become non-stationary.
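The period rule above is simple arithmetic; a small helper (illustrative only, not an Analyzer function) makes it explicit:

```python
def min_segment_length(lowest_freq_hz, n_cycles=1):
    """Segment length in seconds needed to cover n cycles of the lowest frequency."""
    return n_cycles / lowest_freq_hz

# One cycle of 8 Hz fits into 0.125 s; a safer choice covers several cycles:
print(min_segment_length(8))              # 0.125 s, the theoretical minimum
print(min_segment_length(8, n_cycles=4))  # 0.5 s, covering four cycles
```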

If you’re interested in a workaround to perform Welch’s power spectral density estimation in Analyzer, please contact us.

**Wavelets**

As in FFT, the segment length should be adapted to cover several cycles of the lowest frequency of interest. Thus, you may need to choose a different segment length than for ERP analysis. In particular, if you consider normalizing the wavelet result, it is recommended to include safety margins at the beginning and end of the reference interval as well as of the post-stimulus interval.

The safety margin is half of the Wavelet length, which is mathematically expressed as c/(2*min(f)), where c is the *Morlet Parameter*, indicating the number of Wavelet cycles, and min(f) represents the lowest frequency of interest [1]. As an example, fixing the *Morlet Parameter* to five with the lowest frequency of interest in the theta band, the safety margin should be at least 625 milliseconds (i.e., 5/(2*4 Hz)). This calculated value needs to be considered when determining the segment length.
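The safety margin formula can be written as a one-line helper (a hypothetical function for illustration):

```python
def wavelet_safety_margin(morlet_c, lowest_freq_hz):
    """Safety margin in seconds: half the Wavelet length, c / (2 * min(f)) [1]."""
    return morlet_c / (2.0 * lowest_freq_hz)

# Morlet Parameter c = 5, lowest frequency of interest 4 Hz (theta band):
print(wavelet_safety_margin(5, 4))  # 0.625 s, i.e. 625 ms on each side
```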

**ERS/ERD and Complex Demodulation**

In general, the segment length should be long enough to avoid transient effects at both segment borders due to filtering. As mentioned, the input data for **ERS/ERD** and **Complex Demodulation** are required to be narrow-band, and band-pass filtering is normally applied to achieve this. In **Complex Demodulation**, another low-pass filter is applied to the down-modulated data [2].

The filter in ERS/ERD and Complex Demodulation is the same filter that we offer in the IIR Filters transformation, i.e., a zero-phase shift Butterworth filter (only that the order is hardcoded to 8).

Caution: Filtering of short segments can lead to filter transient phenomena. Thus, it is recommended to apply a band-pass filter on continuous data before data segmentation.

#### Frequency resolution in FFT

For a fixed sampling rate, the frequency resolution is entirely determined by the number of data points, i.e., the segment length. Longer segments yield a higher frequency resolution. This means the frequency bands can be divided into more frequency bins, allowing for a finer distinction within bands. The frequency resolution is defined as the ratio of the sampling rate to the number of data samples in a segment (i.e., Fs/N). For example, for a dataset sampled at 512 Hz, a segment length of one second gives a maximum frequency resolution of 1 Hz (i.e., 512/512), while segments of two seconds allow for 0.5 Hz (512/1024). In Analyzer, the maximum resolution is internally calculated and displayed when applying **FFT**.

If the number of data points per segment is not a power of two, the segment will be zero-padded internally as the FFT algorithm requires this for computation efficiency. However, no meaningful information will be added, so it is advisable to avoid zero-padding.
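Both the resolution formula and the internal zero-padding can be illustrated with two small helpers (hypothetical names, for illustration only; keep in mind that padding only interpolates the spectrum and does not add true resolution):

```python
def fft_resolution(fs, n_samples):
    """Frequency resolution in Hz for a segment of n_samples at sampling rate fs."""
    return fs / n_samples

def padded_length(n_samples):
    """Next power of two, as used when the FFT zero-pads a segment internally."""
    n = 1
    while n < n_samples:
        n *= 2
    return n

print(fft_resolution(512, 512))   # 1.0 Hz for a one-second segment at 512 Hz
print(fft_resolution(512, 1024))  # 0.5 Hz for a two-second segment
print(padded_length(1000))        # 1024: a 1000-sample segment gets padded to 1024
```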

Caution: To achieve the desired resolution, one could either downsample the data or adjust the segment size. Upsampling is not recommended, as it introduces artificial values into the data.

#### Time-frequency trade-off in Wavelets

The *Morlet Parameter (c)* plays an important role in adjusting the trade-off between time and frequency resolution. It determines the duration of each Wavelet and the spectral bandwidth that is captured with it. A smaller c implies fewer cycles, thus shorter Wavelets, lower frequency resolution and higher temporal resolution; a larger c implies more cycles, thus longer Wavelets, higher frequency resolution and lower temporal resolution. Wavelets with five cycles provide a good trade-off between time and frequency resolution, which is why this value is commonly used in various EEG analyses [3].
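The trade-off can be made concrete with a rough calculation. Following the common Morlet conventions (cf. [3]), the spectral standard deviation is about f/c and the temporal standard deviation about 1/(2π·f/c); this sketch assumes those approximations and is not Analyzer's internal computation:

```python
import math

def morlet_tradeoff(c, f):
    """Approximate time/frequency spread of a Morlet wavelet with c cycles at f Hz.

    Conventions as in Tallon-Baudry & Bertrand [3]:
    spectral std sigma_f = f / c, temporal std sigma_t = 1 / (2*pi*sigma_f).
    """
    sigma_f = f / c
    sigma_t = 1.0 / (2.0 * math.pi * sigma_f)
    return sigma_t, sigma_f

# At 10 Hz: a 5-cycle wavelet vs. a 10-cycle wavelet
print(morlet_tradeoff(5, 10))   # shorter in time, wider in frequency
print(morlet_tradeoff(10, 10))  # longer in time, narrower in frequency
```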

Regarding the range and number of frequency steps, there are no standard guidelines. The maximum frequency is restricted either to the Nyquist frequency or to below the cutoff of a low-pass filter. Effectively, it should be defined depending on the frequencies of interest, e.g., 1 – 40 Hz. Meanwhile, the number of frequency steps (i.e., Wavelet layers) should be set to achieve the desired width of the frequency bands, e.g., 40 frequency steps.

If you are interested in studying the effects across frequency layers, we often advise using the *Logarithmic Steps* option due to the consistent overlap of the wavelets. Nevertheless, the linear scale can be used if you need to extract a layer at a precise frequency or if you want to interpret each frequency layer separately.

*Figure 2: On the second dialog page of Wavelets, various parameter settings are available. If a wavelet frequency layer is selected, the corresponding Wavelet is displayed (see lower panel in red). When the option Logarithmic Steps is selected, frequencies are sampled with consistent overlap (see middle panel in green).*

The number of Wavelet cycles is fixed across frequency layers in Analyzer. However, it is always possible to run the transformation multiple times using different Morlet Parameters for different frequency bands.

#### Normalization

Sometimes you might want to express the spectral analysis result on a different scale for comparison purposes. That is where normalization comes into play. Let’s take a look at the normalization options available in Analyzer for each spectral analysis method.

**FFT**

You have the option to specify the frequency range for the normalization factor. The values in each frequency bin of the computed amplitude/power spectrum are expressed as relative values (i.e., expressed in percentage (%)) with respect to the sum within the defined range, as shown in Figure 3. This relative measure facilitates comparisons between subjects or channels, as it eliminates inter-individual differences in absolute amplitude/power. The relation can be calculated with a single band (e.g., to get theta-beta ratio), a broader band or the full spectrum.

*Figure 3: After normalization, the amplitude/power spectrum is expressed in percentage (%) instead of absolute values. In this example, the bin at 11.5 Hz represents 8.98% of the total power of all bins between 0.5 and 40 Hz.*
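The relative measure can be sketched as follows (an illustrative re-implementation of the idea, not Analyzer's own code; values are made up):

```python
def normalize_spectrum(power, freqs, f_lo, f_hi):
    """Express each bin as a percentage of the total power within [f_lo, f_hi]."""
    total = sum(p for p, f in zip(power, freqs) if f_lo <= f <= f_hi)
    return [100.0 * p / total for p in power]

freqs = [0.5, 1.0, 1.5, 2.0]   # frequency bins in Hz
power = [1.0, 2.0, 3.0, 4.0]   # absolute power per bin
print(normalize_spectrum(power, freqs, 0.5, 2.0))  # [10.0, 20.0, 30.0, 40.0]
```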

**Wavelets**

Five normalization options are available in the Wavelets transformation – *Baseline Correction*, *Normed Output*, *Percent Change*, *Decibel* and *Z-transform*. They are similar to each other with regard to what they are doing, however, what is used as reference value for the normalization is different. You can find detailed descriptions and corresponding mathematical formulations underlying the normalization options in the *Analyzer User Manual*. The reference interval should be selected so that it covers a time period that does not show pronounced activity and that it avoids overlap with segment borders as well as stimulus onset.

Briefly, *Baseline Correction* will subtract the mean activity of a reference interval, while *Normed Output* and *Decibel* will rescale the data, and *Percent Change* and *Z-transform* will do both. Sometimes it is sufficient to use *Baseline Correction*, as it subtracts background activity that might be task-unspecific and thereby results in a common baseline level across groups/conditions. This can already alleviate variability across Wavelet layers and participants and make experimental groups more comparable.

However, it might be desired to also apply a rescaling to the data, so that any comparisons would not have to be based on absolute values but rather on relative values. This would be beneficial if your experimental groups vary within different amplitude ranges.
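The normalizations discussed here can be sketched with their commonly used textbook definitions (cf. [4]); the exact formulations used by Analyzer are given in the *Analyzer User Manual*, so treat this as an approximation:

```python
import math
from statistics import mean, pstdev

def normalize_layer(power, ref, method="percent"):
    """Common time-frequency normalizations relative to a reference interval.

    Generic textbook definitions (cf. [4]); consult the Analyzer User Manual
    for the exact formulations used by the Wavelets transformation.
    """
    m = mean(ref)
    if method == "baseline":    # subtract mean reference activity
        return [p - m for p in power]
    if method == "percent":     # percent change relative to the reference
        return [100.0 * (p - m) / m for p in power]
    if method == "decibel":     # log ratio to the reference
        return [10.0 * math.log10(p / m) for p in power]
    if method == "ztransform":  # subtract mean, divide by reference std
        s = pstdev(ref)
        return [(p - m) / s for p in power]
    raise ValueError(method)

ref = [1.0, 2.0, 3.0]                           # reference power, mean 2.0
print(normalize_layer([4.0], ref, "baseline"))  # [2.0]
print(normalize_layer([4.0], ref, "percent"))   # [100.0]
print(normalize_layer([2.0], ref, "decibel"))   # [0.0]
```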

Normalization in Analyzer is performed per segment, frequency layer and channel, so-called single-trial normalization, which is suitable for clean segments without outliers in the baseline period. Alternatively, normalization can also be performed on the averaged data [4]. If you are interested in a workaround for this type of normalization, please contact us.

It is recommended to check which normalizations are usually reported in publications if you want to make your results more easily interpretable to peers and comparable to similar studies. This is because there might be different preferences for certain normalizations across research fields or maybe even for certain paradigms.

Baseline Correction within the Wavelets transformation and the Baseline Correction transformation follow different strategies. The former is applied layer-wise to subtract background activity, while the latter is applied on time domain data and removes offsets. It is possible to apply Baseline Correction prior to Wavelet analysis if desired.

**ERS/ERD**

In contrast to **FFT** and **Wavelets**, normalization is considered an integral part of the **ERS/ERD** calculation. There are two options available: you can choose to express the analysis result either in percentage or in decibel with respect to a reference value. The mathematical formulations of these options can be found in the *Analyzer User Manual*. Adhering to the standard definition of ERS/ERD [5], the mean power within the reference interval is computed; alternatively, the median can be used if there are many outliers in the reference interval. The reference interval should be defined in the same way as in Wavelets. Unlike Wavelets, the normalization in **ERS/ERD** is performed on the averaged power values.
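The standard percent definition of ERS/ERD [5] boils down to expressing each averaged power value relative to a reference value; a minimal sketch (illustrative only, not Analyzer's implementation):

```python
from statistics import mean, median

def ers_erd_percent(avg_power, ref_interval, use_median=False):
    """ERS/ERD in percent relative to a reference value [5].

    avg_power: power values averaged across trials.
    ref_interval: the power values serving as reference.
    Negative values indicate desynchronization (ERD), positive ones ERS.
    """
    r = median(ref_interval) if use_median else mean(ref_interval)
    return [100.0 * (p - r) / r for p in avg_power]

avg_power = [2.0, 2.0, 1.0, 3.0]        # averaged power over time
ref = avg_power[:2]                     # reference value r = 2.0
print(ers_erd_percent(avg_power, ref))  # [0.0, 0.0, -50.0, 50.0]
```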

**Complex Demodulation**

There is no normalization option incorporated in this transformation, but it is possible to normalize the outcome using the official **Percent Change** solution that can be downloaded from our website.

Caution: With a different reference interval, the data will be expressed relative to a different baseline, and the interpretation of the data might change.

#### Smoothing in ERS/ERD

There is an additional important feature of the **ERS/ERD** method which might be worth knowing, though it is an optional step. As we often do in ERP analysis when extracting the amplitude of a peak or time range, we average across data points. This gives a more reliable estimate of amplitude values that is less biased by fast fluctuations or outliers.

In the **ERS/ERD** transformation, this is implemented via a moving average: a time window is specified, and each data point of the ERS/ERD result is replaced by the mean of that window centered around the data point, thus smoothing the ERS/ERD result. However, note that there will be some artifacts at the edges of the segment because of missing values outside of the segment borders. This effect disappears after half of the smoothing window.

Increasing the window size improves the smoothing but at the same time decreases the temporal accuracy. The authors of a review paper [5] recommend using a longer averaging time across data points if there are very few trials, while a shorter averaging interval should be sufficient if there are a lot of trials. This is because both kinds of averaging (across time and across trials) lead to a similar result. Another study suggests that half a period of the frequency of interest would be an optimal window for averaging, that is, 50 milliseconds for studying the alpha band [6].
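A centered moving average with truncated windows at the borders (one plausible way to handle the missing values; Analyzer's exact edge handling may differ) can be sketched as:

```python
def moving_average(values, window):
    """Replace each point by the mean of a window centered on it.

    Edge points use a truncated window, mirroring the edge effects mentioned
    above: estimates within half a window of the border are less reliable.
    """
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

print(moving_average([0, 0, 6, 0, 0], window=3))  # [0.0, 2.0, 2.0, 2.0, 0.0]
```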

## ERS/ERD, Complex Demodulation and Wavelets: Are they different?

From an algorithmic perspective they are obviously quite different from each other. But how about their final results?

To answer this question, one can compare the results produced by these analysis methods, which we did based on an example dataset from an alpha blocking paradigm. The power values in the alpha band (i.e., 8 – 12 Hz) were computed using **ERS/ERD** and **Complex Demodulation**, while the Wavelet layer corresponding to the center frequency of 10 Hz was extracted from the Wavelet power analysis output using the **Wavelet Layer Extraction** transformation. The same reference interval was used for normalizing the power values, and the results are expressed in percent change by using the **Percent Change** solution. As shown in Figure 4, the results exhibit the same degree of alpha power suppression after the stimulation onset at time zero. Nevertheless, the data within the safety margin should be interpreted carefully. The reasons are that the edges of the transformed data are affected due to missing data beyond the edges, and the post-stimulus period might contain mixed effects of the stimulation and the response. For additional details on this comparison, please watch our webinar recording on “An overview of spectral analysis methods”.

*Figure 4: Comparison of the results in terms of percent change at channel Oz. The dataset was collected from an alpha blocking task. Alpha oscillations get suppressed when paying attention to the visual stimulus. The Percent Change solution was used to compute the relative measure for outputs from Complex Demodulation (CD) and Wavelet Layer Extraction.*

## A summary of the spectral analysis methods

To recapitulate, FFT is used when oscillatory effects are expected and a spectral characterization of a quasi-static brain state is needed. An example use case would be computing alpha power in a resting-state paradigm. Time-frequency analysis methods are used when effects are expected to be transient in time. They can be used in an event-related paradigm to investigate phase-locked and non-phase-locked activities.

The table below summarizes important prerequisites and shows what the output of each method looks like.

## Concluding Remarks

Analyzer makes spectral analysis of EEG data easy and efficient by implementing four dedicated transformations. Having performed your spectral analysis, you will likely extract quantitative measures for further statistical analysis; please refer to our article here to learn about the export options. If there are further questions on this topic or around Analyzer, you are welcome to reach out to us anytime.

## References

[1] Roach, B. J., & Mathalon, D. H. (2008). Event-related EEG time-frequency analysis: an overview of measures and an analysis of early gamma band phase locking in schizophrenia. Schizophrenia bulletin, 34(5), 907–926. https://doi.org/10.1093/schbul/sbn093

[2] Ktonas, P. Y., & Papp, N. (1980). Instantaneous envelope and phase extraction from real signals: Theory, implementation, and an application to EEG analysis. Signal Processing, 2(4), 373-385. https://doi.org/10.1016/0165-1684(80)90079-1

[3] Tallon-Baudry, C. & Bertrand, O. (1999). Oscillatory gamma activity in humans and its role in object representation. Trends in Cognitive Sciences, 3(4), 151-162. https://doi.org/10.1016/s1364-6613(99)01299-1

[4] Cohen, M. X. (2014). Time-frequency power and baseline normalizations. In Analyzing neural time series data: Theory and practice (pp. 217-240). The MIT Press.

[5] Graimann, B., & Pfurtscheller, G. (2006). Quantification and visualization of event-related changes in oscillatory brain activity in the time-frequency domain. Progress in Brain Research, 159, 79-97. https://doi.org/10.1016/S0079-6123(06)59006-5

[6] Knoesche, T. R., & Bastiaansen, M. C. M. (2002). On the time resolution of event-related desynchronization: a simulation study. Clinical Neurophysiology, 113(5), 754-763. https://doi.org/10.1016/s1388-2457(02)00055-x

[7] Bruns, A. (2004). Fourier-, Hilbert- and wavelet-based signal analysis: are they really different approaches? Journal of Neuroscience Methods, 137(2), 321-332. https://doi.org/10.1016/j.jneumeth.2004.03.002