How to reprocess EEG data to correct offsets between EEG and coregistered markers

by Lydia Timm, Ph.D.
Scientific Consultant (Brain Products)

We all know that in ERP research accurate timing is crucial, since we identify ERP components not only by their topography and amplitude but also, to a large extent, by their latency. After all, we describe a negative deflection around 100 ms as an N1 or N100, and we know from the literature how cognitive states or certain subject groups may differ in their ERPs’ amplitudes and latencies. Being sure about your marker timing is therefore a critical ingredient of your research results.

Several of our BrainVision Analyzer 2 users acquire their data with recording stations from third-party companies. With such a recording set-up, the actual stimulus timing may not match the markers in the EEG stream, for various reasons. If such an offset is known, it can easily be accounted for during the analysis. If the delay goes unnoticed, however, the user will analyze the data and obtain wrong results, and the data need to be reprocessed. This article shows how to do this quickly and easily with BrainVision Analyzer 2.

Edit Markers

Imagine we have recorded an auditory ERP experiment at a sampling rate of 500 Hz and learned only afterwards that, due to the digital high-pass filtering used by the recording station, our data have a delay of 66 ms relative to the markers that were set. In a healthy subject we would expect an N1 peak latency around 100 ms. Here, however, the N1 peaks at 170 ms, and the topography in the expected time range of 100 ms shows a rather scattered pattern.


Figure 1: Delayed N1 peak and topography of the expected N1 latency window.
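Where does such a constant delay come from? A symmetric (linear-phase) FIR filter delays every frequency by the same (numtaps − 1) / 2 samples. The following minimal sketch shows how a 66 ms delay at 500 Hz corresponds to a 33-sample, i.e. 67-tap, filter; the 1 Hz cutoff is a hypothetical choice, as the actual filter of the recording station is not known to us.

```python
import numpy as np
from scipy.signal import firwin

fs = 500          # sampling rate in Hz, as in the example
delay_ms = 66     # delay introduced by the recording station's filter

# A linear-phase FIR filter delays the signal by (numtaps - 1) / 2
# samples, so a 66 ms delay at 500 Hz points to a 67-tap filter.
delay_samples = round(delay_ms / 1000 * fs)        # 33 samples
numtaps = 2 * delay_samples + 1                    # 67 taps
b = firwin(numtaps, 1.0, pass_zero=False, fs=fs)   # hypothetical 1 Hz high-pass

# The filter's dominant coefficient sits at the center tap: every
# sample comes out delay_samples later than it went in.
assert int(np.argmax(np.abs(b))) == delay_samples
print(delay_samples, "samples =", delay_samples / fs * 1000, "ms")
```

Whatever the filter's exact design, the practical point is the same: the EEG arrives a fixed number of samples late, while the markers do not.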

So what could be a convenient approach to correct the offset between data and markers?

In BrainVision Analyzer 2 we cannot shift the data in time, but we can shift the markers. You may already be familiar with the “Edit Markers” transform for setting and editing markers in your EEG. This transform also allows you to shift markers in time, and no previous processing steps are required before it can be used. So let’s open the “Edit Markers” transform and set it to “automatic” mode, since this allows us to apply the timing shift to all of the desired markers in our dataset.

After clicking “Next” we choose the type and name of the marker to edit, in our case “S 22”, which indicates the auditory stimulus marker. In the “timeShift in ms” text box, you enter the value for the time shift: the relevant markers are moved by this value to the left (negative value) or right (positive value) along the time axis. Our data require a positive shift.

Here we enter the desired shift of +66 ms from our example above. Remember that the triggers were recorded instantly while the EEG was delayed by the filter; from the perspective of the EEG, the trigger arrives early and has to be shifted to the right by 66 ms, hence the “+”. As a last step in the wizard we need to “Add” the editing operation to the table (see Figure 2 below).


Figure 2: Shifting the markers with BrainVision Analyzer 2’s “Edit Markers” functionality.
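Conceptually, the shift amounts to converting the offset from milliseconds to samples and adding it to every marker position. The sketch below illustrates this on a hypothetical marker table; the names and layout are illustrative and not Analyzer’s internal format.

```python
fs = 500                 # sampling rate in Hz
shift_ms = 66            # positive shift: move markers to the right

# Hypothetical marker table: (sample position, description), mimicking
# what "Edit Markers" operates on.
markers = [(1000, "S 22"), (2500, "S 22"), (4000, "S 11")]

shift_samples = round(shift_ms / 1000 * fs)   # 66 ms -> 33 samples

# Shift every marker type, not just "S 22", just as all marker types
# should be added to the Edit Markers table.
shifted = [(pos + shift_samples, name) for pos, name in markers]
print(shifted)   # [(1033, 'S 22'), (2533, 'S 22'), (4033, 'S 11')]
```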

Then you click “Finish” and obtain an Edit Markers history node in which the marker positions have changed: in our example, the markers are now shifted 66 ms forward. Please note that I only shifted the S 22 markers here; of course you need to shift all of the markers present in your dataset by adding them to the table.

Let’s have a final look and compare the results of the timing shift and the effect it had on the ERP.


Figure 3a and 3b: Comparison between original and corrected N1 time course and topography.

In the upper figure (3a) we see a wrong “N1”: the map actually shows the activity around 36 ms. In the lower figure (3b) the potential looks like what we would have expected. We have successfully corrected the timing issue and can confidently export our amplitude and latency results for further statistical processing.


Once you have applied the shift to one dataset, you can of course reuse this node via a history template and/or drag & drop and apply it to your other datasets. You may also use the marker editing functionality after having already carried out some processing steps: filters and re-referencing are not affected by whether the markers are shifted before or after those transformations.

We recommend shifting the markers while the data are still continuous and unsegmented. Unfortunately this means that processing steps after segmentation that contain manual or semi-automatic input have to be redone. But you would want to do this anyway, as the segmented data stretch is now different. This affects, for example, manual or semi-automatic artifact rejection and manual peak detection. In our case, several channels had to be recreated because of excessive noise or because the electrodes dried out over time; this was done by interpolating those channels from neighbouring channels. As the affected channels will most likely differ between subjects, this step cannot be run from a template, but drag & drop works here.
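To make the interpolation step concrete, here is a minimal sketch that replaces a noisy channel by the mean of its neighbours. This is the simplest form of topographic interpolation and only an illustration: Analyzer’s interpolation can also use spherical splines, which weight channels by their scalp positions, and the channel names and data here are invented.

```python
import numpy as np

# Toy continuous EEG: 4 channels x 10 samples (illustrative values).
rng = np.random.default_rng(0)
data = rng.standard_normal((4, 10))
channels = ["Fz", "FC1", "FC2", "Cz"]

def interpolate_channel(data, channels, bad, neighbors):
    """Replace a bad channel by the mean of its neighbouring channels."""
    out = data.copy()
    idx = [channels.index(n) for n in neighbors]
    out[channels.index(bad)] = data[idx].mean(axis=0)
    return out

# Rebuild the (hypothetically) dried-out FC1 from Fz and Cz.
cleaned = interpolate_channel(data, channels, bad="FC1",
                              neighbors=["Fz", "Cz"])
```

Because the bad channels differ from subject to subject, the `bad` and `neighbors` arguments would change per dataset, which is exactly why this step resists full template automation.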

Here is a trick to save you a bit of time while reprocessing your data: create a history template out of your previous history tree and break it into smaller parts. The idea is to run all automatic steps for all subjects without you needing to be there. To do so, the semi-automatic steps go into one template, while steps such as linear derivation, baseline correction and average computation are part of a different template (or templates) that can run automatically without further input from your side.

I hope that this short overview helps you when you encounter timing inaccuracies between the markers and your data. In case of any further questions, we are happy to assist you.
