Artifacts and filtering


Artifacts from many different sources can contaminate MEEG data and must be identified and/or removed. Artifacts can be of non-physiological (bad electrode contact, power line, etc.) or physiological (pulse, muscle activity, sweating, movement, etc.) origin. The data should first be visually inspected (with the investigator blind to experimental conditions) to ascertain what types of artifact are actually present in the data. Subsequently, established artifact identification/removal pipelines can be run, or a plan can be constructed for excluding data segments that contain artifacts.

If automatic artifact detection methods are used, they should be followed up by visual inspection of the data. Any operations performed on the data (see Section 4.1 workflow) must be described, specifying the parameters of the algorithm used. It is recommended to describe in detail the type of detrending performed and the algorithm order (e.g., linear 1st order, piecewise, etc.). When automatic artifact rejection/correction is performed, report which method was used and the range of parameters (e.g., epochs rejected when the EEG amplitude range exceeds 75 µV, or when kurtosis deviates by more than 3 SD from the mean). Similarly, for channel interpolation, one must specify the method and order (e.g., trilinear, spline of nth order). When ICA is used, describe the algorithm and parameters used, including the number of ICs. If artifacts are rejected using ICA, report how these were identified. It is worthwhile to also consider including topographies of components in the Supplementary Materials section of manuscripts. If manual artifact rejection procedures are used, it is essential to describe what types of features in the MEEG signal were identified and to define the criteria used to reject segments of data. This is essential for allowing the reader to reproduce the results, as well as to compare results between studies (see above on reporting visually removed trials (epochs), for instance). Once artifacts have been removed, the average number of remaining trials per condition should also be reported.
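
To make these reporting items concrete, the following is a minimal sketch using MNE-Python; the file name, bad-channel label, amplitude threshold, and number of ICA components are hypothetical placeholders, not recommended values.

    import mne
    from mne.preprocessing import ICA

    # Hypothetical recording; preload so filtering/ICA can operate in memory.
    raw = mne.io.read_raw_fif("sub-01_task-rest_raw.fif", preload=True)

    # Interpolate a channel previously marked as bad (spherical spline for EEG).
    raw.info["bads"] = ["EEG 053"]
    raw.interpolate_bads(reset_bads=True)

    # ICA-based artifact correction: report the algorithm, the number of
    # components, and how the rejected components were identified.
    ica = ICA(n_components=20, method="fastica", random_state=97)
    ica.fit(raw)
    ica.exclude = [0, 3]        # e.g., components identified as blink/cardiac
    ica.apply(raw)

    # Epoching with an explicit amplitude-based rejection criterion
    # (peak-to-peak EEG range above 75 microvolts rejects the epoch).
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, event_id={"stim": 1}, tmin=-0.2, tmax=0.8,
                        baseline=(None, 0), reject=dict(eeg=75e-6), preload=True)
    print(epochs.drop_log_stats())   # percentage of rejected epochs to report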

In MEG, specialized procedures for correcting data that contain artifacts can be used, eliminating the need to reject data. These include signal-space projection methods (SSP; Uusitalo & Ilmoniemi, 1997), which use “empty room” measurements together with the MEG data to differentiate external sources of interference from brain activity, and signal space separation methods (SSS) and their temporally extended variants (tSSS; Taulu et al., 2004; Taulu & Simola, 2006), which rely on the geometric separation of brain activity from noise signals in MEG data. SSS methods have been recommended as being superior to SSP (Haumann et al., 2016). The ordering of preprocessing steps for cleaning MEG data is particularly important, due to potential data transformations (for some caveats, see Gross et al., 2013).
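
As an illustration, SSS and tSSS are available in MNE-Python through the Maxwell filtering routine; a minimal sketch follows, with a hypothetical file name and an illustrative temporal window.

    import mne

    # Hypothetical raw MEG recording.
    raw = mne.io.read_raw_fif("sub-01_task-faces_raw.fif", preload=True)

    # SSS: geometrically separates signals originating inside the sensor
    # array (brain) from those originating outside (environmental noise).
    raw_sss = mne.preprocessing.maxwell_filter(raw)

    # tSSS: additionally removes noise that is temporally correlated between
    # the internal and external subspaces; st_duration is the window in seconds.
    raw_tsss = mne.preprocessing.maxwell_filter(raw, st_duration=10.0)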

For both MEG and EEG data, particular care must be taken to describe filtering, as this can have dramatic consequences for estimating time-courses and phases (Rousselet, 2012; Widmann & Schröger, 2012). Some investigators have advocated the use of a sampling rate that is at least 4 times the putative cut-off frequency of the low-pass filter used (Luck et al., 2014, and the latest IFCN guidelines). That said, the roll-off rate/slope of the filter must also be taken into consideration and therefore specified, together with a description of the type, bands, etc. of the filters used. The type and parameters of any post-hoc filter applied and (for EEG, EOG and EMG) any re-computed references must be specified, as they crucially affect the outputs of waveform or frequency analyses (but not topographies). For frequency and time-frequency analyses, details of the transformation algorithm must similarly be reported.
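
As a concrete illustration of the filter parameters worth reporting (type, cut-offs, transition bands/roll-off, notch frequencies), here is a minimal MNE-Python sketch; the file name and all numeric values are hypothetical.

    import mne

    raw = mne.io.read_raw_fif("sub-01_raw.fif", preload=True)

    # Zero-phase FIR band-pass filter; the transition bandwidths determine
    # the roll-off and should be reported alongside the cut-off frequencies.
    raw.filter(l_freq=0.1, h_freq=40.0,
               method="fir", phase="zero", fir_window="hamming",
               l_trans_bandwidth=0.1, h_trans_bandwidth=10.0)

    # Notch filter for power-line noise and its first harmonic (50 Hz mains).
    raw.notch_filter(freqs=[50, 100])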

Data preprocessing also forms an essential part of multivariate techniques and can dramatically affect decoding performance (Guggenmos et al., 2018). We recommend carefully describing the method used, in particular whether noise normalization is performed channel-wise (univariate normalization) or for all channels together (multivariate normalization, or whitening). For the latter, the covariance estimation must be specified (based on the baseline, on epochs, or computed for each time point).
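
The sketch below illustrates one way such multivariate noise normalization can be implemented with NumPy, whitening epochs by the inverse square root of a baseline-derived channel covariance; the array shapes and shrinkage value are assumptions for the example, not a prescribed pipeline.

    import numpy as np

    def whiten_epochs(epochs, baseline_idx, shrinkage=0.1):
        """Multivariate noise normalization (whitening) of epoched data.

        epochs       : array, shape (n_epochs, n_channels, n_times)
        baseline_idx : indices of the baseline samples used to estimate
                       the noise covariance
        shrinkage    : regularization towards the diagonal (0 = none, 1 = full)
        """
        baseline = epochs[:, :, baseline_idx]              # (n_epochs, n_ch, n_base)
        n_ch = baseline.shape[1]
        x = baseline.transpose(1, 0, 2).reshape(n_ch, -1)  # channels x samples
        cov = np.cov(x)                                    # channel covariance
        cov = (1 - shrinkage) * cov + shrinkage * np.diag(np.diag(cov))
        # Inverse matrix square root via eigendecomposition.
        evals, evecs = np.linalg.eigh(cov)
        w = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
        # Apply the whitening operator to every time point of every epoch.
        return np.einsum("ij,ejt->eit", w, epochs)

    # Example: 100 epochs, 64 channels, 300 samples; first 50 samples = baseline.
    rng = np.random.default_rng(0)
    data = rng.standard_normal((100, 64, 300))
    whitened = whiten_epochs(data, baseline_idx=np.arange(50))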

Re-referencing

EEG is a differential measure: in non-clinical settings it is usually recorded relative to a fixed reference (in contrast to clinical practice, which usually uses bipolar montages). Although EEG is recorded relative to some reference, it can later be re-referenced by subtracting the values of another channel, or a weighted sum of channels, from all channels. The need for re-referencing depends on the goals of the analysis and the EEG measures used. For example, the average reference (see below) is beneficial for the evaluation of coherence and for source modelling, although it may produce an inflated level of synchrony or leakage of activity from one region to another when assessed in sensor space. Re-referencing does not change the shape of the scalp topography at a single time sample; however, it can cause issues when working in sensor space with isolated sensors (Hari & Puce, 2017). Specifically, the shape of the recorded waveforms at specific electrodes can be altered, and this will also affect the degree to which waveforms are distorted by artifacts. Hence, when comparing across experiments, the references used should be taken into account, and if unusual, the reference choice should be justified.

Re-referencing relative to the average of all channels (the average reference, AR, also called the common average reference) is most common for high-density recordings and is the usual first step in EEG source localization. The main assumption behind the AR is that the summed potentials from electrodes spaced evenly across the entire head should be zero (Bertrand et al., 1985). While this is usually not a problem for EEG datasets of 128 sensors or more (Srinivasan et al., 1998; Nunez & Srinivasan, 2006), for low-density recordings and ROI-based analyses in sensor space there is a serious risk of violating the assumptions of the average reference and of introducing shifts in potentials (Hari & Puce, 2017). Hence, the AR should be avoided in low-density recordings (e.g., < 64 channels).
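
In MNE-Python, for instance, the reference can be set (and hence reported) explicitly; the file name and channel labels below are hypothetical.

    import mne

    raw = mne.io.read_raw_fif("sub-01_raw.fif", preload=True)

    # Re-reference to the common average of all EEG channels.  With
    # high-density montages this typically precedes source localization;
    # for low-density recordings (e.g., < 64 channels) it should be avoided.
    raw.set_eeg_reference("average", projection=False)

    # Alternatively, re-reference to specific channels (e.g., linked mastoids).
    # raw.set_eeg_reference(["M1", "M2"])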

An alternative to the AR is the Reference Electrode Standardization Technique (REST; Yao, 2001). Both the AR and REST have been shown to be the extremes of a family of Bayesian reference estimators (Hu et al., 2018). If the focus of the data analysis will be on source-space inference, no re-referencing is theoretically necessary. That said, re-referencing of the data may be useful for comparisons to the existing literature.

For MEG data, the order (e.g., 3rd order with CTF data) of gradiometer re-referencing should be reported (if applicable), as well as when in the preprocessing pipeline this step occurs.
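
For CTF systems, the compensation grade can be queried and set programmatically, which makes it straightforward to report; a hedged MNE-Python sketch with a hypothetical dataset name follows.

    import mne

    # Hypothetical CTF dataset directory.
    raw = mne.io.read_raw_ctf("sub-01_task-rest.ds", preload=True)

    # Apply (or verify) 3rd-order gradient compensation and report it, along
    # with where in the preprocessing pipeline this step occurs.
    raw.apply_gradient_compensation(grade=3)
    print(raw.compensation_grade)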

Spectral and time-frequency analysis
A common approach for the analysis of MEEG data is to examine the data in terms of its frequency content, and these analyses are applicable to both task-related and resting-state designs. One important caveat for these types of analyses is that the acquired data should have been sampled at around 5 times the oscillatory frequencies of interest, due to the characteristics of the analogue anti-aliasing filter (for example, gamma-band activity up to 100 Hz would call for a sampling rate of roughly 500 Hz). This underscores the importance of planning all data analyses prior to data acquisition, ideally during the design of the study.

In task-related designs, MEEG activity can be classified as evoked (i.e., phase-locked to task events/stimulus presentation) or induced (i.e., related to the event, but not exactly phase-locked to it). Hence, it is important to specify what type of activity is being studied. The domain in which the analysis proceeds (time and frequency, or frequency alone) should be specified, as should the spectral decomposition method used (e.g., wavelets, spectral analyses), and whether the data are expressed in sensor or source space. These methods can be the precursor to the assessment of functional connectivity (see Section 4.6).
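
One common way to separate the two, illustrated below with Morlet wavelets in MNE-Python, is to compute total power from the epochs and induced power from epochs with the evoked response subtracted; the frequency range and wavelet cycles are illustrative choices, and epochs is assumed to be an existing mne.Epochs object (e.g., from the earlier artifact-rejection sketch).

    import numpy as np
    from mne.time_frequency import tfr_morlet

    freqs = np.arange(4, 41, 2)         # 4-40 Hz in 2 Hz steps (illustrative)
    n_cycles = freqs / 2.0              # frequency-dependent wavelet length

    # Total (evoked + induced) power, averaged over epochs.
    power_total = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
                             return_itc=False, average=True)

    # Induced power: subtract the evoked (phase-locked) response from each
    # epoch before the time-frequency decomposition.
    epochs_induced = epochs.copy().subtract_evoked()
    power_induced = tfr_morlet(epochs_induced, freqs=freqs, n_cycles=n_cycles,
                               return_itc=False, average=True)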

The spectral decomposition algorithm, as well as the parameters used, should be specified in sufficient detail, since these crucially affect the outcome. Depending on the decomposition method used (e.g., wavelet convolution, Fourier decomposition, Hilbert transformation of bandpass-filtered signals, or parametric spectral estimation), one should describe the type of wavelet (including its tuning parameters), the exact frequency or time-frequency parameters (frequency and time resolutions), the exact frequency bands, the number of data points, zero padding, windowing (e.g., a Hann (Hanning) window), and any spectral smoothing. It is relevant to note that the required frequency resolution is defined as the minimum frequency interval that two distinct underlying oscillatory components need to have in order to be dissociated in the analysis (Bloomfield, 2004; Boashash, 2003). This should not be confused with the increments at which the frequency values are reported (e.g., when smoothing or oversampling is used in the analyses). When using overlapping windows (e.g., in Welch's method) or multi-taper windows for robust estimation, the resulting spectral smoothing may cause closely spaced narrow frequency bands to blend. This should be carefully considered and reported.
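
The SciPy sketch below shows where these choices (window type and length, overlap, zero padding) enter a Welch power-spectrum estimate on surrogate single-channel data; the sampling rate and segment lengths are arbitrary example values.

    import numpy as np
    from scipy import signal

    fs = 1000.0                            # sampling rate in Hz (assumed)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(int(10 * fs))  # 10 s of surrogate single-channel data

    # Welch PSD with the parameters that should be reported: window type and
    # length, overlap, and zero padding (nfft).  The frequency resolution is
    # roughly fs / nperseg (about 0.5 Hz here); nfft only refines the grid at
    # which values are reported, not the true resolution.
    freqs, psd = signal.welch(x, fs=fs, window="hann",
                              nperseg=2048, noverlap=1024, nfft=4096)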

Source modelling
MEEG data are recorded from outside the head. Source modelling is an attempt to explain the spatio-temporal pattern of the recorded data in sensor space as resulting from the activity of specific neural sources within the brain (within source space), a process known as solving the inverse problem. The inverse problem is mathematically ill-posed: there is no unique solution, so a solution must be estimated using additional assumptions. Source modelling first requires a solution of the forward problem, which predicts the effect of the tissues in the head on the propagation of activity to the MEEG sensors. These procedures require a volume conduction model of the head and a source model, both of which can crucially influence the accuracy and reliability of the results (Baillet et al., 2001; Michel & He, 2018). Practically, the forward model (or lead field matrix) describes the magnetic field or potential distributions in sensor space that result from a predefined set of (unit amplitude) sources. The sources are typically defined either in a volumetric grid or on a cortically constrained sheet. Information from the forward model is then used to estimate the solution of the inverse problem, in which the measured MEEG signals are attributed to active sources within the brain. It is important to note that source modelling procedures essentially provide approximations of the inverse solution as solved under very specific assumptions or constraints.

In addition to the MEEG data itself, two other types of data are needed to generate solutions to the forward and inverse problems: the spatial locations of the sensors and fiducials relative to the head (Section 3.2), and the geometric data with which the MEEG data will be coregistered (e.g., a spherical head model or a structural MRI). The procedure used to coregister the locations of the measurement sensors and fiducials with the geometric data must be described (see Section 2.1 for definitions; Section 3.2 for sensor digitization methods). If using structural MRI data, it should be made clear whether a normalized structural MRI volume, such as the MNI152 template, or individual participant MRIs are being used for the data analyses. If high-resolution structural MRIs are acquired in individual participants, all data acquisition parameters, as well as normalization procedures, should be described.

It is essential that all details of the head model and the source model are given. For EEG, the numerical method (e.g., boundary element modelling (BEM), finite element modelling (FEM)) used to model the head, reconstructed from the structural MRI, must be reported, and the values of electrical conductivity of the different tissues used in the calculations must be specified. This is less of a problem for MEG, where magnetic fields are not greatly distorted by passing through different tissue types (Baillet, 2017). For the volume conductor model, the type of distribution of the solution space and the distance between points in the space need to be reported, as well as the method used to extract the gray matter mantle if the solution points are restricted to it. In this case, the head model should ideally be based on an individual MRI of the participant's head, especially in clinical studies where brain lesions or malformations may be involved. In certain clinical settings, approximate head models might be adequate, although their limitations should be explicitly acknowledged (Valdés-Hernández et al., 2009). The source localization method (e.g., equivalent current dipole fitting, distributed model, dipole scanning), the software and its version (e.g., BESA, Brainstorm, Fieldtrip, LORETA, MNE, Nutmeg, SPM, etc.) must be reported, including the parameters used (e.g., the regularization parameter) and an appropriate reference to the technical paper describing the method in detail.
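
To show how these choices map onto an actual pipeline, the following is a hedged MNE-Python sketch of a BEM-based forward model and a minimum-norm inverse; the FreeSurfer subject, conductivities, and regularization are illustrative, and evoked, trans, and noise_cov are assumed to exist from earlier steps.

    import mne
    from mne.minimum_norm import make_inverse_operator, apply_inverse

    subjects_dir = "/path/to/freesurfer/subjects"   # hypothetical FreeSurfer dir
    subject = "sub-01"

    # Source space on the cortical sheet and a 3-layer BEM head model; for EEG
    # the tissue conductivities (S/m, here scalp/skull/brain) must be reported.
    src = mne.setup_source_space(subject, spacing="oct6", subjects_dir=subjects_dir)
    model = mne.make_bem_model(subject, ico=4, conductivity=(0.3, 0.006, 0.3),
                               subjects_dir=subjects_dir)
    bem = mne.make_bem_solution(model)

    # Forward solution (lead field) from the coregistration transform, the
    # source space and the head model; evoked, trans and noise_cov are assumed.
    fwd = mne.make_forward_solution(evoked.info, trans=trans, src=src, bem=bem)

    # Minimum-norm inverse with dSPM; report the regularization (lambda2 = 1/SNR^2).
    inv = make_inverse_operator(evoked.info, fwd, noise_cov)
    stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")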

Connectivity analysis
We refer here to connectivity analyses as any methods that aim to detect functional coupling between two or more channels or sources. It is important to report and justify the use of either sensor or source space for the calculation of derived metrics of coupling (e.g., network measures such as centrality or complexity). A recent general reference on connectivity measures can be found in O’Neill et al. (2018).

When using multivariate measures (either data-driven or model-based), such as partial coherence and multiple coherence, the exact variables that were analysed, and the exact variables with respect to which the data were partialised, marginalised, or conditioned, should be specified. When reporting measures of data-driven spectral coherence or synchrony (Halliday et al., 1995), the following should be considered and reported: the exact formulation (or a reference to it), whether the measure has been debiased, and any subtraction or normalisation with respect to an experimental condition or a mathematical criterion. In the case of autoregressive (AR)-based multivariate modelling (e.g., the partial directed coherence family of measures; Baccala & Sameshima, 2001), the exact model parameters (number of variables, data points and window lengths, as well as the estimation methods and fitting criteria) should be reported.
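
As a small illustration of the reporting items for a data-driven coherence estimate, the SciPy sketch below computes the magnitude-squared coherence between two surrogate channels sharing a common 10 Hz component; the sampling rate, window, and segment lengths are arbitrary example values.

    import numpy as np
    from scipy import signal

    fs = 1000.0
    rng = np.random.default_rng(1)
    n = int(30 * fs)                      # 30 s of surrogate data
    t = np.arange(n) / fs

    # Two channels sharing a common 10 Hz component plus independent noise.
    common = np.sin(2 * np.pi * 10 * t)
    x = common + rng.standard_normal(n)
    y = common + rng.standard_normal(n)

    # Magnitude-squared coherence via Welch segments; the window type and
    # length, overlap, and resulting frequency resolution should be reported.
    # Note that such a sensor-level estimate is not, by itself, a measure of
    # neural connectivity (see below).
    freqs, coh = signal.coherence(x, y, fs=fs, window="hann",
                                  nperseg=2048, noverlap=1024)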

While the committee agrees that statistical metrics of dependency can be obtained at the channel level, it should be clear that these are not per se measures of neural connectivity (Haufe et al., 2012). The latter can only be obtained by an inferential process that compensates for volume conduction and for spurious connections due to unobserved common sources or cascade effects. In spite of that, dependency measures can be useful, for example, as biomarkers. The possible insight into brain function derived from these measures should be critically discussed. This is particularly important since the interpretation of MEEG-based connectivity metrics may be confounded by aspects of the data that do not directly reflect true neural events (Schoffelen & Gross, 2009; Valdes-Sosa et al., 2011). Inference about connectivity between neural masses can only be performed with dependency measures at the source level and correct inferential procedures.
