pyriemann.spatialfilters.AJDC

class pyriemann.spatialfilters.AJDC(window=128, overlap=0.5, fmin=None, fmax=None, fs=None, dim_red=None, verbose=True)

AJDC algorithm.

The approximate joint diagonalization of Fourier cospectral matrices (AJDC) [1] is a versatile tool for blind source separation (BSS) tasks based on Second-Order Statistics (SOS), estimating spectrally uncorrelated sources.

It can be applied:

  • on a single subject, to solve the classical BSS problem [1],

  • on several subjects, to solve the group BSS (gBSS) problem [2],

  • on several experimental conditions (e.g., baseline versus task), to exploit the diversity of source energy between conditions in addition to generic coloration and time-varying energy [1].

AJDC estimates Fourier cospectral matrices by Welch’s method and applies a trace-normalization. If necessary, it averages cospectra across subjects and concatenates them along experimental conditions. A dimension reduction and a whitening are then applied to the cospectra. An approximate joint diagonalization (AJD) [3] estimates the joint diagonalizer, which is not constrained to be orthogonal. Finally, forward and backward spatial filters are computed.
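The following sketch is not part of the original documentation; it illustrates the basic workflow on simulated random data, and the settings (fs=128, fmin=1, fmax=32) and shapes are arbitrary example values.

import numpy as np
from pyriemann.spatialfilters import AJDC

rng = np.random.default_rng(42)
# Signal in channel space: (n_subjects, n_conditions, n_channels, n_times)
X = rng.standard_normal((2, 1, 8, 2048))

# Arbitrary example settings: 128-sample FFT window, 1-32 Hz band, 128 Hz sampling
ajdc = AJDC(window=128, overlap=0.5, fmin=1, fmax=32, fs=128,
            dim_red={'expl_var': 0.99}, verbose=False)
ajdc.fit(X)

# Forward filtering of new epochs: (n_matrices, n_channels, n_times) -> source space
epochs = rng.standard_normal((5, 8, 512))
sources = ajdc.transform(epochs)
print(sources.shape)  # (5, ajdc.n_sources_, 512)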

Parameters:
window : int, default=128

The length of the FFT window used for spectral estimation.

overlap : float, default=0.5

The fraction of overlap between consecutive windows.

fmin : float | None, default=None

The minimal frequency to be returned. Since BSS models assume zero-mean processes, the first cospectrum (0 Hz) must be excluded.

fmax : float | None, default=None

The maximal frequency to be returned.

fs : float | None, default=None

The sampling frequency of the signal.

dim_red : None | dict, default=None

Parameter for the dimension reduction of cospectra, needed because Pham’s AJD is sensitive to matrix conditioning (see the sketch after this parameter list).

If None:

no dimension reduction during whitening.

If {'n_components': val}:

dimension reduction defining the number of components; val must be an integer greater than 1.

If {'expl_var': val}:

dimension reduction selecting the number of components such that the amount of explained variance is greater than the fraction specified by val. val must be a float in (0, 1], typically 0.99.

If {'max_cond': val}:

dimension reduction selecting the number of components such that the condition number of the mean matrix is lower than val. This threshold has a physiological interpretation, because it can be viewed as the ratio between the power of the strongest component (usually, the eye-blink source) and the power of the weakest component you do not want to keep (acquisition sensor noise). val must be a float strictly greater than 1, typically 100.

If {'warm_restart': val}:

dimension reduction defining the number of components from an initial joint diagonalizer, then running AJD from this solution. val must be a square ndarray.

verbose : bool, default=True

Verbose flag.
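As an illustration of the dim_red options above, here is a sketch that is not part of the original documentation; the numeric values are arbitrary examples, not recommendations.

from pyriemann.spatialfilters import AJDC

AJDC(dim_red=None)                 # no dimension reduction during whitening
AJDC(dim_red={'n_components': 6})  # keep a fixed number of components
AJDC(dim_red={'expl_var': 0.99})   # keep components explaining 99% of the variance
AJDC(dim_red={'max_cond': 100})    # keep components while the condition number stays below 100
# AJDC(dim_red={'warm_restart': B})  # B is a square ndarray used as initial joint diagonalizer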

See also

CospCovariances

Notes

New in version 0.2.7.

References

[1]

M. Congedo, C. Gouy-Pailler, C. Jutten. "On the blind source separation of human electroencephalogram by approximate joint diagonalization of second order statistics". Clinical Neurophysiology, Elsevier, 2008, 119 (12), pp. 2677-2686.

[2]

M. Congedo, R. John, D. de Ridder, L. Prichep. "Group independent component analysis of resting state EEG in large normative samples". International Journal of Psychophysiology, Elsevier, 2010, 78, pp. 89-99.

[3]

D.-T. Pham. "Joint approximate diagonalization of positive definite Hermitian matrices". SIAM Journal on Matrix Analysis and Applications, Volume 22, Issue 4, 2000.

Attributes:
n_channels_ : int

If fit, the number of channels of the signal.

freqs_ : ndarray, shape (n_freqs,)

If fit, the frequencies associated to cospectra.

n_sources_ : int

If fit, the number of components of the source space.

diag_filters_ : ndarray, shape (n_sources_, n_sources_)

If fit, the diagonalization filters, also called joint diagonalizer.

forward_filters_ : ndarray, shape (n_sources_, n_channels_)

If fit, the spatial filters used to transform signal into source, also called demixing or separating matrix.

backward_filters_ : ndarray, shape (n_channels_, n_sources_)

If fit, the spatial filters used to transform source into signal, also called mixing matrix.
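A minimal sketch, not part of the original documentation, of inspecting these attributes after fitting on random data (all values are arbitrary):

import numpy as np
from pyriemann.spatialfilters import AJDC

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 1, 8, 2048))  # (n_subjects, n_conditions, n_channels, n_times)
ajdc = AJDC(fmin=1, fmax=32, fs=128, verbose=False).fit(X)

print(ajdc.n_channels_)              # 8
print(ajdc.freqs_)                   # frequencies of the retained cospectra
print(ajdc.n_sources_)               # number of estimated sources
print(ajdc.forward_filters_.shape)   # (n_sources_, n_channels_)
print(ajdc.backward_filters_.shape)  # (n_channels_, n_sources_)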

__init__(window=128, overlap=0.5, fmin=None, fmax=None, fs=None, dim_red=None, verbose=True)

Init.

fit(X, y=None)

Fit.

Compute and diagonalize cospectra, to estimate forward and backward spatial filters.

Parameters:
X : ndarray, shape (n_subjects, n_conditions, n_channels, n_times) | list of n_subjects of list of n_conditions ndarray of shape (n_channels, n_times), with same n_conditions and n_channels but different n_times

Multi-channel time-series in channel space, acquired for different subjects and under different experimental conditions.

y : None

Currently not used, here for compatibility with sklearn API.

Returns:
self : AJDC instance

The AJDC instance.
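A sketch, not part of the original documentation, of the list-based input format, where recordings share n_conditions and n_channels but differ in n_times (all values are arbitrary):

import numpy as np
from pyriemann.spatialfilters import AJDC

rng = np.random.default_rng(1)
n_channels = 8
# Two subjects, one condition each, with different recording lengths
X = [
    [rng.standard_normal((n_channels, 1500))],
    [rng.standard_normal((n_channels, 2500))],
]
ajdc = AJDC(fmin=1, fmax=32, fs=128, dim_red={'max_cond': 100}, verbose=False)
ajdc.fit(X)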

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X : array-like of shape (n_samples, n_features)

Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params : dict

Additional fit parameters.

Returns:
X_new : ndarray of shape (n_samples, n_features_new)

Transformed array.

get_metadata_routing()

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_src_expl_var(X)

Estimate explained variances of sources.

Estimate explained variances of sources, see Appendix D in [1].

Parameters:
X : ndarray, shape (n_matrices, n_channels, n_times)

Multi-channel time-series in channel space.

Returns:
src_var : ndarray, shape (n_matrices, n_sources)

Explained variance for each source.
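A sketch, not part of the original documentation, showing how the per-source explained variance could be used to rank sources (data and settings are arbitrary):

import numpy as np
from pyriemann.spatialfilters import AJDC

rng = np.random.default_rng(3)
ajdc = AJDC(fmin=1, fmax=32, fs=128, verbose=False)
ajdc.fit(rng.standard_normal((1, 1, 8, 2048)))

epochs = rng.standard_normal((10, 8, 512))         # (n_matrices, n_channels, n_times)
src_var = ajdc.get_src_expl_var(epochs)            # (10, n_sources_)
ranking = np.argsort(src_var.mean(axis=0))[::-1]   # sources sorted by mean explained variance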

inverse_transform(X, supp=None)

Transform source space to channel space.

Transform source space to channel space, applying backward spatial filters, with the possibility to suppress some sources, like in BSS filtering/denoising.

Parameters:
X : ndarray, shape (n_matrices, n_sources, n_times)

Multi-channel time-series in source space.

supp : list of int | None, default=None

Indices of sources to suppress. If None, no source suppression.

Returns:
signal : ndarray, shape (n_matrices, n_channels, n_times)

Multi-channel time-series in channel space.
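A denoising-style sketch, not part of the original documentation: filter epochs to source space, suppress one source, and map back to channel space. The suppressed index is arbitrary; in practice it would be chosen by inspecting the sources.

import numpy as np
from pyriemann.spatialfilters import AJDC

rng = np.random.default_rng(4)
ajdc = AJDC(fmin=1, fmax=32, fs=128, verbose=False)
ajdc.fit(rng.standard_normal((1, 1, 8, 2048)))

epochs = rng.standard_normal((4, 8, 512))             # (n_matrices, n_channels, n_times)
sources = ajdc.transform(epochs)                      # (n_matrices, n_sources_, n_times)
denoised = ajdc.inverse_transform(sources, supp=[0])  # suppress source 0
print(denoised.shape)                                 # (4, 8, 512)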

set_inverse_transform_request(*, supp: bool | None | str = '$UNCHANGED$') → AJDC

Request metadata passed to the inverse_transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to inverse_transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to inverse_transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
supp : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for supp parameter in inverse_transform.

Returns:
self : object

The updated object.

set_output(*, transform=None)

Set output container.

See the scikit-learn example “Introducing the set_output API” for an example on how to use the API.

Parameters:
transform : {“default”, “pandas”}, default=None

Configure output of transform and fit_transform.

  • “default”: Default output format of a transformer

  • “pandas”: DataFrame output

  • None: Transform configuration is unchanged

Returns:
self : estimator instance

Estimator instance.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

transform(X)

Transform channel space to source space.

Transform channel space to source space, applying forward spatial filters.

Parameters:
X : ndarray, shape (n_matrices, n_channels, n_times)

Multi-channel time-series in channel space.

Returns:
source : ndarray, shape (n_matrices, n_sources, n_times)

Multi-channel time-series in source space.