pyriemann.classification.FgMDM
- class pyriemann.classification.FgMDM(metric='riemann', tsupdate=False, n_jobs=1)
Classification by Minimum Distance to Mean with geodesic filtering.
Apply the geodesic filtering described in [1], then classify using MDM. The geodesic filtering is achieved in tangent space with a Linear Discriminant Analysis, then data are projected back to the manifold and classified with a regular MDM. This is basically a pipeline of FGDA and MDM.
- Parameters:
- metric : string | dict, default="riemann"
Metric used for reference matrix estimation (for the list of supported metrics, see pyriemann.utils.mean.mean_covariance()), for distance estimation (see pyriemann.utils.distance.distance()) and for tangent space map (see pyriemann.utils.tangent_space.tangent_space()). The metric can be a dict with three keys, "mean", "dist" and "map", in order to pass different metrics.
- tsupdate : bool, default=False
Activate tangent space update for covariate shift correction between training and test, as described in [2]. This is not compatible with an online implementation. Performance is better when the number of matrices for prediction is higher.
- n_jobs : int, default=1
The number of jobs to use for the computation. This works by computing each of the class centroids in parallel. If -1, all CPUs are used. If 1 is given, no parallel computing code is used at all, which is useful for debugging. For n_jobs below -1, (n_cpus + 1 + n_jobs) CPUs are used. Thus for n_jobs = -2, all CPUs but one are used.
- Attributes:
- classes_ : ndarray, shape (n_classes,)
Labels for each class.
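A minimal usage sketch (not part of the original docstring); the synthetic SPD matrices and labels below are made up for illustration:

import numpy as np
from pyriemann.classification import FgMDM

# Random SPD matrices: A @ A.T plus a small ridge is symmetric positive definite.
rng = np.random.RandomState(42)
n_matrices, n_channels = 40, 8
A = rng.randn(n_matrices, n_channels, n_channels)
X = A @ A.transpose(0, 2, 1) + 1e-3 * np.eye(n_channels)
y = rng.randint(0, 2, size=n_matrices)

clf = FgMDM(metric="riemann", tsupdate=False)
clf.fit(X, y)
print(clf.classes_)              # labels seen during fit
print(clf.predict(X[:5]))        # nearest-centroid predictions after FGDA filtering
print(clf.predict_proba(X[:5]))  # softmax probabilities for each class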
See also
MDM
FGDA
TangentSpace
References
[1] A. Barachant, S. Bonnet, M. Congedo and C. Jutten. "Riemannian geometry applied to BCI classification". 9th International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA 2010), LNCS vol. 6365, 2010, pp. 629-636.
[2] A. Barachant, S. Bonnet, M. Congedo and C. Jutten. "Classification of covariance matrices using a Riemannian-based kernel for BCI applications". Neurocomputing, Elsevier, 2013, 112, pp. 172-178.
- __init__(metric='riemann', tsupdate=False, n_jobs=1)
Init.
- fit(X, y, sample_weight=None)
Fit FgMDM.
- Parameters:
- X : ndarray, shape (n_matrices, n_channels, n_channels)
Set of SPD matrices.
- y : ndarray, shape (n_matrices,)
Labels for each matrix.
- sample_weight : None | ndarray, shape (n_matrices,), default=None
Weights for each matrix. If None, it uses equal weights.
- Returns:
- self : FgMDM instance
The FgMDM instance.
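A short sketch of weighted fitting; the weights are hypothetical and the data synthetic:

import numpy as np
from pyriemann.classification import FgMDM

rng = np.random.RandomState(0)
A = rng.randn(30, 6, 6)
X = A @ A.transpose(0, 2, 1) + 1e-3 * np.eye(6)
y = rng.randint(0, 2, size=30)

# Hypothetical weights giving more importance to the most recent matrices.
sample_weight = np.linspace(0.5, 1.5, len(X))
clf = FgMDM().fit(X, y, sample_weight=sample_weight)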
- fit_transform(X, y=None, **fit_params)
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Input samples.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
- **fit_params : dict
Additional fit parameters.
- Returns:
- X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
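The signature above is the generic scikit-learn one; for FgMDM, fit_transform fits the model and then returns the distances produced by transform. A brief sketch on synthetic data (made up for illustration):

import numpy as np
from pyriemann.classification import FgMDM

rng = np.random.RandomState(1)
A = rng.randn(30, 6, 6)
X = A @ A.transpose(0, 2, 1) + 1e-3 * np.eye(6)
y = rng.randint(0, 2, size=30)

dist = FgMDM().fit_transform(X, y)
print(dist.shape)  # (30, 2): one distance per class centroid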
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- predict(X)
Get the predictions after FGDA filtering.
- Parameters:
- X : ndarray, shape (n_matrices, n_channels, n_channels)
Set of SPD matrices.
- Returns:
- pred : ndarray of int, shape (n_matrices,)
Predictions for each matrix according to the nearest centroid.
- predict_proba(X)
Predict proba using softmax after FGDA filtering.
- Parameters:
- X : ndarray, shape (n_matrices, n_channels, n_channels)
Set of SPD matrices.
- Returns:
- prob : ndarray, shape (n_matrices, n_classes)
The softmax probabilities for each class.
- score(X, y, sample_weight=None)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Test samples.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- score : float
Mean accuracy of self.predict(X) w.r.t. y.
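A brief scoring sketch; the data and train/test split are illustrative only:

import numpy as np
from sklearn.model_selection import train_test_split
from pyriemann.classification import FgMDM

rng = np.random.RandomState(2)
A = rng.randn(60, 6, 6)
X = A @ A.transpose(0, 2, 1) + 1e-3 * np.eye(6)
y = rng.randint(0, 2, size=60)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
clf = FgMDM().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy of clf.predict(X_test) w.r.t. y_test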
- set_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → FgMDM
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in fit.
- Returns:
- self : object
The updated object.
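A hedged sketch of metadata routing, assuming a scikit-learn version where Pipeline supports routing (roughly >= 1.4) and enable_metadata_routing=True; the data and weights are made up. set_score_request below works the same way for metadata routed to score.

import numpy as np
import sklearn
from sklearn.pipeline import Pipeline
from pyriemann.classification import FgMDM

sklearn.set_config(enable_metadata_routing=True)

rng = np.random.RandomState(3)
A = rng.randn(30, 6, 6)
X = A @ A.transpose(0, 2, 1) + 1e-3 * np.eye(6)
y = rng.randint(0, 2, size=30)
w = rng.rand(30)

# Request that sample_weight given to the pipeline is routed to FgMDM.fit.
clf = FgMDM().set_fit_request(sample_weight=True)
pipe = Pipeline([("fgmdm", clf)])
pipe.fit(X, y, sample_weight=w)

sklearn.set_config(enable_metadata_routing=False)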
- set_output(*, transform=None)
Set output container.
See Introducing the set_output API for an example on how to use the API.
- Parameters:
- transform : {"default", "pandas", "polars"}, default=None
Configure output of transform and fit_transform.
- "default": Default output format of a transformer
- "pandas": DataFrame output
- "polars": Polars output
- None: Transform configuration is unchanged
Added in version 1.4: "polars" option was added.
- Returns:
- self : estimator instance
Estimator instance.
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
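A quick sketch of the standard scikit-learn parameter API (covers both get_params and set_params):

from pyriemann.classification import FgMDM

clf = FgMDM()
print(clf.get_params())  # e.g. {'metric': 'riemann', 'n_jobs': 1, 'tsupdate': False}

# Update hyper-parameters in place; nested estimators would use <component>__<parameter>.
clf.set_params(tsupdate=True, n_jobs=-1)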
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → FgMDM
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
- Returns:
- self : object
The updated object.
- transform(X)
Get the distance to each centroid after FGDA filtering.
- Parameters:
- X : ndarray, shape (n_matrices, n_channels, n_channels)
Set of SPD matrices.
- Returns:
- dist : ndarray, shape (n_matrices, n_classes)
The distance to each centroid according to the metric.
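A short sketch of the distances returned by transform, on synthetic three-class data (made up for illustration):

import numpy as np
from pyriemann.classification import FgMDM

rng = np.random.RandomState(7)
A = rng.randn(45, 6, 6)
X = A @ A.transpose(0, 2, 1) + 1e-3 * np.eye(6)
y = rng.randint(0, 3, size=45)

clf = FgMDM().fit(X, y)
dist = clf.transform(X)
print(dist.shape)  # (45, 3): distance to each class centroid after FGDA filtering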
Examples using pyriemann.classification.FgMDM
Ensemble learning on functional connectivity