pyriemann.regression.SVR

class pyriemann.regression.SVR(*, metric='riemann', kernel_fct=None, Cref=None, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=-1)

Regression by support-vector machine.

Support-vector machine (SVM) regression with a precomputed Riemannian kernel matrix according to different metrics, extending the idea described in [1] to regression.

Parameters
metric : {‘riemann’, ‘euclid’, ‘logeuclid’}, default=’riemann’

Metric for kernel matrix computation.

Cref : None | ndarray, shape (n_channels, n_channels)

Reference point for kernel matrix computation. If None, the mean of the training data according to the metric is used.

kernel_fct : ‘precomputed’ | callable

If ‘precomputed’, the kernel matrix for datasets X and Y is estimated according to pyriemann.utils.kernel(X, Y, Cref, metric). If callable, the callable is passed as the kernel parameter to sklearn.svm.SVR() [2]. The callable has to be of the form kernel(X, Y, Cref, metric); see the sketch after this parameter list.

tol : float, default=1e-3

Tolerance for stopping criterion.

C : float, default=1.0

Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty.

epsilon : float, default=0.1

Epsilon in the epsilon-SVR model. It specifies the width of the epsilon-tube: no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value.

shrinking : bool, default=True

Whether to use the shrinking heuristic.

cache_size : float, default=200

Specify the size of the kernel cache (in MB).

verbose : bool, default=False

Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context.

max_iter : int, default=-1

Hard limit on iterations within the solver, or -1 for no limit.
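
A minimal sketch of a custom kernel callable with the form expected by kernel_fct, assuming pyriemann.utils.kernel.kernel accepts Cref and metric as keyword arguments; the name scaled_kernel and the 0.5 rescaling are purely illustrative:

>>> from pyriemann.utils.kernel import kernel
>>> from pyriemann.regression import SVR
>>> def scaled_kernel(X, Y, Cref, metric):
...     # callable of the form kernel(X, Y, Cref, metric): compute the kernel
...     # matrix between X and Y, then rescale it by an arbitrary constant
...     K = kernel(X, Y, Cref=Cref, metric=metric)
...     return 0.5 * K
>>> svr = SVR(metric='riemann', kernel_fct=scaled_kernel)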

Notes

New in version 0.3.
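
A usage sketch on synthetic data, for illustration only: random SPD matrices are built as A A^T plus a ridge, and continuous targets are drawn at random:

>>> import numpy as np
>>> from pyriemann.regression import SVR
>>> rng = np.random.default_rng(42)
>>> n_matrices, n_channels = 40, 6
>>> A = rng.standard_normal((n_matrices, n_channels, n_channels))
>>> X = A @ A.transpose(0, 2, 1) + n_channels * np.eye(n_channels)  # SPD matrices
>>> y = rng.standard_normal(n_matrices)  # arbitrary continuous targets
>>> svr = SVR(metric='riemann', C=1.0, epsilon=0.1).fit(X, y)
>>> svr.predict(X).shape
(40,)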

References

1

Classification of covariance matrices using a Riemannian-based kernel for BCI applications. A. Barachant, S. Bonnet, M. Congedo and C. Jutten. Neurocomputing, Elsevier, 2013, 112, pp. 172-178.

2

https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html

Attributes
data_ : ndarray, shape (n_matrices, n_channels, n_channels)

If fitted, training data.

__init__(*, metric='riemann', kernel_fct=None, Cref=None, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=-1)

Init.

property coef_

Weights assigned to the features when kernel=”linear”.

Returns
ndarray of shape (1, n_features)
fit(X, y, sample_weight=None)

Fit.

Parameters
X : ndarray, shape (n_matrices, n_channels, n_channels)

Set of SPD matrices.

y : ndarray, shape (n_matrices,)

Target values for each matrix.

sample_weight : None | ndarray, shape (n_matrices,), default=None

Weights for each matrix, rescaling C per matrix. Higher weights force the regressor to put more emphasis on these matrices. If None, equal weights are used.

Returns
self : SVR instance

The SVR instance.
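
A short sketch of fitting with per-matrix weights; the toy data and the linearly increasing weights are illustrative only:

>>> import numpy as np
>>> from pyriemann.regression import SVR
>>> rng = np.random.default_rng(0)
>>> A = rng.standard_normal((20, 4, 4))
>>> X = A @ A.transpose(0, 2, 1) + 4 * np.eye(4)  # toy SPD matrices
>>> y = rng.standard_normal(20)
>>> weights = np.linspace(0.5, 2.0, num=20)  # later matrices emphasized more
>>> svr = SVR(metric='riemann').fit(X, y, sample_weight=weights)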

get_params(deep=True)

Get parameters for this estimator.

Parameters
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : dict

Parameter names mapped to their values.

property n_support_

Number of support vectors for each class.

predict(X)

Perform regression on samples in X.

For a one-class model, +1 (inlier) or -1 (outlier) is returned.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)

For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train).

Returns
y_pred : ndarray of shape (n_samples,)

The predicted values.

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters
X : array-like of shape (n_samples, n_features)

Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True values for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns
score : float

\(R^2\) of self.predict(X) w.r.t. y.

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23, to be consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
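
As a sanity-check sketch, score should match sklearn.metrics.r2_score applied to the predictions, per the definition above; the toy data are illustrative only:

>>> import numpy as np
>>> from sklearn.metrics import r2_score
>>> from pyriemann.regression import SVR
>>> rng = np.random.default_rng(1)
>>> A = rng.standard_normal((30, 4, 4))
>>> X = A @ A.transpose(0, 2, 1) + 4 * np.eye(4)  # toy SPD matrices
>>> y = rng.standard_normal(30)
>>> svr = SVR(metric='riemann').fit(X, y)
>>> bool(np.isclose(svr.score(X, y), r2_score(y, svr.predict(X))))
True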

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params : dict

Estimator parameters.

Returns
self : estimator instance

Estimator instance.
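
A brief illustrative sketch of updating parameters and reading them back:

>>> from pyriemann.regression import SVR
>>> svr = SVR().set_params(metric='logeuclid', C=10.0)
>>> svr.get_params()['metric']
'logeuclid'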