is_stationary() [source]
Returns whether the kernel is stationary. | sklearn.modules.generated.sklearn.gaussian_process.kernels.product#sklearn.gaussian_process.kernels.Product.is_stationary |
property n_dims
Returns the number of non-fixed hyperparameters of the kernel. | sklearn.modules.generated.sklearn.gaussian_process.kernels.product#sklearn.gaussian_process.kernels.Product.n_dims |
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. | sklearn.modules.generated.sklearn.gaussian_process.kernels.product#sklearn.gaussian_process.kernels.Product.requires_vector_input |
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self | sklearn.modules.generated.sklearn.gaussian_process.kernels.product#sklearn.gaussian_process.kernels.Product.set_params |
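The `<component>__<parameter>` syntax can be sketched on a product kernel built with the `*` operator; the two factors of such a kernel are exposed as the attributes `k1` and `k2`:

```python
# Minimal sketch: updating a nested parameter on a Product kernel.
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)  # a Product kernel
# k2 is the RBF factor, so its length_scale is addressed as k2__length_scale:
kernel.set_params(k2__length_scale=2.0)
print(kernel.k2.length_scale)  # 2.0
```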
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
theta : ndarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.product#sklearn.gaussian_process.kernels.Product.theta |
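As a sketch of the log-transform, for a product kernel theta concatenates the log-transformed non-fixed hyperparameters of both factors:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

kernel = ConstantKernel(2.0) * RBF(length_scale=3.0)
# k1's constant_value and k2's length_scale, both on a log-scale:
print(kernel.theta)  # [log(2), log(3)]
```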
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
X : array-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Y : array-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradient : bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Returns
K : ndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True. | sklearn.modules.generated.sklearn.gaussian_process.kernels.product#sklearn.gaussian_process.kernels.Product.__call__ |
class sklearn.gaussian_process.kernels.RationalQuadratic(length_scale=1.0, alpha=1.0, length_scale_bounds=(1e-05, 100000.0), alpha_bounds=(1e-05, 100000.0)) [source]
Rational Quadratic kernel. The RationalQuadratic kernel can be seen as a scale mixture (an infinite sum) of RBF kernels with different characteristic length scales. It is parameterized by a length scale parameter \(l>0\) and a scale mixture parameter \(\alpha>0\). Only the isotropic variant where length_scale \(l\) is a scalar is supported at the moment. The kernel is given by: \[k(x_i, x_j) = \left( 1 + \frac{d(x_i, x_j)^2 }{ 2\alpha l^2}\right)^{-\alpha}\] where \(\alpha\) is the scale mixture parameter, \(l\) is the length scale of the kernel and \(d(\cdot,\cdot)\) is the Euclidean distance. For advice on how to set the parameters, see e.g. [1]. Read more in the User Guide. New in version 0.18. Parameters
length_scale : float > 0, default=1.0
The length scale of the kernel.
alpha : float > 0, default=1.0
Scale mixture parameter
length_scale_bounds : pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length_scale’. If set to “fixed”, ‘length_scale’ cannot be changed during hyperparameter tuning.
alpha_bounds : pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘alpha’. If set to “fixed”, ‘alpha’ cannot be changed during hyperparameter tuning. Attributes
bounds
Returns the log-transformed bounds on the theta.
hyperparameter_alpha
hyperparameter_length_scale
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. References
1
David Duvenaud (2014). “The Kernel Cookbook: Advice on Covariance functions”. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import RationalQuadratic
>>> X, y = load_iris(return_X_y=True)
>>> kernel = RationalQuadratic(length_scale=1.0, alpha=1.5)
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9733...
>>> gpc.predict_proba(X[:2,:])
array([[0.8881..., 0.0566..., 0.05518...],
[0.8678..., 0.0707... , 0.0614...]])
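The closed form above can be checked numerically against the kernel's `__call__` (a sketch with arbitrarily chosen test points):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RationalQuadratic

length_scale, alpha = 1.5, 2.0
kernel = RationalQuadratic(length_scale=length_scale, alpha=alpha)

x_i = np.array([[0.0, 0.0]])
x_j = np.array([[1.0, 2.0]])

# Manual evaluation of k(x_i, x_j) = (1 + d^2 / (2 * alpha * l^2)) ** (-alpha)
d2 = np.sum((x_i - x_j) ** 2)
expected = (1.0 + d2 / (2 * alpha * length_scale ** 2)) ** (-alpha)

print(np.allclose(kernel(x_i, x_j)[0, 0], expected))  # True
```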
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Y : ndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradient : bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
K : ndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims)
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
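A sketch of the returned shapes when eval_gradient is True (RationalQuadratic has two non-fixed hyperparameters, so n_dims is 2):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RationalQuadratic

X = np.random.RandomState(0).normal(size=(5, 3))
kernel = RationalQuadratic(length_scale=1.0, alpha=1.0)

K, K_gradient = kernel(X, eval_gradient=True)  # Y must be None here
print(K.shape)           # (5, 5)
print(K_gradient.shape)  # (5, 5, 2): one slice per non-fixed hyperparameter
```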
property bounds
Returns the log-transformed bounds on the theta. Returns
bounds : ndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
theta : ndarray of shape (n_dims,)
The hyperparameters
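Since theta lives on a log-scale, the clone's parameters are the exponentials of the passed values; a minimal sketch using an RBF kernel:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=1.0)
clone = kernel.clone_with_theta(np.log([5.0]))  # theta is log-transformed
print(clone.length_scale)   # ~5.0 (exp of the passed log-value)
print(kernel.length_scale)  # 1.0 -- the original kernel is untouched
```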
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
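A short sketch of the equivalence stated above; diag(X) gives the same values without materializing the full kernel matrix:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RationalQuadratic

X = np.random.RandomState(42).normal(size=(100, 4))
kernel = RationalQuadratic()

# Identical to np.diag(kernel(X)), but only the diagonal is evaluated.
print(np.allclose(kernel.diag(X), np.diag(kernel(X))))  # True
```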
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
theta : ndarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic |
sklearn.gaussian_process.kernels.RationalQuadratic
class sklearn.gaussian_process.kernels.RationalQuadratic(length_scale=1.0, alpha=1.0, length_scale_bounds=(1e-05, 100000.0), alpha_bounds=(1e-05, 100000.0)) [source]
Rational Quadratic kernel. The RationalQuadratic kernel can be seen as a scale mixture (an infinite sum) of RBF kernels with different characteristic length scales. It is parameterized by a length scale parameter \(l>0\) and a scale mixture parameter \(\alpha>0\). Only the isotropic variant where length_scale \(l\) is a scalar is supported at the moment. The kernel is given by: \[k(x_i, x_j) = \left( 1 + \frac{d(x_i, x_j)^2 }{ 2\alpha l^2}\right)^{-\alpha}\] where \(\alpha\) is the scale mixture parameter, \(l\) is the length scale of the kernel and \(d(\cdot,\cdot)\) is the Euclidean distance. For advice on how to set the parameters, see e.g. [1]. Read more in the User Guide. New in version 0.18. Parameters
length_scale : float > 0, default=1.0
The length scale of the kernel.
alpha : float > 0, default=1.0
Scale mixture parameter
length_scale_bounds : pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length_scale’. If set to “fixed”, ‘length_scale’ cannot be changed during hyperparameter tuning.
alpha_bounds : pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘alpha’. If set to “fixed”, ‘alpha’ cannot be changed during hyperparameter tuning. Attributes
bounds
Returns the log-transformed bounds on the theta.
hyperparameter_alpha
hyperparameter_length_scale
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. References
1
David Duvenaud (2014). “The Kernel Cookbook: Advice on Covariance functions”. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import RationalQuadratic
>>> X, y = load_iris(return_X_y=True)
>>> kernel = RationalQuadratic(length_scale=1.0, alpha=1.5)
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9733...
>>> gpc.predict_proba(X[:2,:])
array([[0.8881..., 0.0566..., 0.05518...],
[0.8678..., 0.0707... , 0.0614...]])
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Y : ndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradient : bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
K : ndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims)
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
bounds : ndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
theta : ndarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
theta : ndarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using sklearn.gaussian_process.kernels.RationalQuadratic
Illustration of prior and posterior Gaussian process for different kernels
Gaussian process regression (GPR) on Mauna Loa CO2 data. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic |
property bounds
Returns the log-transformed bounds on the theta. Returns
bounds : ndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.bounds |
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
theta : ndarray of shape (n_dims,)
The hyperparameters | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.clone_with_theta |
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X) | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.diag |
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.get_params |
property hyperparameters
Returns a list of all hyperparameter specifications. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.hyperparameters |
is_stationary() [source]
Returns whether the kernel is stationary. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.is_stationary |
property n_dims
Returns the number of non-fixed hyperparameters of the kernel. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.n_dims |
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.requires_vector_input |
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.set_params |
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
theta : ndarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.theta |
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Y : ndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradient : bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
K : ndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims)
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rationalquadratic#sklearn.gaussian_process.kernels.RationalQuadratic.__call__ |
class sklearn.gaussian_process.kernels.RBF(length_scale=1.0, length_scale_bounds=(1e-05, 100000.0)) [source]
Radial-basis function kernel (aka squared-exponential kernel). The RBF kernel is a stationary kernel. It is also known as the “squared exponential” kernel. It is parameterized by a length scale parameter \(l>0\), which can either be a scalar (isotropic variant of the kernel) or a vector with the same number of dimensions as the inputs X (anisotropic variant of the kernel). The kernel is given by: \[k(x_i, x_j) = \exp\left(- \frac{d(x_i, x_j)^2}{2l^2} \right)\] where \(l\) is the length scale of the kernel and \(d(\cdot,\cdot)\) is the Euclidean distance. For advice on how to set the length scale parameter, see e.g. [1]. This kernel is infinitely differentiable, which implies that GPs with this kernel as covariance function have mean square derivatives of all orders, and are thus very smooth. See [2], Chapter 4, Section 4.2, for further details of the RBF kernel. Read more in the User Guide. New in version 0.18. Parameters
length_scale : float or ndarray of shape (n_features,), default=1.0
The length scale of the kernel. If a float, an isotropic kernel is used. If an array, an anisotropic kernel is used where each dimension of l defines the length-scale of the respective feature dimension.
length_scale_bounds : pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length_scale’. If set to “fixed”, ‘length_scale’ cannot be changed during hyperparameter tuning. Attributes
anisotropic
bounds
Returns the log-transformed bounds on the theta.
hyperparameter_length_scale
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. References
1
David Duvenaud (2014). “The Kernel Cookbook: Advice on Covariance functions”.
2
Carl Edward Rasmussen, Christopher K. I. Williams (2006). “Gaussian Processes for Machine Learning”. The MIT Press. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import RBF
>>> X, y = load_iris(return_X_y=True)
>>> kernel = 1.0 * RBF(1.0)
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9866...
>>> gpc.predict_proba(X[:2,:])
array([[0.8354..., 0.03228..., 0.1322...],
[0.7906..., 0.0652..., 0.1441...]])
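Both the closed form above and the isotropic/anisotropic distinction can be sketched numerically (test points chosen for illustration):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

# Isotropic: a single scalar length scale shared by all features.
iso = RBF(length_scale=2.0)
x_i = np.array([[0.0, 0.0]])
x_j = np.array([[3.0, 4.0]])  # Euclidean distance d = 5
expected = np.exp(-(5.0 ** 2) / (2 * 2.0 ** 2))  # exp(-d^2 / (2 l^2))
print(np.allclose(iso(x_i, x_j)[0, 0], expected))  # True

# Anisotropic: one length scale per feature dimension.
aniso = RBF(length_scale=[1.0, 10.0])
print(aniso.anisotropic)  # True
```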
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Y : ndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradient : bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
K : ndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
bounds : ndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
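The rows of bounds align with theta and are likewise log-transformed; a sketch using an RBF kernel with a scalar length scale, which has a single non-fixed hyperparameter:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=1.0, length_scale_bounds=(1e-2, 1e2))
print(kernel.bounds.shape)  # (1, 2) -- that is, (n_dims, 2)
print(np.allclose(kernel.bounds, np.log([[1e-2, 1e2]])))  # True
```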
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
theta : ndarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
theta : ndarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF |
sklearn.gaussian_process.kernels.RBF
class sklearn.gaussian_process.kernels.RBF(length_scale=1.0, length_scale_bounds=(1e-05, 100000.0)) [source]
Radial-basis function kernel (aka squared-exponential kernel). The RBF kernel is a stationary kernel. It is also known as the “squared exponential” kernel. It is parameterized by a length scale parameter \(l>0\), which can either be a scalar (isotropic variant of the kernel) or a vector with the same number of dimensions as the inputs X (anisotropic variant of the kernel). The kernel is given by: \[k(x_i, x_j) = \exp\left(- \frac{d(x_i, x_j)^2}{2l^2} \right)\] where \(l\) is the length scale of the kernel and \(d(\cdot,\cdot)\) is the Euclidean distance. For advice on how to set the length scale parameter, see e.g. [1]. This kernel is infinitely differentiable, which implies that GPs with this kernel as covariance function have mean square derivatives of all orders, and are thus very smooth. See [2], Chapter 4, Section 4.2, for further details of the RBF kernel. Read more in the User Guide. New in version 0.18. Parameters
length_scale : float or ndarray of shape (n_features,), default=1.0
The length scale of the kernel. If a float, an isotropic kernel is used. If an array, an anisotropic kernel is used where each dimension of l defines the length-scale of the respective feature dimension.
length_scale_bounds : pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length_scale’. If set to “fixed”, ‘length_scale’ cannot be changed during hyperparameter tuning. Attributes
anisotropic
bounds
Returns the log-transformed bounds on the theta.
hyperparameter_length_scale
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. References
1
David Duvenaud (2014). “The Kernel Cookbook: Advice on Covariance functions”.
2
Carl Edward Rasmussen, Christopher K. I. Williams (2006). “Gaussian Processes for Machine Learning”. The MIT Press. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import RBF
>>> X, y = load_iris(return_X_y=True)
>>> kernel = 1.0 * RBF(1.0)
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9866...
>>> gpc.predict_proba(X[:2,:])
array([[0.8354..., 0.03228..., 0.1322...],
[0.7906..., 0.0652..., 0.1441...]])
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Y : ndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradient : bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
K : ndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
bounds : ndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
theta : ndarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
theta : ndarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using sklearn.gaussian_process.kernels.RBF
Plot classification probability
Classifier comparison
Illustration of Gaussian process classification (GPC) on the XOR dataset
Gaussian process classification (GPC) on iris dataset
Illustration of prior and posterior Gaussian process for different kernels
Probabilistic predictions with Gaussian process classification (GPC)
Gaussian process regression (GPR) with noise-level estimation
Gaussian Processes regression: basic introductory example
Gaussian process regression (GPR) on Mauna Loa CO2 data. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf |
property bounds
Returns the log-transformed bounds on the theta. Returns
bounds : ndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.bounds |
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
theta : ndarray of shape (n_dims,)
The hyperparameters | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.clone_with_theta |
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
X : ndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diag : ndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X) | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.diag |
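The equivalence to np.diag(self(X)) can be sketched as follows; the random data is illustrative only, and for an RBF kernel each diagonal entry k(x, x) is 1:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=1.0)
X = np.random.RandomState(0).rand(5, 2)

# diag(X) gives the same values as np.diag(kernel(X)) without
# computing the full n x n kernel matrix.
print(kernel.diag(X))           # all ones for the RBF kernel
print(np.diag(kernel(X)))
```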
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.get_params |
property hyperparameters
Returns a list of all hyperparameter specifications. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.hyperparameters |
is_stationary() [source]
Returns whether the kernel is stationary. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.is_stationary |
property n_dims
Returns the number of non-fixed hyperparameters of the kernel. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.n_dims |
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.requires_vector_input |
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.set_params |
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.theta |
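A short illustration of the log-transformed representation: for an RBF kernel with a scalar length-scale, theta holds log(length_scale):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=2.0)
# theta stores log-transformed values of the non-fixed hyperparameters
print(kernel.theta)          # array containing log(2.0)
print(np.exp(kernel.theta))  # recovers the length scale, 2.0
```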
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Yndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True. | sklearn.modules.generated.sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF.__call__ |
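A quick sketch of the gradient evaluation described above; the trailing dimension of the gradient matches the number of non-fixed hyperparameters (one here, the length scale):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=1.0)
X = np.random.RandomState(0).rand(4, 3)

# eval_gradient=True is only supported when Y is None; the call then
# returns both k(X, X) and its gradient w.r.t. log(length_scale).
K, K_grad = kernel(X, eval_gradient=True)
print(K.shape)       # (4, 4)
print(K_grad.shape)  # (4, 4, 1)
```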
class sklearn.gaussian_process.kernels.Sum(k1, k2) [source]
The Sum kernel takes two kernels \(k_1\) and \(k_2\) and combines them via \[k_{sum}(X, Y) = k_1(X, Y) + k_2(X, Y)\] Note that the __add__ magic method is overridden, so Sum(RBF(), RBF()) is equivalent to using the + operator with RBF() + RBF(). Read more in the User Guide. New in version 0.18. Parameters
k1Kernel
The first base-kernel of the sum-kernel
k2Kernel
The second base-kernel of the sum-kernel Attributes
bounds
Returns the log-transformed bounds on the theta.
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = Sum(ConstantKernel(2), RBF())
>>> gpr = GaussianProcessRegressor(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
1.0
>>> kernel
1.41**2 + RBF(length_scale=1)
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
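The nested <component>__<parameter> form can be sketched with a Sum kernel, whose two components are exposed as k1 and k2:

```python
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, Sum

kernel = Sum(ConstantKernel(constant_value=1.0), RBF(length_scale=1.0))

# Nested parameters address each component of the composed kernel.
kernel.set_params(k1__constant_value=2.0, k2__length_scale=0.5)
print(kernel.k1.constant_value)  # 2.0
print(kernel.k2.length_scale)    # 0.5
```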
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum |
sklearn.gaussian_process.kernels.Sum
class sklearn.gaussian_process.kernels.Sum(k1, k2) [source]
The Sum kernel takes two kernels \(k_1\) and \(k_2\) and combines them via \[k_{sum}(X, Y) = k_1(X, Y) + k_2(X, Y)\] Note that the __add__ magic method is overridden, so Sum(RBF(), RBF()) is equivalent to using the + operator with RBF() + RBF(). Read more in the User Guide. New in version 0.18. Parameters
k1Kernel
The first base-kernel of the sum-kernel
k2Kernel
The second base-kernel of the sum-kernel Attributes
bounds
Returns the log-transformed bounds on the theta.
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = Sum(ConstantKernel(2), RBF())
>>> gpr = GaussianProcessRegressor(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
1.0
>>> kernel
1.41**2 + RBF(length_scale=1)
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum |
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.bounds |
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.clone_with_theta |
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X) | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.diag |
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.get_params |
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary. | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.is_stationary |
property n_dims
Returns the number of non-fixed hyperparameters of the kernel. | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.n_dims |
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.set_params |
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.theta |
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True. | sklearn.modules.generated.sklearn.gaussian_process.kernels.sum#sklearn.gaussian_process.kernels.Sum.__call__ |
class sklearn.gaussian_process.kernels.WhiteKernel(noise_level=1.0, noise_level_bounds=(1e-05, 100000.0)) [source]
White kernel. The main use-case of this kernel is as part of a sum-kernel where it explains the noise of the signal as independently and identically normally-distributed. The parameter noise_level equals the variance of this noise. \[k(x_1, x_2) = noise\_level \text{ if } x_i == x_j \text{ else } 0\] Read more in the User Guide. New in version 0.18. Parameters
noise_levelfloat, default=1.0
Parameter controlling the noise level (variance)
noise_level_boundspair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘noise_level’. If set to “fixed”, ‘noise_level’ cannot be changed during hyperparameter tuning. Attributes
bounds
Returns the log-transformed bounds on the theta. hyperparameter_noise_level
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = DotProduct() + WhiteKernel(noise_level=0.5)
>>> gpr = GaussianProcessRegressor(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3680...
>>> gpr.predict(X[:2,:], return_std=True)
(array([653.0..., 592.1...]), array([316.6..., 316.6...]))
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
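A minimal sketch of the white-kernel formula above: evaluated on a single argument, the kernel matrix is noise_level times the identity (the random data is illustrative only):

```python
import numpy as np
from sklearn.gaussian_process.kernels import WhiteKernel

kernel = WhiteKernel(noise_level=0.5)
X = np.random.RandomState(0).rand(3, 2)

# k(x_i, x_j) = noise_level if i == j else 0, so k(X, X) is a
# scaled identity matrix.
K = kernel(X)
print(K)  # 0.5 on the diagonal, 0 elsewhere
```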
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel |
sklearn.gaussian_process.kernels.WhiteKernel
class sklearn.gaussian_process.kernels.WhiteKernel(noise_level=1.0, noise_level_bounds=(1e-05, 100000.0)) [source]
White kernel. The main use-case of this kernel is as part of a sum-kernel where it explains the noise of the signal as independently and identically normally-distributed. The parameter noise_level equals the variance of this noise. \[k(x_1, x_2) = noise\_level \text{ if } x_i == x_j \text{ else } 0\] Read more in the User Guide. New in version 0.18. Parameters
noise_levelfloat, default=1.0
Parameter controlling the noise level (variance)
noise_level_boundspair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘noise_level’. If set to “fixed”, ‘noise_level’ cannot be changed during hyperparameter tuning. Attributes
bounds
Returns the log-transformed bounds on the theta. hyperparameter_noise_level
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = DotProduct() + WhiteKernel(noise_level=0.5)
>>> gpr = GaussianProcessRegressor(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3680...
>>> gpr.predict(X[:2,:], return_std=True)
(array([653.0..., 592.1...]), array([316.6..., 316.6...]))
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using sklearn.gaussian_process.kernels.WhiteKernel
Comparison of kernel ridge and Gaussian process regression
Gaussian process regression (GPR) with noise-level estimation
Gaussian process regression (GPR) on Mauna Loa CO2 data. | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel |
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.bounds |
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.clone_with_theta |
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X) | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.diag |
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.get_params |
property hyperparameters
Returns a list of all hyperparameter specifications. | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.hyperparameters |
is_stationary() [source]
Returns whether the kernel is stationary. | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.is_stationary |
property n_dims
Returns the number of non-fixed hyperparameters of the kernel. | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.n_dims |
property requires_vector_input
Whether the kernel works only on fixed-length feature vectors. | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.requires_vector_input |
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.set_params |
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.theta |
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True. | sklearn.modules.generated.sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel.__call__ |
sklearn.get_config() [source]
Retrieve current values for configuration set by set_config Returns
configdict
Keys are parameter names that can be passed to set_config. See also
config_context
Context manager for global scikit-learn configuration.
set_config
Set global scikit-learn configuration. | sklearn.modules.generated.sklearn.get_config#sklearn.get_config |
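A short sketch combining get_config with the config_context manager listed above; the exact set of keys can vary between scikit-learn versions:

```python
import sklearn

# get_config returns a plain dict whose keys can be passed back to
# set_config or config_context.
config = sklearn.get_config()
print(sorted(config))  # e.g. includes 'assume_finite'

with sklearn.config_context(assume_finite=True):
    # The change is visible only inside the context
    print(sklearn.get_config()["assume_finite"])
```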
class sklearn.impute.IterativeImputer(estimator=None, *, missing_values=nan, sample_posterior=False, max_iter=10, tol=0.001, n_nearest_features=None, initial_strategy='mean', imputation_order='ascending', skip_complete=False, min_value=-inf, max_value=inf, verbose=0, random_state=None, add_indicator=False) [source]
Multivariate imputer that estimates each feature from all the others. A strategy for imputing missing values by modeling each feature with missing values as a function of other features in a round-robin fashion. Read more in the User Guide. New in version 0.21. Note This estimator is still experimental for now: the predictions and the API might change without any deprecation cycle. To use it, you need to explicitly import enable_iterative_imputer: >>> # explicitly require this experimental feature
>>> from sklearn.experimental import enable_iterative_imputer # noqa
>>> # now you can import normally from sklearn.impute
>>> from sklearn.impute import IterativeImputer
Parameters
estimatorestimator object, default=BayesianRidge()
The estimator to use at each step of the round-robin imputation. If sample_posterior is True, the estimator must support return_std in its predict method.
missing_valuesint, np.nan, default=np.nan
The placeholder for the missing values. All occurrences of missing_values will be imputed. For pandas’ dataframes with nullable integer dtypes with missing values, missing_values should be set to np.nan, since pd.NA will be converted to np.nan.
sample_posteriorboolean, default=False
Whether to sample from the (Gaussian) predictive posterior of the fitted estimator for each imputation. Estimator must support return_std in its predict method if set to True. Set to True if using IterativeImputer for multiple imputations.
max_iterint, default=10
Maximum number of imputation rounds to perform before returning the imputations computed during the final round. A round is a single imputation of each feature with missing values. The stopping criterion is met once max(abs(X_t - X_{t-1}))/max(abs(X[known_vals])) < tol, where X_t is X at iteration t. Note that early stopping is only applied if sample_posterior=False.
tolfloat, default=1e-3
Tolerance of the stopping condition.
n_nearest_featuresint, default=None
Number of other features to use to estimate the missing values of each feature column. Nearness between features is measured using the absolute correlation coefficient between each feature pair (after initial imputation). To ensure coverage of features throughout the imputation process, the neighbor features are not necessarily nearest, but are drawn with probability proportional to correlation for each imputed target feature. Can provide significant speed-up when the number of features is huge. If None, all features will be used.
initial_strategystr, default=’mean’
Which strategy to use to initialize the missing values. Same as the strategy parameter in SimpleImputer. Valid values: {“mean”, “median”, “most_frequent”, “constant”}.
imputation_orderstr, default=’ascending’
The order in which the features will be imputed. Possible values: “ascending”
From features with fewest missing values to most. “descending”
From features with most missing values to fewest. “roman”
Left to right. “arabic”
Right to left. “random”
A random order for each round.
skip_completeboolean, default=False
If True then features with missing values during transform which did not have any missing values during fit will be imputed with the initial imputation method only. Set to True if you have many features with no missing values at both fit and transform time to save compute.
min_valuefloat or array-like of shape (n_features,), default=-np.inf
Minimum possible imputed value. Broadcast to shape (n_features,) if scalar. If array-like, expects shape (n_features,), one min value for each feature. The default is -np.inf. Changed in version 0.23: Added support for array-like.
max_valuefloat or array-like of shape (n_features,), default=np.inf
Maximum possible imputed value. Broadcast to shape (n_features,) if scalar. If array-like, expects shape (n_features,), one max value for each feature. The default is np.inf. Changed in version 0.23: Added support for array-like.
verboseint, default=0
Verbosity flag, controls the debug messages that are issued as functions are evaluated. The higher, the more verbose. Can be 0, 1, or 2.
random_stateint, RandomState instance or None, default=None
The seed of the pseudo random number generator to use. Randomizes selection of estimator features if n_nearest_features is not None, the imputation_order if random, and the sampling from posterior if sample_posterior is True. Use an integer for determinism. See the Glossary.
add_indicatorboolean, default=False
If True, a MissingIndicator transform will stack onto output of the imputer’s transform. This allows a predictive estimator to account for missingness despite imputation. If a feature has no missing values at fit/train time, the feature won’t appear on the missing indicator even if there are missing values at transform/test time. Attributes
initial_imputer_object of type SimpleImputer
Imputer used to initialize the missing values.
imputation_sequence_list of tuples
Each tuple has (feat_idx, neighbor_feat_idx, estimator), where feat_idx is the current feature to be imputed, neighbor_feat_idx is the array of other features used to impute the current feature, and estimator is the trained estimator used for the imputation. Length is self.n_features_with_missing_ *
self.n_iter_.
n_iter_int
Number of iteration rounds that occurred. Will be less than self.max_iter if early stopping criterion was reached.
n_features_with_missing_int
Number of features with missing values.
indicator_MissingIndicator
Indicator used to add binary indicators for missing values. None if add_indicator is False.
random_state_RandomState instance
RandomState instance that is generated either from a seed, the random number generator or by np.random. See also
SimpleImputer
Univariate imputation of missing values. Notes To support imputation in inductive mode we store each feature’s estimator during the fit phase, and predict without refitting (in order) during the transform phase. Features which contain all missing values at fit are discarded upon transform. References
1
Stef van Buuren, Karin Groothuis-Oudshoorn (2011). “mice: Multivariate Imputation by Chained Equations in R”. Journal of Statistical Software 45: 1-67.
2
S. F. Buck, (1960). “A Method of Estimation of Missing Values in Multivariate Data Suitable for use with an Electronic Computer”. Journal of the Royal Statistical Society 22(2): 302-306. Examples >>> import numpy as np
>>> from sklearn.experimental import enable_iterative_imputer
>>> from sklearn.impute import IterativeImputer
>>> imp_mean = IterativeImputer(random_state=0)
>>> imp_mean.fit([[7, 2, 3], [4, np.nan, 6], [10, 5, 9]])
IterativeImputer(random_state=0)
>>> X = [[np.nan, 2, 3], [4, np.nan, 6], [10, np.nan, 9]]
>>> imp_mean.transform(X)
array([[ 6.9584..., 2. , 3. ],
[ 4. , 2.6000..., 6. ],
[10. , 4.9999..., 9. ]])
Methods
fit(X[, y]) Fits the imputer on X and returns self.
fit_transform(X[, y]) Fits the imputer on X and returns the transformed X.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Imputes all missing values in X.
fit(X, y=None) [source]
Fits the imputer on X and returns self. Parameters
Xarray-like, shape (n_samples, n_features)
Input data, where “n_samples” is the number of samples and “n_features” is the number of features.
yignored
Returns
selfobject
Returns self.
fit_transform(X, y=None) [source]
Fits the imputer on X and returns the transformed X. Parameters
Xarray-like, shape (n_samples, n_features)
Input data, where “n_samples” is the number of samples and “n_features” is the number of features.
yignored.
Returns
Xtarray-like, shape (n_samples, n_features)
The imputed input data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Imputes all missing values in X. Note that this is stochastic, and that if random_state is not fixed, repeated calls or permuted input will yield different results. Parameters
Xarray-like of shape (n_samples, n_features)
The input data to complete. Returns
Xtarray-like, shape (n_samples, n_features)
The imputed input data. | sklearn.modules.generated.sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer |
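Building on the class example above, a minimal sketch of using sample_posterior=True for multiple imputation (the training data and seeds here are illustrative): with posterior sampling enabled, refitting with different random_state values yields distinct plausible completions, while observed entries are always left untouched.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X_train = [[7, 2, 3], [4, np.nan, 6], [10, 5, 9]]
X_missing = [[np.nan, 2, 3], [4, np.nan, 6], [10, np.nan, 9]]

# With sample_posterior=True each imputation is drawn from the
# (Gaussian) predictive posterior, so varying random_state gives
# distinct draws -- the basis of multiple imputation.
imputations = []
for seed in range(3):
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    imputations.append(imp.fit(X_train).transform(X_missing))

# Observed entries pass through unchanged in every draw.
for X_imp in imputations:
    assert X_imp[0, 1] == 2
```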
Examples using sklearn.impute.IterativeImputer
Imputing missing values with variants of IterativeImputer
Imputing missing values before building an estimator | sklearn.modules.generated.sklearn.impute.iterativeimputer |
fit(X, y=None) [source]
Fits the imputer on X and returns self. Parameters
Xarray-like, shape (n_samples, n_features)
Input data, where “n_samples” is the number of samples and “n_features” is the number of features.
yignored
Returns
selfobject
Returns self. | sklearn.modules.generated.sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer.fit |
fit_transform(X, y=None) [source]
Fits the imputer on X and returns the transformed X. Parameters
Xarray-like, shape (n_samples, n_features)
Input data, where “n_samples” is the number of samples and “n_features” is the number of features.
yignored.
Returns
Xtarray-like, shape (n_samples, n_features)
The imputed input data. | sklearn.modules.generated.sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer.set_params |
transform(X) [source]
Imputes all missing values in X. Note that this is stochastic, and that if random_state is not fixed, repeated calls or permuted input will yield different results. Parameters
Xarray-like of shape (n_samples, n_features)
The input data to complete. Returns
Xtarray-like, shape (n_samples, n_features)
The imputed input data. | sklearn.modules.generated.sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer.transform |
class sklearn.impute.KNNImputer(*, missing_values=nan, n_neighbors=5, weights='uniform', metric='nan_euclidean', copy=True, add_indicator=False) [source]
Imputation for completing missing values using k-Nearest Neighbors. Each sample’s missing values are imputed using the mean value from n_neighbors nearest neighbors found in the training set. Two samples are close if the features that neither is missing are close. Read more in the User Guide. New in version 0.22. Parameters
missing_valuesint, float, str, np.nan or None, default=np.nan
The placeholder for the missing values. All occurrences of missing_values will be imputed. For pandas’ dataframes with nullable integer dtypes with missing values, missing_values should be set to np.nan, since pd.NA will be converted to np.nan.
n_neighborsint, default=5
Number of neighboring samples to use for imputation.
weights{‘uniform’, ‘distance’} or callable, default=’uniform’
Weight function used in prediction. Possible values: ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally. ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away. callable : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
metric{‘nan_euclidean’} or callable, default=’nan_euclidean’
Distance metric for searching neighbors. Possible values: ‘nan_euclidean’ callable : a user-defined function which conforms to the definition of _pairwise_callable(X, Y, metric, **kwds). The function accepts two arrays, X and Y, and a missing_values keyword in kwds and returns a scalar distance value.
copybool, default=True
If True, a copy of X will be created. If False, imputation will be done in-place whenever possible.
add_indicatorbool, default=False
If True, a MissingIndicator transform will stack onto the output of the imputer’s transform. This allows a predictive estimator to account for missingness despite imputation. If a feature has no missing values at fit/train time, the feature won’t appear on the missing indicator even if there are missing values at transform/test time. Attributes
indicator_MissingIndicator
Indicator used to add binary indicators for missing values. None if add_indicator is False. References Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown, Trevor Hastie, Robert Tibshirani, David Botstein and Russ B. Altman, Missing value estimation methods for DNA microarrays, BIOINFORMATICS Vol. 17 no. 6, 2001 Pages 520-525. Examples >>> import numpy as np
>>> from sklearn.impute import KNNImputer
>>> X = [[1, 2, np.nan], [3, 4, 3], [np.nan, 6, 5], [8, 8, 7]]
>>> imputer = KNNImputer(n_neighbors=2)
>>> imputer.fit_transform(X)
array([[1. , 2. , 4. ],
[3. , 4. , 3. ],
[5.5, 6. , 5. ],
[8. , 8. , 7. ]])
Methods
fit(X[, y]) Fit the imputer on X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Impute all missing values in X.
fit(X, y=None) [source]
Fit the imputer on X. Parameters
Xarray-like of shape (n_samples, n_features)
Input data, where n_samples is the number of samples and n_features is the number of features. Returns
selfobject
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Impute all missing values in X. Parameters
Xarray-like of shape (n_samples, n_features)
The input data to complete. Returns
Xarray-like of shape (n_samples, n_output_features)
The imputed dataset. n_output_features is the number of features that is not always missing during fit. | sklearn.modules.generated.sklearn.impute.knnimputer#sklearn.impute.KNNImputer |
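As a variation on the class example above, a minimal sketch of weights='distance' (the data is reused from that example): nearer neighbors then contribute more than farther ones when averaging candidate values, which can shift the imputed result relative to the default uniform weighting.

```python
import numpy as np
from sklearn.impute import KNNImputer

X = [[1, 2, np.nan], [3, 4, 3], [np.nan, 6, 5], [8, 8, 7]]

# weights='distance' weights each of the n_neighbors candidate
# values by the inverse of its nan_euclidean distance.
imputer = KNNImputer(n_neighbors=2, weights="distance")
X_imputed = imputer.fit_transform(X)

assert X_imputed.shape == (4, 3)
# Observed entries pass through unchanged.
assert X_imputed[1, 2] == 3
```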
Examples using sklearn.impute.KNNImputer
Release Highlights for scikit-learn 0.22
Imputing missing values before building an estimator | sklearn.modules.generated.sklearn.impute.knnimputer |
fit(X, y=None) [source]
Fit the imputer on X. Parameters
Xarray-like of shape (n_samples, n_features)
Input data, where n_samples is the number of samples and n_features is the number of features. Returns
selfobject | sklearn.modules.generated.sklearn.impute.knnimputer#sklearn.impute.KNNImputer.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.impute.knnimputer#sklearn.impute.KNNImputer.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.impute.knnimputer#sklearn.impute.KNNImputer.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.impute.knnimputer#sklearn.impute.KNNImputer.set_params |
transform(X) [source]
Impute all missing values in X. Parameters
Xarray-like of shape (n_samples, n_features)
The input data to complete. Returns
Xarray-like of shape (n_samples, n_output_features)
The imputed dataset. n_output_features is the number of features that is not always missing during fit. | sklearn.modules.generated.sklearn.impute.knnimputer#sklearn.impute.KNNImputer.transform |
class sklearn.impute.MissingIndicator(*, missing_values=nan, features='missing-only', sparse='auto', error_on_new=True) [source]
Binary indicators for missing values. Note that this component typically should not be used in a vanilla Pipeline consisting of transformers and a classifier, but rather could be added using a FeatureUnion or ColumnTransformer. Read more in the User Guide. New in version 0.20. Parameters
missing_valuesint, float, string, np.nan or None, default=np.nan
The placeholder for the missing values. All occurrences of missing_values will be imputed. For pandas’ dataframes with nullable integer dtypes with missing values, missing_values should be set to np.nan, since pd.NA will be converted to np.nan.
features{‘missing-only’, ‘all’}, default=’missing-only’
Whether the imputer mask should represent all or a subset of features. If ‘missing-only’ (default), the imputer mask will only represent features containing missing values during fit time. If ‘all’, the imputer mask will represent all features.
sparsebool or ‘auto’, default=’auto’
Whether the imputer mask format should be sparse or dense. If ‘auto’ (default), the imputer mask will be of the same type as the input. If True, the imputer mask will be a sparse matrix. If False, the imputer mask will be a numpy array.
error_on_newbool, default=True
If True, transform will raise an error when there are features with missing values in transform that have no missing values in fit. This is applicable only when features='missing-only'. Attributes
features_ndarray, shape (n_missing_features,) or (n_features,)
The indices of the features which will be returned when calling transform. They are computed during fit. For features='all', it is equal to range(n_features). Examples >>> import numpy as np
>>> from sklearn.impute import MissingIndicator
>>> X1 = np.array([[np.nan, 1, 3],
... [4, 0, np.nan],
... [8, 1, 0]])
>>> X2 = np.array([[5, 1, np.nan],
... [np.nan, 2, 3],
... [2, 4, 0]])
>>> indicator = MissingIndicator()
>>> indicator.fit(X1)
MissingIndicator()
>>> X2_tr = indicator.transform(X2)
>>> X2_tr
array([[False, True],
[ True, False],
[False, False]])
Methods
fit(X[, y]) Fit the transformer on X.
fit_transform(X[, y]) Generate missing values indicator for X.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Generate missing values indicator for X.
fit(X, y=None) [source]
Fit the transformer on X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Input data, where n_samples is the number of samples and n_features is the number of features. Returns
selfobject
Returns self.
fit_transform(X, y=None) [source]
Generate missing values indicator for X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
The input data to complete. Returns
Xt{ndarray or sparse matrix}, shape (n_samples, n_features) or (n_samples, n_features_with_missing)
The missing indicator for input data. The data type of Xt will be boolean.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Generate missing values indicator for X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
The input data to complete. Returns
Xt{ndarray or sparse matrix}, shape (n_samples, n_features) or (n_samples, n_features_with_missing)
The missing indicator for input data. The data type of Xt will be boolean. | sklearn.modules.generated.sklearn.impute.missingindicator#sklearn.impute.MissingIndicator |
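As the note above suggests, MissingIndicator is typically combined with an imputer through a FeatureUnion rather than used alone. A minimal sketch of that pattern (the transformer names "imputer" and "indicator" are illustrative):

```python
import numpy as np
from sklearn.impute import MissingIndicator, SimpleImputer
from sklearn.pipeline import FeatureUnion

X = np.array([[np.nan, 1.0],
              [2.0, np.nan],
              [3.0, 4.0]])

# Stack the imputed features and the boolean missingness mask side by side.
union = FeatureUnion([
    ("imputer", SimpleImputer(strategy="mean")),
    ("indicator", MissingIndicator()),
])
Xt = union.fit_transform(X)
# Xt has 4 columns: the 2 imputed features followed by 2 indicator columns.
```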
class sklearn.impute.SimpleImputer(*, missing_values=nan, strategy='mean', fill_value=None, verbose=0, copy=True, add_indicator=False) [source]
Imputation transformer for completing missing values. Read more in the User Guide. New in version 0.20: SimpleImputer replaces the previous sklearn.preprocessing.Imputer estimator which is now removed. Parameters
missing_valuesint, float, str, np.nan or None, default=np.nan
The placeholder for the missing values. All occurrences of missing_values will be imputed. For pandas’ dataframes with nullable integer dtypes with missing values, missing_values should be set to np.nan, since pd.NA will be converted to np.nan.
strategystring, default=’mean’
The imputation strategy. If “mean”, then replace missing values using the mean along each column. Can only be used with numeric data. If “median”, then replace missing values using the median along each column. Can only be used with numeric data. If “most_frequent”, then replace missing values using the most frequent value along each column. Can be used with strings or numeric data. If there is more than one such value, only the smallest is returned. If “constant”, then replace missing values with fill_value. Can be used with strings or numeric data. New in version 0.20: strategy=”constant” for fixed value imputation.
fill_valuestring or numerical value, default=None
When strategy == “constant”, fill_value is used to replace all occurrences of missing_values. If left to the default, fill_value will be 0 when imputing numerical data and “missing_value” for strings or object data types.
verboseinteger, default=0
Controls the verbosity of the imputer.
copyboolean, default=True
If True, a copy of X will be created. If False, imputation will be done in-place whenever possible. Note that, in the following cases, a new copy will always be made, even if copy=False: If X is not an array of floating values; If X is encoded as a CSR matrix; If add_indicator=True.
add_indicatorboolean, default=False
If True, a MissingIndicator transform will stack onto output of the imputer’s transform. This allows a predictive estimator to account for missingness despite imputation. If a feature has no missing values at fit/train time, the feature won’t appear on the missing indicator even if there are missing values at transform/test time. Attributes
statistics_array of shape (n_features,)
The imputation fill value for each feature. Computing statistics can result in np.nan values. During transform, features corresponding to np.nan statistics will be discarded.
indicator_MissingIndicator
Indicator used to add binary indicators for missing values. None if add_indicator is False. See also
IterativeImputer
Multivariate imputation of missing values. Notes Columns which only contained missing values at fit are discarded upon transform if strategy is not “constant”. Examples >>> import numpy as np
>>> from sklearn.impute import SimpleImputer
>>> imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
>>> imp_mean.fit([[7, 2, 3], [4, np.nan, 6], [10, 5, 9]])
SimpleImputer()
>>> X = [[np.nan, 2, 3], [4, np.nan, 6], [10, np.nan, 9]]
>>> print(imp_mean.transform(X))
[[ 7. 2. 3. ]
[ 4. 3.5 6. ]
[10. 3.5 9. ]]
Methods
fit(X[, y]) Fit the imputer on X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Convert the data back to the original representation.
set_params(**params) Set the parameters of this estimator.
transform(X) Impute all missing values in X.
fit(X, y=None) [source]
Fit the imputer on X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Input data, where n_samples is the number of samples and n_features is the number of features. Returns
selfSimpleImputer
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Convert the data back to the original representation. Inverts the transform operation performed on an array. This operation can only be performed after SimpleImputer is instantiated with add_indicator=True. Note that inverse_transform can only invert the transform in features that have binary indicators for missing values. If a feature has no missing values at fit time, the feature won’t have a binary indicator, and the imputation done at transform time won’t be inverted. New in version 0.24. Parameters
Xarray-like of shape (n_samples, n_features + n_features_missing_indicator)
The imputed data to be reverted to original data. It has to be an augmented array of imputed data and the missing indicator mask. Returns
X_originalndarray of shape (n_samples, n_features)
The original X with missing values as it was prior to imputation.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Impute all missing values in X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
The input data to complete. | sklearn.modules.generated.sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer |
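The add_indicator and inverse_transform behaviour documented above can be sketched as follows (a hedged example; inverse_transform requires scikit-learn 0.24+ and add_indicator=True):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[np.nan, 2.0],
              [6.0, np.nan],
              [7.0, 6.0]])

# add_indicator=True appends one boolean column per feature that had
# missing values at fit time, which is what lets inverse_transform
# restore the original NaN positions.
imp = SimpleImputer(strategy="mean", add_indicator=True)
Xt = imp.fit_transform(X)           # shape (3, 4): 2 imputed + 2 indicator columns
X_back = imp.inverse_transform(Xt)  # np.nan restored where values were missing
```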
Examples using sklearn.impute.SimpleImputer
Release Highlights for scikit-learn 0.23
Combine predictors using stacking
Permutation Importance vs Random Forest Feature Importance (MDI)
Imputing missing values with variants of IterativeImputer
Imputing missing values before building an estimator
Column Transformer with Mixed Types | sklearn.modules.generated.sklearn.impute.simpleimputer |
class sklearn.inspection.PartialDependenceDisplay(pd_results, *, features, feature_names, target_idx, pdp_lim, deciles, kind='average', subsample=1000, random_state=None) [source]
Partial Dependence Plot (PDP). This can also display individual partial dependencies, which are often referred to as Individual Conditional Expectation (ICE) plots. It is recommended to use plot_partial_dependence to create a PartialDependenceDisplay. All parameters are stored as attributes. Read more in Advanced Plotting With Partial Dependence and the User Guide. New in version 0.22. Parameters
pd_resultslist of Bunch
Results of partial_dependence for features.
featureslist of (int,) or list of (int, int)
Indices of features for a given plot. A tuple of one integer will plot a partial dependence curve of one feature. A tuple of two integers will plot a two-way partial dependence curve as a contour plot.
feature_nameslist of str
Feature names corresponding to the indices in features.
target_idxint
In a multiclass setting, specifies the class for which the PDPs should be computed. Note that for binary classification, the positive class (index 1) is always used. In a multioutput setting, specifies the task for which the PDPs should be computed. Ignored in binary classification or classical regression settings.
pdp_limdict
Global min and max average predictions, such that all plots will have the same scale and y limits. pdp_lim[1] is the global min and max for single partial dependence curves. pdp_lim[2] is the global min and max for two-way partial dependence curves.
decilesdict
Deciles for feature indices in features.
kind{‘average’, ‘individual’, ‘both’}, default=’average’
Whether to plot the partial dependence averaged across all the samples in the dataset or one line per sample or both.
kind='average' results in the traditional PD plot;
kind='individual' results in the ICE plot. Note that the fast method='recursion' option is only available for kind='average'. Plotting individual dependencies requires using the slower method='brute' option. New in version 0.24.
subsamplefloat, int or None, default=1000
Sampling for ICE curves when kind is ‘individual’ or ‘both’. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to be used to plot ICE curves. If int, represents the maximum absolute number of samples to use. Note that the full dataset is still used to calculate partial dependence when kind='both'. New in version 0.24.
random_stateint, RandomState instance or None, default=None
Controls the randomness of the selected samples when subsample is not None. See Glossary for details. New in version 0.24. Attributes
bounding_ax_matplotlib Axes or None
If ax is an axes or None, the bounding_ax_ is the axes where the grid of partial dependence plots are drawn. If ax is a list of axes or a numpy array of axes, bounding_ax_ is None.
axes_ndarray of matplotlib Axes
If ax is an axes or None, axes_[i, j] is the axes on the i-th row and j-th column. If ax is a list of axes, axes_[i] is the i-th item in ax. Elements that are None correspond to a nonexisting axes in that position.
lines_ndarray of matplotlib Artists
If ax is an axes or None, lines_[i, j] is the partial dependence curve on the i-th row and j-th column. If ax is a list of axes, lines_[i] is the partial dependence curve corresponding to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a line plot.
deciles_vlines_ndarray of matplotlib LineCollection
If ax is an axes or None, vlines_[i, j] is the line collection representing the x axis deciles of the i-th row and j-th column. If ax is a list of axes, vlines_[i] corresponds to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a PDP plot. New in version 0.23.
deciles_hlines_ndarray of matplotlib LineCollection
If ax is an axes or None, hlines_[i, j] is the line collection representing the y axis deciles of the i-th row and j-th column. If ax is a list of axes, hlines_[i] corresponds to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a 2-way plot. New in version 0.23.
contours_ndarray of matplotlib Artists
If ax is an axes or None, contours_[i, j] is the partial dependence plot on the i-th row and j-th column. If ax is a list of axes, contours_[i] is the partial dependence plot corresponding to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a contour plot.
figure_matplotlib Figure
Figure containing partial dependence plots. See also
partial_dependence
Compute Partial Dependence values.
plot_partial_dependence
Plot Partial Dependence. Methods
plot(*[, ax, n_cols, line_kw, contour_kw]) Plot partial dependence plots.
plot(*, ax=None, n_cols=3, line_kw=None, contour_kw=None) [source]
Plot partial dependence plots. Parameters
axMatplotlib axes or array-like of Matplotlib axes, default=None
If a single axes is passed in, it is treated as a bounding axes and a grid of partial dependence plots will be drawn within these bounds. The n_cols parameter controls the number of columns in the grid. If an array-like of axes is passed in, the partial dependence plots will be drawn directly into these axes. If None, a figure and a bounding axes are created and treated as the single axes case.
n_colsint, default=3
The maximum number of columns in the grid plot. Only active when ax is a single axes or None.
line_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.plot call for one-way partial dependence plots.
contour_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.contourf call for two-way partial dependence plots. Returns
displayPartialDependenceDisplay | sklearn.modules.generated.sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay |
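PartialDependenceDisplay objects are normally constructed for you; the underlying numbers come from partial_dependence (listed in the See also above). A minimal sketch of computing a one-way average partial dependence without any plotting (the model and toy data here are illustrative assumptions, not from the original page):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = 2.0 * X[:, 0] + 0.1 * rng.rand(200)  # target driven mainly by feature 0

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Average (PDP) partial dependence of the prediction on feature 0;
# kind='average' is the fast path mentioned in the kind parameter above.
result = partial_dependence(model, X, features=[0], kind="average")
avg = result["average"]  # shape (1, n_grid_points)
# The curve rises with feature 0, reflecting the 2 * x0 relationship.
```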
sklearn.inspection.PartialDependenceDisplay
class sklearn.inspection.PartialDependenceDisplay(pd_results, *, features, feature_names, target_idx, pdp_lim, deciles, kind='average', subsample=1000, random_state=None) [source]
Partial Dependence Plot (PDP). This can also display individual partial dependencies which are often referred to as: Individual Condition Expectation (ICE). It is recommended to use plot_partial_dependence to create a PartialDependenceDisplay. All parameters are stored as attributes. Read more in Advanced Plotting With Partial Dependence and the User Guide. New in version 0.22. Parameters
pd_resultslist of Bunch
Results of partial_dependence for features.
featureslist of (int,) or list of (int, int)
Indices of features for a given plot. A tuple of one integer will plot a partial dependence curve of one feature. A tuple of two integers will plot a two-way partial dependence curve as a contour plot.
feature_nameslist of str
Feature names corresponding to the indices in features.
target_idxint
In a multiclass setting, specifies the class for which the PDPs should be computed. Note that for binary classification, the positive class (index 1) is always used. In a multioutput setting, specifies the task for which the PDPs should be computed. Ignored in binary classification or classical regression settings.
pdp_limdict
Global min and max average predictions, such that all plots will have the same scale and y limits. pdp_lim[1] is the global min and max for single partial dependence curves. pdp_lim[2] is the global min and max for two-way partial dependence curves.
decilesdict
Deciles for feature indices in features.
kind{‘average’, ‘individual’, ‘both’}, default=’average’
Whether to plot the partial dependence averaged across all the samples in the dataset or one line per sample or both.
kind='average' results in the traditional PD plot;
kind='individual' results in the ICE plot. Note that the fast method='recursion' option is only available for kind='average'. Plotting individual dependencies requires using the slower method='brute' option. New in version 0.24.
subsamplefloat, int or None, default=1000
Sampling for ICE curves when kind is ‘individual’ or ‘both’. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to be used to plot ICE curves. If int, represents the maximum absolute number of samples to use. Note that the full dataset is still used to calculate partial dependence when kind='both'. New in version 0.24.
random_stateint, RandomState instance or None, default=None
Controls the randomness of the selected samples when subsample is not None. See Glossary for details. New in version 0.24. Attributes
bounding_ax_matplotlib Axes or None
If ax is an axes or None, the bounding_ax_ is the axes where the grid of partial dependence plots are drawn. If ax is a list of axes or a numpy array of axes, bounding_ax_ is None.
axes_ndarray of matplotlib Axes
If ax is an axes or None, axes_[i, j] is the axes on the i-th row and j-th column. If ax is a list of axes, axes_[i] is the i-th item in ax. Elements that are None correspond to a nonexisting axes in that position.
lines_ndarray of matplotlib Artists
If ax is an axes or None, lines_[i, j] is the partial dependence curve on the i-th row and j-th column. If ax is a list of axes, lines_[i] is the partial dependence curve corresponding to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a line plot.
deciles_vlines_ndarray of matplotlib LineCollection
If ax is an axes or None, vlines_[i, j] is the line collection representing the x axis deciles of the i-th row and j-th column. If ax is a list of axes, vlines_[i] corresponds to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a PDP plot. New in version 0.23.
deciles_hlines_ndarray of matplotlib LineCollection
If ax is an axes or None, hlines_[i, j] is the line collection representing the y axis deciles of the i-th row and j-th column. If ax is a list of axes, hlines_[i] corresponds to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a 2-way plot. New in version 0.23.
contours_ndarray of matplotlib Artists
If ax is an axes or None, contours_[i, j] is the partial dependence plot on the i-th row and j-th column. If ax is a list of axes, contours_[i] is the partial dependence plot corresponding to the i-th item in ax. Elements that are None correspond to a nonexisting axes or an axes that does not include a contour plot.
figure_matplotlib Figure
Figure containing partial dependence plots. See also
partial_dependence
Compute Partial Dependence values.
plot_partial_dependence
Plot Partial Dependence. Methods
plot(*[, ax, n_cols, line_kw, contour_kw]) Plot partial dependence plots.
plot(*, ax=None, n_cols=3, line_kw=None, contour_kw=None) [source]
Plot partial dependence plots. Parameters
axMatplotlib axes or array-like of Matplotlib axes, default=None
If a single axis is passed in, it is treated as a bounding axes
and a grid of partial dependence plots will be drawn within these bounds. The n_cols parameter controls the number of columns in the grid.
If an array-like of axes are passed in, the partial dependence
plots will be drawn directly into these axes.
If None, a figure and a bounding axes is created and treated
as the single axes case.
n_colsint, default=3
The maximum number of columns in the grid plot. Only active when ax is a single axes or None.
line_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.plot call. For one-way partial dependence plots.
contour_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.contourf call for two-way partial dependence plots. Returns
displayPartialDependenceDisplay
Examples using sklearn.inspection.PartialDependenceDisplay
Advanced Plotting With Partial Dependence | sklearn.modules.generated.sklearn.inspection.partialdependencedisplay |
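The kind semantics above (one ICE curve per sample, the averaged PD curve as their pointwise mean) can be sketched in plain Python; the predict function and data below are hypothetical stand-ins for a fitted estimator:

```python
# Hypothetical stand-in for estimator.predict on a single sample
def predict(row):
    return 2.0 * row[0] + row[1]

X = [[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]]   # 3 samples, 2 features
grid = [0.0, 1.0, 2.0]                     # grid of values for feature 0

# kind='individual': one ICE curve per sample, varying feature 0 only
ice = [[predict([v] + row[1:]) for v in grid] for row in X]

# kind='average': the PD curve is the pointwise mean of the ICE curves
pd_curve = [sum(col) / len(col) for col in zip(*ice)]
print(ice)       # [[1.0, 3.0, 5.0], [3.0, 5.0, 7.0], [5.0, 7.0, 9.0]]
print(pd_curve)  # [3.0, 5.0, 7.0]
```

This is why method='recursion' (which only ever computes the average) cannot produce ICE curves and requires kind='average'.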
plot(*, ax=None, n_cols=3, line_kw=None, contour_kw=None) [source]
Plot partial dependence plots. Parameters
axMatplotlib axes or array-like of Matplotlib axes, default=None
If a single axis is passed in, it is treated as a bounding axes
and a grid of partial dependence plots will be drawn within these bounds. The n_cols parameter controls the number of columns in the grid.
If an array-like of axes are passed in, the partial dependence
plots will be drawn directly into these axes.
If None, a figure and a bounding axes is created and treated
as the single axes case.
n_colsint, default=3
The maximum number of columns in the grid plot. Only active when ax is a single axes or None.
line_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.plot call. For one-way partial dependence plots.
contour_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.contourf call for two-way partial dependence plots. Returns
displayPartialDependenceDisplay | sklearn.modules.generated.sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay.plot |
sklearn.inspection.partial_dependence(estimator, X, features, *, response_method='auto', percentiles=(0.05, 0.95), grid_resolution=100, method='auto', kind='legacy') [source]
Partial dependence of features. Partial dependence of a feature (or a set of features) corresponds to the average response of an estimator for each possible value of the feature. Read more in the User Guide. Warning For GradientBoostingClassifier and GradientBoostingRegressor, the 'recursion' method (used by default) will not account for the init predictor of the boosting process. In practice, this will produce the same values as 'brute' up to a constant offset in the target response, provided that init is a constant estimator (which is the default). However, if init is not a constant estimator, the partial dependence values are incorrect for 'recursion' because the offset will be sample-dependent. It is preferable to use the 'brute' method. Note that this only applies to GradientBoostingClassifier and GradientBoostingRegressor, not to HistGradientBoostingClassifier and HistGradientBoostingRegressor. Parameters
estimatorBaseEstimator
A fitted estimator object implementing predict, predict_proba, or decision_function. Multioutput-multiclass classifiers are not supported.
X{array-like or dataframe} of shape (n_samples, n_features)
X is used to generate a grid of values for the target features (where the partial dependence will be evaluated), and also to generate values for the complement features when the method is ‘brute’.
featuresarray-like of {int, str}
The feature (e.g. [0]) or pair of interacting features (e.g. [(0, 1)]) for which the partial dependency should be computed.
response_method{‘auto’, ‘predict_proba’, ‘decision_function’}, default=’auto’
Specifies whether to use predict_proba or decision_function as the target response. For regressors this parameter is ignored and the response is always the output of predict. By default, predict_proba is tried first and we revert to decision_function if it doesn’t exist. If method is ‘recursion’, the response is always the output of decision_function.
percentilestuple of float, default=(0.05, 0.95)
The lower and upper percentile used to create the extreme values for the grid. Must be in [0, 1].
grid_resolutionint, default=100
The number of equally spaced points on the grid, for each target feature.
method{‘auto’, ‘recursion’, ‘brute’}, default=’auto’
The method used to calculate the averaged predictions:
'recursion' is only supported for some tree-based estimators (namely GradientBoostingClassifier, GradientBoostingRegressor, HistGradientBoostingClassifier, HistGradientBoostingRegressor, DecisionTreeRegressor, RandomForestRegressor) when kind='average'. This is more efficient in terms of speed. With this method, the target response of a classifier is always the decision function, not the predicted probabilities. Since the 'recursion' method implicitly computes the average of the Individual Conditional Expectation (ICE) by design, it is not compatible with ICE and thus kind must be 'average'.
'brute' is supported for any estimator, but is more computationally intensive.
'auto': the 'recursion' is used for estimators that support it, and 'brute' is used otherwise. Please see this note for differences between the 'brute' and 'recursion' method.
kind{‘legacy’, ‘average’, ‘individual’, ‘both’}, default=’legacy’
Whether to return the partial dependence averaged across all the samples in the dataset or one line per sample or both. See Returns below. Note that the fast method='recursion' option is only available for kind='average'. Plotting individual dependencies requires using the slower method='brute' option. New in version 0.24. Deprecated since version 0.24: kind='legacy' is deprecated and will be removed in version 1.1. kind='average' will be the new default. It is intended to migrate from the ndarray output to Bunch output. Returns
predictionsndarray or Bunch
if kind='legacy', return value is ndarray of shape (n_outputs, len(values[0]), len(values[1]), …)
The predictions for all the points in the grid, averaged over all samples in X (or over the training data if method is ‘recursion’).
if kind='individual', 'average' or 'both', return value is Bunch
Dictionary-like object, with the following attributes.
individualndarray of shape (n_outputs, n_instances, len(values[0]), len(values[1]), …)
The predictions for all the points in the grid for all samples in X. This is also known as Individual Conditional Expectation (ICE)
averagendarray of shape (n_outputs, len(values[0]), len(values[1]), …)
The predictions for all the points in the grid, averaged over all samples in X (or over the training data if method is ‘recursion’). Only available when kind=’both’.
valuesseq of 1d ndarrays
The values with which the grid has been created. The generated grid is a cartesian product of the arrays in values. len(values) == len(features). The size of each array values[j] is either grid_resolution, or the number of unique values in X[:, j], whichever is smaller. n_outputs corresponds to the number of classes in a multi-class setting, or to the number of tasks for multi-output regression. For classical regression and binary classification n_outputs==1. n_values_feature_j corresponds to the size of values[j].
valuesseq of 1d ndarrays
The values with which the grid has been created. The generated grid is a cartesian product of the arrays in values. len(values) == len(features). The size of each array values[j] is either grid_resolution, or the number of unique values in X[:, j], whichever is smaller. Only available when kind="legacy". See also
plot_partial_dependence
Plot Partial Dependence.
PartialDependenceDisplay
Partial Dependence visualization. Examples >>> X = [[0, 0, 2], [1, 0, 0]]
>>> y = [0, 1]
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> gb = GradientBoostingClassifier(random_state=0).fit(X, y)
>>> partial_dependence(gb, features=[0], X=X, percentiles=(0, 1),
... grid_resolution=2)
(array([[-4.52..., 4.52...]]), [array([ 0., 1.])]) | sklearn.modules.generated.sklearn.inspection.partial_dependence#sklearn.inspection.partial_dependence |
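The 'brute' strategy described above reduces to a short loop: for each grid value of the target feature, overwrite that feature for every sample and average the predictions. A minimal sketch, with a hypothetical linear model standing in for the fitted estimator:

```python
def predict(row):                 # hypothetical fitted model
    return row[0] - 0.5 * row[2]

X = [[0, 0, 2], [1, 0, 0]]
feature = 0
grid = sorted({row[feature] for row in X})   # unique values, here [0, 1]

averaged = []
for v in grid:
    # Force the target feature to v for every sample, then average
    preds = [predict(row[:feature] + [v] + row[feature + 1:]) for row in X]
    averaged.append(sum(preds) / len(preds))

print(grid)      # [0, 1]
print(averaged)  # [-0.5, 0.5]
```

The real implementation builds the grid from grid_resolution equally spaced points between the requested percentiles; the unique-value shortcut above is only used when there are fewer unique values than grid_resolution.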
sklearn.inspection.permutation_importance(estimator, X, y, *, scoring=None, n_repeats=5, n_jobs=None, random_state=None, sample_weight=None) [source]
Permutation importance for feature evaluation [BRE]. The estimator is required to be a fitted estimator. X can be the data set used to train the estimator or a hold-out set. The permutation importance of a feature is calculated as follows. First, a baseline metric, defined by scoring, is evaluated on a (potentially different) dataset defined by X. Next, a feature column from the validation set is permuted and the metric is evaluated again. The permutation importance is defined to be the difference between the baseline metric and the metric from permuting the feature column. Read more in the User Guide. Parameters
estimatorobject
An estimator that has already been fitted and is compatible with scorer.
Xndarray or DataFrame, shape (n_samples, n_features)
Data on which permutation importance will be computed.
yarray-like or None, shape (n_samples, ) or (n_samples, n_classes)
Targets for supervised learning, or None for unsupervised.
scoringstring, callable or None, default=None
Scorer to use. It can be a single string (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions). If None, the estimator’s default scorer is used.
n_repeatsint, default=5
Number of times to permute a feature.
n_jobsint or None, default=None
Number of jobs to run in parallel. The computation is done by computing the permutation score for each column, parallelized over the columns. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_stateint, RandomState instance, default=None
Pseudo-random number generator to control the permutations of each feature. Pass an int to get reproducible results across function calls. See Glossary for details.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights used in scoring. New in version 0.24. Returns
resultBunch
Dictionary-like object, with the following attributes.
importances_meanndarray, shape (n_features, )
Mean of feature importance over n_repeats.
importances_stdndarray, shape (n_features, )
Standard deviation over n_repeats.
importancesndarray, shape (n_features, n_repeats)
Raw permutation importance scores. References
BRE
L. Breiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001. https://doi.org/10.1023/A:1010933404324 Examples >>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.inspection import permutation_importance
>>> X = [[1, 9, 9],[1, 9, 9],[1, 9, 9],
... [0, 9, 9],[0, 9, 9],[0, 9, 9]]
>>> y = [1, 1, 1, 0, 0, 0]
>>> clf = LogisticRegression().fit(X, y)
>>> result = permutation_importance(clf, X, y, n_repeats=10,
... random_state=0)
>>> result.importances_mean
array([0.4666..., 0. , 0. ])
>>> result.importances_std
array([0.2211..., 0. , 0. ]) | sklearn.modules.generated.sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance |
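The procedure described above (baseline score, permute one column, rescore, repeat n_repeats times) can be sketched without scikit-learn; the model function below is a hypothetical stand-in for a fitted classifier and accuracy stands in for the scorer:

```python
import random

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def model(row):                  # hypothetical fitted classifier: uses feature 0 only
    return 1 if row[0] >= 0.5 else 0

X = [[1, 9, 9], [1, 9, 9], [1, 9, 9],
     [0, 9, 9], [0, 9, 9], [0, 9, 9]]
y = [1, 1, 1, 0, 0, 0]

rng = random.Random(0)
baseline = accuracy(model, X, y)
importances = []
for j in range(3):               # one importance per feature
    drops = []
    for _ in range(10):          # n_repeats
        col = [row[j] for row in X]
        rng.shuffle(col)         # permute column j
        Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        drops.append(baseline - accuracy(model, Xp, y))
    importances.append(sum(drops) / len(drops))
```

As in the docstring's example, only feature 0 carries signal here, so permuting the constant columns 1 and 2 never changes the score and their importances are exactly zero.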
sklearn.inspection.plot_partial_dependence(estimator, X, features, *, feature_names=None, target=None, response_method='auto', n_cols=3, grid_resolution=100, percentiles=(0.05, 0.95), method='auto', n_jobs=None, verbose=0, line_kw=None, contour_kw=None, ax=None, kind='average', subsample=1000, random_state=None) [source]
Partial dependence (PD) and individual conditional expectation (ICE) plots. Partial dependence plots, individual conditional expectation plots or an overlay of both of them can be plotted by setting the kind parameter. The len(features) plots are arranged in a grid with n_cols columns. Two-way partial dependence plots are plotted as contour plots. The deciles of the feature values will be shown with tick marks on the x-axes for one-way plots, and on both axes for two-way plots. Read more in the User Guide. Note plot_partial_dependence does not support using the same axes with multiple calls. To plot the partial dependence for multiple estimators, please pass the axes created by the first call to the second call: >>> from sklearn.inspection import plot_partial_dependence
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> X, y = make_friedman1()
>>> est1 = LinearRegression().fit(X, y)
>>> est2 = RandomForestRegressor().fit(X, y)
>>> disp1 = plot_partial_dependence(est1, X,
... [1, 2])
>>> disp2 = plot_partial_dependence(est2, X, [1, 2],
... ax=disp1.axes_)
Warning For GradientBoostingClassifier and GradientBoostingRegressor, the 'recursion' method (used by default) will not account for the init predictor of the boosting process. In practice, this will produce the same values as 'brute' up to a constant offset in the target response, provided that init is a constant estimator (which is the default). However, if init is not a constant estimator, the partial dependence values are incorrect for 'recursion' because the offset will be sample-dependent. It is preferable to use the 'brute' method. Note that this only applies to GradientBoostingClassifier and GradientBoostingRegressor, not to HistGradientBoostingClassifier and HistGradientBoostingRegressor. Parameters
estimatorBaseEstimator
A fitted estimator object implementing predict, predict_proba, or decision_function. Multioutput-multiclass classifiers are not supported.
X{array-like or dataframe} of shape (n_samples, n_features)
X is used to generate a grid of values for the target features (where the partial dependence will be evaluated), and also to generate values for the complement features when the method is 'brute'.
featureslist of {int, str, pair of int, pair of str}
The target features for which to create the PDPs. If features[i] is an integer or a string, a one-way PDP is created; if features[i] is a tuple, a two-way PDP is created (only supported with kind='average'). Each tuple must be of size 2. If any entry is a string, then it must be in feature_names.
feature_namesarray-like of shape (n_features,), dtype=str, default=None
Name of each feature; feature_names[i] holds the name of the feature with index i. By default, the name of a feature corresponds to its numerical index for a NumPy array and to its column name for a pandas dataframe.
targetint, default=None
In a multiclass setting, specifies the class for which the PDPs should be computed. Note that for binary classification, the positive class (index 1) is always used. In a multioutput setting, specifies the task for which the PDPs should be computed. Ignored in binary classification or classical regression settings.
response_method{‘auto’, ‘predict_proba’, ‘decision_function’}, default=’auto’
Specifies whether to use predict_proba or decision_function as the target response. For regressors this parameter is ignored and the response is always the output of predict. By default, predict_proba is tried first and we revert to decision_function if it doesn’t exist. If method is 'recursion', the response is always the output of decision_function.
n_colsint, default=3
The maximum number of columns in the grid plot. Only active when ax is a single axis or None.
grid_resolutionint, default=100
The number of equally spaced points on the axes of the plots, for each target feature.
percentilestuple of float, default=(0.05, 0.95)
The lower and upper percentile used to create the extreme values for the PDP axes. Must be in [0, 1].
methodstr, default=’auto’
The method used to calculate the averaged predictions:
'recursion' is only supported for some tree-based estimators (namely GradientBoostingClassifier, GradientBoostingRegressor, HistGradientBoostingClassifier, HistGradientBoostingRegressor, DecisionTreeRegressor, RandomForestRegressor) but is more efficient in terms of speed. With this method, the target response of a classifier is always the decision function, not the predicted probabilities. Since the 'recursion' method implicitly computes the average of the ICEs by design, it is not compatible with ICE and thus kind must be 'average'.
'brute' is supported for any estimator, but is more computationally intensive.
'auto': the 'recursion' is used for estimators that support it, and 'brute' is used otherwise. Please see this note for differences between the 'brute' and 'recursion' method.
n_jobsint, default=None
The number of CPUs to use to compute the partial dependences. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verboseint, default=0
Verbose output during PD computations.
line_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.plot call. For one-way partial dependence plots.
contour_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.contourf call. For two-way partial dependence plots.
axMatplotlib axes or array-like of Matplotlib axes, default=None
If a single axis is passed in, it is treated as a bounding axes and a grid of partial dependence plots will be drawn within these bounds. The n_cols parameter controls the number of columns in the grid. If an array-like of axes are passed in, the partial dependence plots will be drawn directly into these axes. If None, a figure and a bounding axes is created and treated as the single axes case. New in version 0.22.
kind{‘average’, ‘individual’, ‘both’}, default=’average’
Whether to plot the partial dependence averaged across all the samples in the dataset or one line per sample or both.
kind='average' results in the traditional PD plot;
kind='individual' results in the ICE plot. Note that the fast method='recursion' option is only available for kind='average'. Plotting individual dependencies requires using the slower method='brute' option. New in version 0.24.
subsamplefloat, int or None, default=1000
Sampling for ICE curves when kind is ‘individual’ or ‘both’. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to be used to plot ICE curves. If int, represents the absolute number of samples to use. Note that the full dataset is still used to calculate averaged partial dependence when kind='both'. New in version 0.24.
random_stateint, RandomState instance or None, default=None
Controls the randomness of the selected samples when subsample is not None and kind is either 'both' or 'individual'. See Glossary for details. New in version 0.24. Returns
displayPartialDependenceDisplay
See also
partial_dependence
Compute Partial Dependence values.
PartialDependenceDisplay
Partial Dependence visualization. Examples >>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_friedman1()
>>> clf = GradientBoostingRegressor(n_estimators=10).fit(X, y)
>>> plot_partial_dependence(clf, X, [0, (0, 1)]) | sklearn.modules.generated.sklearn.inspection.plot_partial_dependence#sklearn.inspection.plot_partial_dependence |
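The subsample semantics above (a float is a proportion of the dataset, an int an absolute count) can be sketched as index selection; subsample_indices is a hypothetical helper written for illustration, not part of scikit-learn:

```python
import random

def subsample_indices(n_samples, subsample, seed=None):
    """Pick the sample indices whose ICE curves would be drawn (a sketch)."""
    rng = random.Random(seed)
    if subsample is None:
        return list(range(n_samples))        # draw every ICE curve
    if isinstance(subsample, float):
        k = int(n_samples * subsample)       # proportion of the dataset
    else:
        k = min(subsample, n_samples)        # absolute number of samples
    return sorted(rng.sample(range(n_samples), k))

print(len(subsample_indices(10, 0.5, seed=0)))   # 5
print(subsample_indices(4, None))                # [0, 1, 2, 3]
```

Note that subsampling only affects which ICE curves are drawn; the averaged PD curve is still computed from the full dataset.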
sklearn.isotonic.check_increasing(x, y) [source]
Determine whether y is monotonically correlated with x. y is found increasing or decreasing with respect to x based on a Spearman correlation test. Parameters
xarray-like of shape (n_samples,)
Training data.
yarray-like of shape (n_samples,)
Training target. Returns
increasing_boolboolean
Whether the relationship is increasing or decreasing. Notes The Spearman correlation coefficient is estimated from the data, and the sign of the resulting estimate is used as the result. In the event that the 95% confidence interval based on Fisher transform spans zero, a warning is raised. References Fisher transformation. Wikipedia. https://en.wikipedia.org/wiki/Fisher_transformation | sklearn.modules.generated.sklearn.isotonic.check_increasing#sklearn.isotonic.check_increasing |
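The sign-of-Spearman logic described in the Notes can be sketched in plain Python (distinct values only; the tie handling and the Fisher-transform confidence-interval warning are omitted):

```python
def rank(a):
    # Ranks for distinct values; real Spearman uses average ranks for ties
    order = sorted(range(len(a)), key=lambda i: a[i])
    r = [0] * len(a)
    for rk, i in enumerate(order):
        r[i] = rk
    return r

def check_increasing_sketch(x, y):
    rx, ry = rank(x), rank(y)
    mx = sum(rx) / len(rx)
    my = sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    return cov >= 0   # sign of the Spearman correlation estimate

print(check_increasing_sketch([1, 2, 3, 4], [10, 20, 15, 40]))  # True
print(check_increasing_sketch([1, 2, 3, 4], [40, 15, 20, 10]))  # False
```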
class sklearn.isotonic.IsotonicRegression(*, y_min=None, y_max=None, increasing=True, out_of_bounds='nan') [source]
Isotonic regression model. Read more in the User Guide. New in version 0.13. Parameters
y_minfloat, default=None
Lower bound on the lowest predicted value (the minimum value may still be higher). If not set, defaults to -inf.
y_maxfloat, default=None
Upper bound on the highest predicted value (the maximum may still be lower). If not set, defaults to +inf.
increasingbool or ‘auto’, default=True
Determines whether the predictions should be constrained to increase or decrease with X. ‘auto’ will decide based on the Spearman correlation estimate’s sign.
out_of_bounds{‘nan’, ‘clip’, ‘raise’}, default=’nan’
Determines how X values outside of the training domain are handled during prediction. ‘nan’, predictions will be NaN. ‘clip’, predictions will be set to the value corresponding to the nearest train interval endpoint. ‘raise’, a ValueError is raised. Attributes
X_min_float
Minimum value of input array X_ for left bound.
X_max_float
Maximum value of input array X_ for right bound.
X_thresholds_ndarray of shape (n_thresholds,)
Unique ascending X values used to interpolate the y = f(X) monotonic function. New in version 0.24.
y_thresholds_ndarray of shape (n_thresholds,)
De-duplicated y values suitable to interpolate the y = f(X) monotonic function. New in version 0.24.
f_function
The stepwise interpolating function that covers the input domain X.
increasing_bool
Inferred value for increasing. Notes Ties are broken using the secondary method from de Leeuw, 1977. References Isotonic Median Regression: A Linear Programming Approach Nilotpal Chakravarti Mathematics of Operations Research Vol. 14, No. 2 (May, 1989), pp. 303-308 Isotone Optimization in R : Pool-Adjacent-Violators Algorithm (PAVA) and Active Set Methods de Leeuw, Hornik, Mair Journal of Statistical Software 2009 Correctness of Kruskal’s algorithms for monotone regression with ties de Leeuw, Psychometrica, 1977 Examples >>> from sklearn.datasets import make_regression
>>> from sklearn.isotonic import IsotonicRegression
>>> X, y = make_regression(n_samples=10, n_features=1, random_state=41)
>>> iso_reg = IsotonicRegression().fit(X, y)
>>> iso_reg.predict([.1, .2])
array([1.8628..., 3.7256...])
Methods
fit(X, y[, sample_weight]) Fit the model using X, y as training data.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
predict(T) Predict new data by linear interpolation.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(T) Transform new data by linear interpolation
fit(X, y, sample_weight=None) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples,) or (n_samples, 1)
Training data. Changed in version 0.24: Also accepts 2d array with 1 feature.
yarray-like of shape (n_samples,)
Training target.
sample_weightarray-like of shape (n_samples,), default=None
Weights. If set to None, all weights will be set to 1 (equal weights). Returns
selfobject
Returns an instance of self. Notes X is stored for future use, as transform needs X to interpolate new input data.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(T) [source]
Predict new data by linear interpolation. Parameters
Tarray-like of shape (n_samples,) or (n_samples, 1)
Data to transform. Returns
y_predndarray of shape (n_samples,)
Transformed data.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
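The definition above can be computed directly; a minimal sketch that ignores sample_weight and multioutput handling:

```python
def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    u = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))   # residual sum of squares
    v = sum((t - mean) ** 2 for t in y_true)                # total sum of squares
    return 1.0 - u / v

print(r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0 (perfect prediction)
print(r2([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # 0.0 (constant mean prediction)
```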
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(T) [source]
Transform new data by linear interpolation Parameters
Tarray-like of shape (n_samples,) or (n_samples, 1)
Data to transform. Changed in version 0.24: Also accepts 2d array with 1 feature. Returns
y_predndarray of shape (n_samples,)
The transformed data | sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression |
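The pool-adjacent-violators algorithm (PAVA) cited in the References can be sketched in plain Python; this minimal version assumes increasing=True, unit sample weights, and no y_min/y_max clipping:

```python
def isotonic_fit(y):
    """Pool-adjacent-violators: nondecreasing fit minimizing squared error."""
    # Each block holds [mean, weight]; merge blocks while the sequence decreases
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for mean, weight in blocks:
        out.extend([mean] * weight)
    return out

print(isotonic_fit([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```

The fitted step values correspond to y_thresholds_; IsotonicRegression then predicts new inputs by linear interpolation between the X_thresholds_.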
sklearn.isotonic.IsotonicRegression
class sklearn.isotonic.IsotonicRegression(*, y_min=None, y_max=None, increasing=True, out_of_bounds='nan') [source]
Isotonic regression model. Read more in the User Guide. New in version 0.13. Parameters
y_minfloat, default=None
Lower bound on the lowest predicted value (the minimum value may still be higher). If not set, defaults to -inf.
y_maxfloat, default=None
Upper bound on the highest predicted value (the maximum may still be lower). If not set, defaults to +inf.
increasingbool or ‘auto’, default=True
Determines whether the predictions should be constrained to increase or decrease with X. ‘auto’ will decide based on the Spearman correlation estimate’s sign.
out_of_bounds{‘nan’, ‘clip’, ‘raise’}, default=’nan’
Determines how X values outside of the training domain are handled during prediction. ‘nan’, predictions will be NaN. ‘clip’, predictions will be set to the value corresponding to the nearest train interval endpoint. ‘raise’, a ValueError is raised. Attributes
X_min_float
Minimum value of input array X_ for left bound.
X_max_float
Maximum value of input array X_ for right bound.
X_thresholds_ndarray of shape (n_thresholds,)
Unique ascending X values used to interpolate the y = f(X) monotonic function. New in version 0.24.
y_thresholds_ndarray of shape (n_thresholds,)
De-duplicated y values suitable to interpolate the y = f(X) monotonic function. New in version 0.24.
f_function
The stepwise interpolating function that covers the input domain X.
increasing_bool
Inferred value for increasing. Notes Ties are broken using the secondary method from de Leeuw, 1977. References Isotonic Median Regression: A Linear Programming Approach Nilotpal Chakravarti Mathematics of Operations Research Vol. 14, No. 2 (May, 1989), pp. 303-308 Isotone Optimization in R : Pool-Adjacent-Violators Algorithm (PAVA) and Active Set Methods de Leeuw, Hornik, Mair Journal of Statistical Software 2009 Correctness of Kruskal’s algorithms for monotone regression with ties de Leeuw, Psychometrica, 1977 Examples >>> from sklearn.datasets import make_regression
>>> from sklearn.isotonic import IsotonicRegression
>>> X, y = make_regression(n_samples=10, n_features=1, random_state=41)
>>> iso_reg = IsotonicRegression().fit(X, y)
>>> iso_reg.predict([.1, .2])
array([1.8628..., 3.7256...])
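As a small illustration of the out_of_bounds options (toy data chosen here for clarity), ‘clip’ maps inputs outside the training domain to the boundary values of the fit, while the default ‘nan’ marks them as NaN:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.arange(6)
y = np.array([1., 2., 2., 3., 5., 5.])  # already non-decreasing, so the fit is exact

# 'clip' maps out-of-domain inputs to the nearest boundary of the fit
iso_clip = IsotonicRegression(out_of_bounds="clip").fit(X, y)
print(iso_clip.predict([-1., 10.]))  # -> [1., 5.]

# the default 'nan' marks out-of-domain predictions as NaN
iso_nan = IsotonicRegression().fit(X, y)
print(iso_nan.predict([-1., 10.]))   # -> [nan, nan]
```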
Methods
fit(X, y[, sample_weight]) Fit the model using X, y as training data.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
predict(T) Predict new data by linear interpolation.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(T) Transform new data by linear interpolation.
fit(X, y, sample_weight=None) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples,) or (n_samples, 1)
Training data. Changed in version 0.24: Also accepts 2d array with 1 feature.
yarray-like of shape (n_samples,)
Training target.
sample_weightarray-like of shape (n_samples,), default=None
Weights. If set to None, all weights will be set to 1 (equal weights). Returns
selfobject
Returns an instance of self. Notes X is stored for future use, as transform needs X to interpolate new input data.
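A short sketch of how sample_weight influences the solution (toy numbers chosen for illustration): the pool-adjacent-violators step replaces a violating run by its weighted mean, so a heavily weighted observation dominates its pool:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.array([0., 1., 2., 3.])
y = np.array([0., 10., 0., 10.])
w = np.array([1., 1., 9., 1.])   # third observation weighted 9x

iso = IsotonicRegression().fit(X, y, sample_weight=w)
# the violating pair (10, 0) is pooled to its weighted mean (10*1 + 0*9)/10 = 1
print(iso.predict(X))  # -> [0., 1., 1., 10.]
```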
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
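For IsotonicRegression specifically, fit_transform fits the monotone function and returns its values at the training points, equivalent to fit(X, y).transform(X); a minimal sketch with toy data:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.array([0., 1., 2., 3., 4.])
y = np.array([1., 3., 2., 4., 6.])

# the single decreasing pair (3, 2) is pooled to its mean 2.5
y_iso = IsotonicRegression().fit_transform(X, y)
print(y_iso)  # -> [1., 2.5, 2.5, 4., 6.]
```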
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(T) [source]
Predict new data by linear interpolation. Parameters
Tarray-like of shape (n_samples,) or (n_samples, 1)
Data to transform. Returns
y_predndarray of shape (n_samples,)
Transformed data.
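Predictions between training thresholds are linear interpolations of the fitted values; a minimal sketch:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.array([0., 1., 2.])
y = np.array([0., 2., 4.])   # already monotone, so the fit is exact

iso = IsotonicRegression().fit(X, y)
# 0.5 and 1.5 fall between thresholds and are interpolated linearly
print(iso.predict([0.5, 1.5]))  # -> [1., 3.]
```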
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
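The \(R^2\) definition can be verified directly against score (toy data chosen for illustration; here the target is already monotone, so the fit is perfect and the score is 1.0):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.array([0., 1., 2., 3.])
y = np.array([0., 1., 3., 7.])

iso = IsotonicRegression().fit(X, y)
y_pred = iso.predict(X)
u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
print(np.isclose(iso.score(X, y), 1 - u / v))  # -> True
```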
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
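A minimal sketch of round-tripping parameters through set_params and get_params:

```python
from sklearn.isotonic import IsotonicRegression

iso = IsotonicRegression()
iso.set_params(y_min=0.0, out_of_bounds="clip")  # returns self, so calls can chain

params = iso.get_params()
print(params["y_min"], params["out_of_bounds"])  # -> 0.0 clip
```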
transform(T) [source]
Transform new data by linear interpolation. Parameters
Tarray-like of shape (n_samples,) or (n_samples, 1)
Data to transform. Changed in version 0.24: Also accepts 2d array with 1 feature. Returns
y_predndarray of shape (n_samples,)
The transformed data.
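Since version 0.24, transform (like fit) also accepts a 2d array with a single feature; a minimal sketch:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.array([[1.], [2.], [3.]])   # 2d input with one feature
y = np.array([1., 2., 3.])

iso = IsotonicRegression().fit(X, y)
print(iso.transform(np.array([[1.5], [2.5]])))  # -> [1.5, 2.5]
```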
Examples using sklearn.isotonic.IsotonicRegression
Isotonic Regression | sklearn.modules.generated.sklearn.isotonic.isotonicregression |
fit(X, y, sample_weight=None) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples,) or (n_samples, 1)
Training data. Changed in version 0.24: Also accepts 2d array with 1 feature.
yarray-like of shape (n_samples,)
Training target.
sample_weightarray-like of shape (n_samples,), default=None
Weights. If set to None, all weights will be set to 1 (equal weights). Returns
selfobject
Returns an instance of self. Notes X is stored for future use, as transform needs X to interpolate new input data. | sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression.get_params |