predict(T) [source] Predict new data by linear interpolation.
Parameters: T : array-like of shape (n_samples,) or (n_samples, 1). Data to transform.
Returns: y_pred : ndarray of shape (n_samples,). Transformed data.
sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression.predict
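The entry above can be illustrated with a short, self-contained sketch (not part of the original reference page; the data values are invented for illustration). predict interpolates linearly between the fitted, monotone values, so the predictions themselves come out non-decreasing:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Fit an increasing function to slightly non-monotone data.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 3.0, 2.0, 6.0, 7.0])
ir = IsotonicRegression().fit(X, y)

# predict() linearly interpolates between the fitted points,
# so the predicted sequence is non-decreasing.
preds = ir.predict(np.array([1.5, 3.5, 4.5]))
print(preds)
```

For this estimator, predict applies the same linear interpolation as transform.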
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction.
The coefficient \(R^2\) is defined as \(1 - \frac{u}{v}\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
Parameters:
X : array-like of shape (n_samples, n_features). Test samples. For some estimators this may instead be a precomputed kernel matrix or a list of generic objects, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used to fit the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs). True values for X.
sample_weight : array-like of shape (n_samples,), default=None. Sample weights.
Returns: score : float. \(R^2\) of self.predict(X) with respect to y.
Notes: The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 onwards, to keep consistency with the default value of r2_score. This influences the score method of all multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression.score
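As a quick check of the formula above (an illustrative sketch, not from the original page, with invented data), the value returned by score matches \(R^2\) computed directly from its definition:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.arange(10, dtype=float)
y = np.array([0.0, 1.0, 1.0, 3.0, 2.0, 5.0, 6.0, 6.0, 8.0, 9.0])
ir = IsotonicRegression().fit(X, y)

# R^2 = 1 - u/v, with u the residual and v the total sum of squares.
y_pred = ir.predict(X)
u = ((y - y_pred) ** 2).sum()
v = ((y - y.mean()) ** 2).sum()
r2_manual = 1 - u / v
print(ir.score(X, y), r2_manual)  # the two values agree
```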
set_params(**params) [source] Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters: **params : dict. Estimator parameters.
Returns: self : estimator instance.
sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression.set_params
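The <component>__<parameter> addressing can be sketched on any nested estimator (an illustrative example, not from the original page; the step names "scale" and "model" are arbitrary):

```python
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Nested parameters use the <component>__<parameter> syntax.
pipe = Pipeline([("scale", StandardScaler()), ("model", Ridge(alpha=1.0))])
pipe.set_params(model__alpha=0.5)  # updates the Ridge step in place
print(pipe.get_params()["model__alpha"])  # 0.5
```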
transform(T) [source] Transform new data by linear interpolation.
Parameters: T : array-like of shape (n_samples,) or (n_samples, 1). Data to transform. Changed in version 0.24: also accepts a 2d array with 1 feature.
Returns: y_pred : ndarray of shape (n_samples,). The transformed data.
sklearn.modules.generated.sklearn.isotonic.isotonicregression#sklearn.isotonic.IsotonicRegression.transform
sklearn.isotonic.isotonic_regression(y, *, sample_weight=None, y_min=None, y_max=None, increasing=True) [source] Solve the isotonic regression model. Read more in the User Guide.
Parameters:
y : array-like of shape (n_samples,). The data.
sample_weight : array-like of shape (n_samples,), default=None. Weights on each point of the regression. If None, all weights are set to 1 (equal weights).
y_min : float, default=None. Lower bound on the lowest predicted value (the minimum value may still be higher). If not set, defaults to -inf.
y_max : float, default=None. Upper bound on the highest predicted value (the maximum may still be lower). If not set, defaults to +inf.
increasing : bool, default=True. Whether the fitted values y_ should be increasing (True) or decreasing (False).
Returns: y_ : list of floats. Isotonic fit of y.
References: "Active set algorithms for isotonic regression; A unifying framework" by Michael J. Best and Nilotpal Chakravarti, section 3.
sklearn.modules.generated.sklearn.isotonic.isotonic_regression#sklearn.isotonic.isotonic_regression
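A minimal sketch of the function above (invented data, not from the original page). The pool-adjacent-violators fit averages violating neighbours until the sequence is non-decreasing, and y_min / y_max clip the result:

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

# Violating pairs (4 > 2, then 6 > 5) are pooled into their means.
y = np.array([4.0, 2.0, 3.0, 6.0, 5.0])
y_fit = isotonic_regression(y)
print(y_fit)  # a non-decreasing sequence

# y_min raises the lower fitted values to the given bound.
print(isotonic_regression(y, y_min=3.5))
```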
class sklearn.kernel_approximation.AdditiveChi2Sampler(*, sample_steps=2, sample_interval=None) [source] Approximate feature map for the additive chi2 kernel.
Samples the Fourier transform of the kernel characteristic function at regular intervals. Since the kernel to be approximated is additive, the components of the input vectors can be treated separately. Each entry in the original space is transformed into 2*sample_steps + 1 features, where sample_steps is a parameter of the method. Typical values of sample_steps include 1, 2 and 3. Optimal choices of the sampling interval for certain data ranges can be computed (see the reference). The default values should be reasonable. Read more in the User Guide.
Parameters:
sample_steps : int, default=2. Gives the number of (complex) sampling points.
sample_interval : float, default=None. Sampling interval. Must be specified when sample_steps is not in {1, 2, 3}.
Attributes:
sample_interval_ : float. Stored sampling interval. Specified as a parameter if sample_steps is not in {1, 2, 3}.
See also: SkewedChi2Sampler (a Fourier approximation to a non-additive variant of the chi squared kernel); sklearn.metrics.pairwise.chi2_kernel (the exact chi squared kernel); sklearn.metrics.pairwise.additive_chi2_kernel (the exact additive chi squared kernel).
Notes: This estimator approximates a slightly different version of the additive chi squared kernel than sklearn.metrics.pairwise.additive_chi2_kernel computes.
References: "Efficient additive kernels via explicit feature maps", A. Vedaldi and A. Zisserman, Pattern Analysis and Machine Intelligence, 2011.
Examples:
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.kernel_approximation import AdditiveChi2Sampler
>>> X, y = load_digits(return_X_y=True)
>>> chi2sampler = AdditiveChi2Sampler(sample_steps=2)
>>> X_transformed = chi2sampler.fit_transform(X, y)
>>> clf = SGDClassifier(max_iter=5, random_state=0, tol=1e-3)
>>> clf.fit(X_transformed, y)
SGDClassifier(max_iter=5, random_state=0)
>>> clf.score(X_transformed, y)
0.9499...
Methods: fit(X[, y]) Set the parameters; fit_transform(X[, y]) Fit to data, then transform it; get_params([deep]) Get parameters for this estimator; set_params(**params) Set the parameters of this estimator; transform(X) Apply approximate feature map to X.
fit(X, y=None) [source] Set the parameters (stores the sampling interval sample_interval_).
Parameters: X : array-like of shape (n_samples, n_features). Training data, where n_samples is the number of samples and n_features is the number of features.
Returns: self : object. Returns the transformer.
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters:
X : array-like of shape (n_samples, n_features). Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None. Target values (None for unsupervised transformations).
**fit_params : dict. Additional fit parameters.
Returns: X_new : ndarray of shape (n_samples, n_features_new). Transformed array.
get_params(deep=True) [source] Get parameters for this estimator.
Parameters: deep : bool, default=True. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params : dict. Parameter names mapped to their values.
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters: **params : dict. Estimator parameters.
Returns: self : estimator instance.
transform(X) [source] Apply approximate feature map to X.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features).
Returns: X_new : {ndarray, sparse matrix} of shape (n_samples, n_features * (2*sample_steps + 1)). Whether the return value is an array or a sparse matrix depends on the type of the input X.
sklearn.modules.generated.sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler
fit(X, y=None) [source] Set the parameters (stores the sampling interval sample_interval_).
Parameters: X : array-like of shape (n_samples, n_features). Training data, where n_samples is the number of samples and n_features is the number of features.
Returns: self : object. Returns the transformer.
sklearn.modules.generated.sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler.fit
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it.
Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters:
X : array-like of shape (n_samples, n_features). Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None. Target values (None for unsupervised transformations).
**fit_params : dict. Additional fit parameters.
Returns: X_new : ndarray of shape (n_samples, n_features_new). Transformed array.
sklearn.modules.generated.sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler.fit_transform
get_params(deep=True) [source] Get parameters for this estimator.
Parameters: deep : bool, default=True. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params : dict. Parameter names mapped to their values.
sklearn.modules.generated.sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler.get_params
set_params(**params) [source] Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters: **params : dict. Estimator parameters.
Returns: self : estimator instance.
sklearn.modules.generated.sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler.set_params
transform(X) [source] Apply approximate feature map to X.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features).
Returns: X_new : {ndarray, sparse matrix} of shape (n_samples, n_features * (2*sample_steps + 1)). Whether the return value is an array or a sparse matrix depends on the type of the input X.
sklearn.modules.generated.sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler.transform
class sklearn.kernel_approximation.Nystroem(kernel='rbf', *, gamma=None, coef0=None, degree=None, kernel_params=None, n_components=100, random_state=None, n_jobs=None) [source] Approximate a kernel map using a subset of the training data.
Constructs an approximate feature map for an arbitrary kernel using a subset of the data as basis. Read more in the User Guide. New in version 0.13.
Parameters:
kernel : str or callable, default='rbf'. Kernel map to be approximated. A callable should accept two arguments plus the keyword arguments passed to this object as kernel_params, and should return a floating point number.
gamma : float, default=None. Gamma parameter for the RBF, laplacian, polynomial, exponential chi2 and sigmoid kernels. Interpretation of the default value is left to the kernel; see the documentation for sklearn.metrics.pairwise. Ignored by other kernels.
coef0 : float, default=None. Zero coefficient for polynomial and sigmoid kernels. Ignored by other kernels.
degree : float, default=None. Degree of the polynomial kernel. Ignored by other kernels.
kernel_params : dict, default=None. Additional parameters (keyword arguments) for the kernel function when it is passed as a callable object.
n_components : int, default=100. Number of features to construct, i.e. how many data points will be used to construct the mapping.
random_state : int, RandomState instance or None, default=None. Pseudo-random number generator to control the uniform sampling without replacement of n_components of the training data to construct the basis kernel. Pass an int for reproducible output across multiple function calls. See Glossary.
n_jobs : int, default=None. The number of jobs to use for the computation. This works by breaking down the kernel matrix into n_jobs even slices and computing them in parallel. None means 1 unless in a joblib.parallel_backend context; -1 means using all processors. See Glossary for more details. New in version 0.24.
Attributes:
components_ : ndarray of shape (n_components, n_features). Subset of training points used to construct the feature map.
component_indices_ : ndarray of shape (n_components,). Indices of components_ in the training set.
normalization_ : ndarray of shape (n_components, n_components). Normalization matrix needed for embedding. Square root of the kernel matrix on components_.
See also: RBFSampler (an approximation to the RBF kernel using random Fourier features); sklearn.metrics.pairwise.kernel_metrics (list of built-in kernels).
References:
Williams, C.K.I. and Seeger, M., "Using the Nystroem method to speed up kernel machines", Advances in Neural Information Processing Systems, 2001.
T. Yang, Y. Li, M. Mahdavi, R. Jin and Z. Zhou, "Nystroem Method vs Random Fourier Features: A Theoretical and Empirical Comparison", Advances in Neural Information Processing Systems, 2012.
Examples:
>>> from sklearn import datasets, svm
>>> from sklearn.kernel_approximation import Nystroem
>>> X, y = datasets.load_digits(n_class=9, return_X_y=True)
>>> data = X / 16.
>>> clf = svm.LinearSVC()
>>> feature_map_nystroem = Nystroem(gamma=.2,
...                                 random_state=1,
...                                 n_components=300)
>>> data_transformed = feature_map_nystroem.fit_transform(data)
>>> clf.fit(data_transformed, y)
LinearSVC()
>>> clf.score(data_transformed, y)
0.9987...
Methods: fit(X[, y]) Fit estimator to data; fit_transform(X[, y]) Fit to data, then transform it; get_params([deep]) Get parameters for this estimator; set_params(**params) Set the parameters of this estimator; transform(X) Apply feature map to X.
fit(X, y=None) [source] Fit estimator to data. Samples a subset of training points, computes the kernel on these and computes the normalization matrix.
Parameters: X : array-like of shape (n_samples, n_features). Training data.
Returns: self : object.
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters:
X : array-like of shape (n_samples, n_features). Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None. Target values (None for unsupervised transformations).
**fit_params : dict. Additional fit parameters.
Returns: X_new : ndarray of shape (n_samples, n_features_new). Transformed array.
get_params(deep=True) [source] Get parameters for this estimator.
Parameters: deep : bool, default=True. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params : dict. Parameter names mapped to their values.
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters: **params : dict. Estimator parameters.
Returns: self : estimator instance.
transform(X) [source] Apply feature map to X. Computes an approximate feature map using the kernel between some training points and X.
Parameters: X : array-like of shape (n_samples, n_features). Data to transform.
Returns: X_transformed : ndarray of shape (n_samples, n_components). Transformed data.
Examples using sklearn.kernel_approximation.Nystroem: Explicit feature map approximation for RBF kernels.
sklearn.modules.generated.sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem
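A quick way to see what the class above computes (an illustrative sketch with invented data, not part of the original page): the inner products of the mapped features approximate the exact kernel matrix, and with n_components close to n_samples the error becomes small.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.rand(60, 5)

# Map the data, then compare feature inner products to the exact kernel.
ny = Nystroem(kernel="rbf", gamma=0.5, n_components=50, random_state=0)
X_map = ny.fit_transform(X)
K_approx = X_map @ X_map.T
K_exact = rbf_kernel(X, gamma=0.5)
print(np.abs(K_approx - K_exact).max())  # small approximation error
```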
fit(X, y=None) [source] Fit estimator to data.
Samples a subset of training points, computes the kernel on these and computes the normalization matrix.
Parameters: X : array-like of shape (n_samples, n_features). Training data.
Returns: self : object.
sklearn.modules.generated.sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem.fit
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it.
Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters:
X : array-like of shape (n_samples, n_features). Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None. Target values (None for unsupervised transformations).
**fit_params : dict. Additional fit parameters.
Returns: X_new : ndarray of shape (n_samples, n_features_new). Transformed array.
sklearn.modules.generated.sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem.fit_transform
get_params(deep=True) [source] Get parameters for this estimator.
Parameters: deep : bool, default=True. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params : dict. Parameter names mapped to their values.
sklearn.modules.generated.sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem.get_params
set_params(**params) [source] Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters: **params : dict. Estimator parameters.
Returns: self : estimator instance.
sklearn.modules.generated.sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem.set_params
transform(X) [source] Apply feature map to X.
Computes an approximate feature map using the kernel between some training points and X.
Parameters: X : array-like of shape (n_samples, n_features). Data to transform.
Returns: X_transformed : ndarray of shape (n_samples, n_components). Transformed data.
sklearn.modules.generated.sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem.transform
class sklearn.kernel_approximation.PolynomialCountSketch(*, gamma=1.0, degree=2, coef0=0, n_components=100, random_state=None) [source] Polynomial kernel approximation via Tensor Sketch.
Implements Tensor Sketch, which approximates the feature map of the polynomial kernel K(X, Y) = (gamma * <X, Y> + coef0)^degree by efficiently computing a Count Sketch of the outer product of a vector with itself using Fast Fourier Transforms (FFT). Read more in the User Guide. New in version 0.24.
Parameters:
gamma : float, default=1.0. Parameter of the polynomial kernel whose feature map will be approximated.
degree : int, default=2. Degree of the polynomial kernel whose feature map will be approximated.
coef0 : int, default=0. Constant term of the polynomial kernel whose feature map will be approximated.
n_components : int, default=100. Dimensionality of the output feature space. Usually, n_components should be greater than the number of features in the input samples in order to achieve good performance. The optimal score / run time balance is typically achieved around n_components = 10 * n_features, but this depends on the specific dataset being used.
random_state : int, RandomState instance, default=None. Determines random number generation for indexHash and bitHash initialization. Pass an int for reproducible results across multiple function calls. See Glossary.
Attributes:
indexHash_ : ndarray of shape (degree, n_features), dtype=int64. Array of indexes in the range [0, n_components) used to represent the 2-wise independent hash functions for Count Sketch computation.
bitHash_ : ndarray of shape (degree, n_features), dtype=float32. Array with random entries in {+1, -1}, used to represent the 2-wise independent hash functions for Count Sketch computation.
Examples:
>>> from sklearn.kernel_approximation import PolynomialCountSketch
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> ps = PolynomialCountSketch(degree=3, random_state=1)
>>> X_features = ps.fit_transform(X)
>>> clf = SGDClassifier(max_iter=10, tol=1e-3)
>>> clf.fit(X_features, y)
SGDClassifier(max_iter=10)
>>> clf.score(X_features, y)
1.0
Methods: fit(X[, y]) Fit the model with X; fit_transform(X[, y]) Fit to data, then transform it; get_params([deep]) Get parameters for this estimator; set_params(**params) Set the parameters of this estimator; transform(X) Generate the feature map approximation for X.
fit(X, y=None) [source] Fit the model with X. Initializes the internal variables. The method needs no information about the distribution of data, so we only care about n_features in X.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features). Training data, where n_samples is the number of samples and n_features is the number of features.
Returns: self : object. Returns the transformer.
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters:
X : array-like of shape (n_samples, n_features). Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None. Target values (None for unsupervised transformations).
**fit_params : dict. Additional fit parameters.
Returns: X_new : ndarray of shape (n_samples, n_features_new). Transformed array.
get_params(deep=True) [source] Get parameters for this estimator.
Parameters: deep : bool, default=True. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params : dict. Parameter names mapped to their values.
set_params(**params) [source] Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters: **params : dict. Estimator parameters.
Returns: self : estimator instance.
transform(X) [source] Generate the feature map approximation for X.
Parameters: X : array-like of shape (n_samples, n_features). New data, where n_samples is the number of samples and n_features is the number of features.
Returns: X_new : array-like of shape (n_samples, n_components).
Examples using sklearn.kernel_approximation.PolynomialCountSketch: Release Highlights for scikit-learn 0.24; Scalable learning with polynomial kernel approximation.
sklearn.modules.generated.sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch
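The kernel formula above can be checked numerically (an illustrative sketch with invented data, not part of the original page): dot products of the sketched features estimate the exact polynomial kernel, and the estimate tightens as n_components grows.

```python
import numpy as np
from sklearn.kernel_approximation import PolynomialCountSketch
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.RandomState(0)
X = rng.rand(20, 3)

# Sketch the data, then compare feature inner products with
# K(X, Y) = (gamma * <X, Y> + coef0)^degree computed exactly.
ps = PolynomialCountSketch(degree=2, gamma=1.0, coef0=0,
                           n_components=500, random_state=0)
X_sk = ps.fit_transform(X)
K_approx = X_sk @ X_sk.T
K_exact = polynomial_kernel(X, degree=2, gamma=1.0, coef0=0)
print(np.abs(K_approx - K_exact).mean())  # small on average
```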
sklearn.kernel_approximation.PolynomialCountSketch class sklearn.kernel_approximation.PolynomialCountSketch(*, gamma=1.0, degree=2, coef0=0, n_components=100, random_state=None) [source] Polynomial kernel approximation via Tensor Sketch. Implements Tensor Sketch, which approximates the feature map of the polynomial kernel: K(X, Y) = (gamma * <X, Y> + coef0)^degree by efficiently computing a Count Sketch of the outer product of a vector with itself using Fast Fourier Transforms (FFT). Read more in the User Guide. New in version 0.24. Parameters gammafloat, default=1.0 Parameter of the polynomial kernel whose feature map will be approximated. degreeint, default=2 Degree of the polynomial kernel whose feature map will be approximated. coef0int, default=0 Constant term of the polynomial kernel whose feature map will be approximated. n_componentsint, default=100 Dimensionality of the output feature space. Usually, n_components should be greater than the number of features in input samples in order to achieve good performance. The optimal score / run time balance is typically achieved around n_components = 10 * n_features, but this depends on the specific dataset being used. random_stateint, RandomState instance, default=None Determines random number generation for indexHash and bitHash initialization. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes indexHash_ndarray of shape (degree, n_features), dtype=int64 Array of indexes in range [0, n_components) used to represent the 2-wise independent hash functions for Count Sketch computation. bitHash_ndarray of shape (degree, n_features), dtype=float32 Array with random entries in {+1, -1}, used to represent the 2-wise independent hash functions for Count Sketch computation. 
Examples >>> from sklearn.kernel_approximation import PolynomialCountSketch >>> from sklearn.linear_model import SGDClassifier >>> X = [[0, 0], [1, 1], [1, 0], [0, 1]] >>> y = [0, 0, 1, 1] >>> ps = PolynomialCountSketch(degree=3, random_state=1) >>> X_features = ps.fit_transform(X) >>> clf = SGDClassifier(max_iter=10, tol=1e-3) >>> clf.fit(X_features, y) SGDClassifier(max_iter=10) >>> clf.score(X_features, y) 1.0 Methods fit(X[, y]) Fit the model with X. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. transform(X) Generate the feature map approximation for X. fit(X, y=None) [source] Fit the model with X. Initializes the internal variables. The method needs no information about the distribution of data, so we only care about n_features in X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. Returns selfobject Returns the transformer. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Generate the feature map approximation for X. Parameters X{array-like}, shape (n_samples, n_features) New data, where n_samples is the number of samples and n_features is the number of features. Returns X_newarray-like, shape (n_samples, n_components) Examples using sklearn.kernel_approximation.PolynomialCountSketch Release Highlights for scikit-learn 0.24 Scalable learning with polynomial kernel approximation
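As a quick sanity check of the approximation (an illustrative sketch, not part of the official docs; it assumes NumPy and scikit-learn are available), the inner products of the sketched features should be close to the exact polynomial kernel, with the error shrinking as n_components grows:

```python
import numpy as np
from sklearn.kernel_approximation import PolynomialCountSketch
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.RandomState(0)
X = rng.randn(50, 10)

# Exact degree-2 polynomial kernel: (gamma * <x, y> + coef0)^degree
K_exact = polynomial_kernel(X, degree=2, gamma=1.0, coef0=0)

# Tensor Sketch approximation; larger n_components -> better approximation.
ps = PolynomialCountSketch(degree=2, gamma=1.0, coef0=0,
                           n_components=4000, random_state=0)
X_sketch = ps.fit_transform(X)
K_approx = X_sketch @ X_sketch.T

# Relative Frobenius-norm error; typically small at this n_components.
rel_err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(rel_err)
```

The output feature space has n_components dimensions, so X_sketch has shape (50, 4000) here.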
class sklearn.kernel_approximation.RBFSampler(*, gamma=1.0, n_components=100, random_state=None) [source] Approximates feature map of an RBF kernel by Monte Carlo approximation of its Fourier transform. It implements a variant of Random Kitchen Sinks.[1] Read more in the User Guide. Parameters gammafloat, default=1.0 Parameter of RBF kernel: exp(-gamma * x^2) n_componentsint, default=100 Number of Monte Carlo samples per original feature. Equals the dimensionality of the computed feature space. random_stateint, RandomState instance or None, default=None Pseudo-random number generator to control the generation of the random weights and random offset when fitting the training data. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes random_offset_ndarray of shape (n_components,), dtype=float64 Random offset used to compute the projection in the n_components dimensions of the feature space. random_weights_ndarray of shape (n_features, n_components), dtype=float64 Random projection directions drawn from the Fourier transform of the RBF kernel. Notes See “Random Features for Large-Scale Kernel Machines” by A. Rahimi and Benjamin Recht. [1] “Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning” by A. Rahimi and Benjamin Recht. (https://people.eecs.berkeley.edu/~brecht/papers/08.rah.rec.nips.pdf) Examples >>> from sklearn.kernel_approximation import RBFSampler >>> from sklearn.linear_model import SGDClassifier >>> X = [[0, 0], [1, 1], [1, 0], [0, 1]] >>> y = [0, 0, 1, 1] >>> rbf_feature = RBFSampler(gamma=1, random_state=1) >>> X_features = rbf_feature.fit_transform(X) >>> clf = SGDClassifier(max_iter=5, tol=1e-3) >>> clf.fit(X_features, y) SGDClassifier(max_iter=5) >>> clf.score(X_features, y) 1.0 Methods fit(X[, y]) Fit the model with X. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
set_params(**params) Set the parameters of this estimator. transform(X) Apply the approximate feature map to X. fit(X, y=None) [source] Fit the model with X. Samples random projection according to n_features. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. Returns selfobject Returns the transformer. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Apply the approximate feature map to X. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) New data, where n_samples is the number of samples and n_features is the number of features. Returns X_newarray-like, shape (n_samples, n_components)
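The Monte Carlo construction behind this estimator can be sketched in plain NumPy (an illustrative reimplementation of the random Fourier features idea, not the library's internals): the Fourier transform of the RBF kernel is a Gaussian, so one draws Gaussian random weights, a uniform phase offset, and uses scaled cosine features whose inner products approximate the kernel:

```python
import numpy as np

def rbf_random_features(X, gamma=1.0, n_components=1000, seed=0):
    """Random Fourier features approximating exp(-gamma * ||x - y||^2)."""
    rng = np.random.RandomState(seed)
    n_features = X.shape[1]
    # Spectral density of exp(-gamma * ||d||^2) is Gaussian, std sqrt(2*gamma).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_features, n_components))
    b = rng.uniform(0, 2 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

rng = np.random.RandomState(1)
X = rng.randn(30, 5)
Z = rbf_random_features(X, gamma=0.5, n_components=5000)

# Inner products of the random features approximate the exact RBF kernel.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
max_err = np.abs(Z @ Z.T - K_exact).max()
print(max_err)  # shrinks as n_components grows
```

This mirrors the role of random_weights_ and random_offset_ described above, with n_components controlling the accuracy/cost trade-off.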
Examples using sklearn.kernel_approximation.RBFSampler Explicit feature map approximation for RBF kernels
class sklearn.kernel_approximation.SkewedChi2Sampler(*, skewedness=1.0, n_components=100, random_state=None) [source] Approximates feature map of the “skewed chi-squared” kernel by Monte Carlo approximation of its Fourier transform. Read more in the User Guide. Parameters skewednessfloat, default=1.0 “skewedness” parameter of the kernel. Needs to be cross-validated. n_componentsint, default=100 number of Monte Carlo samples per original feature. Equals the dimensionality of the computed feature space. random_stateint, RandomState instance or None, default=None Pseudo-random number generator to control the generation of the random weights and random offset when fitting the training data. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes random_weights_ndarray of shape (n_features, n_components) Weight array, sampled from a secant hyperbolic distribution, which will be used to linearly transform the log of the data. random_offset_ndarray of shape (n_features, n_components) Bias term, which will be added to the data. It is uniformly distributed between 0 and 2*pi. See also AdditiveChi2Sampler A different approach for approximating an additive variant of the chi squared kernel. sklearn.metrics.pairwise.chi2_kernel The exact chi squared kernel. References See “Random Fourier Approximations for Skewed Multiplicative Histogram Kernels” by Fuxin Li, Catalin Ionescu and Cristian Sminchisescu. Examples >>> from sklearn.kernel_approximation import SkewedChi2Sampler >>> from sklearn.linear_model import SGDClassifier >>> X = [[0, 0], [1, 1], [1, 0], [0, 1]] >>> y = [0, 0, 1, 1] >>> chi2_feature = SkewedChi2Sampler(skewedness=.01, ... n_components=10, ... random_state=0) >>> X_features = chi2_feature.fit_transform(X, y) >>> clf = SGDClassifier(max_iter=10, tol=1e-3) >>> clf.fit(X_features, y) SGDClassifier(max_iter=10) >>> clf.score(X_features, y) 1.0 Methods fit(X[, y]) Fit the model with X. 
fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. transform(X) Apply the approximate feature map to X. fit(X, y=None) [source] Fit the model with X. Samples random projection according to n_features. Parameters Xarray-like, shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. Returns selfobject Returns the transformer. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Apply the approximate feature map to X. Parameters Xarray-like, shape (n_samples, n_features) New data, where n_samples is the number of samples and n_features is the number of features. All values of X must be strictly greater than “-skewedness”. Returns X_newarray-like, shape (n_samples, n_components)
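A short usage sketch of the constraint noted above (assuming scikit-learn is installed; the specific data and parameter values are illustrative): the transformer maps X into n_components dimensions, and transform rejects inputs with entries at or below -skewedness:

```python
import numpy as np
from sklearn.kernel_approximation import SkewedChi2Sampler

# Histogram-like (non-negative) data is the intended input for this kernel.
X = np.abs(np.random.RandomState(0).randn(20, 4))
sampler = SkewedChi2Sampler(skewedness=1.0, n_components=50, random_state=0)
X_new = sampler.fit_transform(X)
print(X_new.shape)  # (n_samples, n_components)

# Entries smaller than or equal to -skewedness are invalid for this map.
try:
    sampler.transform(np.array([[-2.0, 0.0, 0.0, 0.0]]))
    raised = False
except ValueError:
    raised = True
print(raised)
```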
class sklearn.kernel_ridge.KernelRidge(alpha=1, *, kernel='linear', gamma=None, degree=3, coef0=1, kernel_params=None) [source] Kernel ridge regression. Kernel ridge regression (KRR) combines ridge regression (linear least squares with l2-norm regularization) with the kernel trick. It thus learns a linear function in the space induced by the respective kernel and the data. For non-linear kernels, this corresponds to a non-linear function in the original space. The form of the model learned by KRR is identical to support vector regression (SVR). However, different loss functions are used: KRR uses squared error loss while support vector regression uses epsilon-insensitive loss, both combined with l2 regularization. In contrast to SVR, fitting a KRR model can be done in closed-form and is typically faster for medium-sized datasets. On the other hand, the learned model is non-sparse and thus slower than SVR, which learns a sparse model for epsilon > 0, at prediction-time. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape [n_samples, n_targets]). Read more in the User Guide. Parameters alphafloat or array-like of shape (n_targets,), default=1.0 Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number. See Ridge regression and classification for formula. kernelstring or callable, default=”linear” Kernel mapping used internally. This parameter is directly passed to pairwise_kernels. If kernel is a string, it must be one of the metrics in pairwise.PAIRWISE_KERNEL_FUNCTIONS. If kernel is “precomputed”, X is assumed to be a kernel matrix. 
Alternatively, if kernel is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two rows from X as input and return the corresponding kernel value as a single number. This means that callables from sklearn.metrics.pairwise are not allowed, as they operate on matrices, not single samples. Use the string identifying the kernel instead. gammafloat, default=None Gamma parameter for the RBF, laplacian, polynomial, exponential, chi2 and sigmoid kernels. Interpretation of the default value is left to the kernel; see the documentation for sklearn.metrics.pairwise. Ignored by other kernels. degreefloat, default=3 Degree of the polynomial kernel. Ignored by other kernels. coef0float, default=1 Zero coefficient for polynomial and sigmoid kernels. Ignored by other kernels. kernel_paramsmapping of string to any, default=None Additional parameters (keyword arguments) for kernel function passed as callable object. Attributes dual_coef_ndarray of shape (n_samples,) or (n_samples, n_targets) Representation of weight vector(s) in kernel space. X_fit_{ndarray, sparse matrix} of shape (n_samples, n_features) Training data, which is also required for prediction. If kernel == “precomputed” this is instead the precomputed training matrix, of shape (n_samples, n_samples). See also sklearn.linear_model.Ridge Linear ridge regression. sklearn.svm.SVR Support Vector Regression implemented using libsvm. References Kevin P. Murphy “Machine Learning: A Probabilistic Perspective”, The MIT Press chapter 14.4.3, pp. 
492-493 Examples >>> from sklearn.kernel_ridge import KernelRidge >>> import numpy as np >>> n_samples, n_features = 10, 5 >>> rng = np.random.RandomState(0) >>> y = rng.randn(n_samples) >>> X = rng.randn(n_samples, n_features) >>> clf = KernelRidge(alpha=1.0) >>> clf.fit(X, y) KernelRidge(alpha=1.0) Methods fit(X, y[, sample_weight]) Fit Kernel Ridge regression model get_params([deep]) Get parameters for this estimator. predict(X) Predict using the kernel ridge model score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit Kernel Ridge regression model Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. If kernel == “precomputed” this is instead a precomputed kernel matrix, of shape (n_samples, n_samples). yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values sample_weightfloat or array-like of shape (n_samples,), default=None Individual weights for each sample, ignored if None is passed. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the kernel ridge model Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. If kernel == “precomputed” this is instead a precomputed kernel matrix, shape = [n_samples, n_samples_fitted], where n_samples_fitted is the number of samples used in the fitting for this estimator. Returns Cndarray of shape (n_samples,) or (n_samples, n_targets) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. 
The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
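The closed-form fit mentioned above can be sketched in a few lines of NumPy (an illustration of the underlying math, not scikit-learn's implementation; the RBF kernel and the toy data are arbitrary choices): solve (K + alpha * I) dual_coef = y on the training kernel matrix, then predict with the cross-kernel between test and training samples:

```python
import numpy as np

def krr_fit_predict(X_train, y_train, X_test, alpha=1.0, gamma=1.0):
    """Kernel ridge regression with an RBF kernel, solved in closed form."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    K = rbf(X_train, X_train)
    # Dual coefficients: (K + alpha * I)^{-1} y
    dual_coef = np.linalg.solve(K + alpha * np.eye(len(X_train)), y_train)
    return rbf(X_test, X_train) @ dual_coef

rng = np.random.RandomState(0)
X = rng.randn(40, 3)
y = np.sin(X[:, 0]) + 0.1 * rng.randn(40)

# With a tiny alpha the model nearly interpolates the training targets.
pred = krr_fit_predict(X, y, X, alpha=1e-3)
mse = np.mean((pred - y) ** 2)
print(mse)
```

Larger alpha trades this near-interpolation for smoother, better-conditioned solutions, matching the role of the alpha parameter documented above.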
sklearn.modules.generated.sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge
sklearn.kernel_ridge.KernelRidge class sklearn.kernel_ridge.KernelRidge(alpha=1, *, kernel='linear', gamma=None, degree=3, coef0=1, kernel_params=None) [source] Kernel ridge regression. Kernel ridge regression (KRR) combines ridge regression (linear least squares with l2-norm regularization) with the kernel trick. It thus learns a linear function in the space induced by the respective kernel and the data. For non-linear kernels, this corresponds to a non-linear function in the original space. The form of the model learned by KRR is identical to support vector regression (SVR). However, different loss functions are used: KRR uses squared error loss while support vector regression uses epsilon-insensitive loss, both combined with l2 regularization. In contrast to SVR, fitting a KRR model can be done in closed-form and is typically faster for medium-sized datasets. On the other hand, the learned model is non-sparse and thus slower than SVR, which learns a sparse model for epsilon > 0, at prediction-time. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape [n_samples, n_targets]). Read more in the User Guide. Parameters alphafloat or array-like of shape (n_targets,), default=1.0 Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number. See Ridge regression and classification for formula. kernelstring or callable, default=”linear” Kernel mapping used internally. This parameter is directly passed to pairwise_kernel. If kernel is a string, it must be one of the metrics in pairwise.PAIRWISE_KERNEL_FUNCTIONS. If kernel is “precomputed”, X is assumed to be a kernel matrix. 
Alternatively, if kernel is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two rows from X as input and return the corresponding kernel value as a single number. This means that callables from sklearn.metrics.pairwise are not allowed, as they operate on matrices, not single samples. Use the string identifying the kernel instead. gammafloat, default=None Gamma parameter for the RBF, laplacian, polynomial, exponential, chi2 and sigmoid kernels. Interpretation of the default value is left to the kernel; see the documentation for sklearn.metrics.pairwise. Ignored by other kernels. degreefloat, default=3 Degree of the polynomial kernel. Ignored by other kernels. coef0float, default=1 Zero coefficient for polynomial and sigmoid kernels. Ignored by other kernels. kernel_paramsmapping of string to any, default=None Additional parameters (keyword arguments) for kernel function passed as callable object. Attributes dual_coef_ndarray of shape (n_samples,) or (n_samples, n_targets) Representation of weight vector(s) in kernel space. X_fit_{ndarray, sparse matrix} of shape (n_samples, n_features) Training data, which is also required for prediction. If kernel == “precomputed” this is instead the precomputed training matrix, of shape (n_samples, n_samples). See also sklearn.linear_model.Ridge Linear ridge regression. sklearn.svm.SVR Support Vector Regression implemented using libsvm. References Kevin P. Murphy “Machine Learning: A Probabilistic Perspective”, The MIT Press chapter 14.4.3, pp.
492-493 Examples >>> from sklearn.kernel_ridge import KernelRidge >>> import numpy as np >>> n_samples, n_features = 10, 5 >>> rng = np.random.RandomState(0) >>> y = rng.randn(n_samples) >>> X = rng.randn(n_samples, n_features) >>> clf = KernelRidge(alpha=1.0) >>> clf.fit(X, y) KernelRidge(alpha=1.0) Methods fit(X, y[, sample_weight]) Fit Kernel Ridge regression model get_params([deep]) Get parameters for this estimator. predict(X) Predict using the kernel ridge model score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit Kernel Ridge regression model Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. If kernel == “precomputed” this is instead a precomputed kernel matrix, of shape (n_samples, n_samples). yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values sample_weightfloat or array-like of shape (n_samples,), default=None Individual weights for each sample, ignored if None is passed. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the kernel ridge model Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. If kernel == “precomputed” this is instead a precomputed kernel matrix, shape = [n_samples, n_samples_fitted], where n_samples_fitted is the number of samples used in the fitting for this estimator. Returns Cndarray of shape (n_samples,) or (n_samples, n_targets) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. 
The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.kernel_ridge.KernelRidge Comparison of kernel ridge and Gaussian process regression Comparison of kernel ridge regression and SVR
sklearn.modules.generated.sklearn.kernel_ridge.kernelridge
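As a hedged sketch of the workflow described above (the data and the alpha/gamma values are illustrative, not taken from this page, and would normally be tuned), fitting KernelRidge with an RBF kernel learns a non-linear function in the original input space:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy 1-D regression problem: noisy sine wave.
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(100)

# alpha and gamma are illustrative; in practice tune them, e.g. with GridSearchCV.
model = KernelRidge(alpha=0.1, kernel="rbf", gamma=0.5)
model.fit(X, y)
y_pred = model.predict(X)
print(y_pred.shape)  # (100,)
```

Because the model is fitted in closed form, there is no iterative solver to configure; the dual coefficients (one per training sample, reflecting the non-sparse nature of KRR) are available as `model.dual_coef_`.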
fit(X, y, sample_weight=None) [source] Fit Kernel Ridge regression model Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. If kernel == “precomputed” this is instead a precomputed kernel matrix, of shape (n_samples, n_samples). yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values sample_weightfloat or array-like of shape (n_samples,), default=None Individual weights for each sample, ignored if None is passed. Returns selfreturns an instance of self.
sklearn.modules.generated.sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge.get_params
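A minimal sketch of get_params (the constructor values are arbitrary examples): it returns the estimator's parameters as a plain dict, keyed by parameter name.

```python
from sklearn.kernel_ridge import KernelRidge

model = KernelRidge(alpha=0.5, kernel="rbf", gamma=0.1)
params = model.get_params()

# Each constructor argument appears under its own name.
print(params["alpha"], params["kernel"])  # 0.5 rbf
```

This dict is what model-selection tools such as clone and GridSearchCV use to reconstruct and vary estimators.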
predict(X) [source] Predict using the kernel ridge model Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. If kernel == “precomputed” this is instead a precomputed kernel matrix, shape = [n_samples, n_samples_fitted], where n_samples_fitted is the number of samples used in the fitting for this estimator. Returns Cndarray of shape (n_samples,) or (n_samples, n_targets) Returns predicted values.
sklearn.modules.generated.sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge.predict
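The shape convention for kernel == "precomputed" can be sketched as follows (data is illustrative): fit takes the (n_samples, n_samples) training Gram matrix, and predict takes a matrix of kernel values between the query points and the n_samples_fitted training points.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X_train = rng.randn(20, 4)
y_train = rng.randn(20)
X_test = rng.randn(5, 4)

K_train = rbf_kernel(X_train, X_train)  # shape (20, 20): train vs train
K_test = rbf_kernel(X_test, X_train)    # shape (5, 20): test vs train

model = KernelRidge(alpha=1.0, kernel="precomputed")
model.fit(K_train, y_train)
y_pred = model.predict(K_test)
print(y_pred.shape)  # (5,)
```

This should match letting KernelRidge compute the same kernel internally, and is useful when the kernel is expensive and reused across several fits.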
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge.score
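The formula above can be checked by hand (the numbers are made up for illustration): R² = 1 - u/v with u the residual sum of squares and v the total sum of squares.

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()            # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()     # total sum of squares
r2 = 1 - u / v
print(round(r2, 4))  # 0.9486
```

A constant prediction equal to y_true.mean() gives u == v and hence R² == 0, which is the baseline mentioned above.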
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge.set_params
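The `<component>__<parameter>` form mentioned above can be sketched with a Pipeline (the step names "scale" and "krr" are arbitrary labels chosen for this example):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.kernel_ridge import KernelRidge

pipe = Pipeline([("scale", StandardScaler()), ("krr", KernelRidge())])

# Nested parameters are addressed as <step name>__<parameter name>.
pipe.set_params(krr__alpha=0.5, krr__kernel="rbf")
print(pipe.get_params()["krr__alpha"])  # 0.5
```

The same double-underscore keys are what GridSearchCV param grids use to tune components inside a pipeline.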
class sklearn.linear_model.ARDRegression(*, n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, threshold_lambda=10000.0, fit_intercept=True, normalize=False, copy_X=True, verbose=False) [source] Bayesian ARD regression. Fit the weights of a regression model, using an ARD prior. The weights of the regression model are assumed to follow Gaussian distributions. Also estimate the parameters lambda (precisions of the distributions of the weights) and alpha (precision of the distribution of the noise). The estimation is done by an iterative procedure (Evidence Maximization). Read more in the User Guide. Parameters n_iterint, default=300 Maximum number of iterations. tolfloat, default=1e-3 Stop the algorithm if w has converged. alpha_1float, default=1e-6 Hyper-parameter : shape parameter for the Gamma distribution prior over the alpha parameter. alpha_2float, default=1e-6 Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter. lambda_1float, default=1e-6 Hyper-parameter : shape parameter for the Gamma distribution prior over the lambda parameter. lambda_2float, default=1e-6 Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter. compute_scorebool, default=False If True, compute the objective function at each step of the model. threshold_lambdafloat, default=10 000 Threshold for removing (pruning) weights with high precision from the computation. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. verbosebool, default=False Verbose mode when fitting the model. Attributes coef_array-like of shape (n_features,) Coefficients of the regression model (mean of distribution) alpha_float estimated precision of the noise. lambda_array-like of shape (n_features,) estimated precisions of the weights. sigma_array-like of shape (n_features, n_features) estimated variance-covariance matrix of the weights scores_float if computed, value of the objective function (to be maximized) intercept_float Independent term in decision function. Set to 0.0 if fit_intercept = False. X_offset_float If normalize=True, offset subtracted for centering data to a zero mean. X_scale_float If normalize=True, parameter used to scale data to a unit standard deviation. Notes For an example, see examples/linear_model/plot_ard.py. References D. J. C. MacKay, Bayesian nonlinear modeling for the prediction competition, ASHRAE Transactions, 1994. R. Salakhutdinov, Lecture notes on Statistical Machine Learning, http://www.utstat.toronto.edu/~rsalakhu/sta4273/notes/Lecture2.pdf#page=15 Their beta is our self.alpha_ Their alpha is our self.lambda_ ARD is a little different than the slide: only dimensions/features for which self.lambda_ < self.threshold_lambda are kept and the rest are discarded. Examples >>> from sklearn import linear_model >>> clf = linear_model.ARDRegression() >>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2]) ARDRegression() >>> clf.predict([[1, 1]]) array([1.]) Methods fit(X, y) Fit the ARDRegression model according to the given training data and parameters. get_params([deep]) Get parameters for this estimator. predict(X[, return_std]) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. 
set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the ARDRegression model according to the given training data and parameters. Iterative procedure to maximize the evidence. Parameters Xarray-like of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) Target values (integers). Will be cast to X’s dtype if necessary. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X, return_std=False) [source] Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. return_stdbool, default=False Whether to return the standard deviation of posterior prediction. Returns y_meanarray-like of shape (n_samples,) Mean of predictive distribution of query points. y_stdarray-like of shape (n_samples,) Standard deviation of predictive distribution of query points. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples.
For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.ardregression#sklearn.linear_model.ARDRegression
sklearn.linear_model.ARDRegression class sklearn.linear_model.ARDRegression(*, n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, threshold_lambda=10000.0, fit_intercept=True, normalize=False, copy_X=True, verbose=False) [source] Bayesian ARD regression. Fit the weights of a regression model, using an ARD prior. The weights of the regression model are assumed to follow Gaussian distributions. Also estimate the parameters lambda (precisions of the distributions of the weights) and alpha (precision of the distribution of the noise). The estimation is done by an iterative procedure (Evidence Maximization). Read more in the User Guide. Parameters n_iterint, default=300 Maximum number of iterations. tolfloat, default=1e-3 Stop the algorithm if w has converged. alpha_1float, default=1e-6 Hyper-parameter : shape parameter for the Gamma distribution prior over the alpha parameter. alpha_2float, default=1e-6 Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter. lambda_1float, default=1e-6 Hyper-parameter : shape parameter for the Gamma distribution prior over the lambda parameter. lambda_2float, default=1e-6 Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter. compute_scorebool, default=False If True, compute the objective function at each step of the model. threshold_lambdafloat, default=10 000 Threshold for removing (pruning) weights with high precision from the computation. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. verbosebool, default=False Verbose mode when fitting the model. Attributes coef_array-like of shape (n_features,) Coefficients of the regression model (mean of distribution) alpha_float estimated precision of the noise. lambda_array-like of shape (n_features,) estimated precisions of the weights. sigma_array-like of shape (n_features, n_features) estimated variance-covariance matrix of the weights scores_float if computed, value of the objective function (to be maximized) intercept_float Independent term in decision function. Set to 0.0 if fit_intercept = False. X_offset_float If normalize=True, offset subtracted for centering data to a zero mean. X_scale_float If normalize=True, parameter used to scale data to a unit standard deviation. Notes For an example, see examples/linear_model/plot_ard.py. References D. J. C. MacKay, Bayesian nonlinear modeling for the prediction competition, ASHRAE Transactions, 1994. R. Salakhutdinov, Lecture notes on Statistical Machine Learning, http://www.utstat.toronto.edu/~rsalakhu/sta4273/notes/Lecture2.pdf#page=15 Their beta is our self.alpha_ Their alpha is our self.lambda_ ARD is a little different than the slide: only dimensions/features for which self.lambda_ < self.threshold_lambda are kept and the rest are discarded. Examples >>> from sklearn import linear_model >>> clf = linear_model.ARDRegression() >>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2]) ARDRegression() >>> clf.predict([[1, 1]]) array([1.]) Methods fit(X, y) Fit the ARDRegression model according to the given training data and parameters. get_params([deep]) Get parameters for this estimator. predict(X[, return_std]) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. 
set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the ARDRegression model according to the given training data and parameters. Iterative procedure to maximize the evidence. Parameters Xarray-like of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) Target values (integers). Will be cast to X’s dtype if necessary. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X, return_std=False) [source] Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. return_stdbool, default=False Whether to return the standard deviation of posterior prediction. Returns y_meanarray-like of shape (n_samples,) Mean of predictive distribution of query points. y_stdarray-like of shape (n_samples,) Standard deviation of predictive distribution of query points. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples.
For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.ARDRegression Automatic Relevance Determination Regression (ARD)
sklearn.modules.generated.sklearn.linear_model.ardregression
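A hedged sketch of the pruning behaviour described above (the synthetic data and weight values are made up for illustration): when only a few features are informative, ARD drives the remaining coefficients toward zero by pruning weights whose estimated precision lambda_ exceeds threshold_lambda.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.RandomState(0)
n_samples, n_features = 100, 10
X = rng.randn(n_samples, n_features)

# Only the first two features carry signal; the other eight are noise.
w = np.zeros(n_features)
w[:2] = [2.0, -1.0]
y = X @ w + 0.01 * rng.randn(n_samples)

clf = ARDRegression(compute_score=True)
clf.fit(X, y)

# The informative coefficients should be recovered; the rest shrink to ~0.
print(np.round(clf.coef_[:2], 2))
```

The per-feature precisions live in clf.lambda_, so comparing them against threshold_lambda shows which dimensions were kept.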
fit(X, y) [source] Fit the ARDRegression model according to the given training data and parameters. Iterative procedure to maximize the evidence. Parameters Xarray-like of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) Target values (integers). Will be cast to X’s dtype if necessary. Returns selfreturns an instance of self.
sklearn.modules.generated.sklearn.linear_model.ardregression#sklearn.linear_model.ARDRegression.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.ardregression#sklearn.linear_model.ARDRegression.get_params
predict(X, return_std=False) [source] Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. return_stdbool, default=False Whether to return the standard deviation of posterior prediction. Returns y_meanarray-like of shape (n_samples,) Mean of predictive distribution of query points. y_stdarray-like of shape (n_samples,) Standard deviation of predictive distribution of query points.
sklearn.modules.generated.sklearn.linear_model.ardregression#sklearn.linear_model.ARDRegression.predict
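A minimal sketch of return_std (synthetic data, illustrative weights): with return_std=True, predict returns both the mean and the standard deviation of the predictive distribution at each query point.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.0, 0.0, -0.5]) + 0.1 * rng.randn(50)

clf = ARDRegression()
clf.fit(X, y)

# return_std=True additionally yields the posterior predictive std dev.
y_mean, y_std = clf.predict(X[:5], return_std=True)
print(y_mean.shape, y_std.shape)  # (5,) (5,)
```

The standard deviation combines the estimated noise level (alpha_) with the weight uncertainty (sigma_), so it is strictly positive.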
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.linear_model.ardregression#sklearn.linear_model.ARDRegression.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.ardregression#sklearn.linear_model.ARDRegression.set_params
class sklearn.linear_model.BayesianRidge(*, n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, alpha_init=None, lambda_init=None, compute_score=False, fit_intercept=True, normalize=False, copy_X=True, verbose=False) [source] Bayesian ridge regression. Fit a Bayesian ridge model. See the Notes section for details on this implementation and the optimization of the regularization parameters lambda (precision of the weights) and alpha (precision of the noise). Read more in the User Guide. Parameters n_iterint, default=300 Maximum number of iterations. Should be greater than or equal to 1. tolfloat, default=1e-3 Stop the algorithm if w has converged. alpha_1float, default=1e-6 Hyper-parameter : shape parameter for the Gamma distribution prior over the alpha parameter. alpha_2float, default=1e-6 Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter. lambda_1float, default=1e-6 Hyper-parameter : shape parameter for the Gamma distribution prior over the lambda parameter. lambda_2float, default=1e-6 Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter. alpha_initfloat, default=None Initial value for alpha (precision of the noise). If not set, alpha_init is 1/Var(y). New in version 0.22. lambda_initfloat, default=None Initial value for lambda (precision of the weights). If not set, lambda_init is 1. New in version 0.22. compute_scorebool, default=False If True, compute the log marginal likelihood at each iteration of the optimization. fit_interceptbool, default=True Whether to calculate the intercept for this model. The intercept is not treated as a probabilistic parameter and thus has no associated variance. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. 
If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. verbosebool, default=False Verbose mode when fitting the model. Attributes coef_array-like of shape (n_features,) Coefficients of the regression model (mean of distribution). intercept_float Independent term in decision function. Set to 0.0 if fit_intercept = False. alpha_float Estimated precision of the noise. lambda_float Estimated precision of the weights. sigma_array-like of shape (n_features, n_features) Estimated variance-covariance matrix of the weights. scores_array-like of shape (n_iter_+1,) If compute_score is True, value of the log marginal likelihood (to be maximized) at each iteration of the optimization. The array starts with the value of the log marginal likelihood obtained for the initial values of alpha and lambda and ends with the value obtained for the estimated alpha and lambda. n_iter_int The actual number of iterations to reach the stopping criterion. X_offset_float If normalize=True, offset subtracted for centering data to a zero mean. X_scale_float If normalize=True, parameter used to scale data to a unit standard deviation. Notes There exist several strategies to perform Bayesian ridge regression. This implementation is based on the algorithm described in Appendix A of (Tipping, 2001), where updates of the regularization parameters are done as suggested in (MacKay, 1992). Note that, according to A New View of Automatic Relevance Determination (Wipf and Nagarajan, 2008), these update rules do not guarantee that the marginal likelihood is increasing between two consecutive iterations of the optimization. References D. J. C. MacKay, Bayesian Interpolation, Computation and Neural Systems, Vol. 4, No. 3, 1992. M. E.
Tipping, Sparse Bayesian Learning and the Relevance Vector Machine, Journal of Machine Learning Research, Vol. 1, 2001. Examples >>> from sklearn import linear_model >>> clf = linear_model.BayesianRidge() >>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2]) BayesianRidge() >>> clf.predict([[1, 1]]) array([1.]) Methods fit(X, y[, sample_weight]) Fit the model. get_params([deep]) Get parameters for this estimator. predict(X[, return_std]) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit the model. Parameters Xndarray of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. Will be cast to X’s dtype if necessary. sample_weightndarray of shape (n_samples,), default=None Individual weights for each sample. New in version 0.20: parameter sample_weight support to BayesianRidge. Returns selfobject Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X, return_std=False) [source] Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. return_stdbool, default=False Whether to return the standard deviation of posterior prediction. Returns y_meanarray-like of shape (n_samples,) Mean of predictive distribution of query points. y_stdarray-like of shape (n_samples,) Standard deviation of predictive distribution of query points. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. 
The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
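The fit/predict cycle described above, including the return_std option of predict, can be sketched end to end. The data below is a small synthetic regression problem generated with numpy; the coefficients, noise level, and shapes are illustrative assumptions, not values from the original documentation.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Illustrative synthetic data: y = 1*x0 + 2*x1 + Gaussian noise.
rng = np.random.RandomState(0)
X = rng.rand(50, 2)
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.randn(50)

# compute_score=True also records the log marginal likelihood per iteration.
reg = BayesianRidge(compute_score=True).fit(X, y)

# predict(return_std=True) returns both the mean and the standard
# deviation of the predictive distribution at the query points.
y_mean, y_std = reg.predict(X[:3], return_std=True)

# Estimated noise precision (alpha_) and weight precision (lambda_).
noise_precision, weight_precision = reg.alpha_, reg.lambda_
```

With this low-noise data the fitted model should recover the generating coefficients closely, so reg.score(X, y) is near 1.0; y_std gives a per-sample uncertainty that a point estimate from an ordinary ridge regression would not provide.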
sklearn.modules.generated.sklearn.linear_model.bayesianridge#sklearn.linear_model.BayesianRidge
Examples using sklearn.linear_model.BayesianRidge Feature agglomeration vs. univariate selection Curve Fitting with Bayesian Ridge Regression Bayesian Ridge Regression Imputing missing values with variants of IterativeImputer
sklearn.modules.generated.sklearn.linear_model.bayesianridge
class sklearn.linear_model.ElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, normalize=False, precompute=False, max_iter=1000, copy_X=True, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic') [source] Linear regression with combined L1 and L2 priors as regularizer. Minimizes the objective function: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to: a * L1 + b * L2 where: alpha = a + b and l1_ratio = a / (a + b) The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. Specifically, l1_ratio = 1 is the lasso penalty. Currently, l1_ratio <= 0.01 is not reliable, unless you supply your own sequence of alpha. Read more in the User Guide. Parameters alphafloat, default=1.0 Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object. l1_ratiofloat, default=0.5 The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. 
If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. precomputebool or array-like of shape (n_features, n_features), default=False Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always True to preserve sparsity. max_iterint, default=1000 The maximum number of iterations. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. tolfloat, default=1e-4 The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. positivebool, default=False When set to True, forces the coefficients to be positive. random_stateint, RandomState instance, default=None The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. selection{‘cyclic’, ‘random’}, default=’cyclic’ If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes coef_ndarray of shape (n_features,) or (n_targets, n_features) Parameter vector (w in the cost function formula). sparse_coef_sparse matrix of shape (n_features,) or (n_tasks, n_features) Sparse representation of the fitted coef_. intercept_float or ndarray of shape (n_targets,) Independent term in decision function. n_iter_list of int Number of iterations run by the coordinate descent solver to reach the specified tolerance. 
dual_gap_float or ndarray of shape (n_targets,) Given param alpha, the dual gaps at the end of the optimization, same shape as each observation of y. See also ElasticNetCV Elastic net model with best model selection by cross-validation. SGDRegressor Implements elastic net regression with incremental training. SGDClassifier Implements logistic regression with elastic net penalty (SGDClassifier(loss="log", penalty="elasticnet")). Notes To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Examples >>> from sklearn.linear_model import ElasticNet >>> from sklearn.datasets import make_regression >>> X, y = make_regression(n_features=2, random_state=0) >>> regr = ElasticNet(random_state=0) >>> regr.fit(X, y) ElasticNet(random_state=0) >>> print(regr.coef_) [18.83816048 64.55968825] >>> print(regr.intercept_) 1.451... >>> print(regr.predict([[0, 0]])) [1.451...] Methods fit(X, y[, sample_weight, check_input]) Fit model with coordinate descent. get_params([deep]) Get parameters for this estimator. path(*args, **kwargs) Compute elastic net path with coordinate descent. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None, check_input=True) [source] Fit model with coordinate descent. Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) Data. y{ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_targets) Target. Will be cast to X’s dtype if necessary. sample_weightfloat or array-like of shape (n_samples,), default=None Sample weight. New in version 0.23. check_inputbool, default=True Allow to bypass several input checking. Don’t use this parameter unless you know what you are doing. 
Notes Coordinate descent is an algorithm that considers each column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. static path(*args, **kwargs) [source] Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of the norms of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. l1_ratiofloat, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. 
If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False Whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). check_inputbool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **paramskwargs Keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also MultiTaskElasticNet MultiTaskElasticNetCV ElasticNet ElasticNetCV Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. 
The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. property sparse_coef_ Sparse representation of the fitted coef_.
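The a * L1 + b * L2 correspondence stated in the class description (alpha = a + b, l1_ratio = a / (a + b)) can be checked directly. A minimal sketch, where the separate penalty strengths a and b are arbitrary illustrative values:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

# Desired separate penalty strengths: a for L1, b for L2 (illustrative).
a, b = 0.3, 0.7

# Translate to ElasticNet's parameterization, per the class description:
# alpha = a + b and l1_ratio = a / (a + b).
alpha = a + b          # 1.0
l1_ratio = a / (a + b)  # 0.3

X, y = make_regression(n_features=2, random_state=0)
regr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=0).fit(X, y)
```

Setting l1_ratio to 1.0 would reduce this to a pure lasso penalty, and 0.0 to a pure ridge penalty, matching the parameter description above.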
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet
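The static path method documented above can be called without instantiating the estimator to trace coefficients over a grid of alphas. A minimal sketch on synthetic data; the sample counts and grid size are illustrative assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=100, n_features=5, random_state=0)

# Compute the regularization path for a fixed l1_ratio over 30 alphas.
# Returns alphas (n_alphas,), coefs (n_features, n_alphas) for this
# mono-output task, and the dual gaps at the end of each optimization.
alphas, coefs, dual_gaps = ElasticNet.path(X, y, l1_ratio=0.5, n_alphas=30)
```

The alphas are laid out from largest to smallest, so coefs[:, 0] corresponds to the strongest regularization (coefficients closest to zero) and coefs[:, -1] to the weakest.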
sklearn.linear_model.ElasticNet class sklearn.linear_model.ElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, normalize=False, precompute=False, max_iter=1000, copy_X=True, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic') [source] Linear regression with combined L1 and L2 priors as regularizer. Minimizes the objective function: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to: a * L1 + b * L2 where: alpha = a + b and l1_ratio = a / (a + b) The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. Specifically, l1_ratio = 1 is the lasso penalty. Currently, l1_ratio <= 0.01 is not reliable, unless you supply your own sequence of alpha. Read more in the User Guide. Parameters alphafloat, default=1.0 Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object. l1_ratiofloat, default=0.5 The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. 
If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. precomputebool or array-like of shape (n_features, n_features), default=False Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always True to preserve sparsity. max_iterint, default=1000 The maximum number of iterations. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. tolfloat, default=1e-4 The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. positivebool, default=False When set to True, forces the coefficients to be positive. random_stateint, RandomState instance, default=None The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. selection{‘cyclic’, ‘random’}, default=’cyclic’ If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes coef_ndarray of shape (n_features,) or (n_targets, n_features) Parameter vector (w in the cost function formula). sparse_coef_sparse matrix of shape (n_features,) or (n_tasks, n_features) Sparse representation of the fitted coef_. intercept_float or ndarray of shape (n_targets,) Independent term in decision function. n_iter_list of int Number of iterations run by the coordinate descent solver to reach the specified tolerance. 
dual_gap_float or ndarray of shape (n_targets,) Given param alpha, the dual gaps at the end of the optimization, same shape as each observation of y. See also ElasticNetCV Elastic net model with best model selection by cross-validation. SGDRegressor Implements elastic net regression with incremental training. SGDClassifier Implements logistic regression with elastic net penalty (SGDClassifier(loss="log", penalty="elasticnet")). Notes To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Examples >>> from sklearn.linear_model import ElasticNet >>> from sklearn.datasets import make_regression >>> X, y = make_regression(n_features=2, random_state=0) >>> regr = ElasticNet(random_state=0) >>> regr.fit(X, y) ElasticNet(random_state=0) >>> print(regr.coef_) [18.83816048 64.55968825] >>> print(regr.intercept_) 1.451... >>> print(regr.predict([[0, 0]])) [1.451...] Methods fit(X, y[, sample_weight, check_input]) Fit model with coordinate descent. get_params([deep]) Get parameters for this estimator. path(*args, **kwargs) Compute elastic net path with coordinate descent. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None, check_input=True) [source] Fit model with coordinate descent. Parameters X{ndarray, sparse matrix} of (n_samples, n_features) Data. y{ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_targets) Target. Will be cast to X’s dtype if necessary. sample_weightfloat or array-like of shape (n_samples,), default=None Sample weight. New in version 0.23. check_inputbool, default=True Allow to bypass several input checking. Don’t use this parameter unless you know what you do. 
Notes Coordinate descent is an algorithm that considers each column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. static path(*args, **kwargs) [source] Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of the norms of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. l1_ratiofloat, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. 
If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False Whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). check_inputbool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **paramskwargs Keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also MultiTaskElasticNet MultiTaskElasticNetCV ElasticNet ElasticNetCV Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. 
The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. property sparse_coef_ Sparse representation of the fitted coef_. Examples using sklearn.linear_model.ElasticNet Release Highlights for scikit-learn 0.23 Lasso and Elastic Net for Sparse Signals Train error vs Test error
sklearn.modules.generated.sklearn.linear_model.elasticnet
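The ElasticNet cost function, 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2, can be checked numerically: a minimal sketch that evaluates the objective at the fitted coefficients and confirms that random perturbations only increase it. The tight tol and the data sizes are illustration choices, not recommendations.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=100, n_features=5, noise=1.0, random_state=0)
regr = ElasticNet(alpha=1.0, l1_ratio=0.5, tol=1e-8, max_iter=50_000)
regr.fit(X, y)

def objective(w):
    # 1/(2n) * RSS + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||_2^2
    n = X.shape[0]
    resid = y - (X @ w + regr.intercept_)
    return ((resid @ resid) / (2 * n)
            + regr.alpha * regr.l1_ratio * np.abs(w).sum()
            + 0.5 * regr.alpha * (1 - regr.l1_ratio) * (w @ w))

fitted_obj = objective(regr.coef_)
rng = np.random.default_rng(0)
# Any perturbation of the (near-)optimal coefficients should cost more.
perturbed_objs = [objective(regr.coef_ + rng.normal(size=regr.coef_.shape))
                  for _ in range(10)]
```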
fit(X, y, sample_weight=None, check_input=True) [source] Fit model with coordinate descent. Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) Data. y{ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_targets) Target. Will be cast to X’s dtype if necessary. sample_weightfloat or array-like of shape (n_samples,), default=None Sample weight. New in version 0.23. check_inputbool, default=True Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing. Notes Coordinate descent is an algorithm that considers each column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet.fit
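As the Notes for fit suggest, passing X already in Fortran (column-major) order lets the coordinate descent solver skip an internal conversion copy; a small sketch (the array sizes are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=50, n_features=4, random_state=0)
# Coordinate descent walks over columns, so column-major layout
# avoids a conversion copy inside fit.
X_f = np.asfortranarray(X)
regr = ElasticNet(copy_X=False).fit(X_f, y)
```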
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet.get_params
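A quick illustration of get_params, which returns the constructor parameters as a dict (unset parameters come back at their defaults):

```python
from sklearn.linear_model import ElasticNet

params = ElasticNet(alpha=0.1, l1_ratio=0.7).get_params()
# params now maps every constructor argument to its current value,
# e.g. the untouched max_iter is still 1000.
```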
static path(*args, **kwargs) [source] Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of the norms of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. l1_ratiofloat, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. 
return_n_iterbool, default=False Whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). check_inputbool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **paramskwargs Keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also MultiTaskElasticNet MultiTaskElasticNetCV ElasticNet ElasticNetCV Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet.path
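A minimal sketch of calling the static path method (the grid size of 20 alphas is arbitrary). With return_n_iter left at its default of False, three arrays come back; the alphas are in decreasing order and coefs holds one coefficient column per alpha:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=60, n_features=8, random_state=0)
# alphas: (n_alphas,), coefs: (n_features, n_alphas), dual_gaps: (n_alphas,)
alphas, coefs, dual_gaps = ElasticNet.path(X, y, l1_ratio=0.5, n_alphas=20)
```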
predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet.predict
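Since this is a linear model, predict(X) is simply the affine map X @ coef_ + intercept_; a quick check:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=50, n_features=3, random_state=0)
regr = ElasticNet(random_state=0).fit(X, y)
# Reproduce predict by hand from the fitted parameters.
manual = X @ regr.coef_ + regr.intercept_
pred = regr.predict(X)
```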
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet.score
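The \(R^2\) definition above can be reproduced by hand and compared against score; a minimal sketch:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=80, n_features=4, noise=5.0, random_state=0)
regr = ElasticNet(alpha=0.1).fit(X, y)

y_pred = regr.predict(X)
u = ((y - y_pred) ** 2).sum()      # residual sum of squares
v = ((y - y.mean()) ** 2).sum()    # total sum of squares
r2_manual = 1 - u / v
r2_from_score = regr.score(X, y)
```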
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet.set_params
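A short sketch of the nested <component>__<parameter> form, using a hypothetical two-step Pipeline (the step names 'scale' and 'enet' are illustrative):

```python
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()), ('enet', ElasticNet())])
# Nested parameters are addressed as <component>__<parameter>.
pipe.set_params(enet__alpha=0.05, enet__l1_ratio=0.9)
```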
property sparse_coef_ Sparse representation of the fitted coef_.
sklearn.modules.generated.sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet.sparse_coef_
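sparse_coef_ is a read-only sparse wrapper around the dense coef_; a quick check that the two agree:

```python
import numpy as np
import scipy.sparse as sp
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=50, n_features=6, random_state=0)
regr = ElasticNet(alpha=1.0).fit(X, y)
sc = regr.sparse_coef_  # scipy sparse matrix holding the same values as coef_
```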
class sklearn.linear_model.ElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, positive=False, random_state=None, selection='cyclic') [source] Elastic Net model with iterative fitting along a regularization path. See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters l1_ratiofloat or list of float, default=0.5 float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. This parameter can be a list, in which case the different values are tested by cross-validation and the one giving the best prediction score is used. Note that a good choice of list of values for l1_ratio is often to put more values close to 1 (i.e. Lasso) and less close to 0 (i.e. Ridge), as in [.1, .5, .7, .9, .95, .99, 1]. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path, used for each l1_ratio. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. 
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. max_iterint, default=1000 The maximum number of iterations. tolfloat, default=1e-4 The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. cvint, cross-validation generator or iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, int, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. verbosebool or int, default=0 Amount of verbosity. n_jobsint, default=None Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. positivebool, default=False When set to True, forces the coefficients to be positive. random_stateint, RandomState instance, default=None The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. selection{‘cyclic’, ‘random’}, default=’cyclic’ If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. 
Attributes alpha_float The amount of penalization chosen by cross validation. l1_ratio_float The compromise between l1 and l2 penalization chosen by cross validation. coef_ndarray of shape (n_features,) or (n_targets, n_features) Parameter vector (w in the cost function formula). intercept_float or ndarray of shape (n_targets, n_features) Independent term in the decision function. mse_path_ndarray of shape (n_l1_ratio, n_alpha, n_folds) Mean square error for the test set on each fold, varying l1_ratio and alpha. alphas_ndarray of shape (n_alphas,) or (n_l1_ratio, n_alphas) The grid of alphas used for fitting, for each l1_ratio. dual_gap_float The dual gaps at the end of the optimization for the optimal alpha. n_iter_int Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha. See also enet_path ElasticNet Notes For an example, see examples/linear_model/plot_lasso_model_selection.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. More specifically, the optimization objective is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to: a * L1 + b * L2 for: alpha = a + b and l1_ratio = a / (a + b). Examples >>> from sklearn.linear_model import ElasticNetCV >>> from sklearn.datasets import make_regression >>> X, y = make_regression(n_features=2, random_state=0) >>> regr = ElasticNetCV(cv=5, random_state=0) >>> regr.fit(X, y) ElasticNetCV(cv=5, random_state=0) >>> print(regr.alpha_) 0.199... >>> print(regr.intercept_) 0.398... >>> print(regr.predict([[0, 0]])) [0.398...] Methods fit(X, y) Fit linear model with coordinate descent. 
get_params([deep]) Get parameters for this estimator. path(*args, **kwargs) Compute elastic net path with coordinate descent. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit linear model with coordinate descent. Fit is on a grid of alphas and the best alpha is estimated by cross-validation. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse. yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. static path(*args, **kwargs) [source] Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of the norms of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. l1_ratiofloat, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). 
l1_ratio=1 corresponds to the Lasso. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False Whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). check_inputbool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **paramskwargs Keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). 
See also MultiTaskElasticNet MultiTaskElasticNetCV ElasticNet ElasticNetCV Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. 
Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.elasticnetcv#sklearn.linear_model.ElasticNetCV
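A minimal sketch of cross-validating over several l1_ratio values at once (the grid sizes are arbitrary). With a list of n_l1_ratio values, mse_path_ has shape (n_l1_ratio, n_alphas, n_folds) and the chosen l1_ratio_ is one of the candidates:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

X, y = make_regression(n_features=2, random_state=0)
# Three l1_ratio candidates, 30 alphas each, 5 CV folds.
regr = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], n_alphas=30, cv=5,
                    random_state=0)
regr.fit(X, y)
```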
sklearn.linear_model.ElasticNetCV class sklearn.linear_model.ElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, positive=False, random_state=None, selection='cyclic') [source] Elastic Net model with iterative fitting along a regularization path. See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters l1_ratiofloat or list of float, default=0.5 float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. This parameter can be a list, in which case the different values are tested by cross-validation and the one giving the best prediction score is used. Note that a good choice of list of values for l1_ratio is often to put more values close to 1 (i.e. Lasso) and less close to 0 (i.e. Ridge), as in [.1, .5, .7, .9, .95, .99, 1]. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path, used for each l1_ratio. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. 
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. max_iterint, default=1000 The maximum number of iterations. tolfloat, default=1e-4 The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. cvint, cross-validation generator or iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, int, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. verbosebool or int, default=0 Amount of verbosity. n_jobsint, default=None Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. positivebool, default=False When set to True, forces the coefficients to be positive. random_stateint, RandomState instance, default=None The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. selection{‘cyclic’, ‘random’}, default=’cyclic’ If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. 
Attributes alpha_float The amount of penalization chosen by cross validation. l1_ratio_float The compromise between l1 and l2 penalization chosen by cross validation. coef_ndarray of shape (n_features,) or (n_targets, n_features) Parameter vector (w in the cost function formula). intercept_float or ndarray of shape (n_targets, n_features) Independent term in the decision function. mse_path_ndarray of shape (n_l1_ratio, n_alpha, n_folds) Mean square error for the test set on each fold, varying l1_ratio and alpha. alphas_ndarray of shape (n_alphas,) or (n_l1_ratio, n_alphas) The grid of alphas used for fitting, for each l1_ratio. dual_gap_float The dual gaps at the end of the optimization for the optimal alpha. n_iter_int Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha. See also enet_path ElasticNet Notes For an example, see examples/linear_model/plot_lasso_model_selection.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. More specifically, the optimization objective is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to: a * L1 + b * L2 for: alpha = a + b and l1_ratio = a / (a + b). Examples >>> from sklearn.linear_model import ElasticNetCV >>> from sklearn.datasets import make_regression >>> X, y = make_regression(n_features=2, random_state=0) >>> regr = ElasticNetCV(cv=5, random_state=0) >>> regr.fit(X, y) ElasticNetCV(cv=5, random_state=0) >>> print(regr.alpha_) 0.199... >>> print(regr.intercept_) 0.398... >>> print(regr.predict([[0, 0]])) [0.398...] Methods fit(X, y) Fit linear model with coordinate descent. 
get_params([deep]) Get parameters for this estimator. path(*args, **kwargs) Compute elastic net path with coordinate descent. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit linear model with coordinate descent. Fit is on a grid of alphas and the best alpha is estimated by cross-validation. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse. yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. static path(*args, **kwargs) [source] Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of the norms of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. l1_ratiofloat, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). 
l1_ratio=1 corresponds to the Lasso. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False Whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). check_inputbool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **paramskwargs Keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). 
See also MultiTaskElasticNet MultiTaskElasticNetCV ElasticNet ElasticNetCV Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. 
Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
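The Notes above state that `alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2` is equivalent to `a * L1 + b * L2` with `alpha = a + b` and `l1_ratio = a / (a + b)`. A minimal helper makes the conversion concrete (the function name is illustrative, not part of scikit-learn):

```python
def to_elasticnet_params(a, b):
    """Map separate penalty weights (a * L1 + b * L2) to the
    (alpha, l1_ratio) parameterization used by ElasticNet/ElasticNetCV."""
    if a < 0 or b < 0 or a + b == 0:
        raise ValueError("penalty weights must be non-negative and not both zero")
    # alpha scales the total penalty; l1_ratio is the L1 share of it.
    return a + b, a / (a + b)

# a = 1.0, b = 3.0  ->  alpha = 4.0, l1_ratio = 0.25
alpha, l1_ratio = to_elasticnet_params(1.0, 3.0)
```

With `b = 0` the mapping gives `l1_ratio = 1`, recovering the pure Lasso penalty, as the path documentation notes.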
sklearn.linear_model.enet_path(X, y, *, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, check_input=True, **params) [source] Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^Fro_2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of norm of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. l1_ratiofloat, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. 
coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False Whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). check_inputbool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **paramskwargs Keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also MultiTaskElasticNet MultiTaskElasticNetCV ElasticNet ElasticNetCV Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
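A quick sketch of the return values described above (assuming scikit-learn is installed; `make_regression` is used only to produce toy data):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import enet_path

X, y = make_regression(n_samples=50, n_features=5, random_state=0)
# Request 20 alphas along the regularization path.
alphas, coefs, dual_gaps = enet_path(X, y, l1_ratio=0.5, n_alphas=20)

assert alphas.shape == (20,)    # one alpha per point on the path
assert coefs.shape == (5, 20)   # (n_features, n_alphas) for mono-output y
assert alphas[0] > alphas[-1]   # alphas are returned in decreasing order
```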
class sklearn.linear_model.GammaRegressor(*, alpha=1.0, fit_intercept=True, max_iter=100, tol=0.0001, warm_start=False, verbose=0) [source] Generalized Linear Model with a Gamma distribution. Read more in the User Guide. New in version 0.23. Parameters alphafloat, default=1 Constant that multiplies the penalty term and thus determines the regularization strength. alpha = 0 is equivalent to unpenalized GLMs. In this case, the design matrix X must have full column rank (no collinearities). fit_interceptbool, default=True Specifies if a constant (a.k.a. bias or intercept) should be added to the linear predictor (X @ coef + intercept). max_iterint, default=100 The maximal number of iterations for the solver. tolfloat, default=1e-4 Stopping criterion. For the lbfgs solver, the iteration will stop when max{|g_j|, j = 1, ..., d} <= tol where g_j is the j-th component of the gradient (derivative) of the objective function. warm_startbool, default=False If set to True, reuse the solution of the previous call to fit as initialization for coef_ and intercept_ . verboseint, default=0 For the lbfgs solver set verbose to any positive number for verbosity. Attributes coef_array of shape (n_features,) Estimated coefficients for the linear predictor (X * coef_ + intercept_) in the GLM. intercept_float Intercept (a.k.a. bias) added to linear predictor. n_iter_int Actual number of iterations used in the solver. Examples >>> from sklearn import linear_model >>> clf = linear_model.GammaRegressor() >>> X = [[1, 2], [2, 3], [3, 4], [4, 3]] >>> y = [19, 26, 33, 30] >>> clf.fit(X, y) GammaRegressor() >>> clf.score(X, y) 0.773... >>> clf.coef_ array([0.072..., 0.066...]) >>> clf.intercept_ 2.896... >>> clf.predict([[1, 0], [2, 8]]) array([19.483..., 35.795...]) Methods fit(X, y[, sample_weight]) Fit a Generalized Linear Model. get_params([deep]) Get parameters for this estimator. predict(X) Predict using GLM with feature matrix X. 
score(X, y[, sample_weight]) Compute D^2, the percentage of deviance explained. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit a Generalized Linear Model. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using GLM with feature matrix X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. Returns y_predarray of shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 deviance. Note that those two are equal for family='normal'. D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), \(D_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample_weight. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) True values of target. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat D^2 of self.predict(X) w.r.t. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). 
The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
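The D^2 definition in the score documentation above can be checked numerically against sklearn.metrics.mean_gamma_deviance (a sketch assuming scikit-learn >= 0.23, reusing the toy data from the class docstring):

```python
import numpy as np
from sklearn.linear_model import GammaRegressor
from sklearn.metrics import mean_gamma_deviance

X = [[1, 2], [2, 3], [3, 4], [4, 3]]
y = [19, 26, 33, 30]
reg = GammaRegressor().fit(X, y)

# D^2 = 1 - D(y_true, y_pred) / D_null, where D_null is the deviance of
# the intercept-only model that always predicts the mean of y.
d_model = mean_gamma_deviance(y, reg.predict(X))
d_null = mean_gamma_deviance(y, np.full(len(y), np.mean(y)))
d2 = 1.0 - d_model / d_null

assert abs(d2 - reg.score(X, y)) < 1e-9
```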
Examples using sklearn.linear_model.GammaRegressor Release Highlights for scikit-learn 0.23 Tweedie regression on insurance claims
class sklearn.linear_model.HuberRegressor(*, epsilon=1.35, max_iter=100, alpha=0.0001, warm_start=False, fit_intercept=True, tol=1e-05) [source] Linear regression model that is robust to outliers. The Huber Regressor optimizes the squared loss for the samples where |(y - X'w) / sigma| < epsilon and the absolute loss for the samples where |(y - X'w) / sigma| > epsilon, where w and sigma are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales. This makes sure that the loss function is not heavily influenced by the outliers while not completely ignoring their effect. Read more in the User Guide New in version 0.18. Parameters epsilonfloat, greater than 1.0, default=1.35 The parameter epsilon controls the number of samples that should be classified as outliers. The smaller the epsilon, the more robust it is to outliers. max_iterint, default=100 Maximum number of iterations that scipy.optimize.minimize(method="L-BFGS-B") should run for. alphafloat, default=0.0001 Regularization parameter. warm_startbool, default=False This is useful if the stored attributes of a previously used model has to be reused. If set to False, then the coefficients will be rewritten for every call to fit. See the Glossary. fit_interceptbool, default=True Whether or not to fit the intercept. This can be set to False if the data is already centered around the origin. tolfloat, default=1e-05 The iteration will stop when max{|proj g_i | i = 1, ..., n} <= tol where pg_i is the i-th component of the projected gradient. Attributes coef_array, shape (n_features,) Features got by optimizing the Huber loss. intercept_float Bias. scale_float The value by which |y - X'w - c| is scaled down. 
n_iter_int Number of iterations that scipy.optimize.minimize(method="L-BFGS-B") has run for. Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter. outliers_array, shape (n_samples,) A boolean mask which is set to True where the samples are identified as outliers. References 1 Peter J. Huber, Elvezio M. Ronchetti, Robust Statistics: Concomitant scale estimates, pg 172 2 Art B. Owen (2006), A robust hybrid of lasso and ridge regression. https://statweb.stanford.edu/~owen/reports/hhu.pdf Examples >>> import numpy as np >>> from sklearn.linear_model import HuberRegressor, LinearRegression >>> from sklearn.datasets import make_regression >>> rng = np.random.RandomState(0) >>> X, y, coef = make_regression( ... n_samples=200, n_features=2, noise=4.0, coef=True, random_state=0) >>> X[:4] = rng.uniform(10, 20, (4, 2)) >>> y[:4] = rng.uniform(10, 20, 4) >>> huber = HuberRegressor().fit(X, y) >>> huber.score(X, y) -7.284... >>> huber.predict(X[:1,]) array([806.7200...]) >>> linear = LinearRegression().fit(X, y) >>> print("True coefficients:", coef) True coefficients: [20.4923... 34.1698...] >>> print("Huber coefficients:", huber.coef_) Huber coefficients: [17.7906... 31.0106...] >>> print("Linear Regression coefficients:", linear.coef_) Linear Regression coefficients: [-1.9221... 7.0226...] Methods fit(X, y[, sample_weight]) Fit the model according to the given training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit the model according to the given training data. Parameters Xarray-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. 
yarray-like, shape (n_samples,) Target vector relative to X. sample_weightarray-like, shape (n_samples,) Weight given to each sample. Returns selfobject get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. 
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
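A short sketch of the outliers_ attribute described above in action (assuming scikit-learn is installed; the data and the injected offsets are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import HuberRegressor

X, y = make_regression(n_samples=100, n_features=1, noise=1.0, random_state=0)
y[:5] += 100.0  # corrupt the first five targets to create outliers

huber = HuberRegressor(epsilon=1.35).fit(X, y)

# outliers_ is a boolean mask over the training samples; the corrupted
# points have residuals far beyond epsilon * sigma and are flagged.
assert huber.outliers_.dtype == bool
assert huber.outliers_[:5].all()
```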
sklearn.modules.generated.sklearn.linear_model.huberregressor#sklearn.linear_model.HuberRegressor
sklearn.linear_model.HuberRegressor class sklearn.linear_model.HuberRegressor(*, epsilon=1.35, max_iter=100, alpha=0.0001, warm_start=False, fit_intercept=True, tol=1e-05) [source] Linear regression model that is robust to outliers. The Huber Regressor optimizes the squared loss for the samples where |(y - X'w) / sigma| < epsilon and the absolute loss for the samples where |(y - X'w) / sigma| > epsilon, where w and sigma are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales. This makes sure that the loss function is not heavily influenced by the outliers while not completely ignoring their effect. Read more in the User Guide New in version 0.18. Parameters epsilonfloat, greater than 1.0, default=1.35 The parameter epsilon controls the number of samples that should be classified as outliers. The smaller the epsilon, the more robust it is to outliers. max_iterint, default=100 Maximum number of iterations that scipy.optimize.minimize(method="L-BFGS-B") should run for. alphafloat, default=0.0001 Regularization parameter. warm_startbool, default=False This is useful if the stored attributes of a previously used model has to be reused. If set to False, then the coefficients will be rewritten for every call to fit. See the Glossary. fit_interceptbool, default=True Whether or not to fit the intercept. This can be set to False if the data is already centered around the origin. tolfloat, default=1e-05 The iteration will stop when max{|proj g_i | i = 1, ..., n} <= tol where pg_i is the i-th component of the projected gradient. Attributes coef_array, shape (n_features,) Features got by optimizing the Huber loss. intercept_float Bias. scale_float The value by which |y - X'w - c| is scaled down. 
n_iter_int Number of iterations that scipy.optimize.minimize(method="L-BFGS-B") has run for. Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter. outliers_array, shape (n_samples,) A boolean mask which is set to True where the samples are identified as outliers. References 1 Peter J. Huber, Elvezio M. Ronchetti, Robust Statistics Concomitant scale estimates, pg 172 2 Art B. Owen (2006), A robust hybrid of lasso and ridge regression. https://statweb.stanford.edu/~owen/reports/hhu.pdf Examples >>> import numpy as np >>> from sklearn.linear_model import HuberRegressor, LinearRegression >>> from sklearn.datasets import make_regression >>> rng = np.random.RandomState(0) >>> X, y, coef = make_regression( ... n_samples=200, n_features=2, noise=4.0, coef=True, random_state=0) >>> X[:4] = rng.uniform(10, 20, (4, 2)) >>> y[:4] = rng.uniform(10, 20, 4) >>> huber = HuberRegressor().fit(X, y) >>> huber.score(X, y) -7.284... >>> huber.predict(X[:1,]) array([806.7200...]) >>> linear = LinearRegression().fit(X, y) >>> print("True coefficients:", coef) True coefficients: [20.4923... 34.1698...] >>> print("Huber coefficients:", huber.coef_) Huber coefficients: [17.7906... 31.0106...] >>> print("Linear Regression coefficients:", linear.coef_) Linear Regression coefficients: [-1.9221... 7.0226...] Methods fit(X, y[, sample_weight]) Fit the model according to the given training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit the model according to the given training data. Parameters Xarray-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. 
yarray-like, shape (n_samples,) Target vector relative to X. sample_weightarray-like, shape (n_samples,) Weight given to each sample. Returns selfobject get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. 
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.HuberRegressor: HuberRegressor vs Ridge on dataset with strong outliers; Robust linear estimator fitting
sklearn.modules.generated.sklearn.linear_model.huberregressor
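The behavior described above can be sketched on toy data. The following is a minimal, illustrative example (the data and the +50 corruption are invented here, not taken from the scikit-learn docs) showing how samples falling in the absolute-loss regime are exposed via the outliers_ mask while the slope estimate stays robust:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

# Hypothetical toy data: a clean linear trend plus a few corrupted targets.
rng = np.random.RandomState(42)
X = rng.uniform(-3, 3, size=(100, 1))
y = 2.0 * X.ravel() + rng.normal(scale=0.1, size=100)
y[:5] += 50.0  # corrupt the first five samples

huber = HuberRegressor(epsilon=1.35).fit(X, y)

# outliers_ flags the samples that fell in the absolute-loss regime.
print(huber.outliers_[:5])  # the corrupted samples should all be flagged
print(huber.coef_)          # slope stays close to the true value of 2.0
```

Because the absolute-loss branch bounds each outlier's gradient, the five corrupted points barely move the fitted slope even though their residuals are enormous.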
fit(X, y, sample_weight=None) [source] Fit the model according to the given training data. Parameters Xarray-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like, shape (n_samples,) Target vector relative to X. sample_weightarray-like, shape (n_samples,) Weight given to each sample. Returns selfobject
sklearn.modules.generated.sklearn.linear_model.huberregressor#sklearn.linear_model.HuberRegressor.fit
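As a sketch of the sample_weight parameter (the data here is made up for illustration): a weight of zero removes a sample's influence on the fit entirely, so a corrupted target with zero weight leaves the slope essentially unchanged.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.RandomState(0)
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + rng.normal(scale=0.01, size=10)
y[-1] = 100.0                 # one corrupted target

w = np.ones(10)
w[-1] = 0.0                   # zero weight: ignore the corrupted sample

model = HuberRegressor().fit(X, y, sample_weight=w)
print(model.coef_)            # recovers a slope near 3.0
```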
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.huberregressor#sklearn.linear_model.HuberRegressor.get_params
predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.huberregressor#sklearn.linear_model.HuberRegressor.predict
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.linear_model.huberregressor#sklearn.linear_model.HuberRegressor.score
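A small sanity check (on synthetic data invented for illustration) that score matches both the R^2 formula stated above and sklearn.metrics.r2_score:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.metrics import r2_score

rng = np.random.RandomState(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=50)

model = HuberRegressor().fit(X, y)
y_pred = model.predict(X)

u = ((y - y_pred) ** 2).sum()        # residual sum of squares
v = ((y - y.mean()) ** 2).sum()      # total sum of squares

print(np.isclose(model.score(X, y), 1 - u / v))            # True
print(np.isclose(model.score(X, y), r2_score(y, y_pred)))  # True
```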
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.huberregressor#sklearn.linear_model.HuberRegressor.set_params
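The nested <component>__<parameter> syntax can be sketched with a hypothetical two-step Pipeline (the step names "scale" and "reg" are chosen arbitrarily here):

```python
from sklearn.linear_model import HuberRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("reg", HuberRegressor())])

# set_params reaches into nested estimators via <component>__<parameter>.
pipe.set_params(reg__epsilon=2.0, reg__max_iter=200)

print(pipe.get_params()["reg__epsilon"])   # 2.0
print(pipe.named_steps["reg"].max_iter)    # 200
```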
class sklearn.linear_model.Lars(*, fit_intercept=True, verbose=False, normalize=True, precompute='auto', n_nonzero_coefs=500, eps=2.220446049250313e-16, copy_X=True, fit_path=True, jitter=None, random_state=None) [source] Least Angle Regression model, a.k.a. LAR. Read more in the User Guide. Parameters fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered). verbosebool or int, default=False Sets the verbosity amount. normalizebool, default=True This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. precomputebool, ‘auto’ or array-like, default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument. n_nonzero_coefsint, default=500 Target number of non-zero coefficients. Use np.inf for no limit. epsfloat, default=np.finfo(float).eps The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. fit_pathbool, default=True If True the full path is stored in the coef_path_ attribute. If you compute the solution for a large problem or many targets, setting fit_path to False will lead to a speedup, especially with a small alpha. jitterfloat, default=None Upper bound on a uniform noise parameter to be added to the y values, to satisfy the model’s assumption of one-at-a-time computations. Might help with stability. 
New in version 0.23. random_stateint, RandomState instance or None, default=None Determines random number generation for jittering. Pass an int for reproducible output across multiple function calls. See Glossary. Ignored if jitter is None. New in version 0.23. Attributes alphas_array-like of shape (n_alphas + 1,) or list of such arrays Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller. If this is a list of array-like, the length of the outer list is n_targets. active_list of shape (n_alphas,) or list of such lists Indices of active variables at the end of the path. If this is a list of list, the length of the outer list is n_targets. coef_path_array-like of shape (n_features, n_alphas + 1) or list of such arrays The varying values of the coefficients along the path. It is not present if the fit_path parameter is False. If this is a list of array-like, the length of the outer list is n_targets. coef_array-like of shape (n_features,) or (n_targets, n_features) Parameter vector (w in the formulation formula). intercept_float or array-like of shape (n_targets,) Independent term in decision function. n_iter_array-like or int The number of iterations taken by lars_path to find the grid of alphas for each target. See also lars_path, LarsCV sklearn.decomposition.sparse_encode Examples >>> from sklearn import linear_model >>> reg = linear_model.Lars(n_nonzero_coefs=1) >>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111]) Lars(n_nonzero_coefs=1) >>> print(reg.coef_) [ 0. -1.11...] Methods fit(X, y[, Xy]) Fit the model using X, y as training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. 
fit(X, y, Xy=None) [source] Fit the model using X, y as training data. Parameters Xarray-like of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. Xyarray-like of shape (n_features,) or (n_features, n_targets), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. Returns selfobject Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. 
Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.lars#sklearn.linear_model.Lars
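To illustrate n_nonzero_coefs and the stored path, here is a minimal sketch on synthetic data (the dataset parameters are chosen arbitrarily for the example):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lars

X, y = make_regression(n_samples=100, n_features=5, n_informative=2,
                       noise=0.5, random_state=0)

# Cap the model at two active features; with fit_path=True (the default)
# the coefficient values along the path are kept in coef_path_.
lars = Lars(n_nonzero_coefs=2).fit(X, y)

print((lars.coef_ != 0).sum())     # at most 2 nonzero coefficients
print(lars.coef_path_.shape)       # (n_features, n_alphas + 1)
```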
fit(X, y, Xy=None) [source] Fit the model using X, y as training data. Parameters Xarray-like of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. Xyarray-like of shape (n_features,) or (n_features, n_targets), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. Returns selfobject Returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.lars#sklearn.linear_model.Lars.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.lars#sklearn.linear_model.Lars.get_params
predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.lars#sklearn.linear_model.Lars.predict
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.linear_model.lars#sklearn.linear_model.Lars.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.lars#sklearn.linear_model.Lars.set_params