densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator.
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.densify
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. coef_initndarray of shape (n_classes, n_features), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (n_classes,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified. Returns self : Returns an instance of self.
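As a quick illustration (synthetic data via make_classification; not part of the reference itself), a fit followed by a warm start from the learned coefficients might look like:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=0)
clf.fit(X, y)

# Warm-start a second fit from the previously learned parameters
# via coef_init / intercept_init.
clf2 = SGDClassifier(max_iter=1000, tol=1e-3, random_state=0)
clf2.fit(X, y, coef_init=clf.coef_, intercept_init=clf.intercept_)
print(clf2.coef_.shape)  # (1, 4): binary problems store a single row of weights
```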
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.get_params
partial_fit(X, y, classes=None, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of the training data. yndarray of shape (n_samples,) Subset of the target values. classesndarray of shape (n_classes,), default=None Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns self : Returns an instance of self.
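A minimal out-of-core sketch (synthetic mini-batches, assumed for illustration): classes must be supplied on the first call so the model knows the full label set, even if early batches miss some labels.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
clf = SGDClassifier(random_state=0)
all_classes = np.array([0, 1, 2])  # np.unique(y_all) over the full dataset

for i in range(10):
    X_batch = rng.randn(32, 5)
    y_batch = rng.randint(0, 3, size=32)
    # classes is required on the first call only.
    clf.partial_fit(X_batch, y_batch, classes=all_classes if i == 0 else None)

print(clf.classes_)  # [0 1 2]
```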
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.partial_fit
predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample.
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.predict
property predict_log_proba Log of probability estimates. This method is only available for log loss and modified Huber loss. When loss=”modified_huber”, probability estimates may be hard zeros and ones, so taking the logarithm is not possible. See predict_proba for details. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Input data for prediction. Returns Tarray-like, shape (n_samples, n_classes) Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.predict_log_proba
property predict_proba Probability estimates. This method is only available for log loss and modified Huber loss. Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan. Binary probability estimates for loss=”modified_huber” are given by (clip(decision_function(X), -1, 1) + 1) / 2. For other loss functions it is necessary to perform proper probability calibration by wrapping the classifier with CalibratedClassifierCV instead. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Input data for prediction. Returns ndarray of shape (n_samples, n_classes) Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. References Zadrozny and Elkan, “Transforming classifier scores into multiclass probability estimates”, SIGKDD’02, http://www.research.ibm.com/people/z/zadrozny/kdd2002-Transf.pdf The justification for the formula in the loss=”modified_huber” case is in the appendix B in: http://jmlr.csail.mit.edu/papers/volume2/zhang02c/zhang02c.pdf
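To make the loss="modified_huber" formula above concrete, a small check (synthetic binary data, assumed for illustration) that predict_proba matches (clip(decision_function(X), -1, 1) + 1) / 2 for the positive class:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = SGDClassifier(loss="modified_huber", random_state=0).fit(X, y)

proba = clf.predict_proba(X)
# Binary modified_huber estimate for the positive class:
manual = (np.clip(clf.decision_function(X), -1, 1) + 1) / 2
print(np.allclose(proba[:, 1], manual))  # True
```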
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.predict_proba
score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
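Equivalently (illustrative synthetic data, not part of the reference), score is just accuracy_score applied to the predictions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=200, random_state=0)
clf = SGDClassifier(random_state=0).fit(X, y)

# score(X, y) is the mean accuracy of predict(X) against y.
print(clf.score(X, y) == accuracy_score(y, clf.predict(X)))  # True
```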
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.score
set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance.
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.set_params
sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
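The 50% rule of thumb above can be applied programmatically; a sketch (L1-penalized toy model, parameter values assumed for illustration):

```python
from scipy import sparse
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = SGDClassifier(penalty="l1", alpha=0.1, random_state=0).fit(X, y)

# Only sparsify when more than 50% of the coefficients are exactly zero.
zero_fraction = (clf.coef_ == 0).mean()
if zero_fraction > 0.5:
    clf.sparsify()

print(zero_fraction, sparse.issparse(clf.coef_))
```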
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.sparsify
class sklearn.linear_model.SGDRegressor(loss='squared_loss', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, warm_start=False, average=False) [source] Linear model fitted by minimizing a regularized empirical loss with SGD. SGD stands for Stochastic Gradient Descent: the gradient of the loss is estimated one sample at a time and the model is updated along the way with a decreasing strength schedule (aka learning rate). The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared Euclidean norm L2, the absolute norm L1, or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and achieve online feature selection. This implementation works with data represented as dense numpy arrays of floating point values for the features. Read more in the User Guide. Parameters lossstr, default=’squared_loss’ The loss function to be used. The possible values are ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. ‘squared_loss’ refers to the ordinary least squares fit. ‘huber’ modifies ‘squared_loss’ to focus less on getting outliers correct by switching from squared to linear loss past a distance of epsilon. ‘epsilon_insensitive’ ignores errors less than epsilon and is linear past that; this is the loss function used in SVR. ‘squared_epsilon_insensitive’ is the same but becomes squared loss past a tolerance of epsilon. More details about the loss formulas can be found in the User Guide. penalty{‘l2’, ‘l1’, ‘elasticnet’}, default=’l2’ The penalty (aka regularization term) to be used. 
Defaults to ‘l2’ which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’. alphafloat, default=0.0001 Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’. l1_ratiofloat, default=0.15 The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty is ‘elasticnet’. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. max_iterint, default=1000 The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19. tolfloat, default=1e-3 The stopping criterion. If it is not None, training will stop when (loss > best_loss - tol) for n_iter_no_change consecutive epochs. New in version 0.19. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseint, default=0 The verbosity level. epsilonfloat, default=0.1 Epsilon in the epsilon-insensitive loss functions; only if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. For ‘huber’, determines the threshold at which it becomes less important to get the prediction exactly right. For epsilon-insensitive, any differences between the current prediction and the correct label are ignored if they are less than this threshold. random_stateint, RandomState instance, default=None Used for shuffling the data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. 
learning_ratestring, default=’invscaling’ The learning rate schedule: ‘constant’: eta = eta0 ‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou. ‘invscaling’: eta = eta0 / pow(t, power_t) ‘adaptive’: eta = eta0, as long as the training loss keeps decreasing. Each time n_iter_no_change consecutive epochs fail to decrease the training loss by tol or fail to increase the validation score by tol if early_stopping is True, the current learning rate is divided by 5. New in version 0.20: Added ‘adaptive’ option eta0double, default=0.01 The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.01. power_tdouble, default=0.25 The exponent for inverse scaling learning rate. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when the validation score returned by the score method is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20: Added ‘early_stopping’ option validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20: Added ‘validation_fraction’ option n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20: Added ‘n_iter_no_change’ option warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. 
If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling fit resets this counter, while partial_fit will result in increasing the existing counter. averagebool or int, default=False When set to True, computes the averaged SGD weights across all updates and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. Attributes coef_ndarray of shape (n_features,) Weights assigned to the features. intercept_ndarray of shape (1,) The intercept term. average_coef_ndarray of shape (n_features,) Averaged weights assigned to the features. Only available if average=True. Deprecated since version 0.23: Attribute average_coef_ was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25). average_intercept_ndarray of shape (1,) The averaged intercept term. Only available if average=True. Deprecated since version 0.23: Attribute average_intercept_ was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25). n_iter_int The actual number of iterations before reaching the stopping criterion. t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples). See also Ridge, ElasticNet, Lasso, sklearn.svm.SVR Examples >>> import numpy as np >>> from sklearn.linear_model import SGDRegressor >>> from sklearn.pipeline import make_pipeline >>> from sklearn.preprocessing import StandardScaler >>> n_samples, n_features = 10, 5 >>> rng = np.random.RandomState(0) >>> y = rng.randn(n_samples) >>> X = rng.randn(n_samples, n_features) >>> # Always scale the input. The most convenient way is to use a pipeline. >>> reg = make_pipeline(StandardScaler(), ... 
SGDRegressor(max_iter=1000, tol=1e-3)) >>> reg.fit(X, y) Pipeline(steps=[('standardscaler', StandardScaler()), ('sgdregressor', SGDRegressor())]) Methods densify() Convert coefficient matrix to dense array format. fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent. get_params([deep]) Get parameters for this estimator. partial_fit(X, y[, sample_weight]) Perform one epoch of stochastic gradient descent on given samples. predict(X) Predict using the linear model score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**kwargs) Set and validate the parameters of estimator. sparsify() Convert coefficient matrix to sparse format. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data yndarray of shape (n_samples,) Target values coef_initndarray of shape (n_features,), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (1,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples (1. for unweighted). Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. 
partial_fit(X, y, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of training data ynumpy array of shape (n_samples,) Subset of target values sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns selfreturns an instance of self. predict(X) [source] Predict using the linear model Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Returns ndarray of shape (n_samples,) Predicted target values per element in X. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. 
Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
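The R^2 definition given for score can be re-derived by hand; a sketch on synthetic data (values assumed for illustration):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = X @ rng.randn(5) + 0.1 * rng.randn(100)

reg = make_pipeline(StandardScaler(), SGDRegressor(random_state=0)).fit(X, y)

# R^2 = 1 - u/v, with u the residual and v the total sum of squares.
y_pred = reg.predict(X)
u = ((y - y_pred) ** 2).sum()
v = ((y - y.mean()) ** 2).sum()
print(np.isclose(reg.score(X, y), 1 - u / v))  # True
```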
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor
sklearn.linear_model.SGDRegressor class sklearn.linear_model.SGDRegressor(loss='squared_loss', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, warm_start=False, average=False) [source] Linear model fitted by minimizing a regularized empirical loss with SGD SGD stands for Stochastic Gradient Descent: the gradient of the loss is estimated each sample at a time and the model is updated along the way with a decreasing strength schedule (aka learning rate). The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared euclidean norm L2 or the absolute norm L1 or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and achieve online feature selection. This implementation works with data represented as dense numpy arrays of floating point values for the features. Read more in the User Guide. Parameters lossstr, default=’squared_loss’ The loss function to be used. The possible values are ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’ The ‘squared_loss’ refers to the ordinary least squares fit. ‘huber’ modifies ‘squared_loss’ to focus less on getting outliers correct by switching from squared to linear loss past a distance of epsilon. ‘epsilon_insensitive’ ignores errors less than epsilon and is linear past that; this is the loss function used in SVR. ‘squared_epsilon_insensitive’ is the same but becomes squared loss past a tolerance of epsilon. More details about the losses formulas can be found in the User Guide. penalty{‘l2’, ‘l1’, ‘elasticnet’}, default=’l2’ The penalty (aka regularization term) to be used. 
Defaults to ‘l2’ which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’. alphafloat, default=0.0001 Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when set to learning_rate is set to ‘optimal’. l1_ratiofloat, default=0.15 The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty is ‘elasticnet’. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. max_iterint, default=1000 The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19. tolfloat, default=1e-3 The stopping criterion. If it is not None, training will stop when (loss > best_loss - tol) for n_iter_no_change consecutive epochs. New in version 0.19. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseint, default=0 The verbosity level. epsilonfloat, default=0.1 Epsilon in the epsilon-insensitive loss functions; only if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. For ‘huber’, determines the threshold at which it becomes less important to get the prediction exactly right. For epsilon-insensitive, any differences between the current prediction and the correct label are ignored if they are less than this threshold. random_stateint, RandomState instance, default=None Used for shuffling the data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. 
learning_ratestring, default=’invscaling’ The learning rate schedule: ‘constant’: eta = eta0 ‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou. ‘invscaling’: eta = eta0 / pow(t, power_t) ‘adaptive’: eta = eta0, as long as the training keeps decreasing. Each time n_iter_no_change consecutive epochs fail to decrease the training loss by tol or fail to increase validation score by tol if early_stopping is True, the current learning rate is divided by 5. New in version 0.20: Added ‘adaptive’ option eta0double, default=0.01 The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.01. power_tdouble, default=0.25 The exponent for inverse scaling learning rate. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when validation score returned by the score method is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20: Added ‘early_stopping’ option validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20: Added ‘validation_fraction’ option n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20: Added ‘n_iter_no_change’ option warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. 
If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling fit resets this counter, while partial_fit will result in increasing the existing counter. averagebool or int, default=False When set to True, computes the averaged SGD weights accross all updates and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. Attributes coef_ndarray of shape (n_features,) Weights assigned to the features. intercept_ndarray of shape (1,) The intercept term. average_coef_ndarray of shape (n_features,) Averaged weights assigned to the features. Only available if average=True. Deprecated since version 0.23: Attribute average_coef_ was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25). average_intercept_ndarray of shape (1,) The averaged intercept term. Only available if average=True. Deprecated since version 0.23: Attribute average_intercept_ was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25). n_iter_int The actual number of iterations before reaching the stopping criterion. t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples). See also Ridge, ElasticNet, Lasso, sklearn.svm.SVR Examples >>> import numpy as np >>> from sklearn.linear_model import SGDRegressor >>> from sklearn.pipeline import make_pipeline >>> from sklearn.preprocessing import StandardScaler >>> n_samples, n_features = 10, 5 >>> rng = np.random.RandomState(0) >>> y = rng.randn(n_samples) >>> X = rng.randn(n_samples, n_features) >>> # Always scale the input. The most convenient way is to use a pipeline. >>> reg = make_pipeline(StandardScaler(), ... 
SGDRegressor(max_iter=1000, tol=1e-3)) >>> reg.fit(X, y) Pipeline(steps=[('standardscaler', StandardScaler()), ('sgdregressor', SGDRegressor())]) Methods densify() Convert coefficient matrix to dense array format. fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent. get_params([deep]) Get parameters for this estimator. partial_fit(X, y[, sample_weight]) Perform one epoch of stochastic gradient descent on given samples. predict(X) Predict using the linear model score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**kwargs) Set and validate the parameters of estimator. sparsify() Convert coefficient matrix to sparse format. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data yndarray of shape (n_samples,) Target values coef_initndarray of shape (n_features,), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (1,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples (1. for unweighted). Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. 
partial_fit(X, y, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of training data ynumpy array of shape (n_samples,) Subset of target values sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns selfreturns an instance of self. predict(X) [source] Predict using the linear model Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Returns ndarray of shape (n_samples,) Predicted target values per element in X. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. 
Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. Examples using sklearn.linear_model.SGDRegressor Prediction Latency SGD: Penalties
sklearn.modules.generated.sklearn.linear_model.sgdregressor
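The partial_fit contract above (one epoch per call, convergence left to the caller) lends itself to an out-of-core training loop. The sketch below is illustrative only; the batching scheme, data, and hyperparameters are invented for the example.

```python
# Illustrative sketch of out-of-core training with SGDRegressor.partial_fit.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.randn(1000)

scaler = StandardScaler().fit(X)      # SGD is sensitive to feature scaling
reg = SGDRegressor(random_state=0)

# Each partial_fit call runs exactly one epoch (max_iter=1) over its batch,
# so convergence is the caller's responsibility: loop until satisfied.
for epoch in range(5):
    for batch in np.array_split(np.arange(X.shape[0]), 10):
        reg.partial_fit(scaler.transform(X[batch]), y[batch])

print(round(reg.score(scaler.transform(X), y), 3))
```

In a real out-of-core setting the inner loop would read mini-batches from disk or a stream instead of slicing an in-memory array.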
densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator.
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.densify
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. coef_initndarray of shape (n_features,), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (1,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples (1. for unweighted). Returns self Returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.get_params
partial_fit(X, y, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of the training data. yndarray of shape (n_samples,) Subset of the target values. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns self Returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.partial_fit
predict(X) [source] Predict using the linear model. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Samples. Returns ndarray of shape (n_samples,) Predicted target values per element in X.
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.predict
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.score
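The \(R^2\) formula above can be computed by hand. The values below are the classic r2_score doctest inputs, used here purely for illustration.

```python
# The R^2 definition above, computed by hand on a tiny example.
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()            # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()     # total sum of squares
r2 = 1 - u / v
print(round(r2, 4))  # → 0.9486
```

The same value is returned by sklearn.metrics.r2_score(y_true, y_pred), which estimator.score delegates to conceptually.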
set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance.
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.set_params
sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
sklearn.modules.generated.sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor.sparsify
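The sparsify/densify round trip described above can be sketched as follows; the data and the choice of penalty="l1", alpha=0.1 are arbitrary illustration values.

```python
# Sketch of the sparsify()/densify() round trip on an L1-penalized model.
import numpy as np
from scipy import sparse
from sklearn.linear_model import SGDRegressor

rng = np.random.RandomState(0)
X = rng.randn(200, 50)
y = X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.randn(200)

reg = SGDRegressor(penalty="l1", alpha=0.1, random_state=0).fit(X, y)

reg.sparsify()                  # coef_ becomes a scipy.sparse matrix
assert sparse.issparse(reg.coef_)

reg.densify()                   # back to an ndarray; required before refitting
assert isinstance(reg.coef_, np.ndarray)
```

Per the rule of thumb in the Notes, sparsify only pays off when (coef_ == 0).sum() exceeds roughly half the coefficients.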
class sklearn.linear_model.TheilSenRegressor(*, fit_intercept=True, copy_X=True, max_subpopulation=10000.0, n_subsamples=None, max_iter=300, tol=0.001, random_state=None, n_jobs=None, verbose=False) [source] Theil-Sen Estimator: robust multivariate regression model. The algorithm calculates least square solutions on subsets with size n_subsamples of the samples in X. Any value of n_subsamples between the number of features and samples leads to an estimator with a compromise between robustness and efficiency. Since the number of least square solutions is “n_samples choose n_subsamples”, it can be extremely large and can therefore be limited with max_subpopulation. If this limit is reached, the subsets are chosen randomly. In a final step, the spatial median (or L1 median) is calculated of all least square solutions. Read more in the User Guide. Parameters fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. max_subpopulationint, default=1e4 Instead of computing with a set of cardinality ‘n choose k’, where n is the number of samples and k is the number of subsamples (at least number of features), consider only a stochastic subpopulation of a given maximal size if ‘n choose k’ is larger than max_subpopulation. For other than small problem sizes this parameter will determine memory usage and runtime if n_subsamples is not changed. n_subsamplesint, default=None Number of samples to calculate the parameters. This is at least the number of features (plus 1 if fit_intercept=True) and the number of samples as a maximum. A lower number leads to a higher breakdown point and a low efficiency while a high number leads to a low breakdown point and a high efficiency. If None, take the minimum number of subsamples leading to maximal robustness. 
If n_subsamples is set to n_samples, Theil-Sen is identical to least squares. max_iterint, default=300 Maximum number of iterations for the calculation of spatial median. tolfloat, default=1.e-3 Tolerance when calculating spatial median. random_stateint, RandomState instance or None, default=None A random number generator instance to define the state of the random permutations generator. Pass an int for reproducible output across multiple function calls. See Glossary n_jobsint, default=None Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. verbosebool, default=False Verbose mode when fitting the model. Attributes coef_ndarray of shape (n_features,) Coefficients of the regression model (median of distribution). intercept_float Estimated intercept of regression model. breakdown_float Approximated breakdown point. n_iter_int Number of iterations needed for the spatial median. n_subpopulation_int Number of combinations taken into account from ‘n choose k’, where n is the number of samples and k is the number of subsamples. References Theil-Sen Estimators in a Multiple Linear Regression Model, 2009 Xin Dang, Hanxiang Peng, Xueqin Wang and Heping Zhang http://home.olemiss.edu/~xdang/papers/MTSE.pdf Examples >>> from sklearn.linear_model import TheilSenRegressor >>> from sklearn.datasets import make_regression >>> X, y = make_regression( ... n_samples=200, n_features=2, noise=4.0, random_state=0) >>> reg = TheilSenRegressor(random_state=0).fit(X, y) >>> reg.score(X, y) 0.9884... >>> reg.predict(X[:1,]) array([-31.5871...]) Methods fit(X, y) Fit linear model. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit linear model. 
Parameters Xndarray of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. Returns self Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. 
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor
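The robustness/efficiency trade-off described above can be seen directly by corrupting a fraction of the targets. The data and corruption scheme below are invented for illustration.

```python
# Illustrative sketch: Theil-Sen versus ordinary least squares when 10% of
# the targets are corrupted by large outliers.
import numpy as np
from sklearn.linear_model import LinearRegression, TheilSenRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.0 * X.ravel() + 1.0
y[:10] += 100.0                      # corrupt 10% of the targets

ols = LinearRegression().fit(X, y)
ts = TheilSenRegressor(random_state=0).fit(X, y)

# The Theil-Sen slope stays near the true value 2.0 because the spatial
# median of the per-subset solutions ignores the corrupted pairs, while
# the OLS fit is distorted by them.
print(ols.coef_[0], ts.coef_[0])
```

With one feature, the default n_subsamples is 2, so each least-squares solution comes from a pair of points; most pairs are clean, and the spatial median recovers the uncorrupted line.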
sklearn.linear_model.TheilSenRegressor class sklearn.linear_model.TheilSenRegressor(*, fit_intercept=True, copy_X=True, max_subpopulation=10000.0, n_subsamples=None, max_iter=300, tol=0.001, random_state=None, n_jobs=None, verbose=False) [source] Theil-Sen Estimator: robust multivariate regression model. The algorithm calculates least square solutions on subsets with size n_subsamples of the samples in X. Any value of n_subsamples between the number of features and samples leads to an estimator with a compromise between robustness and efficiency. Since the number of least square solutions is “n_samples choose n_subsamples”, it can be extremely large and can therefore be limited with max_subpopulation. If this limit is reached, the subsets are chosen randomly. In a final step, the spatial median (or L1 median) is calculated of all least square solutions. Read more in the User Guide. Parameters fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. max_subpopulationint, default=1e4 Instead of computing with a set of cardinality ‘n choose k’, where n is the number of samples and k is the number of subsamples (at least number of features), consider only a stochastic subpopulation of a given maximal size if ‘n choose k’ is larger than max_subpopulation. For other than small problem sizes this parameter will determine memory usage and runtime if n_subsamples is not changed. n_subsamplesint, default=None Number of samples to calculate the parameters. This is at least the number of features (plus 1 if fit_intercept=True) and the number of samples as a maximum. A lower number leads to a higher breakdown point and a low efficiency while a high number leads to a low breakdown point and a high efficiency. If None, take the minimum number of subsamples leading to maximal robustness. 
If n_subsamples is set to n_samples, Theil-Sen is identical to least squares. max_iterint, default=300 Maximum number of iterations for the calculation of spatial median. tolfloat, default=1.e-3 Tolerance when calculating spatial median. random_stateint, RandomState instance or None, default=None A random number generator instance to define the state of the random permutations generator. Pass an int for reproducible output across multiple function calls. See Glossary n_jobsint, default=None Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. verbosebool, default=False Verbose mode when fitting the model. Attributes coef_ndarray of shape (n_features,) Coefficients of the regression model (median of distribution). intercept_float Estimated intercept of regression model. breakdown_float Approximated breakdown point. n_iter_int Number of iterations needed for the spatial median. n_subpopulation_int Number of combinations taken into account from ‘n choose k’, where n is the number of samples and k is the number of subsamples. References Theil-Sen Estimators in a Multiple Linear Regression Model, 2009 Xin Dang, Hanxiang Peng, Xueqin Wang and Heping Zhang http://home.olemiss.edu/~xdang/papers/MTSE.pdf Examples >>> from sklearn.linear_model import TheilSenRegressor >>> from sklearn.datasets import make_regression >>> X, y = make_regression( ... n_samples=200, n_features=2, noise=4.0, random_state=0) >>> reg = TheilSenRegressor(random_state=0).fit(X, y) >>> reg.score(X, y) 0.9884... >>> reg.predict(X[:1,]) array([-31.5871...]) Methods fit(X, y) Fit linear model. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit linear model. 
Parameters Xndarray of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. Returns self Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. 
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.TheilSenRegressor Theil-Sen Regression Robust linear estimator fitting
sklearn.modules.generated.sklearn.linear_model.theilsenregressor
fit(X, y) [source] Fit linear model. Parameters Xndarray of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. Returns self Returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.get_params
predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.predict
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor.set_params
class sklearn.linear_model.TweedieRegressor(*, power=0.0, alpha=1.0, fit_intercept=True, link='auto', max_iter=100, tol=0.0001, warm_start=False, verbose=0) [source] Generalized Linear Model with a Tweedie distribution. This estimator can be used to model different GLMs depending on the power parameter, which determines the underlying distribution. Read more in the User Guide. New in version 0.23. Parameters powerfloat, default=0 The power determines the underlying target distribution according to the following table: Power Distribution 0 Normal 1 Poisson (1,2) Compound Poisson Gamma 2 Gamma 3 Inverse Gaussian For 0 < power < 1, no distribution exists. alphafloat, default=1 Constant that multiplies the penalty term and thus determines the regularization strength. alpha = 0 is equivalent to unpenalized GLMs. In this case, the design matrix X must have full column rank (no collinearities). link{‘auto’, ‘identity’, ‘log’}, default=’auto’ The link function of the GLM, i.e. mapping from linear predictor X @ coeff + intercept to prediction y_pred. Option ‘auto’ sets the link depending on the chosen family as follows: ‘identity’ for Normal distribution ‘log’ for Poisson, Gamma and Inverse Gaussian distributions fit_interceptbool, default=True Specifies if a constant (a.k.a. bias or intercept) should be added to the linear predictor (X @ coef + intercept). max_iterint, default=100 The maximal number of iterations for the solver. tolfloat, default=1e-4 Stopping criterion. For the lbfgs solver, the iteration will stop when max{|g_j|, j = 1, ..., d} <= tol where g_j is the j-th component of the gradient (derivative) of the objective function. warm_startbool, default=False If set to True, reuse the solution of the previous call to fit as initialization for coef_ and intercept_ . verboseint, default=0 For the lbfgs solver set verbose to any positive number for verbosity. 
Attributes coef_array of shape (n_features,) Estimated coefficients for the linear predictor (X @ coef_ + intercept_) in the GLM. intercept_float Intercept (a.k.a. bias) added to linear predictor. n_iter_int Actual number of iterations used in the solver. Examples >>> from sklearn import linear_model >>> clf = linear_model.TweedieRegressor() >>> X = [[1, 2], [2, 3], [3, 4], [4, 3]] >>> y = [2, 3.5, 5, 5.5] >>> clf.fit(X, y) TweedieRegressor() >>> clf.score(X, y) 0.839... >>> clf.coef_ array([0.599..., 0.299...]) >>> clf.intercept_ 1.600... >>> clf.predict([[1, 1], [3, 4]]) array([2.500..., 4.599...]) Methods fit(X, y[, sample_weight]) Fit a Generalized Linear Model. get_params([deep]) Get parameters for this estimator. predict(X) Predict using GLM with feature matrix X. score(X, y[, sample_weight]) Compute D^2, the percentage of deviance explained. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit a Generalized Linear Model. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns self Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using GLM with feature matrix X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. Returns y_predarray of shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 deviance. Note that those two are equal for family='normal'. 
D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), \(D_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample_weight. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) True values of target. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat D^2 of self.predict(X) w.r.t. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor
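The power parameter table above maps distributions to values; the sketch below shows power=1 (Poisson) on synthetic count data. The data, alpha=0.0, and max_iter=300 are illustration choices, not recommendations.

```python
# Sketch: choosing `power` to match the target distribution. power=1 selects
# the Poisson deviance, and link='auto' then resolves to the log link.
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(500, 2))
y = rng.poisson(np.exp(1.0 + 2.0 * X[:, 0]))  # counts with a log-linear mean

glm = TweedieRegressor(power=1, alpha=0.0, max_iter=300).fit(X, y)
print(round(glm.score(X, y), 3))   # D^2, the fraction of deviance explained
```

With alpha=0.0 the model is an unpenalized GLM, so as noted above the design matrix must have full column rank.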
sklearn.linear_model.TweedieRegressor class sklearn.linear_model.TweedieRegressor(*, power=0.0, alpha=1.0, fit_intercept=True, link='auto', max_iter=100, tol=0.0001, warm_start=False, verbose=0) [source] Generalized Linear Model with a Tweedie distribution. This estimator can be used to model different GLMs depending on the power parameter, which determines the underlying distribution. Read more in the User Guide. New in version 0.23. Parameters powerfloat, default=0 The power determines the underlying target distribution according to the following table: Power Distribution 0 Normal 1 Poisson (1,2) Compound Poisson Gamma 2 Gamma 3 Inverse Gaussian For 0 < power < 1, no distribution exists. alphafloat, default=1 Constant that multiplies the penalty term and thus determines the regularization strength. alpha = 0 is equivalent to unpenalized GLMs. In this case, the design matrix X must have full column rank (no collinearities). link{‘auto’, ‘identity’, ‘log’}, default=’auto’ The link function of the GLM, i.e. mapping from linear predictor X @ coeff + intercept to prediction y_pred. Option ‘auto’ sets the link depending on the chosen family as follows: ‘identity’ for Normal distribution ‘log’ for Poisson, Gamma and Inverse Gaussian distributions fit_interceptbool, default=True Specifies if a constant (a.k.a. bias or intercept) should be added to the linear predictor (X @ coef + intercept). max_iterint, default=100 The maximal number of iterations for the solver. tolfloat, default=1e-4 Stopping criterion. For the lbfgs solver, the iteration will stop when max{|g_j|, j = 1, ..., d} <= tol where g_j is the j-th component of the gradient (derivative) of the objective function. warm_startbool, default=False If set to True, reuse the solution of the previous call to fit as initialization for coef_ and intercept_ . verboseint, default=0 For the lbfgs solver set verbose to any positive number for verbosity. 
Attributes coef_array of shape (n_features,) Estimated coefficients for the linear predictor (X @ coef_ + intercept_) in the GLM. intercept_float Intercept (a.k.a. bias) added to linear predictor. n_iter_int Actual number of iterations used in the solver. Examples >>> from sklearn import linear_model >>> clf = linear_model.TweedieRegressor() >>> X = [[1, 2], [2, 3], [3, 4], [4, 3]] >>> y = [2, 3.5, 5, 5.5] >>> clf.fit(X, y) TweedieRegressor() >>> clf.score(X, y) 0.839... >>> clf.coef_ array([0.599..., 0.299...]) >>> clf.intercept_ 1.600... >>> clf.predict([[1, 1], [3, 4]]) array([2.500..., 4.599...]) Methods fit(X, y[, sample_weight]) Fit a Generalized Linear Model. get_params([deep]) Get parameters for this estimator. predict(X) Predict using GLM with feature matrix X. score(X, y[, sample_weight]) Compute D^2, the percentage of deviance explained. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit a Generalized Linear Model. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns self Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using GLM with feature matrix X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. Returns y_predarray of shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 deviance. Note that those two are equal for family='normal'. 
D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), \(D_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample_weight. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) True values of target. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat D^2 of self.predict(X) w.r.t. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.TweedieRegressor Release Highlights for scikit-learn 0.23 Tweedie regression on insurance claims
sklearn.modules.generated.sklearn.linear_model.tweedieregressor
fit(X, y, sample_weight=None) [source] Fit a Generalized Linear Model. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns self Returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.get_params
predict(X) [source] Predict using GLM with feature matrix X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. Returns y_predarray of shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.predict
score(X, y, sample_weight=None) [source] Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 deviance. Note that those two are equal for family='normal'. D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), \(D_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample_weight. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) True values of target. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat D^2 of self.predict(X) w.r.t. y.
sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor.set_params
class sklearn.manifold.Isomap(*, n_neighbors=5, n_components=2, eigen_solver='auto', tol=0, max_iter=None, path_method='auto', neighbors_algorithm='auto', n_jobs=None, metric='minkowski', p=2, metric_params=None) [source] Isomap Embedding Non-linear dimensionality reduction through Isometric Mapping. Read more in the User Guide. Parameters n_neighborsint, default=5 Number of neighbors to consider for each point. n_componentsint, default=2 Number of coordinates for the manifold. eigen_solver{‘auto’, ‘arpack’, ‘dense’}, default=’auto’ ‘auto’ : Attempt to choose the most efficient solver for the given problem. ‘arpack’ : Use Arnoldi decomposition to find the eigenvalues and eigenvectors. ‘dense’ : Use a direct solver (i.e. LAPACK) for the eigenvalue decomposition. tolfloat, default=0 Convergence tolerance passed to arpack or lobpcg. Not used if eigen_solver == ‘dense’. max_iterint, default=None Maximum number of iterations for the arpack solver. Not used if eigen_solver == ‘dense’. path_method{‘auto’, ‘FW’, ‘D’}, default=’auto’ Method to use in finding shortest path. ‘auto’ : attempt to choose the best algorithm automatically. ‘FW’ : Floyd-Warshall algorithm. ‘D’ : Dijkstra’s algorithm. neighbors_algorithm{‘auto’, ‘brute’, ‘kd_tree’, ‘ball_tree’}, default=’auto’ Algorithm to use for nearest neighbors search, passed to the neighbors.NearestNeighbors instance. n_jobsint or None, default=None The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. metricstring, or callable, default=”minkowski” The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by sklearn.metrics.pairwise_distances for its metric parameter. If metric is “precomputed”, X is assumed to be a distance matrix and must be square. X may be a sparse graph (see the Glossary). New in version 0.22.
pint, default=2 Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. New in version 0.22. metric_paramsdict, default=None Additional keyword arguments for the metric function. New in version 0.22. Attributes embedding_array-like, shape (n_samples, n_components) Stores the embedding vectors. kernel_pca_object KernelPCA object used to implement the embedding. nbrs_sklearn.neighbors.NearestNeighbors instance Stores the nearest neighbors instance, including BallTree or KDTree if applicable. dist_matrix_array-like, shape (n_samples, n_samples) Stores the geodesic distance matrix of training data. References 1 Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 290 (5500). Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import Isomap >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = Isomap(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y]) Compute the embedding vectors for data X. fit_transform(X[, y]) Fit the model from data in X and transform X. get_params([deep]) Get parameters for this estimator. reconstruction_error() Compute the reconstruction error for the embedding. set_params(**params) Set the parameters of this estimator. transform(X) Transform X. fit(X, y=None) [source] Compute the embedding vectors for data X. Parameters X{array-like, sparse graph, BallTree, KDTree, NearestNeighbors} Sample data, shape = (n_samples, n_features), in the form of a numpy array, sparse graph, precomputed tree, or NearestNeighbors object. yIgnored Returns self Returns an instance of self. fit_transform(X, y=None) [source] Fit the model from data in X and transform X.
Parameters X{array-like, sparse graph, BallTree, KDTree} Training vector, where n_samples is the number of samples and n_features is the number of features. yIgnored Returns X_newarray-like, shape (n_samples, n_components) get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. reconstruction_error() [source] Compute the reconstruction error for the embedding. Returns reconstruction_errorfloat Notes The cost function of an isomap embedding is E = frobenius_norm[K(D) - K(D_fit)] / n_samples where D is the matrix of distances for the input data X, D_fit is the matrix of distances for the output embedding X_fit, and K is the isomap kernel: K(D) = -0.5 * (I - 1/n_samples) * D^2 * (I - 1/n_samples) set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Transform X. This is implemented by linking the points X into the graph of geodesic distances of the training data. First the n_neighbors nearest neighbors of X are found in the training data, and from these the shortest geodesic distances from each point in X to each point in the training data are computed in order to construct the kernel. The embedding of X is the projection of this kernel onto the embedding vectors of the training set. Parameters Xarray-like, shape (n_queries, n_features) If neighbors_algorithm=’precomputed’, X is assumed to be a distance matrix or a sparse graph of shape (n_queries, n_samples_fit). Returns X_newarray-like, shape (n_queries, n_components)
sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap
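Beyond the docstring example above, a common workflow (sketched here with illustrative split sizes) is to fit Isomap on a training set and then embed unseen points with transform():

```python
# Sketch of a typical Isomap workflow: fit on a training split, then
# embed unseen points by linking them into the training geodesic graph.
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

X, _ = load_digits(return_X_y=True)
iso = Isomap(n_neighbors=5, n_components=2)
iso.fit(X[:100])                    # learn the geodesic structure of the training set
X_new = iso.transform(X[100:110])   # project unseen points via the training graph
assert X_new.shape == (10, 2)
```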
sklearn.manifold.Isomap class sklearn.manifold.Isomap(*, n_neighbors=5, n_components=2, eigen_solver='auto', tol=0, max_iter=None, path_method='auto', neighbors_algorithm='auto', n_jobs=None, metric='minkowski', p=2, metric_params=None) [source] Isomap Embedding Non-linear dimensionality reduction through Isometric Mapping. Read more in the User Guide. Parameters n_neighborsint, default=5 Number of neighbors to consider for each point. n_componentsint, default=2 Number of coordinates for the manifold. eigen_solver{‘auto’, ‘arpack’, ‘dense’}, default=’auto’ ‘auto’ : Attempt to choose the most efficient solver for the given problem. ‘arpack’ : Use Arnoldi decomposition to find the eigenvalues and eigenvectors. ‘dense’ : Use a direct solver (i.e. LAPACK) for the eigenvalue decomposition. tolfloat, default=0 Convergence tolerance passed to arpack or lobpcg. Not used if eigen_solver == ‘dense’. max_iterint, default=None Maximum number of iterations for the arpack solver. Not used if eigen_solver == ‘dense’. path_method{‘auto’, ‘FW’, ‘D’}, default=’auto’ Method to use in finding shortest path. ‘auto’ : attempt to choose the best algorithm automatically. ‘FW’ : Floyd-Warshall algorithm. ‘D’ : Dijkstra’s algorithm. neighbors_algorithm{‘auto’, ‘brute’, ‘kd_tree’, ‘ball_tree’}, default=’auto’ Algorithm to use for nearest neighbors search, passed to the neighbors.NearestNeighbors instance. n_jobsint or None, default=None The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. metricstring, or callable, default=”minkowski” The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by sklearn.metrics.pairwise_distances for its metric parameter. If metric is “precomputed”, X is assumed to be a distance matrix and must be square. X may be a sparse graph (see the Glossary). New in version 0.22.
pint, default=2 Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. New in version 0.22. metric_paramsdict, default=None Additional keyword arguments for the metric function. New in version 0.22. Attributes embedding_array-like, shape (n_samples, n_components) Stores the embedding vectors. kernel_pca_object KernelPCA object used to implement the embedding. nbrs_sklearn.neighbors.NearestNeighbors instance Stores the nearest neighbors instance, including BallTree or KDTree if applicable. dist_matrix_array-like, shape (n_samples, n_samples) Stores the geodesic distance matrix of training data. References 1 Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 290 (5500). Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import Isomap >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = Isomap(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y]) Compute the embedding vectors for data X. fit_transform(X[, y]) Fit the model from data in X and transform X. get_params([deep]) Get parameters for this estimator. reconstruction_error() Compute the reconstruction error for the embedding. set_params(**params) Set the parameters of this estimator. transform(X) Transform X. fit(X, y=None) [source] Compute the embedding vectors for data X. Parameters X{array-like, sparse graph, BallTree, KDTree, NearestNeighbors} Sample data, shape = (n_samples, n_features), in the form of a numpy array, sparse graph, precomputed tree, or NearestNeighbors object. yIgnored Returns self Returns an instance of self. fit_transform(X, y=None) [source] Fit the model from data in X and transform X.
Parameters X{array-like, sparse graph, BallTree, KDTree} Training vector, where n_samples is the number of samples and n_features is the number of features. yIgnored Returns X_newarray-like, shape (n_samples, n_components) get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. reconstruction_error() [source] Compute the reconstruction error for the embedding. Returns reconstruction_errorfloat Notes The cost function of an isomap embedding is E = frobenius_norm[K(D) - K(D_fit)] / n_samples where D is the matrix of distances for the input data X, D_fit is the matrix of distances for the output embedding X_fit, and K is the isomap kernel: K(D) = -0.5 * (I - 1/n_samples) * D^2 * (I - 1/n_samples) set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Transform X. This is implemented by linking the points X into the graph of geodesic distances of the training data. First the n_neighbors nearest neighbors of X are found in the training data, and from these the shortest geodesic distances from each point in X to each point in the training data are computed in order to construct the kernel. The embedding of X is the projection of this kernel onto the embedding vectors of the training set. Parameters Xarray-like, shape (n_queries, n_features) If neighbors_algorithm=’precomputed’, X is assumed to be a distance matrix or a sparse graph of shape (n_queries, n_samples_fit).
Returns X_newarray-like, shape (n_queries, n_components) Examples using sklearn.manifold.Isomap Release Highlights for scikit-learn 0.22 Comparison of Manifold Learning methods Manifold Learning methods on a severed sphere Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
sklearn.modules.generated.sklearn.manifold.isomap
fit(X, y=None) [source] Compute the embedding vectors for data X. Parameters X{array-like, sparse graph, BallTree, KDTree, NearestNeighbors} Sample data, shape = (n_samples, n_features), in the form of a numpy array, sparse graph, precomputed tree, or NearestNeighbors object. yIgnored Returns self Returns an instance of self.
sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.fit
fit_transform(X, y=None) [source] Fit the model from data in X and transform X. Parameters X{array-like, sparse graph, BallTree, KDTree} Training vector, where n_samples is the number of samples and n_features is the number of features. yIgnored Returns X_newarray-like, shape (n_samples, n_components)
sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.get_params
reconstruction_error() [source] Compute the reconstruction error for the embedding. Returns reconstruction_errorfloat Notes The cost function of an isomap embedding is E = frobenius_norm[K(D) - K(D_fit)] / n_samples Where D is the matrix of distances for the input data X, D_fit is the matrix of distances for the output embedding X_fit, and K is the isomap kernel: K(D) = -0.5 * (I - 1/n_samples) * D^2 * (I - 1/n_samples)
sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.reconstruction_error
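The cost function above can be queried directly after fitting; the sketch below (illustrative data) shows that reconstruction_error() returns a single non-negative float, the normalized Frobenius mismatch between the geodesic kernel and its low-rank fit:

```python
# Sketch: reading reconstruction_error() off a fitted Isomap.
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

X, _ = load_digits(return_X_y=True)
err = Isomap(n_components=2).fit(X[:100]).reconstruction_error()
assert err >= 0.0   # E = frobenius_norm[K(D) - K(D_fit)] / n_samples is non-negative
```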
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.set_params
transform(X) [source] Transform X. This is implemented by linking the points X into the graph of geodesic distances of the training data. First the n_neighbors nearest neighbors of X are found in the training data, and from these the shortest geodesic distances from each point in X to each point in the training data are computed in order to construct the kernel. The embedding of X is the projection of this kernel onto the embedding vectors of the training set. Parameters Xarray-like, shape (n_queries, n_features) If neighbors_algorithm=’precomputed’, X is assumed to be a distance matrix or a sparse graph of shape (n_queries, n_samples_fit). Returns X_newarray-like, shape (n_queries, n_components)
sklearn.modules.generated.sklearn.manifold.isomap#sklearn.manifold.Isomap.transform
class sklearn.manifold.LocallyLinearEmbedding(*, n_neighbors=5, n_components=2, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, neighbors_algorithm='auto', random_state=None, n_jobs=None) [source] Locally Linear Embedding Read more in the User Guide. Parameters n_neighborsint, default=5 Number of neighbors to consider for each point. n_componentsint, default=2 Number of coordinates for the manifold. regfloat, default=1e-3 Regularization constant, multiplies the trace of the local covariance matrix of the distances. eigen_solver{‘auto’, ‘arpack’, ‘dense’}, default=’auto’ auto : algorithm will attempt to choose the best method for input data arpackuse Arnoldi iteration in shift-invert mode. For this method, M may be a dense matrix, sparse matrix, or general linear operator. Warning: ARPACK can be unstable for some problems. It is best to try several random seeds in order to check results. denseuse standard dense matrix operations for the eigenvalue decomposition. For this method, M must be an array or matrix type. This method should be avoided for large problems. tolfloat, default=1e-6 Tolerance for the ‘arpack’ method. Not used if eigen_solver==’dense’. max_iterint, default=100 Maximum number of iterations for the arpack solver. Not used if eigen_solver==’dense’. method{‘standard’, ‘hessian’, ‘modified’, ‘ltsa’}, default=’standard’ standarduse the standard locally linear embedding algorithm. see reference [1] hessianuse the Hessian eigenmap method. This method requires n_neighbors > n_components * (1 + (n_components + 1) / 2). see reference [2] modifieduse the modified locally linear embedding algorithm. see reference [3] ltsause the local tangent space alignment algorithm. see reference [4] hessian_tolfloat, default=1e-4 Tolerance for the Hessian eigenmapping method. Only used if method == 'hessian' modified_tolfloat, default=1e-12 Tolerance for the modified LLE method.
Only used if method == 'modified' neighbors_algorithm{‘auto’, ‘brute’, ‘kd_tree’, ‘ball_tree’}, default=’auto’ Algorithm to use for nearest neighbors search, passed to the neighbors.NearestNeighbors instance. random_stateint, RandomState instance, default=None Determines the random number generator when eigen_solver == ‘arpack’. Pass an int for reproducible results across multiple function calls. See Glossary. n_jobsint or None, default=None The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes embedding_array-like, shape [n_samples, n_components] Stores the embedding vectors. reconstruction_error_float Reconstruction error associated with embedding_. nbrs_NearestNeighbors object Stores the nearest neighbors instance, including BallTree or KDTree if applicable. References 1 Roweis, S. & Saul, L. Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323 (2000). 2 Donoho, D. & Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A. 100:5591 (2003). 3 Zhang, Z. & Wang, J. MLLE: Modified Locally Linear Embedding Using Multiple Weights. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.382 4 Zhang, Z. & Zha, H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. Journal of Shanghai Univ. 8:406 (2004) Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import LocallyLinearEmbedding >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = LocallyLinearEmbedding(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y]) Compute the embedding vectors for data X. fit_transform(X[, y]) Compute the embedding vectors for data X and transform X. get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator. transform(X) Transform new points into embedding space. fit(X, y=None) [source] Compute the embedding vectors for data X. Parameters Xarray-like of shape [n_samples, n_features] Training set. yIgnored Returns self Returns an instance of self. fit_transform(X, y=None) [source] Compute the embedding vectors for data X and transform X. Parameters Xarray-like of shape [n_samples, n_features] Training set. yIgnored Returns X_newarray-like, shape (n_samples, n_components) get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Transform new points into embedding space. Parameters Xarray-like of shape (n_samples, n_features) Returns X_newarray, shape = [n_samples, n_components] Notes Because of the scaling performed by this method, it is discouraged to use it together with methods that are not scale-invariant (like SVMs).
sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding
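The n_neighbors requirement for the hessian method can be sketched concretely: n_components * (1 + (n_components + 1) / 2) is 5 for n_components=2, so at least 6 neighbors are needed. The dataset and split size below are illustrative:

```python
# Sketch: Hessian LLE needs n_neighbors > n_components * (1 + (n_components + 1) / 2),
# i.e. strictly more than 5 neighbors for n_components=2, so n_neighbors=6 is the minimum.
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding

X, _ = load_digits(return_X_y=True)
hlle = LocallyLinearEmbedding(n_neighbors=6, n_components=2, method="hessian")
X_h = hlle.fit_transform(X[:100])
assert X_h.shape == (100, 2)
```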
sklearn.manifold.LocallyLinearEmbedding class sklearn.manifold.LocallyLinearEmbedding(*, n_neighbors=5, n_components=2, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, neighbors_algorithm='auto', random_state=None, n_jobs=None) [source] Locally Linear Embedding Read more in the User Guide. Parameters n_neighborsint, default=5 Number of neighbors to consider for each point. n_componentsint, default=2 Number of coordinates for the manifold. regfloat, default=1e-3 Regularization constant, multiplies the trace of the local covariance matrix of the distances. eigen_solver{‘auto’, ‘arpack’, ‘dense’}, default=’auto’ auto : algorithm will attempt to choose the best method for input data arpackuse Arnoldi iteration in shift-invert mode. For this method, M may be a dense matrix, sparse matrix, or general linear operator. Warning: ARPACK can be unstable for some problems. It is best to try several random seeds in order to check results. denseuse standard dense matrix operations for the eigenvalue decomposition. For this method, M must be an array or matrix type. This method should be avoided for large problems. tolfloat, default=1e-6 Tolerance for the ‘arpack’ method. Not used if eigen_solver==’dense’. max_iterint, default=100 Maximum number of iterations for the arpack solver. Not used if eigen_solver==’dense’. method{‘standard’, ‘hessian’, ‘modified’, ‘ltsa’}, default=’standard’ standarduse the standard locally linear embedding algorithm. see reference [1] hessianuse the Hessian eigenmap method. This method requires n_neighbors > n_components * (1 + (n_components + 1) / 2). see reference [2] modifieduse the modified locally linear embedding algorithm. see reference [3] ltsause the local tangent space alignment algorithm. see reference [4] hessian_tolfloat, default=1e-4 Tolerance for the Hessian eigenmapping method. Only used if method == 'hessian' modified_tolfloat, default=1e-12 Tolerance for the modified LLE method.
Only used if method == 'modified' neighbors_algorithm{‘auto’, ‘brute’, ‘kd_tree’, ‘ball_tree’}, default=’auto’ Algorithm to use for nearest neighbors search, passed to the neighbors.NearestNeighbors instance. random_stateint, RandomState instance, default=None Determines the random number generator when eigen_solver == ‘arpack’. Pass an int for reproducible results across multiple function calls. See Glossary. n_jobsint or None, default=None The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes embedding_array-like, shape [n_samples, n_components] Stores the embedding vectors. reconstruction_error_float Reconstruction error associated with embedding_. nbrs_NearestNeighbors object Stores the nearest neighbors instance, including BallTree or KDTree if applicable. References 1 Roweis, S. & Saul, L. Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323 (2000). 2 Donoho, D. & Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A. 100:5591 (2003). 3 Zhang, Z. & Wang, J. MLLE: Modified Locally Linear Embedding Using Multiple Weights. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.382 4 Zhang, Z. & Zha, H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. Journal of Shanghai Univ. 8:406 (2004) Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import LocallyLinearEmbedding >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = LocallyLinearEmbedding(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y]) Compute the embedding vectors for data X. fit_transform(X[, y]) Compute the embedding vectors for data X and transform X. get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator. transform(X) Transform new points into embedding space. fit(X, y=None) [source] Compute the embedding vectors for data X. Parameters Xarray-like of shape [n_samples, n_features] Training set. yIgnored Returns self Returns an instance of self. fit_transform(X, y=None) [source] Compute the embedding vectors for data X and transform X. Parameters Xarray-like of shape [n_samples, n_features] Training set. yIgnored Returns X_newarray-like, shape (n_samples, n_components) get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Transform new points into embedding space. Parameters Xarray-like of shape (n_samples, n_features) Returns X_newarray, shape = [n_samples, n_components] Notes Because of the scaling performed by this method, it is discouraged to use it together with methods that are not scale-invariant (like SVMs). Examples using sklearn.manifold.LocallyLinearEmbedding Visualizing the stock market structure Comparison of Manifold Learning methods Manifold Learning methods on a severed sphere Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
sklearn.modules.generated.sklearn.manifold.locallylinearembedding
fit(X, y=None) [source] Compute the embedding vectors for data X. Parameters Xarray-like of shape [n_samples, n_features] Training set. yIgnored Returns self Returns an instance of self.
sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.fit
fit_transform(X, y=None) [source] Compute the embedding vectors for data X and transform X. Parameters Xarray-like of shape [n_samples, n_features] training set. yIgnored Returns X_newarray-like, shape (n_samples, n_components)
sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.get_params
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.set_params
transform(X) [source] Transform new points into embedding space. Parameters Xarray-like of shape (n_samples, n_features) Returns X_newarray, shape = [n_samples, n_components] Notes Because of scaling performed by this method, it is discouraged to use it together with methods that are not scale-invariant (like SVMs)
sklearn.modules.generated.sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding.transform
sklearn.manifold.locally_linear_embedding(X, *, n_neighbors, n_components, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, random_state=None, n_jobs=None) [source] Perform a Locally Linear Embedding analysis on the data. Read more in the User Guide. Parameters X{array-like, NearestNeighbors} Sample data, shape = (n_samples, n_features), in the form of a numpy array or a NearestNeighbors object. n_neighborsint Number of neighbors to consider for each point. n_componentsint Number of coordinates for the manifold. regfloat, default=1e-3 Regularization constant, multiplies the trace of the local covariance matrix of the distances. eigen_solver{‘auto’, ‘arpack’, ‘dense’}, default=’auto’ auto : algorithm will attempt to choose the best method for input data arpackuse Arnoldi iteration in shift-invert mode. For this method, M may be a dense matrix, sparse matrix, or general linear operator. Warning: ARPACK can be unstable for some problems. It is best to try several random seeds in order to check results. denseuse standard dense matrix operations for the eigenvalue decomposition. For this method, M must be an array or matrix type. This method should be avoided for large problems. tolfloat, default=1e-6 Tolerance for the ‘arpack’ method. Not used if eigen_solver==’dense’. max_iterint, default=100 Maximum number of iterations for the arpack solver. method{‘standard’, ‘hessian’, ‘modified’, ‘ltsa’}, default=’standard’ standarduse the standard locally linear embedding algorithm. see reference [1] hessianuse the Hessian eigenmap method. This method requires n_neighbors > n_components * (1 + (n_components + 1) / 2). see reference [2] modifieduse the modified locally linear embedding algorithm. see reference [3] ltsause the local tangent space alignment algorithm. see reference [4] hessian_tolfloat, default=1e-4 Tolerance for the Hessian eigenmapping method.
Only used if method == ‘hessian’ modified_tolfloat, default=1e-12 Tolerance for the modified LLE method. Only used if method == ‘modified’ random_stateint, RandomState instance, default=None Determines the random number generator when solver == ‘arpack’. Pass an int for reproducible results across multiple function calls. See Glossary. n_jobsint or None, default=None The number of parallel jobs to run for the neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Returns Yarray-like, shape [n_samples, n_components] Embedding vectors. squared_errorfloat Reconstruction error for the embedding vectors. Equivalent to norm(Y - W Y, 'fro')**2, where W are the reconstruction weights. References 1 Roweis, S. & Saul, L. Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323 (2000). 2 Donoho, D. & Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A. 100:5591 (2003). 3 Zhang, Z. & Wang, J. MLLE: Modified Locally Linear Embedding Using Multiple Weights. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.382 4 Zhang, Z. & Zha, H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. Journal of Shanghai Univ. 8:406 (2004)
sklearn.modules.generated.sklearn.manifold.locally_linear_embedding#sklearn.manifold.locally_linear_embedding
class sklearn.manifold.MDS(n_components=2, *, metric=True, n_init=4, max_iter=300, verbose=0, eps=0.001, n_jobs=None, random_state=None, dissimilarity='euclidean') [source] Multidimensional scaling. Read more in the User Guide. Parameters n_componentsint, default=2 Number of dimensions in which to immerse the dissimilarities. metricbool, default=True If True, perform metric MDS; otherwise, perform nonmetric MDS. n_initint, default=4 Number of times the SMACOF algorithm will be run with different initializations. The final results will be the best output of the runs, determined by the run with the smallest final stress. max_iterint, default=300 Maximum number of iterations of the SMACOF algorithm for a single run. verboseint, default=0 Level of verbosity. epsfloat, default=1e-3 Relative tolerance with respect to stress at which to declare convergence. n_jobsint, default=None The number of jobs to use for the computation. If multiple initializations are used (n_init), each run of the algorithm is computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance or None, default=None Determines the random number generator used to initialize the centers. Pass an int for reproducible results across multiple function calls. See :term: Glossary <random_state>. dissimilarity{‘euclidean’, ‘precomputed’}, default=’euclidean’ Dissimilarity measure to use: ‘euclidean’: Pairwise Euclidean distances between points in the dataset. ‘precomputed’: Pre-computed dissimilarities are passed directly to fit and fit_transform. Attributes embedding_ndarray of shape (n_samples, n_components) Stores the position of the dataset in the embedding space. stress_float The final value of the stress (sum of squared distance of the disparities and the distances for all constrained points). 
dissimilarity_matrix_ndarray of shape (n_samples, n_samples) Pairwise dissimilarities between the points. Symmetric matrix that: either uses a custom dissimilarity matrix by setting dissimilarity to ‘precomputed’; or constructs a dissimilarity matrix from data using Euclidean distances. n_iter_int The number of iterations corresponding to the best stress. References “Modern Multidimensional Scaling - Theory and Applications” Borg, I.; Groenen P. Springer Series in Statistics (1997) “Nonmetric multidimensional scaling: a numerical method” Kruskal, J. Psychometrika, 29 (1964) “Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis” Kruskal, J. Psychometrika, 29, (1964) Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import MDS >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = MDS(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y, init]) Computes the position of the points in the embedding space. fit_transform(X[, y, init]) Fit the data from X, and returns the embedded coordinates. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. fit(X, y=None, init=None) [source] Computes the position of the points in the embedding space. Parameters Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples) Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix. yIgnored initndarray of shape (n_samples,), default=None Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array. fit_transform(X, y=None, init=None) [source] Fit the data from X, and returns the embedded coordinates. Parameters Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples) Input data. 
If dissimilarity=='precomputed', the input should be the dissimilarity matrix. yIgnored initndarray of shape (n_samples,), default=None Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
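The class docstring shows the default Euclidean mode; as a complement, here is a small sketch (not from the original page, data is synthetic) of the dissimilarity='precomputed' path, where fit_transform receives a symmetric dissimilarity matrix directly.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

# Build a symmetric Euclidean distance matrix and pass it to MDS
# with dissimilarity='precomputed'.
rng = np.random.RandomState(0)
X = rng.rand(20, 5)
D = pairwise_distances(X)
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
X_emb = mds.fit_transform(D)
print(X_emb.shape)  # (20, 2)
```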
sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS
sklearn.manifold.MDS class sklearn.manifold.MDS(n_components=2, *, metric=True, n_init=4, max_iter=300, verbose=0, eps=0.001, n_jobs=None, random_state=None, dissimilarity='euclidean') [source] Multidimensional scaling. Read more in the User Guide. Parameters n_componentsint, default=2 Number of dimensions in which to immerse the dissimilarities. metricbool, default=True If True, perform metric MDS; otherwise, perform nonmetric MDS. n_initint, default=4 Number of times the SMACOF algorithm will be run with different initializations. The final results will be the best output of the runs, determined by the run with the smallest final stress. max_iterint, default=300 Maximum number of iterations of the SMACOF algorithm for a single run. verboseint, default=0 Level of verbosity. epsfloat, default=1e-3 Relative tolerance with respect to stress at which to declare convergence. n_jobsint, default=None The number of jobs to use for the computation. If multiple initializations are used (n_init), each run of the algorithm is computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance or None, default=None Determines the random number generator used to initialize the centers. Pass an int for reproducible results across multiple function calls. See :term: Glossary <random_state>. dissimilarity{‘euclidean’, ‘precomputed’}, default=’euclidean’ Dissimilarity measure to use: ‘euclidean’: Pairwise Euclidean distances between points in the dataset. ‘precomputed’: Pre-computed dissimilarities are passed directly to fit and fit_transform. Attributes embedding_ndarray of shape (n_samples, n_components) Stores the position of the dataset in the embedding space. stress_float The final value of the stress (sum of squared distance of the disparities and the distances for all constrained points). 
dissimilarity_matrix_ndarray of shape (n_samples, n_samples) Pairwise dissimilarities between the points. Symmetric matrix that: either uses a custom dissimilarity matrix by setting dissimilarity to ‘precomputed’; or constructs a dissimilarity matrix from data using Euclidean distances. n_iter_int The number of iterations corresponding to the best stress. References “Modern Multidimensional Scaling - Theory and Applications” Borg, I.; Groenen P. Springer Series in Statistics (1997) “Nonmetric multidimensional scaling: a numerical method” Kruskal, J. Psychometrika, 29 (1964) “Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis” Kruskal, J. Psychometrika, 29, (1964) Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import MDS >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = MDS(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y, init]) Computes the position of the points in the embedding space. fit_transform(X[, y, init]) Fit the data from X, and returns the embedded coordinates. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. fit(X, y=None, init=None) [source] Computes the position of the points in the embedding space. Parameters Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples) Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix. yIgnored initndarray of shape (n_samples,), default=None Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array. fit_transform(X, y=None, init=None) [source] Fit the data from X, and returns the embedded coordinates. Parameters Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples) Input data. 
If dissimilarity=='precomputed', the input should be the dissimilarity matrix. yIgnored initndarray of shape (n_samples,), default=None Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.manifold.MDS Comparison of Manifold Learning methods Multi-dimensional scaling Manifold Learning methods on a severed sphere Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
sklearn.modules.generated.sklearn.manifold.mds
fit(X, y=None, init=None) [source] Computes the position of the points in the embedding space. Parameters Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples) Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix. yIgnored initndarray of shape (n_samples,), default=None Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array.
sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.fit
fit_transform(X, y=None, init=None) [source] Fit the data from X, and returns the embedded coordinates. Parameters Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples) Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix. yIgnored initndarray of shape (n_samples,), default=None Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array.
sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.get_params
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.manifold.mds#sklearn.manifold.MDS.set_params
sklearn.manifold.smacof(dissimilarities, *, metric=True, n_components=2, init=None, n_init=8, n_jobs=None, max_iter=300, verbose=0, eps=0.001, random_state=None, return_n_iter=False) [source] Computes multidimensional scaling using the SMACOF algorithm. The SMACOF (Scaling by MAjorizing a COmplicated Function) algorithm is a multidimensional scaling algorithm which minimizes an objective function (the stress) using a majorization technique. Stress majorization, also known as the Guttman Transform, guarantees a monotone convergence of stress, and is more powerful than traditional techniques such as gradient descent. The SMACOF algorithm for metric MDS can be summarized by the following steps: 1. Set an initial start configuration, randomly or not. 2. Compute the stress. 3. Compute the Guttman Transform. 4. Iterate steps 2 and 3 until convergence. The nonmetric algorithm adds a monotonic regression step before computing the stress. Parameters dissimilaritiesndarray of shape (n_samples, n_samples) Pairwise dissimilarities between the points. Must be symmetric. metricbool, default=True Compute metric or nonmetric SMACOF algorithm. n_componentsint, default=2 Number of dimensions in which to immerse the dissimilarities. If an init array is provided, this option is overridden and the shape of init is used to determine the dimensionality of the embedding space. initndarray of shape (n_samples, n_components), default=None Starting configuration of the embedding to initialize the algorithm. By default, the algorithm is initialized with a randomly chosen array. n_initint, default=8 Number of times the SMACOF algorithm will be run with different initializations. The final results will be the best output of the runs, determined by the run with the smallest final stress. If init is provided, this option is overridden and a single run is performed. n_jobsint, default=None The number of jobs to use for the computation. 
If multiple initializations are used (n_init), each run of the algorithm is computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. max_iterint, default=300 Maximum number of iterations of the SMACOF algorithm for a single run. verboseint, default=0 Level of verbosity. epsfloat, default=1e-3 Relative tolerance with respect to stress at which to declare convergence. random_stateint, RandomState instance or None, default=None Determines the random number generator used to initialize the centers. Pass an int for reproducible results across multiple function calls. See :term: Glossary <random_state>. return_n_iterbool, default=False Whether or not to return the number of iterations. Returns Xndarray of shape (n_samples, n_components) Coordinates of the points in a n_components-space. stressfloat The final value of the stress (sum of squared distance of the disparities and the distances for all constrained points). n_iterint The number of iterations corresponding to the best stress. Returned only if return_n_iter is set to True. Notes “Modern Multidimensional Scaling - Theory and Applications” Borg, I.; Groenen P. Springer Series in Statistics (1997) “Nonmetric multidimensional scaling: a numerical method” Kruskal, J. Psychometrika, 29 (1964) “Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis” Kruskal, J. Psychometrika, 29, (1964)
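The iteration described above can be invoked directly; a minimal sketch on synthetic data (not from the original page), using return_n_iter=True to also retrieve the iteration count of the best run:

```python
import numpy as np
from sklearn.manifold import smacof
from sklearn.metrics import pairwise_distances

# SMACOF operates on a symmetric dissimilarity matrix.
rng = np.random.RandomState(0)
X = rng.rand(15, 4)
D = pairwise_distances(X)
coords, stress, n_iter = smacof(D, n_components=2, random_state=0,
                                return_n_iter=True)
print(coords.shape)  # (15, 2)
```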
sklearn.modules.generated.sklearn.manifold.smacof#sklearn.manifold.smacof
class sklearn.manifold.SpectralEmbedding(n_components=2, *, affinity='nearest_neighbors', gamma=None, random_state=None, eigen_solver=None, n_neighbors=None, n_jobs=None) [source] Spectral embedding for non-linear dimensionality reduction. Forms an affinity matrix given by the specified function and applies spectral decomposition to the corresponding graph Laplacian. The resulting transformation is given by the value of the eigenvectors for each data point. Note : Laplacian Eigenmaps is the actual algorithm implemented here. Read more in the User Guide. Parameters n_componentsint, default=2 The dimension of the projected subspace. affinity{‘nearest_neighbors’, ‘rbf’, ‘precomputed’, ‘precomputed_nearest_neighbors’} or callable, default=’nearest_neighbors’ How to construct the affinity matrix. ‘nearest_neighbors’ : construct the affinity matrix by computing a graph of nearest neighbors. ‘rbf’ : construct the affinity matrix by computing a radial basis function (RBF) kernel. ‘precomputed’ : interpret X as a precomputed affinity matrix. ‘precomputed_nearest_neighbors’ : interpret X as a sparse graph of precomputed nearest neighbors, and constructs the affinity matrix by selecting the n_neighbors nearest neighbors. callable : use the passed-in function as the affinity; the function takes a data matrix (n_samples, n_features) and returns an affinity matrix (n_samples, n_samples). gammafloat, default=None Kernel coefficient for rbf kernel. If None, gamma will be set to 1/n_features. random_stateint, RandomState instance or None, default=None Determines the random number generator used for the initialization of the lobpcg eigenvectors when solver == ‘amg’. Pass an int for reproducible results across multiple function calls. See :term: Glossary <random_state>. eigen_solver{‘arpack’, ‘lobpcg’, ‘amg’}, default=None The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems. If None, then 'arpack' is used. 
n_neighborsint, default=None Number of nearest neighbors for nearest_neighbors graph building. If None, n_neighbors will be set to max(n_samples/10, 1). n_jobsint, default=None The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes embedding_ndarray of shape (n_samples, n_components) Spectral embedding of the training matrix. affinity_matrix_ndarray of shape (n_samples, n_samples) Affinity_matrix constructed from samples or precomputed. n_neighbors_int Number of nearest neighbors effectively used. References A Tutorial on Spectral Clustering, 2007 Ulrike von Luxburg http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.9323 On Spectral Clustering: Analysis and an algorithm, 2001 Andrew Y. Ng, Michael I. Jordan, Yair Weiss http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.8100 Normalized cuts and image segmentation, 2000 Jianbo Shi, Jitendra Malik http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.160.2324 Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import SpectralEmbedding >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = SpectralEmbedding(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y]) Fit the model from data in X. fit_transform(X[, y]) Fit the model from data in X and transform X. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. fit(X, y=None) [source] Fit the model from data in X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed” X : {array-like, sparse matrix}, shape (n_samples, n_samples), Interpret X as precomputed adjacency graph computed from samples. 
yIgnored Returns selfobject Returns the instance itself. fit_transform(X, y=None) [source] Fit the model from data in X and transform X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed” X : {array-like, sparse matrix} of shape (n_samples, n_samples), Interpret X as precomputed adjacency graph computed from samples. yIgnored Returns X_newarray-like of shape (n_samples, n_components) get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
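The class docstring example uses the default nearest-neighbors affinity; as a complement, a small sketch (not from the original page, data is synthetic) of the affinity='precomputed' path with a symmetric RBF affinity matrix:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.metrics.pairwise import rbf_kernel

# Precompute a symmetric affinity matrix and pass it in directly.
rng = np.random.RandomState(0)
X = rng.rand(30, 6)
A = rbf_kernel(X)
se = SpectralEmbedding(n_components=2, affinity='precomputed', random_state=0)
X_se = se.fit_transform(A)
print(X_se.shape)  # (30, 2)
```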
sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding
sklearn.manifold.SpectralEmbedding class sklearn.manifold.SpectralEmbedding(n_components=2, *, affinity='nearest_neighbors', gamma=None, random_state=None, eigen_solver=None, n_neighbors=None, n_jobs=None) [source] Spectral embedding for non-linear dimensionality reduction. Forms an affinity matrix given by the specified function and applies spectral decomposition to the corresponding graph Laplacian. The resulting transformation is given by the value of the eigenvectors for each data point. Note : Laplacian Eigenmaps is the actual algorithm implemented here. Read more in the User Guide. Parameters n_componentsint, default=2 The dimension of the projected subspace. affinity{‘nearest_neighbors’, ‘rbf’, ‘precomputed’, ‘precomputed_nearest_neighbors’} or callable, default=’nearest_neighbors’ How to construct the affinity matrix. ‘nearest_neighbors’ : construct the affinity matrix by computing a graph of nearest neighbors. ‘rbf’ : construct the affinity matrix by computing a radial basis function (RBF) kernel. ‘precomputed’ : interpret X as a precomputed affinity matrix. ‘precomputed_nearest_neighbors’ : interpret X as a sparse graph of precomputed nearest neighbors, and constructs the affinity matrix by selecting the n_neighbors nearest neighbors. callable : use the passed-in function as the affinity; the function takes a data matrix (n_samples, n_features) and returns an affinity matrix (n_samples, n_samples). gammafloat, default=None Kernel coefficient for rbf kernel. If None, gamma will be set to 1/n_features. random_stateint, RandomState instance or None, default=None Determines the random number generator used for the initialization of the lobpcg eigenvectors when solver == ‘amg’. Pass an int for reproducible results across multiple function calls. See :term: Glossary <random_state>. eigen_solver{‘arpack’, ‘lobpcg’, ‘amg’}, default=None The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems. 
If None, then 'arpack' is used. n_neighborsint, default=None Number of nearest neighbors for nearest_neighbors graph building. If None, n_neighbors will be set to max(n_samples/10, 1). n_jobsint, default=None The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes embedding_ndarray of shape (n_samples, n_components) Spectral embedding of the training matrix. affinity_matrix_ndarray of shape (n_samples, n_samples) Affinity_matrix constructed from samples or precomputed. n_neighbors_int Number of nearest neighbors effectively used. References A Tutorial on Spectral Clustering, 2007 Ulrike von Luxburg http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.9323 On Spectral Clustering: Analysis and an algorithm, 2001 Andrew Y. Ng, Michael I. Jordan, Yair Weiss http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.8100 Normalized cuts and image segmentation, 2000 Jianbo Shi, Jitendra Malik http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.160.2324 Examples >>> from sklearn.datasets import load_digits >>> from sklearn.manifold import SpectralEmbedding >>> X, _ = load_digits(return_X_y=True) >>> X.shape (1797, 64) >>> embedding = SpectralEmbedding(n_components=2) >>> X_transformed = embedding.fit_transform(X[:100]) >>> X_transformed.shape (100, 2) Methods fit(X[, y]) Fit the model from data in X. fit_transform(X[, y]) Fit the model from data in X and transform X. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. fit(X, y=None) [source] Fit the model from data in X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. 
If affinity is “precomputed” X : {array-like, sparse matrix}, shape (n_samples, n_samples), Interpret X as precomputed adjacency graph computed from samples. yIgnored Returns selfobject Returns the instance itself. fit_transform(X, y=None) [source] Fit the model from data in X and transform X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed” X : {array-like, sparse matrix} of shape (n_samples, n_samples), Interpret X as precomputed adjacency graph computed from samples. yIgnored Returns X_newarray-like of shape (n_samples, n_components) get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.manifold.SpectralEmbedding Various Agglomerative Clustering on a 2D embedding of digits Comparison of Manifold Learning methods Manifold Learning methods on a severed sphere Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
sklearn.modules.generated.sklearn.manifold.spectralembedding
fit(X, y=None) [source] Fit the model from data in X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed” X : {array-like, sparse matrix}, shape (n_samples, n_samples), Interpret X as precomputed adjacency graph computed from samples. yIgnored Returns selfobject Returns the instance itself.
sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.fit
fit_transform(X, y=None) [source] Fit the model from data in X and transform X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed” X : {array-like, sparse matrix} of shape (n_samples, n_samples), Interpret X as precomputed adjacency graph computed from samples. yIgnored Returns X_newarray-like of shape (n_samples, n_components)
sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.get_params
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.manifold.spectralembedding#sklearn.manifold.SpectralEmbedding.set_params
sklearn.manifold.spectral_embedding(adjacency, *, n_components=8, eigen_solver=None, random_state=None, eigen_tol=0.0, norm_laplacian=True, drop_first=True) [source] Project the sample on the first eigenvectors of the graph Laplacian. The adjacency matrix is used to compute a normalized graph Laplacian whose spectrum (especially the eigenvectors associated with the smallest eigenvalues) has an interpretation in terms of the minimal number of cuts necessary to split the graph into comparably sized components. This embedding can also ‘work’ even if the adjacency variable is not strictly the adjacency matrix of a graph but more generally an affinity or similarity matrix between samples (for instance the heat kernel of a euclidean distance matrix or a k-NN matrix). However, care must be taken to always make the affinity matrix symmetric so that the eigenvector decomposition works as expected. Note : Laplacian Eigenmaps is the actual algorithm implemented here. Read more in the User Guide. Parameters adjacency{array-like, sparse graph} of shape (n_samples, n_samples) The adjacency matrix of the graph to embed. n_componentsint, default=8 The dimension of the projection subspace. eigen_solver{‘arpack’, ‘lobpcg’, ‘amg’}, default=None The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems, but may also lead to instabilities. If None, then 'arpack' is used. random_stateint, RandomState instance or None, default=None Determines the random number generator used for the initialization of the lobpcg eigenvectors decomposition when solver == ‘amg’. Pass an int for reproducible results across multiple function calls. See :term: Glossary <random_state>. eigen_tolfloat, default=0.0 Stopping criterion for eigendecomposition of the Laplacian matrix when using arpack eigen_solver. norm_laplacianbool, default=True If True, then compute normalized Laplacian. 
drop_firstbool, default=True Whether to drop the first eigenvector. For spectral embedding, this should be True as the first eigenvector should be a constant vector for a connected graph, but for spectral clustering, this should be kept as False to retain the first eigenvector. Returns embeddingndarray of shape (n_samples, n_components) The reduced samples. Notes Spectral Embedding (Laplacian Eigenmaps) is most useful when the graph has one connected component. If the graph has many components, the first few eigenvectors will simply uncover the connected components of the graph. References https://en.wikipedia.org/wiki/LOBPCG Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method Andrew V. Knyazev https://doi.org/10.1137%2FS1064827500366124
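A minimal sketch (not from the original page, data is synthetic) of the k-NN case mentioned in the docstring: build a nearest-neighbors graph, symmetrize it as the docstring advises, then embed.

```python
import numpy as np
from sklearn.manifold import spectral_embedding
from sklearn.neighbors import kneighbors_graph

# Build a k-NN graph and symmetrize it; the affinity matrix must be
# symmetric for the eigenvector decomposition to behave as expected.
rng = np.random.RandomState(0)
X = rng.rand(40, 5)
A = kneighbors_graph(X, n_neighbors=10, include_self=False)
A = 0.5 * (A + A.T)
emb = spectral_embedding(A, n_components=2, random_state=0)
print(emb.shape)  # (40, 2)
```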
sklearn.modules.generated.sklearn.manifold.spectral_embedding#sklearn.manifold.spectral_embedding
sklearn.manifold.trustworthiness(X, X_embedded, *, n_neighbors=5, metric='euclidean') [source] Expresses to what extent the local structure is retained. The trustworthiness is within [0, 1]. It is defined as \[T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1} \sum_{j \in \mathcal{N}_{i}^{k}} \max(0, (r(i, j) - k))\] where for each sample i, \(\mathcal{N}_{i}^{k}\) are its k nearest neighbors in the output space, and every sample j is its \(r(i, j)\)-th nearest neighbor in the input space. In other words, any unexpected nearest neighbors in the output space are penalised in proportion to their rank in the input space. “Neighborhood Preservation in Nonlinear Projection Methods: An Experimental Study” J. Venna, S. Kaski “Learning a Parametric Embedding by Preserving Local Structure” L.J.P. van der Maaten Parameters Xndarray of shape (n_samples, n_features) or (n_samples, n_samples) If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. X_embeddedndarray of shape (n_samples, n_components) Embedding of the training data in low-dimensional space. n_neighborsint, default=5 Number of neighbors k that will be considered. metricstr or callable, default=’euclidean’ Which metric to use for computing pairwise distances between samples from the original input space. If metric is ‘precomputed’, X must be a matrix of pairwise distances or squared distances. Otherwise, see the documentation of argument metric in sklearn.pairwise.pairwise_distances for a list of available metrics. New in version 0.20. Returns trustworthinessfloat Trustworthiness of the low-dimensional embedding.
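A minimal sketch (not from the original page) scoring a 2D PCA projection of the bundled digits data; per the definition above, the score lies in [0, 1], with higher meaning better neighborhood preservation.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

# Score how well a 2D PCA projection preserves the 5-nearest-neighbor
# structure of the original 64-dimensional digits.
X, _ = load_digits(return_X_y=True)
X = X[:100]
X_emb = PCA(n_components=2).fit_transform(X)
t = trustworthiness(X, X_emb, n_neighbors=5)
print(0.0 <= t <= 1.0)  # True
```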
sklearn.modules.generated.sklearn.manifold.trustworthiness#sklearn.manifold.trustworthiness
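A short sketch of the score in use, assuming a plain PCA projection as the low-dimensional embedding (the data is illustrative). An embedding that preserves the input space exactly incurs no rank penalty in the formula above, so its trustworthiness is exactly 1:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.RandomState(0)
X = rng.rand(50, 10)

# Identical input and output spaces: every r(i, j) <= k, so T(k) = 1.
assert trustworthiness(X, X.copy(), n_neighbors=5) == 1.0

# A 2D PCA projection loses some neighborhood structure; the score
# stays within [0, 1] but typically drops below 1.
X_embedded = PCA(n_components=2).fit_transform(X)
score = trustworthiness(X, X_embedded, n_neighbors=5)
print(score)
```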
class sklearn.manifold.TSNE(n_components=2, *, perplexity=30.0, early_exaggeration=12.0, learning_rate=200.0, n_iter=1000, n_iter_without_progress=300, min_grad_norm=1e-07, metric='euclidean', init='random', verbose=0, random_state=None, method='barnes_hut', angle=0.5, n_jobs=None, square_distances='legacy') [source] t-distributed Stochastic Neighbor Embedding. t-SNE [1] is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results. It is highly recommended to use another dimensionality reduction method (e.g. PCA for dense data or TruncatedSVD for sparse data) to reduce the number of dimensions to a reasonable amount (e.g. 50) if the number of features is very high. This will suppress some noise and speed up the computation of pairwise distances between samples. For more tips see Laurens van der Maaten’s FAQ [2]. Read more in the User Guide. Parameters n_componentsint, default=2 Dimension of the embedded space. perplexityfloat, default=30.0 The perplexity is related to the number of nearest neighbors that is used in other manifold learning algorithms. Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50. Different values can result in significantly different results. early_exaggerationfloat, default=12.0 Controls how tight natural clusters in the original space are in the embedded space and how much space will be between them. For larger values, the space between natural clusters will be larger in the embedded space. Again, the choice of this parameter is not very critical. If the cost function increases during initial optimization, the early exaggeration factor or the learning rate might be too high. 
learning_ratefloat, default=200.0 The learning rate for t-SNE is usually in the range [10.0, 1000.0]. If the learning rate is too high, the data may look like a ‘ball’ with any point approximately equidistant from its nearest neighbours. If the learning rate is too low, most points may look compressed in a dense cloud with few outliers. If the cost function gets stuck in a bad local minimum, increasing the learning rate may help. n_iterint, default=1000 Maximum number of iterations for the optimization. Should be at least 250. n_iter_without_progressint, default=300 Maximum number of iterations without progress before we abort the optimization, used after 250 initial iterations with early exaggeration. Note that progress is only checked every 50 iterations so this value is rounded to the next multiple of 50. New in version 0.17: parameter n_iter_without_progress to control stopping criteria. min_grad_normfloat, default=1e-7 If the gradient norm is below this threshold, the optimization will be stopped. metricstr or callable, default=’euclidean’ The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. If metric is “precomputed”, X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. The default is “euclidean” which is interpreted as squared euclidean distance. init{‘random’, ‘pca’} or ndarray of shape (n_samples, n_components), default=’random’ Initialization of embedding. Possible options are ‘random’, ‘pca’, and a numpy array of shape (n_samples, n_components).
PCA initialization cannot be used with precomputed distances and is usually more globally stable than random initialization. verboseint, default=0 Verbosity level. random_stateint, RandomState instance or None, default=None Determines the random number generator. Pass an int for reproducible results across multiple function calls. Note that different initializations might result in different local minima of the cost function. See Glossary. methodstr, default=’barnes_hut’ By default the gradient calculation algorithm uses Barnes-Hut approximation running in O(NlogN) time. method=’exact’ will run on the slower, but exact, algorithm in O(N^2) time. The exact algorithm should be used when nearest-neighbor errors need to be better than 3%. However, the exact method cannot scale to millions of examples. New in version 0.17: Approximate optimization method via the Barnes-Hut. anglefloat, default=0.5 Only used if method=’barnes_hut’. This is the trade-off between speed and accuracy for Barnes-Hut T-SNE. ‘angle’ is the angular size (referred to as theta in [3]) of a distant node as measured from a point. If this size is below ‘angle’ then it is used as a summary node of all points contained within it. This method is not very sensitive to changes in this parameter in the range of 0.2 - 0.8. Angle less than 0.2 has quickly increasing computation time and angle greater than 0.8 has quickly increasing error. n_jobsint, default=None The number of parallel jobs to run for neighbors search. This parameter has no impact when metric="precomputed" or (metric="euclidean" and method="exact"). None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.22. square_distancesTrue or ‘legacy’, default=’legacy’ Whether TSNE should square the distance values. 'legacy' means that distance values are squared only when metric="euclidean". True means that distance values are squared for all metrics.
New in version 0.24: Added to provide backward compatibility during deprecation of legacy squaring behavior. Deprecated since version 0.24: Legacy squaring behavior was deprecated in 0.24. The 'legacy' value will be removed in 1.1 (renaming of 0.26), at which point the default value will change to True. Attributes embedding_array-like of shape (n_samples, n_components) Stores the embedding vectors. kl_divergence_float Kullback-Leibler divergence after optimization. n_iter_int Number of iterations run. References [1] van der Maaten, L.J.P.; Hinton, G.E. Visualizing High-Dimensional Data Using t-SNE. Journal of Machine Learning Research 9:2579-2605, 2008. [2] van der Maaten, L.J.P. t-Distributed Stochastic Neighbor Embedding https://lvdmaaten.github.io/tsne/ [3] L.J.P. van der Maaten. Accelerating t-SNE using Tree-Based Algorithms. Journal of Machine Learning Research 15(Oct):3221-3245, 2014. https://lvdmaaten.github.io/publications/papers/JMLR_2014.pdf Examples >>> import numpy as np >>> from sklearn.manifold import TSNE >>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) >>> X_embedded = TSNE(n_components=2).fit_transform(X) >>> X_embedded.shape (4, 2) Methods fit(X[, y]) Fit X into an embedded space. fit_transform(X[, y]) Fit X into an embedded space and return that transformed output. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. fit(X, y=None) [source] Fit X into an embedded space. Parameters Xndarray of shape (n_samples, n_features) or (n_samples, n_samples) If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ or ‘coo’. If the method is ‘barnes_hut’ and the metric is ‘precomputed’, X may be a precomputed sparse graph. yIgnored fit_transform(X, y=None) [source] Fit X into an embedded space and return that transformed output. 
Parameters Xndarray of shape (n_samples, n_features) or (n_samples, n_samples) If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ or ‘coo’. If the method is ‘barnes_hut’ and the metric is ‘precomputed’, X may be a precomputed sparse graph. yIgnored Returns X_newndarray of shape (n_samples, n_components) Embedding of the training data in low-dimensional space. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE
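Complementing the docstring example, a hedged sketch using init='pca' (which the parameter description above calls usually more globally stable than random initialization); the toy data and the perplexity value are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
X = rng.rand(30, 5)

# Perplexity must be well below n_samples; 5 is reasonable for 30 points.
tsne = TSNE(n_components=2, init='pca', perplexity=5, random_state=0)
X_2d = tsne.fit_transform(X)

print(X_2d.shape)          # (30, 2)
print(tsne.kl_divergence_)  # the KL divergence after optimization
```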
sklearn.manifold.TSNE class sklearn.manifold.TSNE(n_components=2, *, perplexity=30.0, early_exaggeration=12.0, learning_rate=200.0, n_iter=1000, n_iter_without_progress=300, min_grad_norm=1e-07, metric='euclidean', init='random', verbose=0, random_state=None, method='barnes_hut', angle=0.5, n_jobs=None, square_distances='legacy') [source] t-distributed Stochastic Neighbor Embedding. t-SNE [1] is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results. It is highly recommended to use another dimensionality reduction method (e.g. PCA for dense data or TruncatedSVD for sparse data) to reduce the number of dimensions to a reasonable amount (e.g. 50) if the number of features is very high. This will suppress some noise and speed up the computation of pairwise distances between samples. For more tips see Laurens van der Maaten’s FAQ [2]. Read more in the User Guide. Parameters n_componentsint, default=2 Dimension of the embedded space. perplexityfloat, default=30.0 The perplexity is related to the number of nearest neighbors that is used in other manifold learning algorithms. Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50. Different values can result in significantly different results. early_exaggerationfloat, default=12.0 Controls how tight natural clusters in the original space are in the embedded space and how much space will be between them. For larger values, the space between natural clusters will be larger in the embedded space. Again, the choice of this parameter is not very critical. 
If the cost function increases during initial optimization, the early exaggeration factor or the learning rate might be too high. learning_ratefloat, default=200.0 The learning rate for t-SNE is usually in the range [10.0, 1000.0]. If the learning rate is too high, the data may look like a ‘ball’ with any point approximately equidistant from its nearest neighbours. If the learning rate is too low, most points may look compressed in a dense cloud with few outliers. If the cost function gets stuck in a bad local minimum, increasing the learning rate may help. n_iterint, default=1000 Maximum number of iterations for the optimization. Should be at least 250. n_iter_without_progressint, default=300 Maximum number of iterations without progress before we abort the optimization, used after 250 initial iterations with early exaggeration. Note that progress is only checked every 50 iterations so this value is rounded to the next multiple of 50. New in version 0.17: parameter n_iter_without_progress to control stopping criteria. min_grad_normfloat, default=1e-7 If the gradient norm is below this threshold, the optimization will be stopped. metricstr or callable, default=’euclidean’ The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. If metric is “precomputed”, X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. The default is “euclidean” which is interpreted as squared euclidean distance. init{‘random’, ‘pca’} or ndarray of shape (n_samples, n_components), default=’random’ Initialization of embedding.
Possible options are ‘random’, ‘pca’, and a numpy array of shape (n_samples, n_components). PCA initialization cannot be used with precomputed distances and is usually more globally stable than random initialization. verboseint, default=0 Verbosity level. random_stateint, RandomState instance or None, default=None Determines the random number generator. Pass an int for reproducible results across multiple function calls. Note that different initializations might result in different local minima of the cost function. See Glossary. methodstr, default=’barnes_hut’ By default the gradient calculation algorithm uses Barnes-Hut approximation running in O(NlogN) time. method=’exact’ will run on the slower, but exact, algorithm in O(N^2) time. The exact algorithm should be used when nearest-neighbor errors need to be better than 3%. However, the exact method cannot scale to millions of examples. New in version 0.17: Approximate optimization method via the Barnes-Hut. anglefloat, default=0.5 Only used if method=’barnes_hut’. This is the trade-off between speed and accuracy for Barnes-Hut T-SNE. ‘angle’ is the angular size (referred to as theta in [3]) of a distant node as measured from a point. If this size is below ‘angle’ then it is used as a summary node of all points contained within it. This method is not very sensitive to changes in this parameter in the range of 0.2 - 0.8. Angle less than 0.2 has quickly increasing computation time and angle greater than 0.8 has quickly increasing error. n_jobsint, default=None The number of parallel jobs to run for neighbors search. This parameter has no impact when metric="precomputed" or (metric="euclidean" and method="exact"). None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.22. square_distancesTrue or ‘legacy’, default=’legacy’ Whether TSNE should square the distance values.
'legacy' means that distance values are squared only when metric="euclidean". True means that distance values are squared for all metrics. New in version 0.24: Added to provide backward compatibility during deprecation of legacy squaring behavior. Deprecated since version 0.24: Legacy squaring behavior was deprecated in 0.24. The 'legacy' value will be removed in 1.1 (renaming of 0.26), at which point the default value will change to True. Attributes embedding_array-like of shape (n_samples, n_components) Stores the embedding vectors. kl_divergence_float Kullback-Leibler divergence after optimization. n_iter_int Number of iterations run. References [1] van der Maaten, L.J.P.; Hinton, G.E. Visualizing High-Dimensional Data Using t-SNE. Journal of Machine Learning Research 9:2579-2605, 2008. [2] van der Maaten, L.J.P. t-Distributed Stochastic Neighbor Embedding https://lvdmaaten.github.io/tsne/ [3] L.J.P. van der Maaten. Accelerating t-SNE using Tree-Based Algorithms. Journal of Machine Learning Research 15(Oct):3221-3245, 2014. https://lvdmaaten.github.io/publications/papers/JMLR_2014.pdf Examples >>> import numpy as np >>> from sklearn.manifold import TSNE >>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) >>> X_embedded = TSNE(n_components=2).fit_transform(X) >>> X_embedded.shape (4, 2) Methods fit(X[, y]) Fit X into an embedded space. fit_transform(X[, y]) Fit X into an embedded space and return that transformed output. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. fit(X, y=None) [source] Fit X into an embedded space. Parameters Xndarray of shape (n_samples, n_features) or (n_samples, n_samples) If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ or ‘coo’. If the method is ‘barnes_hut’ and the metric is ‘precomputed’, X may be a precomputed sparse graph. 
yIgnored fit_transform(X, y=None) [source] Fit X into an embedded space and return that transformed output. Parameters Xndarray of shape (n_samples, n_features) or (n_samples, n_samples) If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ or ‘coo’. If the method is ‘barnes_hut’ and the metric is ‘precomputed’, X may be a precomputed sparse graph. yIgnored Returns X_newndarray of shape (n_samples, n_components) Embedding of the training data in low-dimensional space. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.manifold.TSNE Comparison of Manifold Learning methods t-SNE: The effect of various perplexity values on the shape Manifold Learning methods on a severed sphere Manifold learning on handwritten digits: Locally Linear Embedding, Isomap… Approximate nearest neighbors in TSNE
sklearn.modules.generated.sklearn.manifold.tsne
fit(X, y=None) [source] Fit X into an embedded space. Parameters Xndarray of shape (n_samples, n_features) or (n_samples, n_samples) If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ or ‘coo’. If the method is ‘barnes_hut’ and the metric is ‘precomputed’, X may be a precomputed sparse graph. yIgnored
sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.fit
fit_transform(X, y=None) [source] Fit X into an embedded space and return that transformed output. Parameters Xndarray of shape (n_samples, n_features) or (n_samples, n_samples) If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ or ‘coo’. If the method is ‘barnes_hut’ and the metric is ‘precomputed’, X may be a precomputed sparse graph. yIgnored Returns X_newndarray of shape (n_samples, n_components) Embedding of the training data in low-dimensional space.
sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.get_params
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.manifold.tsne#sklearn.manifold.TSNE.set_params
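A brief sketch of the get_params/set_params round trip on TSNE; since set_params returns the estimator itself, calls can be chained:

```python
from sklearn.manifold import TSNE

tsne = TSNE(n_components=2)
assert tsne.get_params()['perplexity'] == 30.0  # the constructor default

# set_params returns self, so successive updates can be chained.
tsne.set_params(perplexity=10.0).set_params(early_exaggeration=6.0)
assert tsne.get_params()['perplexity'] == 10.0
assert tsne.early_exaggeration == 6.0
```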
sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None) [source] Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) labels. y_pred1d array-like, or label indicator array / sparse matrix Predicted labels, as returned by a classifier. normalizebool, default=True If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat If normalize == True, return the fraction of correctly classified samples (float), else returns the number of correctly classified samples (int). The best performance is 1 with normalize == True and the number of samples with normalize == False. See also jaccard_score, hamming_loss, zero_one_loss Notes In binary and multiclass classification, this function is equal to the jaccard_score function. Examples >>> from sklearn.metrics import accuracy_score >>> y_pred = [0, 2, 1, 3] >>> y_true = [0, 1, 2, 3] >>> accuracy_score(y_true, y_pred) 0.5 >>> accuracy_score(y_true, y_pred, normalize=False) 2 In the multilabel case with binary label indicators: >>> import numpy as np >>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2))) 0.5
sklearn.modules.generated.sklearn.metrics.accuracy_score#sklearn.metrics.accuracy_score
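The normalize semantics can be made concrete by cross-checking the score against a direct elementwise comparison (a sketch; the arrays mirror the docstring example):

```python
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([0, 1, 2, 3])
y_pred = np.array([0, 2, 1, 3])

# normalize=True (default): fraction of samples predicted exactly right.
assert accuracy_score(y_true, y_pred) == np.mean(y_true == y_pred)

# normalize=False: raw count of correctly classified samples.
assert accuracy_score(y_true, y_pred, normalize=False) == np.sum(y_true == y_pred)
```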
sklearn.metrics.adjusted_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') [source] Adjusted Mutual Information between two clusterings. Adjusted Mutual Information (AMI) is an adjustment of the Mutual Information (MI) score to account for chance. It accounts for the fact that the MI is generally higher for two clusterings with a larger number of clusters, regardless of whether there is actually more information shared. For two clusterings \(U\) and \(V\), the AMI is given as: AMI(U, V) = [MI(U, V) - E(MI(U, V))] / [avg(H(U), H(V)) - E(MI(U, V))] This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way. This metric is furthermore symmetric: switching label_true with label_pred will return the same score value. This can be useful to measure the agreement of two independent label assignment strategies on the same dataset when the real ground truth is not known. Be mindful that this function is an order of magnitude slower than other metrics, such as the Adjusted Rand Index. Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] A clustering of the data into disjoint subsets. labels_predint array-like of shape (n_samples,) A clustering of the data into disjoint subsets. average_methodstr, default=’arithmetic’ How to compute the normalizer in the denominator. Possible options are ‘min’, ‘geometric’, ‘arithmetic’, and ‘max’. New in version 0.20. Changed in version 0.22: The default value of average_method changed from ‘max’ to ‘arithmetic’. Returns ami: float (upper-bounded by 1.0) The AMI returns a value of 1 when the two partitions are identical (i.e. perfectly matched). Random partitions (independent labellings) have an expected AMI around 0 on average and hence can be negative. See also adjusted_rand_score Adjusted Rand Index. mutual_info_score Mutual Information (not adjusted for chance).
References 1 Vinh, Epps, and Bailey, (2010). Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance, JMLR 2 Wikipedia entry for the Adjusted Mutual Information Examples Perfect labelings are both homogeneous and complete, hence have score 1.0: >>> from sklearn.metrics.cluster import adjusted_mutual_info_score >>> adjusted_mutual_info_score([0, 0, 1, 1], [0, 0, 1, 1]) ... 1.0 >>> adjusted_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0]) ... 1.0 If class members are completely split across different clusters, the assignment is totally incomplete, hence the AMI is null: >>> adjusted_mutual_info_score([0, 0, 0, 0], [0, 1, 2, 3]) ... 0.0
sklearn.modules.generated.sklearn.metrics.adjusted_mutual_info_score#sklearn.metrics.adjusted_mutual_info_score
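The chance adjustment can be observed directly by scoring two independent random labelings: the raw MI stays positive, while the AMI hovers around 0 (the random data below is illustrative):

```python
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score, mutual_info_score

rng = np.random.RandomState(0)
a = rng.randint(0, 10, size=1000)  # two independent labelings
b = rng.randint(0, 10, size=1000)

# Raw mutual information is biased upward for independent labelings...
assert mutual_info_score(a, b) > 0

# ...while AMI subtracts the expected chance-level MI and stays near 0.
print(adjusted_mutual_info_score(a, b))
```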
sklearn.metrics.adjusted_rand_score(labels_true, labels_pred) [source] Rand index adjusted for chance. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters in the predicted and true clusterings. The raw RI score is then “adjusted for chance” into the ARI score using the following scheme: ARI = (RI - Expected_RI) / (max(RI) - Expected_RI) The adjusted Rand index is thus ensured to have a value close to 0.0 for random labeling independently of the number of clusters and samples and exactly 1.0 when the clusterings are identical (up to a permutation). ARI is a symmetric measure: adjusted_rand_score(a, b) == adjusted_rand_score(b, a) Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] Ground truth class labels to be used as a reference. labels_predarray-like of shape (n_samples,) Cluster labels to evaluate. Returns ARIfloat Similarity score between -1.0 and 1.0. Random labelings have an ARI close to 0.0. 1.0 stands for perfect match. See also adjusted_mutual_info_score Adjusted Mutual Information. References Hubert1985 L. Hubert and P. Arabie, Comparing Partitions, Journal of Classification 1985 https://link.springer.com/article/10.1007%2FBF01908075 Steinley2004 D. Steinley, Properties of the Hubert-Arabie adjusted Rand index, Psychological Methods 2004 wk https://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index Examples Perfectly matching labelings have a score of 1: >>> from sklearn.metrics.cluster import adjusted_rand_score >>> adjusted_rand_score([0, 0, 1, 1], [0, 0, 1, 1]) 1.0 >>> adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]) 1.0 Labelings that assign all class members to the same clusters are complete but may not always be pure, hence penalized: >>> adjusted_rand_score([0, 0, 1, 2], [0, 0, 1, 1]) 0.57...
ARI is symmetric, so labelings that have pure clusters with members coming from the same classes but unnecessary splits are penalized: >>> adjusted_rand_score([0, 0, 1, 1], [0, 0, 1, 2]) 0.57... If classes members are completely split across different clusters, the assignment is totally incomplete, hence the ARI is very low: >>> adjusted_rand_score([0, 0, 0, 0], [0, 1, 2, 3]) 0.0
sklearn.modules.generated.sklearn.metrics.adjusted_rand_score#sklearn.metrics.adjusted_rand_score
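A quick sketch of the two properties stated above, chance-level behaviour and symmetry, on illustrative random labelings:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

rng = np.random.RandomState(0)
a = rng.randint(0, 5, size=1000)  # two independent random labelings
b = rng.randint(0, 5, size=1000)

# Independent random labelings score close to 0 regardless of the
# number of clusters or samples.
print(adjusted_rand_score(a, b))

# Symmetry: swapping the argument order gives the same score.
assert np.isclose(adjusted_rand_score(a, b), adjusted_rand_score(b, a))
```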
sklearn.metrics.auc(x, y) [source] Compute Area Under the Curve (AUC) using the trapezoidal rule. This is a general function, given points on a curve. For computing the area under the ROC-curve, see roc_auc_score. For an alternative way to summarize a precision-recall curve, see average_precision_score. Parameters xndarray of shape (n,) x coordinates. These must be either monotonic increasing or monotonic decreasing. yndarray of shape, (n,) y coordinates. Returns aucfloat See also roc_auc_score Compute the area under the ROC curve. average_precision_score Compute average precision from prediction scores. precision_recall_curve Compute precision-recall pairs for different probability thresholds. Examples >>> import numpy as np >>> from sklearn import metrics >>> y = np.array([1, 1, 2, 2]) >>> pred = np.array([0.1, 0.4, 0.35, 0.8]) >>> fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=2) >>> metrics.auc(fpr, tpr) 0.75
sklearn.modules.generated.sklearn.metrics.auc#sklearn.metrics.auc
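Since auc is a plain trapezoidal integration over arbitrary points, it can be cross-checked against an explicit trapezoid sum (the coordinates below are illustrative):

```python
import numpy as np
from sklearn.metrics import auc

# Any monotonic x works; these points are illustrative.
x = np.array([0.0, 0.5, 1.0])
y = np.array([0.0, 0.75, 1.0])

# Trapezoidal rule written out by hand: sum of dx * mean of adjacent y.
manual = np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)

assert np.isclose(auc(x, y), manual)  # both give 0.625
```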
sklearn.metrics.average_precision_score(y_true, y_score, *, average='macro', pos_label=1, sample_weight=None) [source] Compute average precision (AP) from prediction scores. AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight: \[\text{AP} = \sum_n (R_n - R_{n-1}) P_n\] where \(P_n\) and \(R_n\) are the precision and recall at the nth threshold [1]. This implementation is not interpolated and is different from computing the area under the precision-recall curve with the trapezoidal rule, which uses linear interpolation and can be too optimistic. Note: this implementation is restricted to the binary classification task or multilabel classification task. Read more in the User Guide. Parameters y_truendarray of shape (n_samples,) or (n_samples, n_classes) True binary labels or binary label indicators. y_scorendarray of shape (n_samples,) or (n_samples, n_classes) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by decision_function on some classifiers). average{‘micro’, ‘samples’, ‘weighted’, ‘macro’} or None, default=’macro’ If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: 'micro': Calculate metrics globally by considering each element of the label indicator matrix as a label. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). 'samples': Calculate metrics for each instance, and find their average. Will be ignored when y_true is binary. pos_labelint or str, default=1 The label of the positive class. Only applied to binary y_true. 
For multilabel-indicator y_true, pos_label is fixed to 1. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns average_precisionfloat See also roc_auc_score Compute the area under the ROC curve. precision_recall_curve Compute precision-recall pairs for different probability thresholds. Notes Changed in version 0.19: Instead of linearly interpolating between operating points, precisions are weighted by the change in recall since the last operating point. References 1 Wikipedia entry for the Average precision Examples >>> import numpy as np >>> from sklearn.metrics import average_precision_score >>> y_true = np.array([0, 0, 1, 1]) >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8]) >>> average_precision_score(y_true, y_scores) 0.83...
sklearn.modules.generated.sklearn.metrics.average_precision_score#sklearn.metrics.average_precision_score
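The summation \(\sum_n (R_n - R_{n-1}) P_n\) in the definition can be reproduced directly from precision_recall_curve (a sketch using the docstring's example data):

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

# precision_recall_curve returns recall in decreasing order, so the
# recall increments (R_n - R_{n-1}) appear as -np.diff(recall).
precision, recall, _ = precision_recall_curve(y_true, y_scores)
ap_manual = -np.sum(np.diff(recall) * precision[:-1])

assert np.isclose(average_precision_score(y_true, y_scores), ap_manual)
```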
sklearn.metrics.balanced_accuracy_score(y_true, y_pred, *, sample_weight=None, adjusted=False) [source] Compute the balanced accuracy. The balanced accuracy in binary and multiclass classification problems is used to deal with imbalanced datasets. It is defined as the average of recall obtained on each class. The best value is 1 and the worst value is 0 when adjusted=False. Read more in the User Guide. New in version 0.20. Parameters y_true1d array-like Ground truth (correct) target values. y_pred1d array-like Estimated targets as returned by a classifier. sample_weightarray-like of shape (n_samples,), default=None Sample weights. adjustedbool, default=False When true, the result is adjusted for chance, so that random performance would score 0, and perfect performance scores 1. Returns balanced_accuracyfloat See also recall_score, roc_auc_score Notes Some literature promotes alternative definitions of balanced accuracy. Our definition is equivalent to accuracy_score with class-balanced sample weights, and shares desirable properties with the binary case. See the User Guide. References 1 Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. (2010). The balanced accuracy and its posterior distribution. Proceedings of the 20th International Conference on Pattern Recognition, 3121-24. 2 John. D. Kelleher, Brian Mac Namee, Aoife D’Arcy, (2015). Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies. Examples >>> from sklearn.metrics import balanced_accuracy_score >>> y_true = [0, 1, 0, 0, 1, 0] >>> y_pred = [0, 1, 0, 0, 0, 1] >>> balanced_accuracy_score(y_true, y_pred) 0.625
sklearn.modules.generated.sklearn.metrics.balanced_accuracy_score#sklearn.metrics.balanced_accuracy_score
sklearn.metrics.brier_score_loss(y_true, y_prob, *, sample_weight=None, pos_label=None) [source] Compute the Brier score loss. The smaller the Brier score loss, the better, hence the naming with “loss”. The Brier score measures the mean squared difference between the predicted probability and the actual outcome. The Brier score always takes on a value between zero and one, since this is the largest possible difference between a predicted probability (which must be between zero and one) and the actual outcome (which can take on values of only 0 and 1). It can be decomposed as the sum of refinement loss and calibration loss. The Brier score is appropriate for binary and categorical outcomes that can be structured as true or false, but is inappropriate for ordinal variables which can take on three or more values (this is because the Brier score assumes that all possible outcomes are equivalently “distant” from one another). Which label is considered to be the positive label is controlled via the parameter pos_label, which defaults to the greater label unless y_true is all 0 or all -1, in which case pos_label defaults to 1. Read more in the User Guide. Parameters y_truearray of shape (n_samples,) True targets. y_probarray of shape (n_samples,) Probabilities of the positive class. sample_weightarray-like of shape (n_samples,), default=None Sample weights. pos_labelint or str, default=None Label of the positive class. pos_label will be inferred in the following manner: if y_true in {-1, 1} or {0, 1}, pos_label defaults to 1; else if y_true contains strings, an error will be raised and pos_label should be explicitly specified; otherwise, pos_label defaults to the greater label, i.e. np.unique(y_true)[-1]. Returns scorefloat Brier score loss. References 1 Wikipedia entry for the Brier score. 
Examples >>> import numpy as np >>> from sklearn.metrics import brier_score_loss >>> y_true = np.array([0, 1, 1, 0]) >>> y_true_categorical = np.array(["spam", "ham", "ham", "spam"]) >>> y_prob = np.array([0.1, 0.9, 0.8, 0.3]) >>> brier_score_loss(y_true, y_prob) 0.037... >>> brier_score_loss(y_true, 1-y_prob, pos_label=0) 0.037... >>> brier_score_loss(y_true_categorical, y_prob, pos_label="ham") 0.037... >>> brier_score_loss(y_true, np.array(y_prob) > 0.5) 0.0
sklearn.modules.generated.sklearn.metrics.brier_score_loss#sklearn.metrics.brier_score_loss
sklearn.metrics.calinski_harabasz_score(X, labels) [source] Compute the Calinski and Harabasz score. It is also known as the Variance Ratio Criterion. The score is defined as the ratio of the between-cluster dispersion to the within-cluster dispersion, so a higher score indicates better-defined clusters. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) A list of n_features-dimensional data points. Each row corresponds to a single data point. labelsarray-like of shape (n_samples,) Predicted labels for each sample. Returns scorefloat The resulting Calinski-Harabasz score. References 1 T. Calinski and J. Harabasz, 1974. “A dendrite method for cluster analysis”. Communications in Statistics
sklearn.modules.generated.sklearn.metrics.calinski_harabasz_score#sklearn.metrics.calinski_harabasz_score
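As an illustrative sketch only (the synthetic data and KMeans configuration below are assumptions, not part of the entry above), the score can be computed on any fitted clustering:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import calinski_harabasz_score

# Three well-separated blobs; cluster them and score the result.
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(X)
score = calinski_harabasz_score(X, labels)  # higher is better
```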
sklearn.metrics.check_scoring(estimator, scoring=None, *, allow_none=False) [source] Determine scorer from user options. A TypeError will be thrown if the estimator cannot be scored. Parameters estimatorestimator object implementing ‘fit’ The object to use to fit the data. scoringstr or callable, default=None A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y). allow_nonebool, default=False If no scoring is specified and the estimator has no score function, we can either return None or raise an exception. Returns scoringcallable A scorer callable object / function with signature scorer(estimator, X, y).
sklearn.modules.generated.sklearn.metrics.check_scoring#sklearn.metrics.check_scoring
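A minimal sketch of how the returned scorer is used (the estimator and scoring string here are arbitrary choices for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import check_scoring

X, y = make_classification(random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Resolve the string "accuracy" into a scorer(estimator, X, y) callable.
scorer = check_scoring(clf, scoring="accuracy")
acc = scorer(clf, X, y)
```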
sklearn.metrics.classification_report(y_true, y_pred, *, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False, zero_division='warn') [source] Build a text report showing the main classification metrics. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. labelsarray-like of shape (n_labels,), default=None Optional list of label indices to include in the report. target_nameslist of str of shape (n_labels,), default=None Optional display names matching the labels (same order). sample_weightarray-like of shape (n_samples,), default=None Sample weights. digitsint, default=2 Number of digits for formatting output floating point values. When output_dict is True, this will be ignored and the returned values will not be rounded. output_dictbool, default=False If True, return output as dict. New in version 0.20. zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised. Returns reportstring / dict Text summary of the precision, recall, F1 score for each class. Dictionary returned if output_dict is True. Dictionary has the following structure: {'label 1': {'precision':0.5, 'recall':1.0, 'f1-score':0.67, 'support':1}, 'label 2': { ... }, ... } The reported averages include macro average (averaging the unweighted mean per label), weighted average (averaging the support-weighted mean per label), and sample average (only for multilabel classification). Micro average (averaging the total true positives, false negatives and false positives) is only shown for multi-label or multi-class with a subset of classes, because it corresponds to accuracy otherwise and would be the same for all metrics. 
See also precision_recall_fscore_support for more details on averages. Note that in binary classification, recall of the positive class is also known as “sensitivity”; recall of the negative class is “specificity”. See also precision_recall_fscore_support, confusion_matrix multilabel_confusion_matrix Examples >>> from sklearn.metrics import classification_report >>> y_true = [0, 1, 2, 2, 2] >>> y_pred = [0, 0, 2, 2, 1] >>> target_names = ['class 0', 'class 1', 'class 2'] >>> print(classification_report(y_true, y_pred, target_names=target_names)) precision recall f1-score support class 0 0.50 1.00 0.67 1 class 1 0.00 0.00 0.00 1 class 2 1.00 0.67 0.80 3 accuracy 0.60 5 macro avg 0.50 0.56 0.49 5 weighted avg 0.70 0.60 0.61 5 >>> y_pred = [1, 1, 0] >>> y_true = [1, 1, 1] >>> print(classification_report(y_true, y_pred, labels=[1, 2, 3])) precision recall f1-score support 1 1.00 0.67 0.80 3 2 0.00 0.00 0.00 0 3 0.00 0.00 0.00 0 micro avg 1.00 0.67 0.80 3 macro avg 0.33 0.22 0.27 3 weighted avg 1.00 0.67 0.80 3
sklearn.modules.generated.sklearn.metrics.classification_report#sklearn.metrics.classification_report
sklearn.metrics.cluster.contingency_matrix(labels_true, labels_pred, *, eps=None, sparse=False, dtype=<class 'numpy.int64'>) [source] Build a contingency matrix describing the relationship between labels. Parameters labels_trueint array, shape = [n_samples] Ground truth class labels to be used as a reference. labels_predarray-like of shape (n_samples,) Cluster labels to evaluate. epsfloat, default=None If a float, that value is added to all values in the contingency matrix. This helps to stop NaN propagation. If None, nothing is adjusted. sparsebool, default=False If True, return a sparse CSR contingency matrix. If eps is not None and sparse is True, a ValueError will be raised. New in version 0.18. dtypenumeric type, default=np.int64 Output dtype. Ignored if eps is not None. New in version 0.24. Returns contingency{array-like, sparse}, shape=[n_classes_true, n_classes_pred] Matrix \(C\) such that \(C_{i, j}\) is the number of samples in true class \(i\) and in predicted class \(j\). If eps is None, the dtype of this array will be integer unless set otherwise with the dtype argument. If eps is given, the dtype will be float. Will be a scipy.sparse.csr_matrix if sparse=True.
sklearn.modules.generated.sklearn.metrics.cluster.contingency_matrix#sklearn.metrics.cluster.contingency_matrix
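A small worked sketch of the \(C_{i, j}\) convention described above (labels chosen for illustration): true class 0 is entirely assigned to predicted cluster 1 and vice versa, so the counts land on the anti-diagonal:

```python
from sklearn.metrics.cluster import contingency_matrix

labels_true = [0, 0, 1, 1]
labels_pred = [1, 1, 0, 0]

# C[i, j] = number of samples in true class i assigned to predicted cluster j.
C = contingency_matrix(labels_true, labels_pred)  # [[0, 2], [2, 0]]
```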
sklearn.metrics.cluster.pair_confusion_matrix(labels_true, labels_pred) [source] Pair confusion matrix arising from two clusterings. The pair confusion matrix \(C\) computes a 2 by 2 similarity matrix between two clusterings by considering all pairs of samples and counting pairs that are assigned into the same or into different clusters under the true and predicted clusterings. Considering a pair of samples that is clustered together a positive pair, then as in binary classification the count of true negatives is \(C_{00}\), false negatives is \(C_{10}\), true positives is \(C_{11}\) and false positives is \(C_{01}\). Read more in the User Guide. Parameters labels_truearray-like of shape (n_samples,), dtype=integral Ground truth class labels to be used as a reference. labels_predarray-like of shape (n_samples,), dtype=integral Cluster labels to evaluate. Returns Cndarray of shape (2, 2), dtype=np.int64 The contingency matrix. See also rand_score Rand Score adjusted_rand_score Adjusted Rand Score adjusted_mutual_info_score Adjusted Mutual Information Examples Perfectly matching labelings have all non-zero entries on the diagonal regardless of actual label values: >>> from sklearn.metrics.cluster import pair_confusion_matrix >>> pair_confusion_matrix([0, 0, 1, 1], [1, 1, 0, 0]) array([[8, 0], [0, 4]]... Labelings that assign all classes members to the same clusters are complete but may not always be pure, hence penalized, and have some off-diagonal non-zero entries: >>> pair_confusion_matrix([0, 0, 1, 2], [0, 0, 1, 1]) array([[8, 2], [0, 2]]... Note that the matrix is not symmetric.
sklearn.modules.generated.sklearn.metrics.cluster.pair_confusion_matrix#sklearn.metrics.cluster.pair_confusion_matrix
sklearn.metrics.cohen_kappa_score(y1, y2, *, labels=None, weights=None, sample_weight=None) [source] Cohen’s kappa: a statistic that measures inter-annotator agreement. This function computes Cohen’s kappa [1], a score that expresses the level of agreement between two annotators on a classification problem. It is defined as \[\kappa = (p_o - p_e) / (1 - p_e)\] where \(p_o\) is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and \(p_e\) is the expected agreement when both annotators assign labels randomly. \(p_e\) is estimated using a per-annotator empirical prior over the class labels [2]. Read more in the User Guide. Parameters y1array of shape (n_samples,) Labels assigned by the first annotator. y2array of shape (n_samples,) Labels assigned by the second annotator. The kappa statistic is symmetric, so swapping y1 and y2 doesn’t change the value. labelsarray-like of shape (n_classes,), default=None List of labels to index the matrix. This may be used to select a subset of labels. If None, all labels that appear at least once in y1 or y2 are used. weights{‘linear’, ‘quadratic’}, default=None Weighting type to calculate the score. None means no weighting; “linear” means linear weighting; “quadratic” means quadratic weighting. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns kappafloat The kappa statistic, which is a number between -1 and 1. The maximum value means complete agreement; zero or lower means chance agreement. References 1 J. Cohen (1960). “A coefficient of agreement for nominal scales”. Educational and Psychological Measurement 20(1):37-46. doi:10.1177/001316446002000104. 2 R. Artstein and M. Poesio (2008). “Inter-coder agreement for computational linguistics”. Computational Linguistics 34(4):555-596. 3 Wikipedia entry for the Cohen’s kappa.
sklearn.modules.generated.sklearn.metrics.cohen_kappa_score#sklearn.metrics.cohen_kappa_score
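A small worked sketch of the formula above (labels chosen for illustration): the annotators agree on 3 of 4 samples, so \(p_o = 0.75\); the per-annotator marginals give \(p_e = 0.5 \cdot 0.75 + 0.5 \cdot 0.25 = 0.5\), hence \(\kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5\):

```python
from sklearn.metrics import cohen_kappa_score

y1 = [0, 1, 0, 1]  # first annotator
y2 = [0, 1, 0, 0]  # second annotator disagrees on the last sample
kappa = cohen_kappa_score(y1, y2)  # (0.75 - 0.5) / (1 - 0.5) = 0.5
```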
sklearn.metrics.completeness_score(labels_true, labels_pred) [source] Completeness metric of a cluster labeling given a ground truth. A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster. This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way. This metric is not symmetric: switching label_true with label_pred will return the homogeneity_score which will be different in general. Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] ground truth class labels to be used as a reference labels_predarray-like of shape (n_samples,) cluster labels to evaluate Returns completenessfloat score between 0.0 and 1.0. 1.0 stands for perfectly complete labeling See also homogeneity_score v_measure_score References 1 Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A conditional entropy-based external cluster evaluation measure Examples Perfect labelings are complete: >>> from sklearn.metrics.cluster import completeness_score >>> completeness_score([0, 0, 1, 1], [1, 1, 0, 0]) 1.0 Non-perfect labelings that assign all classes members to the same clusters are still complete: >>> print(completeness_score([0, 0, 1, 1], [0, 0, 0, 0])) 1.0 >>> print(completeness_score([0, 1, 2, 3], [0, 0, 1, 1])) 0.999... If classes members are split across different clusters, the assignment cannot be complete: >>> print(completeness_score([0, 0, 1, 1], [0, 1, 0, 1])) 0.0 >>> print(completeness_score([0, 0, 0, 0], [0, 1, 2, 3])) 0.0
sklearn.modules.generated.sklearn.metrics.completeness_score#sklearn.metrics.completeness_score
class sklearn.metrics.ConfusionMatrixDisplay(confusion_matrix, *, display_labels=None) [source] Confusion Matrix visualization. It is recommended to use plot_confusion_matrix to create a ConfusionMatrixDisplay. All parameters are stored as attributes. Read more in the User Guide. Parameters confusion_matrixndarray of shape (n_classes, n_classes) Confusion matrix. display_labelsndarray of shape (n_classes,), default=None Display labels for plot. If None, display labels are set from 0 to n_classes - 1. Attributes im_matplotlib AxesImage Image representing the confusion matrix. text_ndarray of shape (n_classes, n_classes), dtype=matplotlib Text, or None Array of matplotlib Text objects containing the cell values. None if include_values is false. ax_matplotlib Axes Axes with confusion matrix. figure_matplotlib Figure Figure containing the confusion matrix. See also confusion_matrix Compute Confusion Matrix to evaluate the accuracy of a classification. plot_confusion_matrix Plot Confusion Matrix. Examples >>> from sklearn.datasets import make_classification >>> from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay >>> from sklearn.model_selection import train_test_split >>> from sklearn.svm import SVC >>> X, y = make_classification(random_state=0) >>> X_train, X_test, y_train, y_test = train_test_split(X, y, ... random_state=0) >>> clf = SVC(random_state=0) >>> clf.fit(X_train, y_train) SVC(random_state=0) >>> predictions = clf.predict(X_test) >>> cm = confusion_matrix(y_test, predictions, labels=clf.classes_) >>> disp = ConfusionMatrixDisplay(confusion_matrix=cm, ... display_labels=clf.classes_) >>> disp.plot() Methods plot(*[, include_values, cmap, …]) Plot visualization. plot(*, include_values=True, cmap='viridis', xticks_rotation='horizontal', values_format=None, ax=None, colorbar=True) [source] Plot visualization. Parameters include_valuesbool, default=True Includes values in confusion matrix. cmapstr or matplotlib Colormap, default=’viridis’ Colormap recognized by matplotlib. 
xticks_rotation{‘vertical’, ‘horizontal’} or float, default=’horizontal’ Rotation of xtick labels. values_formatstr, default=None Format specification for values in confusion matrix. If None, the format specification is ‘d’ or ‘.2g’ whichever is shorter. axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. colorbarbool, default=True Whether or not to add a colorbar to the plot. Returns displayConfusionMatrixDisplay
sklearn.modules.generated.sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay
plot(*, include_values=True, cmap='viridis', xticks_rotation='horizontal', values_format=None, ax=None, colorbar=True) [source] Plot visualization. Parameters include_valuesbool, default=True Includes values in confusion matrix. cmapstr or matplotlib Colormap, default=’viridis’ Colormap recognized by matplotlib. xticks_rotation{‘vertical’, ‘horizontal’} or float, default=’horizontal’ Rotation of xtick labels. values_formatstr, default=None Format specification for values in confusion matrix. If None, the format specification is ‘d’ or ‘.2g’ whichever is shorter. axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. colorbarbool, default=True Whether or not to add a colorbar to the plot. Returns displayConfusionMatrixDisplay
sklearn.modules.generated.sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay.plot
sklearn.metrics.confusion_matrix(y_true, y_pred, *, labels=None, sample_weight=None, normalize=None) [source] Compute confusion matrix to evaluate the accuracy of a classification. By definition a confusion matrix \(C\) is such that \(C_{i, j}\) is equal to the number of observations known to be in group \(i\) and predicted to be in group \(j\). Thus in binary classification, the count of true negatives is \(C_{0,0}\), false negatives is \(C_{1,0}\), true positives is \(C_{1,1}\) and false positives is \(C_{0,1}\). Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) Estimated targets as returned by a classifier. labelsarray-like of shape (n_classes), default=None List of labels to index the matrix. This may be used to reorder or select a subset of labels. If None is given, those that appear at least once in y_true or y_pred are used in sorted order. sample_weightarray-like of shape (n_samples,), default=None Sample weights. New in version 0.18. normalize{‘true’, ‘pred’, ‘all’}, default=None Normalizes confusion matrix over the true (rows), predicted (columns) conditions or all the population. If None, confusion matrix will not be normalized. Returns Cndarray of shape (n_classes, n_classes) Confusion matrix whose i-th row and j-th column entry indicates the number of samples with true label being i-th class and predicted label being j-th class. See also plot_confusion_matrix Plot Confusion Matrix. ConfusionMatrixDisplay Confusion Matrix visualization. References 1 Wikipedia entry for the Confusion matrix (Wikipedia and other references may use a different convention for axes). 
Examples >>> from sklearn.metrics import confusion_matrix >>> y_true = [2, 0, 2, 2, 0, 1] >>> y_pred = [0, 0, 2, 2, 0, 2] >>> confusion_matrix(y_true, y_pred) array([[2, 0, 0], [0, 0, 1], [1, 0, 2]]) >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"] >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"] >>> confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"]) array([[2, 0, 0], [0, 0, 1], [1, 0, 2]]) In the binary case, we can extract true positives, etc as follows: >>> tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel() >>> (tn, fp, fn, tp) (0, 2, 1, 1)
sklearn.modules.generated.sklearn.metrics.confusion_matrix#sklearn.metrics.confusion_matrix
sklearn.metrics.consensus_score(a, b, *, similarity='jaccard') [source] The similarity of two sets of biclusters. Similarity between individual biclusters is computed. Then the best matching between sets is found using the Hungarian algorithm. The final score is the sum of similarities divided by the size of the larger set. Read more in the User Guide. Parameters a(rows, columns) Tuple of row and column indicators for a set of biclusters. b(rows, columns) Another set of biclusters like a. similarity‘jaccard’ or callable, default=’jaccard’ May be the string “jaccard” to use the Jaccard coefficient, or any function that takes four arguments, each of which is a 1d indicator vector: (a_rows, a_columns, b_rows, b_columns). References Hochreiter, Bodenhofer, et. al., 2010. FABIA: factor analysis for bicluster acquisition.
sklearn.modules.generated.sklearn.metrics.consensus_score#sklearn.metrics.consensus_score
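A minimal sketch of the (rows, columns) indicator format described above (the 4x4 layout and indicator arrays are illustrative assumptions): comparing a bicluster set to itself yields perfect Jaccard similarity for each matched pair, so the score is 1.0:

```python
import numpy as np
from sklearn.metrics import consensus_score

# Two biclusters over a 4x4 data matrix, each given as boolean
# row and column indicator vectors (one row per bicluster).
rows = np.array([[True, True, False, False],
                 [False, False, True, True]])
cols = np.array([[True, True, False, False],
                 [False, False, True, True]])

score = consensus_score((rows, cols), (rows, cols))  # 1.0
```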
sklearn.metrics.coverage_error(y_true, y_score, *, sample_weight=None) [source] Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample. Ties in y_scores are broken by giving maximal rank that would have been assigned to all tied values. Note: Our implementation’s score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels. Read more in the User Guide. Parameters y_truendarray of shape (n_samples, n_labels) True binary labels in binary indicator format. y_scorendarray of shape (n_samples, n_labels) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns coverage_errorfloat References 1 Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
sklearn.modules.generated.sklearn.metrics.coverage_error#sklearn.metrics.coverage_error
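A small worked sketch (scores chosen for illustration): in the first sample the single true label has the 2nd-highest score, in the second it has the 3rd-highest, so we must go 2 and 3 deep respectively to cover all true labels, giving an average of 2.5:

```python
import numpy as np
from sklearn.metrics import coverage_error

y_true = np.array([[1, 0, 0],
                   [0, 0, 1]])
y_score = np.array([[0.75, 0.5, 1.0],
                    [1.0, 0.2, 0.1]])

err = coverage_error(y_true, y_score)  # (2 + 3) / 2 = 2.5
```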
sklearn.metrics.davies_bouldin_score(X, labels) [source] Computes the Davies-Bouldin score. The score is defined as the average similarity measure of each cluster with its most similar cluster, where similarity is the ratio of within-cluster distances to between-cluster distances. Thus, clusters which are farther apart and less dispersed will result in a better score. The minimum score is zero, with lower values indicating better clustering. Read more in the User Guide. New in version 0.20. Parameters Xarray-like of shape (n_samples, n_features) A list of n_features-dimensional data points. Each row corresponds to a single data point. labelsarray-like of shape (n_samples,) Predicted labels for each sample. Returns score: float The resulting Davies-Bouldin score. References 1 Davies, David L.; Bouldin, Donald W. (1979). “A Cluster Separation Measure”. IEEE Transactions on Pattern Analysis and Machine Intelligence. PAMI-1 (2): 224-227
sklearn.modules.generated.sklearn.metrics.davies_bouldin_score#sklearn.metrics.davies_bouldin_score
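As an illustrative sketch only (the synthetic data and KMeans configuration are assumptions, not part of the entry above), the score follows the same (X, labels) calling convention as the other clustering metrics:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

X, _ = make_blobs(n_samples=150, centers=3, random_state=42)
labels = KMeans(n_clusters=3, random_state=42, n_init=10).fit_predict(X)
score = davies_bouldin_score(X, labels)  # lower is better; 0 is the minimum
```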
sklearn.metrics.dcg_score(y_true, y_score, *, k=None, log_base=2, sample_weight=None, ignore_ties=False) [source] Compute Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. This ranking metric yields a high value if true labels are ranked high by y_score. Usually the Normalized Discounted Cumulative Gain (NDCG, computed by ndcg_score) is preferred. Parameters y_truendarray of shape (n_samples, n_labels) True targets of multilabel classification, or true scores of entities to be ranked. y_scorendarray of shape (n_samples, n_labels) Target scores, can either be probability estimates, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). kint, default=None Only consider the highest k scores in the ranking. If None, use all outputs. log_basefloat, default=2 Base of the logarithm used for the discount. A low value means a sharper discount (top results are more important). sample_weightndarray of shape (n_samples,), default=None Sample weights. If None, all samples are given the same weight. ignore_tiesbool, default=False Assume that there are no ties in y_score (which is likely to be the case if y_score is continuous) for efficiency gains. Returns discounted_cumulative_gainfloat The averaged sample DCG scores. See also ndcg_score The Discounted Cumulative Gain divided by the Ideal Discounted Cumulative Gain (the DCG obtained for a perfect ranking), in order to have a score between 0 and 1. References Wikipedia entry for Discounted Cumulative Gain. Jarvelin, K., & Kekalainen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4), 422-446. Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May). A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th Annual Conference on Learning Theory (COLT 2013). McSherry, F., & Najork, M. 
(2008, March). Computing information retrieval performance measures efficiently in the presence of tied scores. In European conference on information retrieval (pp. 414-421). Springer, Berlin, Heidelberg. Examples >>> import numpy as np >>> from sklearn.metrics import dcg_score >>> # we have ground-truth relevance of some answers to a query: >>> true_relevance = np.asarray([[10, 0, 0, 1, 5]]) >>> # we predict scores for the answers >>> scores = np.asarray([[.1, .2, .3, 4, 70]]) >>> dcg_score(true_relevance, scores) 9.49... >>> # we can set k to truncate the sum; only top k answers contribute >>> dcg_score(true_relevance, scores, k=2) 5.63... >>> # now we have some ties in our prediction >>> scores = np.asarray([[1, 0, 0, 0, 1]]) >>> # by default ties are averaged, so here we get the average true >>> # relevance of our top predictions: (10 + 5) / 2 = 7.5 >>> dcg_score(true_relevance, scores, k=1) 7.5 >>> # we can choose to ignore ties for faster results, but only >>> # if we know there aren't ties in our scores, otherwise we get >>> # wrong results: >>> dcg_score(true_relevance, ... scores, k=1, ignore_ties=True) 5.0
sklearn.modules.generated.sklearn.metrics.dcg_score#sklearn.metrics.dcg_score
class sklearn.metrics.DetCurveDisplay(*, fpr, fnr, estimator_name=None, pos_label=None) [source] DET curve visualization. It is recommended to use plot_det_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. New in version 0.24. Parameters fprndarray False positive rate. fnrndarray False negative rate. estimator_namestr, default=None Name of estimator. If None, the estimator name is not shown. pos_labelstr or int, default=None The label of the positive class. Attributes line_matplotlib Artist DET Curve. ax_matplotlib Axes Axes with DET Curve. figure_matplotlib Figure Figure containing the curve. See also det_curve Compute error rates for different probability thresholds. plot_det_curve Plot detection error tradeoff (DET) curve. Examples >>> import matplotlib.pyplot as plt >>> import numpy as np >>> from sklearn import metrics >>> y = np.array([0, 0, 1, 1]) >>> pred = np.array([0.1, 0.4, 0.35, 0.8]) >>> fpr, fnr, thresholds = metrics.det_curve(y, pred) >>> display = metrics.DetCurveDisplay( ... fpr=fpr, fnr=fnr, estimator_name='example estimator' ... ) >>> display.plot() >>> plt.show() Methods plot([ax, name]) Plot visualization. plot(ax=None, *, name=None, **kwargs) [source] Plot visualization. Parameters axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. namestr, default=None Name of DET curve for labeling. If None, use the name of the estimator. Returns displayDetCurveDisplay Object that stores computed values.
sklearn.modules.generated.sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay
plot(ax=None, *, name=None, **kwargs) [source] Plot visualization. Parameters axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. namestr, default=None Name of DET curve for labeling. If None, use the name of the estimator. Returns displayDetCurveDisplay Object that stores computed values.
sklearn.modules.generated.sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay.plot
sklearn.metrics.det_curve(y_true, y_score, pos_label=None, sample_weight=None) [source] Compute error rates for different probability thresholds. Note This metric is used for evaluation of ranking and error tradeoffs of a binary classification task. Read more in the User Guide. New in version 0.24. Parameters y_truendarray of shape (n_samples,) True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos_label should be explicitly given. y_scorendarray of shape (n_samples,) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). pos_labelint or str, default=None The label of the positive class. When pos_label=None, if y_true is in {-1, 1} or {0, 1}, pos_label is set to 1, otherwise an error will be raised. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns fprndarray of shape (n_thresholds,) False positive rate (FPR) such that element i is the false positive rate of predictions with score >= thresholds[i]. This is occasionally referred to as false acceptance probability or fall-out. fnrndarray of shape (n_thresholds,) False negative rate (FNR) such that element i is the false negative rate of predictions with score >= thresholds[i]. This is occasionally referred to as false rejection or miss rate. thresholdsndarray of shape (n_thresholds,) Decreasing score values. See also plot_det_curve Plot detection error tradeoff (DET) curve. DetCurveDisplay DET curve visualization. roc_curve Compute Receiver operating characteristic (ROC) curve. precision_recall_curve Compute precision-recall curve. Examples >>> import numpy as np >>> from sklearn.metrics import det_curve >>> y_true = np.array([0, 0, 1, 1]) >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8]) >>> fpr, fnr, thresholds = det_curve(y_true, y_scores) >>> fpr array([0.5, 0.5, 0. ]) >>> fnr array([0. , 0.5, 0.5]) >>> thresholds array([0.35, 0.4 , 0.8 ])
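A common use of the fpr/fnr arrays returned above is locating an approximate equal error rate (EER), the operating point where the two error rates cross. The following sketch is illustrative only (not part of the scikit-learn API); the values are hard-coded from the example output above, where with real data they would come from det_curve:

```python
# fpr/fnr/thresholds hard-coded from the det_curve example above.
fpr = [0.5, 0.5, 0.0]
fnr = [0.0, 0.5, 0.5]
thresholds = [0.35, 0.4, 0.8]

# Pick the index where |FPR - FNR| is smallest and average the two rates.
i = min(range(len(fpr)), key=lambda k: abs(fpr[k] - fnr[k]))
eer = (fpr[i] + fnr[i]) / 2
print(thresholds[i], eer)  # crossing at threshold 0.4 with EER 0.5
```

With a finer threshold grid, interpolating between the two straddling points gives a smoother EER estimate; the argmin above is the coarse version.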
sklearn.metrics.explained_variance_score(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] Explained variance regression score function. Best possible score is 1.0, lower values are worse. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) or (n_samples, n_outputs) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. multioutput{‘raw_values’, ‘uniform_average’, ‘variance_weighted’} or array-like of shape (n_outputs,), default=’uniform_average’ Defines aggregating of multiple output scores. Array-like value defines weights used to average scores. ‘raw_values’ : Returns a full set of scores in case of multioutput input. ‘uniform_average’ : Scores of all outputs are averaged with uniform weight. ‘variance_weighted’ : Scores of all outputs are averaged, weighted by the variances of each individual output. Returns scorefloat or ndarray of floats The explained variance or ndarray if ‘multioutput’ is ‘raw_values’. Notes This is not a symmetric function. Examples >>> from sklearn.metrics import explained_variance_score >>> y_true = [3, -0.5, 2, 7] >>> y_pred = [2.5, 0.0, 2, 8] >>> explained_variance_score(y_true, y_pred) 0.957... >>> y_true = [[0.5, 1], [-1, 1], [7, -6]] >>> y_pred = [[0, 2], [-1, 2], [8, -5]] >>> explained_variance_score(y_true, y_pred, multioutput='uniform_average') 0.983...
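The score above follows the definition explained_variance = 1 - Var[y_true - y_pred] / Var[y_true], using biased (population) variances. A minimal pure-Python sketch of that formula, as an illustration only (the library implementation additionally handles sample weights and multioutput aggregation):

```python
def explained_variance(y_true, y_pred):
    # Population variance: mean of squared deviations from the mean.
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # 1 - Var[residuals] / Var[y_true]
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    return 1 - var(residuals) / var(y_true)

print(explained_variance([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))  # ~0.957
```

Note the residual variance is taken around the residual mean, which is why this score (unlike r2_score) is insensitive to a constant offset in the predictions.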
sklearn.metrics.f1_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the F1 score, also known as balanced F-score or F-measure. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score is equal. The formula for the F1 score is: F1 = 2 * (precision * recall) / (precision + recall) In the multi-class and multi-label case, this is the average of the F1 score of each class with weighting depending on the average parameter. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. labelsarray-like, default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Changed in version 0.17: Parameter labels improved for multiclass problem. pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only. average{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} or None, default=’binary’ This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned.
Otherwise, this determines the type of averaging performed on the data: 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall. 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score). sample_weightarray-like of shape (n_samples,), default=None Sample weights. zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division, i.e. when all predictions and labels are negative. If set to “warn”, this acts as 0, but warnings are also raised. Returns f1_scorefloat or array of float, shape = [n_unique_labels] F1 score of the positive class in binary classification or weighted average of the F1 scores of each class for the multiclass task. See also fbeta_score, precision_recall_fscore_support, jaccard_score multilabel_confusion_matrix Notes When true positive + false positive == 0, precision is undefined. When true positive + false negative == 0, recall is undefined. In such cases, by default the metric will be set to 0, as will f-score, and UndefinedMetricWarning will be raised. This behavior can be modified with zero_division. References 1 Wikipedia entry for the F1-score. Examples >>> from sklearn.metrics import f1_score >>> y_true = [0, 1, 2, 0, 1, 2] >>> y_pred = [0, 2, 1, 0, 0, 1] >>> f1_score(y_true, y_pred, average='macro') 0.26... 
>>> f1_score(y_true, y_pred, average='micro') 0.33... >>> f1_score(y_true, y_pred, average='weighted') 0.26... >>> f1_score(y_true, y_pred, average=None) array([0.8, 0. , 0. ]) >>> y_true = [0, 0, 0, 0, 0, 0] >>> y_pred = [0, 0, 0, 0, 0, 0] >>> f1_score(y_true, y_pred, zero_division=1) 1.0...
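The per-class values in the average=None output above can be reproduced from the F1 formula directly. A small illustrative sketch (not the library implementation) that counts true/false positives by hand for one class, treated one-vs-rest:

```python
def binary_f1(y_true, y_pred, pos_label=1):
    # F1 = 2 * (precision * recall) / (precision + recall)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == pos_label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != pos_label and p == pos_label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p != pos_label)
    if tp == 0:
        return 0.0  # mirrors the zero_division=0 behaviour
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Class 0 of the multiclass example above, one-vs-rest:
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]
print(binary_f1(y_true, y_pred, pos_label=0))  # 0.8, the first entry of average=None
```

Averaging such per-class values unweighted gives 'macro'; weighting them by class support gives 'weighted'.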
sklearn.metrics.fbeta_score(y_true, y_pred, *, beta, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the F-beta score. The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0. The beta parameter determines the weight of recall in the combined score. beta < 1 lends more weight to precision, while beta > 1 favors recall (beta -> 0 considers only precision, beta -> +inf only recall). Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. betafloat Determines the weight of recall in the combined score. labelsarray-like, default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Changed in version 0.17: Parameter labels improved for multiclass problem. pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only. average{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} or None, default=’binary’ This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: 'binary': Only report results for the class specified by pos_label. 
This is applicable only if targets (y_{true,pred}) are binary. 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall. 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score). sample_weightarray-like of shape (n_samples,), default=None Sample weights. zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division, i.e. when all predictions and labels are negative. If set to “warn”, this acts as 0, but warnings are also raised. Returns fbeta_scorefloat (if average is not None) or array of float, shape = [n_unique_labels] F-beta score of the positive class in binary classification or weighted average of the F-beta score of each class for the multiclass task. See also precision_recall_fscore_support, multilabel_confusion_matrix Notes When true positive + false positive == 0 or true positive + false negative == 0, f-score returns 0 and raises UndefinedMetricWarning. This behavior can be modified with zero_division. References 1 R. Baeza-Yates and B. Ribeiro-Neto (2011). Modern Information Retrieval. Addison Wesley, pp. 327-328. 2 Wikipedia entry for the F1-score. Examples >>> from sklearn.metrics import fbeta_score >>> y_true = [0, 1, 2, 0, 1, 2] >>> y_pred = [0, 2, 1, 0, 0, 1] >>> fbeta_score(y_true, y_pred, average='macro', beta=0.5) 0.23... >>> fbeta_score(y_true, y_pred, average='micro', beta=0.5) 0.33... >>> fbeta_score(y_true, y_pred, average='weighted', beta=0.5) 0.23... 
>>> fbeta_score(y_true, y_pred, average=None, beta=0.5) array([0.71..., 0. , 0. ])
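The beta weighting can be made explicit with the general formula F_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall), which reduces to F1 at beta=1. An illustrative sketch (assumed helper, not a library function) verifying the first entry of the average=None output above, where class 0 one-vs-rest has tp=2, fp=1, fn=0:

```python
def fbeta_from_counts(tp, fp, fn, beta):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Class 0 of the example above: precision = 2/3, recall = 1.
print(fbeta_from_counts(2, 1, 0, beta=0.5))  # ~0.714, first entry of average=None
```

With beta=0.5 the precision term dominates, which is why the 0.71... here is below the 0.8 that F1 gives the same class.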