set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it is possible to update each component of a nested object.

Parameters
**params : dict
Estimator parameters.

Returns
self : estimator instance
Estimator instance.
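The <component>__<parameter> syntax can be sketched with a small Pipeline; the step names 'scale' and 'clf' below are illustrative choices, not part of any API:

```python
# Sketch: updating a nested parameter with the <component>__<parameter>
# syntax. The step names 'scale' and 'clf' are hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
# 'clf__C' targets the C parameter of the 'clf' step.
pipe.set_params(clf__C=10.0)
print(pipe.get_params()["clf__C"])  # 10.0
```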
transform(X) [source]
Transform dataset.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input data to be transformed. Use dtype=np.float32 for maximum efficiency. Sparse matrices are also supported; use a sparse csr_matrix for maximum efficiency.

Returns
X_transformed : sparse matrix of shape (n_samples, n_out)
Transformed dataset.
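A minimal sketch of the transform output, assuming toy one-dimensional data: the result is a sparse one-hot encoding of the leaves each sample lands in, with one row per input sample.

```python
# Sketch: RandomTreesEmbedding.transform returns a sparse one-hot
# encoding of leaf membership, one row per input sample.
import numpy as np
from scipy.sparse import issparse
from sklearn.ensemble import RandomTreesEmbedding

X = np.array([[0.0], [0.1], [1.0], [1.1]], dtype=np.float32)
embedder = RandomTreesEmbedding(n_estimators=3, random_state=0).fit(X)
X_transformed = embedder.transform(X)
print(issparse(X_transformed))  # True
print(X_transformed.shape[0])   # 4, one row per input sample
```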
class sklearn.ensemble.StackingClassifier(estimators, final_estimator=None, *, cv=None, stack_method='auto', n_jobs=None, passthrough=False, verbose=0) [source]
Stack of estimators with a final classifier. Stacked generalization consists of stacking the outputs of the individual estimators and using a classifier to compute the final prediction. Stacking makes it possible to exploit the strength of each individual estimator by using their outputs as the input of a final estimator. Note that estimators_ are fitted on the full X while final_estimator_ is trained on cross-validated predictions of the base estimators obtained with cross_val_predict. Read more in the User Guide. New in version 0.22.

Parameters
estimators : list of (str, estimator)
Base estimators which will be stacked together. Each element of the list is defined as a tuple of a string (i.e. name) and an estimator instance. An estimator can be set to 'drop' using set_params.
final_estimator : estimator, default=None
A classifier which will be used to combine the base estimators. The default classifier is a LogisticRegression.
cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy used in cross_val_predict to train final_estimator. Possible inputs for cv are: None, to use the default 5-fold cross-validation; an integer, to specify the number of folds in a (Stratified) KFold; an object to be used as a cross-validation generator; an iterable yielding (train, test) splits. For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used; in all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Note: a larger number of splits provides no benefit if the number of training samples is large enough, while the training time will increase. cv is not used for model evaluation but for generating the predictions that train final_estimator.
stack_method : {'auto', 'predict_proba', 'decision_function', 'predict'}, default='auto'
Method called on each base estimator. It can be: 'auto', which will try to invoke, for each estimator, 'predict_proba', 'decision_function' or 'predict' in that order; or one of 'predict_proba', 'decision_function' or 'predict'. If the method is not implemented by the estimator, an error is raised.
n_jobs : int, default=None
The number of jobs to run in parallel when fitting all the estimators. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the Glossary for more details.
passthrough : bool, default=False
When False, only the predictions of the estimators are used as training data for final_estimator. When True, the final_estimator is trained on the predictions as well as on the original training data.
verbose : int, default=0
Verbosity level.

Attributes
classes_ : ndarray of shape (n_classes,)
Class labels.
estimators_ : list of estimators
The elements of the estimators parameter, having been fitted on the training data. If an estimator has been set to 'drop', it will not appear in estimators_.
named_estimators_ : Bunch
Attribute to access any fitted sub-estimator by name.
final_estimator_ : estimator
The classifier which predicts given the output of estimators_.
stack_method_ : list of str
The method used by each base estimator.

Notes
When predict_proba is used by each estimator (i.e. most of the time for stack_method='auto', or specifically for stack_method='predict_proba'), the first column predicted by each estimator is dropped in the case of a binary classification problem, since the two columns would be perfectly collinear.

References
[1] Wolpert, David H. "Stacked generalization." Neural Networks 5.2 (1992): 241-259.

Examples
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.svm import LinearSVC
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.ensemble import StackingClassifier
>>> X, y = load_iris(return_X_y=True)
>>> estimators = [
... ('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
... ('svr', make_pipeline(StandardScaler(),
... LinearSVC(random_state=42)))
... ]
>>> clf = StackingClassifier(
... estimators=estimators, final_estimator=LogisticRegression()
... )
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, stratify=y, random_state=42
... )
>>> clf.fit(X_train, y_train).score(X_test, y_test)
0.9...
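A self-contained sketch of how stack_method='auto' resolves per estimator: the expected values below assume the forest exposes predict_proba while LinearSVC only exposes decision_function.

```python
# Sketch: stack_method_ records the method chosen for each base
# estimator under stack_method='auto'.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
clf = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=5, random_state=0)),
        ("svc", LinearSVC(random_state=0)),
    ],
    final_estimator=LogisticRegression(),
).fit(X, y)
print(clf.stack_method_)  # ['predict_proba', 'decision_function']
```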
Methods
decision_function(X) Predict decision function for samples in X using final_estimator_.decision_function.
fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X, **predict_params) Predict target for X.
predict_proba(X) Predict class probabilities for X using final_estimator_.predict_proba.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return class labels or probabilities for X for each estimator.
decision_function(X) [source]
Predict decision function for samples in X using final_estimator_.decision_function.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.

Returns
decisions : ndarray of shape (n_samples,), (n_samples, n_classes), or (n_samples, n_classes * (n_classes-1) / 2)
The decision function computed by the final estimator.
fit(X, y, sample_weight=None) [source]
Fit the estimators.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.

Returns
self : object
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
X : array-like of shape (n_samples, n_features)
Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_params : dict
Additional fit parameters.

Returns
X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter.

Parameters
deep : bool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.

property n_features_in_
Number of features seen during fit.
predict(X, **predict_params) [source]
Predict target for X.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
**predict_params : dict of str -> obj
Parameters to the predict call made by the final_estimator. Note that this may be used to return uncertainties from some estimators with return_std or return_cov. Be aware that it only accounts for uncertainty in the final estimator.

Returns
y_pred : ndarray of shape (n_samples,) or (n_samples, n_output)
Predicted targets.
predict_proba(X) [source]
Predict class probabilities for X using final_estimator_.predict_proba.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.

Returns
probabilities : ndarray of shape (n_samples, n_classes) or list of ndarray of shape (n_output,)
The class probabilities of the input samples.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires, for each sample, that each label set be correctly predicted.

Parameters
X : array-like of shape (n_samples, n_features)
Test samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.

Returns
score : float
Mean accuracy of self.predict(X) w.r.t. y.
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators.

Parameters
**params : keyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the ensemble, the individual estimators of the estimators parameter can also be set, or removed by setting them to 'drop'.
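Dropping a base estimator by name can be sketched as follows; the estimator names 'rf' and 'lr' are illustrative choices:

```python
# Sketch: removing a base estimator from the stack via set_params.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

clf = StackingClassifier(estimators=[
    ("rf", RandomForestClassifier(n_estimators=5)),
    ("lr", LogisticRegression()),
])
clf.set_params(rf="drop")
# The estimators parameter now records 'rf' as dropped.
print(dict(clf.get_params()["estimators"])["rf"])  # 'drop'
```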
transform(X) [source]
Return class labels or probabilities for X for each estimator.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.

Returns
y_preds : ndarray of shape (n_samples, n_estimators) or (n_samples, n_classes * n_estimators)
Prediction outputs for each estimator.
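A sketch of the (n_samples, n_classes * n_estimators) shape, assuming predict_proba-based stacking on iris (3 classes, no binary column dropping) with two base forests:

```python
# Sketch: transform yields n_classes * n_estimators columns when each
# base estimator contributes predict_proba output on a multiclass task.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier

X, y = load_iris(return_X_y=True)
clf = StackingClassifier(estimators=[
    ("rf1", RandomForestClassifier(n_estimators=5, random_state=0)),
    ("rf2", RandomForestClassifier(n_estimators=5, random_state=1)),
]).fit(X, y)
print(clf.transform(X).shape)  # (150, 6): 3 classes * 2 estimators
```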
Examples using sklearn.ensemble.StackingClassifier
Release Highlights for scikit-learn 0.22
class sklearn.ensemble.StackingRegressor(estimators, final_estimator=None, *, cv=None, n_jobs=None, passthrough=False, verbose=0) [source]
Stack of estimators with a final regressor. Stacked generalization consists of stacking the outputs of the individual estimators and using a regressor to compute the final prediction. Stacking makes it possible to exploit the strength of each individual estimator by using their outputs as the input of a final estimator. Note that estimators_ are fitted on the full X while final_estimator_ is trained on cross-validated predictions of the base estimators obtained with cross_val_predict. Read more in the User Guide. New in version 0.22.

Parameters
estimators : list of (str, estimator)
Base estimators which will be stacked together. Each element of the list is defined as a tuple of a string (i.e. name) and an estimator instance. An estimator can be set to 'drop' using set_params.
final_estimator : estimator, default=None
A regressor which will be used to combine the base estimators. The default regressor is a RidgeCV.
cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy used in cross_val_predict to train final_estimator. Possible inputs for cv are: None, to use the default 5-fold cross-validation; an integer, to specify the number of folds in a (Stratified) KFold; an object to be used as a cross-validation generator; an iterable yielding (train, test) splits. For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used; in all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Note: a larger number of splits provides no benefit if the number of training samples is large enough, while the training time will increase. cv is not used for model evaluation but for generating the predictions that train final_estimator.
n_jobs : int, default=None
The number of jobs to run in parallel for the fit of all estimators. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the Glossary for more details.
passthrough : bool, default=False
When False, only the predictions of the estimators are used as training data for final_estimator. When True, the final_estimator is trained on the predictions as well as on the original training data.
verbose : int, default=0
Verbosity level.

Attributes
estimators_ : list of estimator
The elements of the estimators parameter, having been fitted on the training data. If an estimator has been set to 'drop', it will not appear in estimators_.
named_estimators_ : Bunch
Attribute to access any fitted sub-estimator by name.
final_estimator_ : estimator
The regressor which combines the predictions of the fitted base estimators.

References
[1] Wolpert, David H. "Stacked generalization." Neural Networks 5.2 (1992): 241-259.

Examples
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import RidgeCV
>>> from sklearn.svm import LinearSVR
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import StackingRegressor
>>> X, y = load_diabetes(return_X_y=True)
>>> estimators = [
... ('lr', RidgeCV()),
... ('svr', LinearSVR(random_state=42))
... ]
>>> reg = StackingRegressor(
... estimators=estimators,
... final_estimator=RandomForestRegressor(n_estimators=10,
... random_state=42)
... )
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=42
... )
>>> reg.fit(X_train, y_train).score(X_test, y_test)
0.3...
Methods
fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X, **predict_params) Predict target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return the predictions for X for each estimator.
fit(X, y, sample_weight=None) [source]
Fit the estimators.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.

Returns
self : object
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
X : array-like of shape (n_samples, n_features)
Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_params : dict
Additional fit parameters.

Returns
X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter.

Parameters
deep : bool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.

property n_features_in_
Number of features seen during fit.
predict(X, **predict_params) [source]
Predict target for X.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
**predict_params : dict of str -> obj
Parameters to the predict call made by the final_estimator. Note that this may be used to return uncertainties from some estimators with return_std or return_cov. Be aware that it only accounts for uncertainty in the final estimator.

Returns
y_pred : ndarray of shape (n_samples,) or (n_samples, n_output)
Predicted targets.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \(1 - \frac{u}{v}\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters
X : array-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.

Returns
score : float
\(R^2\) of self.predict(X) w.r.t. y.

Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition, to setting the parameters of the estimator, the individual estimator of the estimators can also be set, or can be removed by setting them to ‘drop’.
transform(X) [source]
Return the predictions for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns
y_predsndarray of shape (n_samples, n_estimators)
Prediction outputs for each estimator. | sklearn.modules.generated.sklearn.ensemble.stackingregressor#sklearn.ensemble.StackingRegressor |
sklearn.ensemble.StackingRegressor
class sklearn.ensemble.StackingRegressor(estimators, final_estimator=None, *, cv=None, n_jobs=None, passthrough=False, verbose=0) [source]
Stack of estimators with a final regressor. Stacked generalization consists of stacking the outputs of the individual estimators and using a regressor to compute the final prediction. Stacking makes it possible to exploit the strength of each individual estimator by using their outputs as the input of a final estimator. Note that estimators_ are fitted on the full X while final_estimator_ is trained on cross-validated predictions of the base estimators obtained with cross_val_predict. Read more in the User Guide. New in version 0.22. Parameters
estimatorslist of (str, estimator)
Base estimators which will be stacked together. Each element of the list is defined as a tuple of string (i.e. name) and an estimator instance. An estimator can be set to ‘drop’ using set_params.
final_estimatorestimator, default=None
A regressor which will be used to combine the base estimators. The default regressor is a RidgeCV.
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy used in cross_val_predict to train final_estimator. Possible inputs for cv are: None, to use the default 5-fold cross-validation; an integer, to specify the number of folds in a (Stratified)KFold; an object to be used as a cross-validation generator; an iterable yielding (train, test) splits. For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used; in all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Note A larger number of splits provides no benefit if the number of training samples is large enough; it only increases the training time. cv is not used for model evaluation but for prediction.
n_jobsint, default=None
The number of jobs to run in parallel for fit of all estimators. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
passthroughbool, default=False
When False, only the predictions of estimators will be used as training data for final_estimator. When True, the final_estimator is trained on the predictions as well as the original training data.
verboseint, default=0
Verbosity level. Attributes
estimators_list of estimator
The elements of the estimators parameter, having been fitted on the training data. If an estimator has been set to 'drop', it will not appear in estimators_.
named_estimators_Bunch
Attribute to access any fitted sub-estimators by name.
final_estimator_estimator
The regressor which combines the base estimators, fitted on the cross-validated predictions. References
1
Wolpert, David H. “Stacked generalization.” Neural networks 5.2 (1992): 241-259. Examples >>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import RidgeCV
>>> from sklearn.svm import LinearSVR
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import StackingRegressor
>>> X, y = load_diabetes(return_X_y=True)
>>> estimators = [
... ('lr', RidgeCV()),
... ('svr', LinearSVR(random_state=42))
... ]
>>> reg = StackingRegressor(
... estimators=estimators,
... final_estimator=RandomForestRegressor(n_estimators=10,
... random_state=42)
... )
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=42
... )
>>> reg.fit(X_train, y_train).score(X_test, y_test)
0.3...
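The training scheme described above (base estimators fitted on the full X, final estimator trained on out-of-fold predictions) can be sketched manually with cross_val_predict. This is only an illustration of the idea, not the internal implementation:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVR

X, y = load_diabetes(return_X_y=True)
base_estimators = [RidgeCV(), LinearSVR(random_state=42, max_iter=10000)]

# Column i holds the out-of-fold (cv=5) predictions of base estimator i,
# so the final estimator never sees predictions made on training folds.
meta_X = np.column_stack(
    [cross_val_predict(est, X, y, cv=5) for est in base_estimators]
)
final_estimator = RidgeCV().fit(meta_X, y)
print(meta_X.shape)  # (442, 2): one column per base estimator
```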
Methods
fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X, **predict_params) Predict target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return the predictions for X for each estimator.
fit(X, y, sample_weight=None) [source]
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. Returns
selfobject
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.
property n_features_in_
Number of features seen during fit.
predict(X, **predict_params) [source]
Predict target for X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
**predict_paramsdict of str -> obj
Parameters to the predict called by the final_estimator. Note that this may be used to return uncertainties from some estimators with return_std or return_cov. Be aware that it only accounts for uncertainty in the final estimator. Returns
y_predndarray of shape (n_samples,) or (n_samples, n_output)
Predicted targets.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
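As an illustration of the formula above, the \(R^2\) that score computes can be reproduced by hand; the target and prediction arrays below are made up for the example:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
r2 = 1 - u / v
print(round(r2, 3))  # 0.949
```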
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the ensemble, the individual estimators in estimators can also be set, or removed by setting them to ‘drop’.
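For example, a parameter of a named base estimator can be updated with the nested <name>__<parameter> syntax; the 'ridge' name below is just an illustrative choice:

```python
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge

reg = StackingRegressor(estimators=[('ridge', Ridge(alpha=1.0))])
reg.set_params(ridge__alpha=0.1)          # nested <name>__<parameter> syntax
print(reg.get_params()['ridge__alpha'])   # 0.1
reg.set_params(ridge='drop')              # remove the base estimator entirely
print(reg.get_params()['ridge'])          # 'drop'
```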
transform(X) [source]
Return the predictions for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns
y_predsndarray of shape (n_samples, n_estimators)
Prediction outputs for each estimator.
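A short sketch of transform on a fitted stack, reusing the estimators from the example above: each base estimator contributes one column of predictions.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.svm import LinearSVR

X, y = load_diabetes(return_X_y=True)
reg = StackingRegressor(
    estimators=[('lr', RidgeCV()),
                ('svr', LinearSVR(random_state=42, max_iter=10000))]
).fit(X, y)

# One column of predictions per (non-dropped) base estimator.
print(reg.transform(X).shape)  # (442, 2)
```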
Examples using sklearn.ensemble.StackingRegressor
Combine predictors using stacking | sklearn.modules.generated.sklearn.ensemble.stackingregressor |
class sklearn.ensemble.VotingClassifier(estimators, *, voting='hard', weights=None, n_jobs=None, flatten_transform=True, verbose=False) [source]
Soft Voting/Majority Rule classifier for unfitted estimators. Read more in the User Guide. New in version 0.17. Parameters
estimatorslist of (str, estimator) tuples
Invoking the fit method on the VotingClassifier will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to 'drop' using set_params. Changed in version 0.21: 'drop' is accepted. Using None was deprecated in 0.22 and support was removed in 0.24.
voting{‘hard’, ‘soft’}, default=’hard’
If ‘hard’, uses predicted class labels for majority rule voting. Else if ‘soft’, predicts the class label based on the argmax of the sums of the predicted probabilities, which is recommended for an ensemble of well-calibrated classifiers.
weightsarray-like of shape (n_classifiers,), default=None
Sequence of weights (float or int) to weight the occurrences of predicted class labels (hard voting) or class probabilities before averaging (soft voting). Uses uniform weights if None.
n_jobsint, default=None
The number of jobs to run in parallel for fit. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.18.
flatten_transformbool, default=True
Affects the shape of the transform output only when voting=’soft’. If voting=’soft’ and flatten_transform=True, the transform method returns a matrix of shape (n_samples, n_classifiers * n_classes). If flatten_transform=False, it returns (n_classifiers, n_samples, n_classes).
verbosebool, default=False
If True, the time elapsed while fitting will be printed as it is completed. New in version 0.23. Attributes
estimators_list of classifiers
The collection of fitted sub-estimators as defined in estimators that are not ‘drop’.
named_estimators_Bunch
Attribute to access any fitted sub-estimators by name. New in version 0.20.
classes_array-like of shape (n_predictions,)
The class labels. See also
VotingRegressor
Prediction voting regressor. Examples >>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.ensemble import RandomForestClassifier, VotingClassifier
>>> clf1 = LogisticRegression(multi_class='multinomial', random_state=1)
>>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
>>> clf3 = GaussianNB()
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> eclf1 = VotingClassifier(estimators=[
... ('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')
>>> eclf1 = eclf1.fit(X, y)
>>> print(eclf1.predict(X))
[1 1 1 2 2 2]
>>> np.array_equal(eclf1.named_estimators_.lr.predict(X),
... eclf1.named_estimators_['lr'].predict(X))
True
>>> eclf2 = VotingClassifier(estimators=[
... ('lr', clf1), ('rf', clf2), ('gnb', clf3)],
... voting='soft')
>>> eclf2 = eclf2.fit(X, y)
>>> print(eclf2.predict(X))
[1 1 1 2 2 2]
>>> eclf3 = VotingClassifier(estimators=[
... ('lr', clf1), ('rf', clf2), ('gnb', clf3)],
... voting='soft', weights=[2,1,1],
... flatten_transform=True)
>>> eclf3 = eclf3.fit(X, y)
>>> print(eclf3.predict(X))
[1 1 1 2 2 2]
>>> print(eclf3.transform(X).shape)
(6, 6)
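The two voting rules can be sketched with plain NumPy; the labels, probabilities and weights below are made up for illustration:

```python
import numpy as np

weights = np.array([3, 1, 1])

# Hard voting: weighted majority over the predicted labels.
labels = np.array([0, 1, 1])                 # one predicted label per classifier
votes = np.bincount(labels, weights=weights, minlength=2)
print(votes, votes.argmax())                 # [3. 2.] 0 -- the weight-3 vote wins

# Soft voting: argmax of the weighted average of class probabilities.
proba = np.array([[0.9, 0.1],                # classifier 1
                  [0.4, 0.6],                # classifier 2
                  [0.3, 0.7]])               # classifier 3
avg = np.average(proba, axis=0, weights=weights)
print(avg.argmax())                          # 0
```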
Methods
fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Return class labels or probabilities for each estimator.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X) Predict class labels for X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return class labels or probabilities for X for each estimator.
fit(X, y, sample_weight=None) [source]
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. New in version 0.18. Returns
selfobject
fit_transform(X, y=None, **fit_params) [source]
Return class labels or probabilities for each estimator. Fits the estimators and returns their predictions for X. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
Input samples.
yndarray of shape (n_samples,), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.
predict(X) [source]
Predict class labels for X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
majarray-like of shape (n_samples,)
Predicted class labels.
property predict_proba
Compute probabilities of possible outcomes for samples in X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
avgarray-like of shape (n_samples, n_classes)
Weighted average probability for each class per sample.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the ensemble, the individual estimators in estimators can also be set, or removed by setting them to ‘drop’.
transform(X) [source]
Return class labels or probabilities for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns
probabilities_or_labels
If voting='soft' and flatten_transform=True:
returns ndarray of shape (n_samples, n_classifiers * n_classes), being class probabilities calculated by each classifier.
If voting='soft' and flatten_transform=False:
returns ndarray of shape (n_classifiers, n_samples, n_classes).
If voting='hard':
returns ndarray of shape (n_samples, n_classifiers), being class labels predicted by each classifier. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier
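The three output shapes can be checked directly with two classifiers and two classes, reusing the toy data from the example above:

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])
estimators = [('lr', LogisticRegression(random_state=1)), ('gnb', GaussianNB())]

soft = VotingClassifier(estimators, voting='soft').fit(X, y)
print(soft.transform(X).shape)      # (6, 4): (n_samples, n_classifiers * n_classes)

soft_3d = VotingClassifier(estimators, voting='soft',
                           flatten_transform=False).fit(X, y)
print(soft_3d.transform(X).shape)   # (2, 6, 2): (n_classifiers, n_samples, n_classes)

hard = VotingClassifier(estimators, voting='hard').fit(X, y)
print(hard.transform(X).shape)      # (6, 2): (n_samples, n_classifiers)
```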
Examples using sklearn.ensemble.VotingClassifier
Plot the decision boundaries of a VotingClassifier
Plot class probabilities calculated by the VotingClassifier | sklearn.modules.generated.sklearn.ensemble.votingclassifier |
fit(X, y, sample_weight=None) [source]
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. New in version 0.18. Returns
selfobject | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.fit |
fit_transform(X, y=None, **fit_params) [source]
Return class labels or probabilities for each estimator. Return predictions for X for each estimator. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
Input samples
yndarray of shape (n_samples,), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.fit_transform |
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.get_params |
predict(X) [source]
Predict class labels for X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
majarray-like of shape (n_samples,)
Predicted class labels. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.predict |
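The majority-vote rule behind predict can be sketched on a tiny, hypothetical dataset (the data and the three base estimators below are illustrative, not taken from this page):

```python
# Sketch: hard voting predicts the majority class label across classifiers.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = VotingClassifier([('lr', LogisticRegression()),
                        ('nb', GaussianNB()),
                        ('dt', DecisionTreeClassifier(random_state=0))],
                       voting='hard').fit(X, y)

# On this trivially separable toy data all three classifiers agree,
# so the majority vote reproduces the training labels.
print(clf.predict(X))  # [0 0 1 1]
```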
property predict_proba
Compute probabilities of possible outcomes for samples in X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
avgarray-like of shape (n_samples, n_classes)
Weighted average probability for each class per sample. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.predict_proba |
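The weighted average described above can be checked by hand: the soft-voting probabilities are the weighted mean of each fitted sub-estimator's predict_proba output. The data, estimators, and weights below are illustrative assumptions:

```python
# Sketch: reproduce VotingClassifier.predict_proba from the sub-estimators.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = VotingClassifier([('lr', LogisticRegression()), ('nb', GaussianNB())],
                       voting='soft', weights=[2, 1]).fit(X, y)

# Weighted mean over the fitted sub-estimators' probabilities.
manual = np.average([est.predict_proba(X) for est in clf.estimators_],
                    axis=0, weights=[2, 1])
print(np.allclose(clf.predict_proba(X), manual))  # True
```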
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.score |
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the estimator, the individual estimators contained in estimators can also be set, or removed by setting them to ‘drop’. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.set_params |
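A minimal sketch of both usages described above, with illustrative estimator names ('lr', 'dt'): nested keys follow the <estimator_name>__<parameter> convention, and a named estimator is disabled with 'drop'.

```python
# Sketch: set_params on an ensemble, nested and top-level.
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

clf = VotingClassifier([('lr', LogisticRegression()),
                        ('dt', DecisionTreeClassifier())])

# Set a parameter of the contained 'lr' estimator.
clf.set_params(lr__C=10.0)
print(clf.get_params()['lr__C'])  # 10.0

# Remove the decision tree from the ensemble entirely.
clf.set_params(dt='drop')
print(clf.get_params()['dt'])  # 'drop'
```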
transform(X) [source]
Return class labels or probabilities for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns
probabilities_or_labels
If voting='soft' and flatten_transform=True:
returns ndarray of shape (n_samples, n_classifiers * n_classes), being class probabilities calculated by each classifier.
If voting='soft' and flatten_transform=False:
ndarray of shape (n_classifiers, n_samples, n_classes)
If voting='hard':
ndarray of shape (n_samples, n_classifiers), being class labels predicted by each classifier. | sklearn.modules.generated.sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier.transform |
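The return shapes listed above can be inspected directly; the dataset and the two base estimators are illustrative assumptions:

```python
# Sketch: transform output shapes under hard and soft voting.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
estimators = [('lr', LogisticRegression()),
              ('dt', DecisionTreeClassifier(random_state=0))]

# Hard voting: one predicted label per classifier.
hard = VotingClassifier(estimators, voting='hard').fit(X, y)
print(hard.transform(X).shape)  # (4, 2) -> (n_samples, n_classifiers)

# Soft voting with flatten_transform=True (the default): probabilities
# from all classifiers concatenated along the feature axis.
soft = VotingClassifier(estimators, voting='soft').fit(X, y)
print(soft.transform(X).shape)  # (4, 4) -> (n_samples, n_classifiers * n_classes)
```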
class sklearn.ensemble.VotingRegressor(estimators, *, weights=None, n_jobs=None, verbose=False) [source]
Prediction voting regressor for unfitted estimators. A voting regressor is an ensemble meta-estimator that fits several base regressors, each on the whole dataset. Then it averages the individual predictions to form a final prediction. Read more in the User Guide. New in version 0.21. Parameters
estimatorslist of (str, estimator) tuples
Invoking the fit method on the VotingRegressor will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to 'drop' using set_params. Changed in version 0.21: 'drop' is accepted. Using None was deprecated in 0.22 and support was removed in 0.24.
weightsarray-like of shape (n_regressors,), default=None
Sequence of weights (float or int) to weight the occurrences of predicted values before averaging. Uses uniform weights if None.
n_jobsint, default=None
The number of jobs to run in parallel for fit. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verbosebool, default=False
If True, the time elapsed while fitting will be printed as it is completed. New in version 0.23. Attributes
estimators_list of regressors
The collection of fitted sub-estimators as defined in estimators that are not ‘drop’.
named_estimators_Bunch
Attribute to access any fitted sub-estimators by name. New in version 0.20. See also
VotingClassifier
Soft Voting/Majority Rule classifier. Examples >>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import VotingRegressor
>>> r1 = LinearRegression()
>>> r2 = RandomForestRegressor(n_estimators=10, random_state=1)
>>> X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
>>> y = np.array([2, 6, 12, 20, 30, 42])
>>> er = VotingRegressor([('lr', r1), ('rf', r2)])
>>> print(er.fit(X, y).predict(X))
[ 3.3 5.7 11.8 19.7 28. 40.3]
Methods
fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Fit the estimators and return predictions for X from each estimator.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X) Predict regression target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return predictions for X for each estimator.
fit(X, y, sample_weight=None) [source]
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. Returns
selfobject
Fitted estimator.
fit_transform(X, y=None, **fit_params) [source]
Fit all estimators and return predictions for X from each estimator. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
Input samples.
yndarray of shape (n_samples,), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.
predict(X) [source]
Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the estimators in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
yndarray of shape (n_samples,)
The predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the estimator, the individual estimators contained in estimators can also be set, or removed by setting them to ‘drop’.
transform(X) [source]
Return predictions for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
predictions: ndarray of shape (n_samples, n_regressors)
Values predicted by each regressor. | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor |
sklearn.ensemble.VotingRegressor
class sklearn.ensemble.VotingRegressor(estimators, *, weights=None, n_jobs=None, verbose=False) [source]
Prediction voting regressor for unfitted estimators. A voting regressor is an ensemble meta-estimator that fits several base regressors, each on the whole dataset. Then it averages the individual predictions to form a final prediction. Read more in the User Guide. New in version 0.21. Parameters
estimatorslist of (str, estimator) tuples
Invoking the fit method on the VotingRegressor will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to 'drop' using set_params. Changed in version 0.21: 'drop' is accepted. Using None was deprecated in 0.22 and support was removed in 0.24.
weightsarray-like of shape (n_regressors,), default=None
Sequence of weights (float or int) to weight the occurrences of predicted values before averaging. Uses uniform weights if None.
n_jobsint, default=None
The number of jobs to run in parallel for fit. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verbosebool, default=False
If True, the time elapsed while fitting will be printed as it is completed. New in version 0.23. Attributes
estimators_list of regressors
The collection of fitted sub-estimators as defined in estimators that are not ‘drop’.
named_estimators_Bunch
Attribute to access any fitted sub-estimators by name. New in version 0.20. See also
VotingClassifier
Soft Voting/Majority Rule classifier. Examples >>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import VotingRegressor
>>> r1 = LinearRegression()
>>> r2 = RandomForestRegressor(n_estimators=10, random_state=1)
>>> X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
>>> y = np.array([2, 6, 12, 20, 30, 42])
>>> er = VotingRegressor([('lr', r1), ('rf', r2)])
>>> print(er.fit(X, y).predict(X))
[ 3.3 5.7 11.8 19.7 28. 40.3]
Methods
fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Fit the estimators and return predictions for X from each estimator.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X) Predict regression target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return predictions for X for each estimator.
fit(X, y, sample_weight=None) [source]
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. Returns
selfobject
Fitted estimator.
fit_transform(X, y=None, **fit_params) [source]
Fit all estimators and return predictions for X from each estimator. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
Input samples.
yndarray of shape (n_samples,), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.
predict(X) [source]
Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the estimators in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
yndarray of shape (n_samples,)
The predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the estimator, the individual estimators contained in estimators can also be set, or removed by setting them to ‘drop’.
transform(X) [source]
Return predictions for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
predictions: ndarray of shape (n_samples, n_regressors)
Values predicted by each regressor.
Examples using sklearn.ensemble.VotingRegressor
Plot individual and voting regression predictions | sklearn.modules.generated.sklearn.ensemble.votingregressor |
fit(X, y, sample_weight=None) [source]
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. Returns
selfobject
Fitted estimator. | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit all estimators and return predictions for X from each estimator. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
Input samples.
yndarray of shape (n_samples,), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor.fit_transform |
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well. | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor.get_params |
predict(X) [source]
Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the estimators in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
yndarray of shape (n_samples,)
The predicted values. | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor.predict |
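The averaging described above can be verified against transform, which exposes the per-regressor predictions; the dataset and the two regressors are illustrative:

```python
# Sketch: VotingRegressor.predict is the (unweighted here) mean of the
# individual regressors' predictions, i.e. of the transform output columns.
import numpy as np
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

X = np.array([[1], [2], [3], [4], [5], [6]], dtype=float)
y = np.array([2.0, 6.0, 12.0, 20.0, 30.0, 42.0])

er = VotingRegressor([('lr', LinearRegression()),
                      ('dt', DecisionTreeRegressor(random_state=0))]).fit(X, y)

per_estimator = er.transform(X)  # shape (n_samples, n_regressors)
print(np.allclose(er.predict(X), per_estimator.mean(axis=1)))  # True
```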
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor.score |
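A worked instance of the \(R^2\) definition above, computed directly from \(u\) and \(v\) (the numbers are illustrative):

```python
# Worked example of R^2 = 1 - u/v with
#   u = ((y_true - y_pred) ** 2).sum()   (residual sum of squares)
#   v = ((y_true - y_true.mean()) ** 2).sum()  (total sum of squares)
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.5, 2.0, 2.5, 4.0])

u = ((y_true - y_pred) ** 2).sum()         # 0.25 + 0 + 0.25 + 0 = 0.5
v = ((y_true - y_true.mean()) ** 2).sum()  # mean 2.5 -> 2.25+0.25+0.25+2.25 = 5.0

print(1 - u / v)                   # 0.9
print(r2_score(y_true, y_pred))    # 0.9, the same value
```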
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the estimator, the individual estimators contained in estimators can also be set, or removed by setting them to ‘drop’. | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor.set_params |
transform(X) [source]
Return predictions for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
predictions: ndarray of shape (n_samples, n_regressors)
Values predicted by each regressor. | sklearn.modules.generated.sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor.transform |
class sklearn.exceptions.ConvergenceWarning [source]
Custom warning to capture convergence problems. Changed in version 0.18: Moved from sklearn.utils. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.convergencewarning#sklearn.exceptions.ConvergenceWarning |
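A minimal sketch of catching this warning, assuming a solver deliberately capped at too few iterations; the model, data, and max_iter value are illustrative:

```python
# Sketch: trigger and detect a ConvergenceWarning.
import warnings
from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    # One iteration is not enough for the solver to converge.
    LogisticRegression(max_iter=1).fit(X, y)

print(any(issubclass(w.category, ConvergenceWarning) for w in caught))  # True
```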
sklearn.exceptions.ConvergenceWarning
class sklearn.exceptions.ConvergenceWarning [source]
Custom warning to capture convergence problems. Changed in version 0.18: Moved from sklearn.utils. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
Examples using sklearn.exceptions.ConvergenceWarning
Multiclass sparse logistic regression on 20newgroups
Early stopping of Stochastic Gradient Descent
Visualization of MLP weights on MNIST
Compare Stochastic learning strategies for MLPClassifier
Feature discretization | sklearn.modules.generated.sklearn.exceptions.convergencewarning |
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.convergencewarning#sklearn.exceptions.ConvergenceWarning.with_traceback |
sklearn.exceptions.DataConversionWarning
class sklearn.exceptions.DataConversionWarning [source]
Warning used to notify implicit data conversions happening in the code. This warning occurs when some input data needs to be converted or interpreted in a way that may not match the user’s expectations. For example, this warning may occur when the user:
- passes an integer array to a function which expects float input and will convert the input;
- requests a non-copying operation, but a copy is required to meet the implementation’s data-type expectations;
- passes an input whose shape can be interpreted ambiguously.
Changed in version 0.18: Moved from sklearn.utils.validation. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.dataconversionwarning |
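A minimal sketch of one common trigger, a column-vector y where a 1d array is expected; the estimator and data are illustrative:

```python
# Sketch: trigger and detect a DataConversionWarning.
import warnings
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.exceptions import DataConversionWarning

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([[0], [0], [1], [1]])  # shape (4, 1) instead of (4,)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    # The forest converts the column vector to 1d and warns about it.
    RandomForestClassifier(n_estimators=2, random_state=0).fit(X, y)

print(any(issubclass(w.category, DataConversionWarning) for w in caught))  # True
```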
class sklearn.exceptions.DataConversionWarning [source]
Warning used to notify implicit data conversions happening in the code. This warning occurs when some input data needs to be converted or interpreted in a way that may not match the user’s expectations. For example, this warning may occur when the user:
- passes an integer array to a function which expects float input and will convert the input;
- requests a non-copying operation, but a copy is required to meet the implementation’s data-type expectations;
- passes an input whose shape can be interpreted ambiguously.
Changed in version 0.18: Moved from sklearn.utils.validation. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.dataconversionwarning#sklearn.exceptions.DataConversionWarning |
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.dataconversionwarning#sklearn.exceptions.DataConversionWarning.with_traceback |
sklearn.exceptions.DataDimensionalityWarning
class sklearn.exceptions.DataDimensionalityWarning [source]
Custom warning to notify potential issues with data dimensionality. For example, in random projection, this warning is raised when the number of components, which quantifies the dimensionality of the target projection space, is higher than the number of features, which quantifies the dimensionality of the original source space, to imply that the dimensionality of the problem will not be reduced. Changed in version 0.18: Moved from sklearn.utils. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.datadimensionalitywarning |
class sklearn.exceptions.DataDimensionalityWarning [source]
Custom warning to notify potential issues with data dimensionality. For example, in random projection, this warning is raised when the number of components, which quantifies the dimensionality of the target projection space, is higher than the number of features, which quantifies the dimensionality of the original source space, to imply that the dimensionality of the problem will not be reduced. Changed in version 0.18: Moved from sklearn.utils. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.datadimensionalitywarning#sklearn.exceptions.DataDimensionalityWarning |
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.datadimensionalitywarning#sklearn.exceptions.DataDimensionalityWarning.with_traceback |
class sklearn.exceptions.EfficiencyWarning [source]
Warning used to notify the user of inefficient computation. This warning notifies the user that the computation may be inefficient, for a reason that may be stated in the warning message. This may be subclassed into a more specific Warning class. New in version 0.18. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.efficiencywarning#sklearn.exceptions.EfficiencyWarning |
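Since this is an ordinary Warning subclass, the standard warnings machinery applies; a minimal sketch, escalating the warning to an error (the warning message here is illustrative):

```python
# Sketch: treat EfficiencyWarning as an error via the warnings filter.
import warnings
from sklearn.exceptions import EfficiencyWarning

warnings.filterwarnings('error', category=EfficiencyWarning)
try:
    warnings.warn("illustrative inefficiency", EfficiencyWarning)
except EfficiencyWarning as e:
    print(type(e).__name__)  # EfficiencyWarning
```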
sklearn.exceptions.EfficiencyWarning
class sklearn.exceptions.EfficiencyWarning [source]
Warning used to notify the user of inefficient computation. This warning notifies the user that the computation may be inefficient, for a reason that may be stated in the warning message. This may be subclassed into a more specific Warning class. New in version 0.18. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.efficiencywarning |
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.efficiencywarning#sklearn.exceptions.EfficiencyWarning.with_traceback |
class sklearn.exceptions.FitFailedWarning [source]
Warning class used if there is an error while fitting the estimator. This Warning is used in meta estimators GridSearchCV and RandomizedSearchCV and the cross-validation helper function cross_val_score to warn when there is an error while fitting the estimator. Changed in version 0.18: Moved from sklearn.cross_validation. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.fitfailedwarning#sklearn.exceptions.FitFailedWarning |
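A minimal sketch, assuming a hypothetical estimator that fails on one of two folds; with error_score set, the failing fit produces this warning and a nan score instead of raising (the estimator class and data are illustrative):

```python
# Sketch: trigger a FitFailedWarning during cross-validation.
import warnings
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.exceptions import FitFailedWarning
from sklearn.model_selection import KFold, cross_val_score

class SometimesFailing(ClassifierMixin, BaseEstimator):
    """Hypothetical classifier whose fit fails on one of the two folds."""
    def fit(self, X, y):
        if X[:, 0].min() == 0.0:  # the training fold that contains sample 0
            raise ValueError("deliberate failure")
        self.classes_ = np.unique(y)
        return self
    def predict(self, X):
        return np.zeros(len(X), dtype=int)

X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0, 1] * 5)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    scores = cross_val_score(SometimesFailing(), X, y,
                             cv=KFold(n_splits=2), error_score=np.nan)

print(np.isnan(scores).sum())  # 1: the failed fold scores nan
print(any(issubclass(w.category, FitFailedWarning) for w in caught))  # True
```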
sklearn.exceptions.FitFailedWarning
class sklearn.exceptions.FitFailedWarning [source]
Warning class used if there is an error while fitting the estimator. This Warning is used in meta estimators GridSearchCV and RandomizedSearchCV and the cross-validation helper function cross_val_score to warn when there is an error while fitting the estimator. Changed in version 0.18: Moved from sklearn.cross_validation. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.fitfailedwarning |
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.fitfailedwarning#sklearn.exceptions.FitFailedWarning.with_traceback |
sklearn.exceptions.NotFittedError
class sklearn.exceptions.NotFittedError [source]
Exception class to raise if estimator is used before fitting. This class inherits from both ValueError and AttributeError to help with exception handling and backward compatibility. Attributes
args
Examples >>> from sklearn.svm import LinearSVC
>>> from sklearn.exceptions import NotFittedError
>>> try:
... LinearSVC().predict([[1, 2], [2, 3], [3, 4]])
... except NotFittedError as e:
... print(repr(e))
NotFittedError("This LinearSVC instance is not fitted yet. Call 'fit' with
appropriate arguments before using this estimator."...)
Changed in version 0.18: Moved from sklearn.utils.validation. Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.notfittederror |
class sklearn.exceptions.NotFittedError [source]
Exception class to raise if estimator is used before fitting. This class inherits from both ValueError and AttributeError to help with exception handling and backward compatibility. Attributes
args
Examples >>> from sklearn.svm import LinearSVC
>>> from sklearn.exceptions import NotFittedError
>>> try:
... LinearSVC().predict([[1, 2], [2, 3], [3, 4]])
... except NotFittedError as e:
... print(repr(e))
NotFittedError("This LinearSVC instance is not fitted yet. Call 'fit' with
appropriate arguments before using this estimator."...)
Changed in version 0.18: Moved from sklearn.utils.validation. Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.notfittederror#sklearn.exceptions.NotFittedError |
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.notfittederror#sklearn.exceptions.NotFittedError.with_traceback |
class sklearn.exceptions.UndefinedMetricWarning [source]
Warning used when the metric is invalid. Changed in version 0.18: Moved from sklearn.base. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.undefinedmetricwarning#sklearn.exceptions.UndefinedMetricWarning |
sklearn.exceptions.UndefinedMetricWarning
class sklearn.exceptions.UndefinedMetricWarning [source]
Warning used when the metric is invalid. Changed in version 0.18: Moved from sklearn.base. Attributes
args
Methods
with_traceback Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.undefinedmetricwarning |
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. | sklearn.modules.generated.sklearn.exceptions.undefinedmetricwarning#sklearn.exceptions.UndefinedMetricWarning.with_traceback |
class sklearn.feature_extraction.DictVectorizer(*, dtype=<class 'numpy.float64'>, separator='=', sparse=True, sort=True) [source]
Transforms lists of feature-value mappings to vectors. This transformer turns lists of mappings (dict-like objects) of feature names to feature values into Numpy arrays or scipy.sparse matrices for use with scikit-learn estimators. When feature values are strings, this transformer will do a binary one-hot (aka one-of-K) coding: one boolean-valued feature is constructed for each of the possible string values that the feature can take on. For instance, a feature “f” that can take on the values “ham” and “spam” will become two features in the output, one signifying “f=ham”, the other “f=spam”. If a feature value is a sequence or set of strings, this transformer will iterate over the values and will count the occurrences of each string value. However, note that this transformer will only do a binary one-hot encoding when feature values are of type string. If categorical features are represented as numeric values such as int or iterables of strings, the DictVectorizer can be followed by OneHotEncoder to complete binary one-hot encoding. Features that do not occur in a sample (mapping) will have a zero value in the resulting array/matrix. Read more in the User Guide. Parameters
dtypedtype, default=np.float64
The type of feature values. Passed to Numpy array/scipy.sparse matrix constructors as the dtype argument.
separatorstr, default=”=”
Separator string used when constructing new features for one-hot coding.
sparsebool, default=True
Whether transform should produce scipy.sparse matrices.
sortbool, default=True
Whether feature_names_ and vocabulary_ should be sorted when fitting. Attributes
vocabulary_dict
A dictionary mapping feature names to feature indices.
feature_names_list
A list of length n_features containing the feature names (e.g., “f=ham” and “f=spam”). See also
FeatureHasher
Performs vectorization using only a hash function.
sklearn.preprocessing.OrdinalEncoder
Handles nominal/categorical features encoded as columns of arbitrary data types. Examples >>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer(sparse=False)
>>> D = [{'foo': 1, 'bar': 2}, {'foo': 3, 'baz': 1}]
>>> X = v.fit_transform(D)
>>> X
array([[2., 0., 1.],
[0., 1., 3.]])
>>> v.inverse_transform(X) == [{'bar': 2.0, 'foo': 1.0},
... {'baz': 1.0, 'foo': 3.0}]
True
>>> v.transform({'foo': 4, 'unseen_feature': 3})
array([[0., 0., 4.]])
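The one-hot coding for string values described above can be sketched as follows (feature names here are illustrative):

```python
from sklearn.feature_extraction import DictVectorizer

# A string value is expanded into one boolean column per observed value
# ("f=ham", "f=spam"), while a numeric value keeps a single column.
v = DictVectorizer(sparse=False)
D = [{'f': 'ham', 'n': 1}, {'f': 'spam', 'n': 3}]
X = v.fit_transform(D)

print(v.feature_names_)  # ['f=ham', 'f=spam', 'n']
print(X)
# [[1. 0. 1.]
#  [0. 1. 3.]]
```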
Methods
fit(X[, y]) Learn a list of feature name -> indices mappings.
fit_transform(X[, y]) Learn a list of feature name -> indices mappings and transform X.
get_feature_names() Returns a list of feature names, ordered by their indices.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X[, dict_type]) Transform array or sparse matrix X back to feature mappings.
restrict(support[, indices]) Restrict the features to those in support using feature selection.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform feature->value dicts to array or sparse matrix.
fit(X, y=None) [source]
Learn a list of feature name -> indices mappings. Parameters
XMapping or iterable over Mappings
Dict(s) or Mapping(s) from feature names (arbitrary Python objects) to feature values (strings or convertible to dtype). Changed in version 0.24: Accepts multiple string values for one categorical feature.
y(ignored)
Returns
self
fit_transform(X, y=None) [source]
Learn a list of feature name -> indices mappings and transform X. Like fit(X) followed by transform(X), but does not require materializing X in memory. Parameters
XMapping or iterable over Mappings
Dict(s) or Mapping(s) from feature names (arbitrary Python objects) to feature values (strings or convertible to dtype). Changed in version 0.24: Accepts multiple string values for one categorical feature.
y(ignored)
Returns
Xa{array, sparse matrix}
Feature vectors; always 2-d.
get_feature_names() [source]
Returns a list of feature names, ordered by their indices. If one-of-K coding is applied to categorical features, this will include the constructed feature names but not the original ones.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X, dict_type=<class 'dict'>) [source]
Transform array or sparse matrix X back to feature mappings. X must have been produced by this DictVectorizer’s transform or fit_transform method; it may only have passed through transformers that preserve the number of features and their order. In the case of one-hot/one-of-K coding, the constructed feature names and values are returned rather than the original ones. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Sample matrix.
dict_typetype, default=dict
Constructor for feature mappings. Must conform to the collections.Mapping API. Returns
Dlist of dict_type objects of shape (n_samples,)
Feature mappings for the samples in X.
restrict(support, indices=False) [source]
Restrict the features to those in support using feature selection. This function modifies the estimator in-place. Parameters
supportarray-like
Boolean mask or list of indices (as returned by the get_support member of feature selectors).
indicesbool, default=False
Whether support is a list of indices. Returns
self
Examples >>> from sklearn.feature_extraction import DictVectorizer
>>> from sklearn.feature_selection import SelectKBest, chi2
>>> v = DictVectorizer()
>>> D = [{'foo': 1, 'bar': 2}, {'foo': 3, 'baz': 1}]
>>> X = v.fit_transform(D)
>>> support = SelectKBest(chi2, k=2).fit(X, [0, 1])
>>> v.get_feature_names()
['bar', 'baz', 'foo']
>>> v.restrict(support.get_support())
DictVectorizer()
>>> v.get_feature_names()
['bar', 'foo']
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Transform feature->value dicts to array or sparse matrix. Named features not encountered during fit or fit_transform will be silently ignored. Parameters
XMapping or iterable over Mappings of shape (n_samples,)
Dict(s) or Mapping(s) from feature names (arbitrary Python objects) to feature values (strings or convertible to dtype). Returns
Xa{array, sparse matrix}
Feature vectors; always 2-d. | sklearn.modules.generated.sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer |
class sklearn.feature_extraction.FeatureHasher(n_features=1048576, *, input_type='dict', dtype=<class 'numpy.float64'>, alternate_sign=True) [source]
Implements feature hashing, aka the hashing trick. This class turns sequences of symbolic feature names (strings) into scipy.sparse matrices, using a hash function to compute the matrix column corresponding to a name. The hash function employed is the signed 32-bit version of Murmurhash3. Feature names of type byte string are used as-is. Unicode strings are converted to UTF-8 first, but no Unicode normalization is done. Feature values must be (finite) numbers. This class is a low-memory alternative to DictVectorizer and CountVectorizer, intended for large-scale (online) learning and situations where memory is tight, e.g. when running prediction code on embedded devices. Read more in the User Guide. New in version 0.13. Parameters
n_featuresint, default=2**20
The number of features (columns) in the output matrices. Small numbers of features are likely to cause hash collisions, but large numbers will cause larger coefficient dimensions in linear learners.
input_type{“dict”, “pair”, “string”}, default=”dict”
Either “dict” (the default) to accept dictionaries over (feature_name, value); “pair” to accept pairs of (feature_name, value); or “string” to accept single strings. feature_name should be a string, while value should be a number. In the case of “string”, a value of 1 is implied. The feature_name is hashed to find the appropriate column for the feature. The value’s sign might be flipped in the output (but see alternate_sign, below).
dtypenumpy dtype, default=np.float64
The type of feature values. Passed to scipy.sparse matrix constructors as the dtype argument. Do not set this to bool, np.boolean or any unsigned integer type.
alternate_signbool, default=True
When True, an alternating sign is added to the features so as to approximately conserve the inner product in the hashed space even for small n_features. This approach is similar to sparse random projection. Changed in version 0.19: alternate_sign replaces the now deprecated non_negative parameter. See also
DictVectorizer
Vectorizes string-valued features using a hash table.
sklearn.preprocessing.OneHotEncoder
Handles nominal/categorical features. Examples >>> from sklearn.feature_extraction import FeatureHasher
>>> h = FeatureHasher(n_features=10)
>>> D = [{'dog': 1, 'cat':2, 'elephant':4},{'dog': 2, 'run': 5}]
>>> f = h.transform(D)
>>> f.toarray()
array([[ 0., 0., -4., -1., 0., 0., 0., 0., 0., 2.],
[ 0., 0., 0., -2., -5., 0., 0., 0., 0., 0.]])
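As the input_type parameter above describes, the hasher can also consume raw token sequences directly; a minimal sketch (exact column positions and signs depend on each token's hash, so only the shape is predictable):

```python
from sklearn.feature_extraction import FeatureHasher

# With input_type="string" each token carries an implicit value of 1;
# repeated tokens accumulate (subject to the alternating sign).
h = FeatureHasher(n_features=8, input_type='string')
X = h.transform([['cat', 'dog', 'cat'], ['fish']])

print(X.shape)       # (2, 8)
print(X.toarray())   # exact values depend on each token's hash
```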
Methods
fit([X, y]) No-op.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(raw_X) Transform a sequence of instances to a scipy.sparse matrix.
fit(X=None, y=None) [source]
No-op. This method doesn’t do anything. It exists purely for compatibility with the scikit-learn transformer API. Parameters
Xndarray
Returns
selfFeatureHasher
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(raw_X) [source]
Transform a sequence of instances to a scipy.sparse matrix. Parameters
raw_Xiterable over iterable over raw features, length = n_samples
Samples. Each sample must be an iterable (e.g., a list or tuple) containing/generating feature names (and optionally values, see the input_type constructor argument) which will be hashed. raw_X need not support the len function, so it can be the result of a generator; n_samples is determined on the fly. Returns
Xsparse matrix of shape (n_samples, n_features)
Feature matrix, for use with estimators or further transformers. | sklearn.modules.generated.sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher |
sklearn.feature_extraction.image.extract_patches_2d(image, patch_size, *, max_patches=None, random_state=None) [source]
Reshape a 2D image into a collection of patches. The resulting patches are allocated in a dedicated array. Read more in the User Guide. Parameters
imagendarray of shape (image_height, image_width) or (image_height, image_width, n_channels)
The original image data. For color images, the last dimension specifies the channel: an RGB image would have n_channels=3.
patch_sizetuple of int (patch_height, patch_width)
The dimensions of one patch.
max_patchesint or float, default=None
The maximum number of patches to extract. If max_patches is a float between 0 and 1, it is taken to be a proportion of the total number of patches.
random_stateint, RandomState instance, default=None
Determines the random number generator used for random sampling when max_patches is not None. Use an int to make the randomness deterministic. See Glossary. Returns
patchesarray of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
The collection of patches extracted from the image, where n_patches is either max_patches or the total number of patches that can be extracted. Examples >>> from sklearn.datasets import load_sample_image
>>> from sklearn.feature_extraction import image
>>> # Use the array data from the first image in this dataset:
>>> one_image = load_sample_image("china.jpg")
>>> print('Image shape: {}'.format(one_image.shape))
Image shape: (427, 640, 3)
>>> patches = image.extract_patches_2d(one_image, (2, 2))
>>> print('Patches shape: {}'.format(patches.shape))
Patches shape: (272214, 2, 2, 3)
>>> # Here are just two of these patches:
>>> print(patches[1])
[[[174 201 231]
[174 201 231]]
[[173 200 230]
[173 200 230]]]
>>> print(patches[800])
[[[187 214 243]
[188 215 244]]
[[187 214 243]
[188 215 244]]] | sklearn.modules.generated.sklearn.feature_extraction.image.extract_patches_2d#sklearn.feature_extraction.image.extract_patches_2d |
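To supplement the example above, here is a small sketch of the max_patches and random_state parameters on a toy array (the 5x5 array is invented for illustration):

```python
import numpy as np
from sklearn.feature_extraction import image

img = np.arange(25, dtype=np.float64).reshape(5, 5)
# All (5 - 2 + 1) ** 2 = 16 overlapping 2x2 patches:
all_patches = image.extract_patches_2d(img, (2, 2))
# A reproducible random subset of 4 patches:
subset = image.extract_patches_2d(img, (2, 2), max_patches=4, random_state=0)
print(all_patches.shape, subset.shape)  # (16, 2, 2) (4, 2, 2)
```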
sklearn.feature_extraction.image.grid_to_graph(n_x, n_y, n_z=1, *, mask=None, return_as=<class 'scipy.sparse.coo.coo_matrix'>, dtype=<class 'int'>) [source]
Graph of the pixel-to-pixel connections. Edges exist if 2 voxels are connected. Parameters
n_xint
Dimension in x axis
n_yint
Dimension in y axis
n_zint, default=1
Dimension in z axis
maskndarray of shape (n_x, n_y, n_z), dtype=bool, default=None
An optional mask of the image, to consider only part of the pixels.
return_asnp.ndarray or a sparse matrix class, default=sparse.coo_matrix
The class to use to build the returned adjacency matrix.
dtypedtype, default=int
The data of the returned sparse matrix. By default it is int. Notes For scikit-learn versions 0.14.1 and prior, return_as=np.ndarray was handled by returning a dense np.matrix instance. Going forward, np.ndarray returns an np.ndarray, as expected. For compatibility, user code relying on this method should wrap its calls in np.asarray to avoid type issues. | sklearn.modules.generated.sklearn.feature_extraction.image.grid_to_graph#sklearn.feature_extraction.image.grid_to_graph
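A quick sketch of the connectivity graph for a small grid (the 2x3 dimensions are chosen purely for illustration):

```python
from sklearn.feature_extraction.image import grid_to_graph

# 2 x 3 x 1 voxel grid -> 6 nodes; the adjacency matrix is 6 x 6.
A = grid_to_graph(n_x=2, n_y=3)
print(A.shape)  # (6, 6)
# Connections are undirected, so the adjacency matrix is symmetric.
print((A.toarray() == A.toarray().T).all())  # True
```

A typical use is passing the result as the connectivity argument of AgglomerativeClustering so that only neighboring pixels can be merged.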
sklearn.feature_extraction.image.img_to_graph(img, *, mask=None, return_as=<class 'scipy.sparse.coo.coo_matrix'>, dtype=None) [source]
Graph of the pixel-to-pixel gradient connections. Edges are weighted with the gradient values. Read more in the User Guide. Parameters
imgndarray of shape (height, width) or (height, width, channel)
2D or 3D image.
maskndarray of shape (height, width) or (height, width, channel), dtype=bool, default=None
An optional mask of the image, to consider only part of the pixels.
return_asnp.ndarray or a sparse matrix class, default=sparse.coo_matrix
The class to use to build the returned adjacency matrix.
dtypedtype, default=None
The data of the returned sparse matrix. By default it is the dtype of img. Notes For scikit-learn versions 0.14.1 and prior, return_as=np.ndarray was handled by returning a dense np.matrix instance. Going forward, np.ndarray returns an np.ndarray, as expected. For compatibility, user code relying on this method should wrap its calls in np.asarray to avoid type issues. | sklearn.modules.generated.sklearn.feature_extraction.image.img_to_graph#sklearn.feature_extraction.image.img_to_graph
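A sketch of the gradient-weighted variant on a tiny 2x2 image (pixel values invented for illustration): each off-diagonal entry holds the absolute intensity difference between two 4-connected pixels.

```python
import numpy as np
from sklearn.feature_extraction.image import img_to_graph

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
G = img_to_graph(img)
print(G.shape)  # (4, 4): one node per pixel
# Pixels 0 and 1 are horizontal neighbors with values 0.0 and 1.0,
# so the edge between them carries weight |0.0 - 1.0| = 1.0.
print(G.toarray()[0, 1])  # 1.0
```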
class sklearn.feature_extraction.image.PatchExtractor(*, patch_size=None, max_patches=None, random_state=None) [source]
Extracts patches from a collection of images. Read more in the User Guide. New in version 0.9. Parameters
patch_sizetuple of int (patch_height, patch_width), default=None
The dimensions of one patch.
max_patchesint or float, default=None
The maximum number of patches per image to extract. If max_patches is a float in (0, 1), it is taken to mean a proportion of the total number of patches.
random_stateint, RandomState instance, default=None
Determines the random number generator used for random sampling when max_patches is not None. Use an int to make the randomness deterministic. See Glossary. Examples >>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction import image
>>> # Use the array data from the second image in this dataset:
>>> X = load_sample_images().images[1]
>>> print('Image shape: {}'.format(X.shape))
Image shape: (427, 640, 3)
>>> pe = image.PatchExtractor(patch_size=(2, 2))
>>> pe_fit = pe.fit(X)
>>> pe_trans = pe.transform(X)
>>> print('Patches shape: {}'.format(pe_trans.shape))
Patches shape: (545706, 2, 2)
Methods
fit(X[, y]) Do nothing and return the estimator unchanged.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Transforms the image samples in X into a matrix of patch data.
fit(X, y=None) [source]
Do nothing and return the estimator unchanged. This method is just there to implement the usual API and hence work in pipelines. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Transforms the image samples in X into a matrix of patch data. Parameters
Xndarray of shape (n_samples, image_height, image_width) or (n_samples, image_height, image_width, n_channels)
Array of images from which to extract patches. For color images, the last dimension specifies the channel: an RGB image would have n_channels=3. Returns
patchesarray of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
The collection of patches extracted from the images, where n_patches is either n_samples * max_patches or the total number of patches that can be extracted. | sklearn.modules.generated.sklearn.feature_extraction.image.patchextractor#sklearn.feature_extraction.image.PatchExtractor |
fit(X, y=None) [source]
Do nothing and return the estimator unchanged. This method is just there to implement the usual API and hence work in pipelines. Parameters
Xarray-like of shape (n_samples, n_features)
Training data. | sklearn.modules.generated.sklearn.feature_extraction.image.patchextractor#sklearn.feature_extraction.image.PatchExtractor.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.feature_extraction.image.patchextractor#sklearn.feature_extraction.image.PatchExtractor.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.feature_extraction.image.patchextractor#sklearn.feature_extraction.image.PatchExtractor.set_params |
transform(X) [source]
Transforms the image samples in X into a matrix of patch data. Parameters
Xndarray of shape (n_samples, image_height, image_width) or (n_samples, image_height, image_width, n_channels)
Array of images from which to extract patches. For color images, the last dimension specifies the channel: an RGB image would have n_channels=3. Returns
patchesarray of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
The collection of patches extracted from the images, where n_patches is either n_samples * max_patches or the total number of patches that can be extracted. | sklearn.modules.generated.sklearn.feature_extraction.image.patchextractor#sklearn.feature_extraction.image.PatchExtractor.transform |
sklearn.feature_extraction.image.reconstruct_from_patches_2d(patches, image_size) [source]
Reconstruct the image from all of its patches. Patches are assumed to overlap and the image is constructed by filling in the patches from left to right, top to bottom, averaging the overlapping regions. Read more in the User Guide. Parameters
patchesndarray of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
The complete set of patches. If the patches contain colour information, channels are indexed along the last dimension: RGB patches would have n_channels=3.
image_sizetuple of int (image_height, image_width) or (image_height, image_width, n_channels)
The size of the image that will be reconstructed. Returns
imagendarray of shape image_size
The reconstructed image. | sklearn.modules.generated.sklearn.feature_extraction.image.reconstruct_from_patches_2d#sklearn.feature_extraction.image.reconstruct_from_patches_2d |
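A round-trip sketch (the 4x4 array is invented for illustration): when every overlapping patch is kept, each output pixel is an average of identical values, so the original image is recovered exactly.

```python
import numpy as np
from sklearn.feature_extraction import image

original = np.arange(16, dtype=np.float64).reshape(4, 4)
patches = image.extract_patches_2d(original, (2, 2))           # shape (9, 2, 2)
restored = image.reconstruct_from_patches_2d(patches, (4, 4))
print(np.allclose(original, restored))  # True
```

The averaging only matters when the patches have been modified (e.g., denoised) between extraction and reconstruction; then overlapping estimates are blended.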
class sklearn.feature_extraction.text.CountVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>) [source]
Convert a collection of text documents to a matrix of token counts. This implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix. If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection then the number of features will be equal to the vocabulary size found by analyzing the data. Read more in the User Guide. Parameters
inputstring {‘filename’, ‘file’, ‘content’}, default=’content’
If ‘filename’, the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If ‘file’, the sequence items must have a ‘read’ method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of items that can be of type string or byte.
encodingstring, default=’utf-8’
If bytes or files are given to analyze, this encoding is used to decode.
decode_error{‘strict’, ‘ignore’, ‘replace’}, default=’strict’
Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default, it is ‘strict’, meaning that a UnicodeDecodeError will be raised. Other values are ‘ignore’ and ‘replace’.
strip_accents{‘ascii’, ‘unicode’}, default=None
Remove accents and perform other character normalization during the preprocessing step. ‘ascii’ is a fast method that only works on characters that have a direct ASCII mapping. ‘unicode’ is a slightly slower method that works on any characters. None (default) does nothing. Both ‘ascii’ and ‘unicode’ use NFKD normalization from unicodedata.normalize.
lowercasebool, default=True
Convert all characters to lowercase before tokenizing.
preprocessorcallable, default=None
Override the preprocessing (strip_accents and lowercase) stage while preserving the tokenizing and n-grams generation steps. Only applies if analyzer is not callable.
tokenizercallable, default=None
Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.
stop_wordsstring {‘english’}, list, default=None
If ‘english’, a built-in stop word list for English is used. There are several known issues with ‘english’ and you should consider an alternative (see Using stop words). If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'. If None, no stop words will be used. max_df can be set to a value in the range [0.7, 1.0) to automatically detect and filter stop words based on intra-corpus document frequency of terms.
token_patternstr, default=r”(?u)\b\w\w+\b”
Regular expression denoting what constitutes a “token”, only used if analyzer == 'word'. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator). If there is a capturing group in token_pattern then the captured group content, not the entire match, becomes the token. At most one capturing group is permitted.
ngram_rangetuple (min_n, max_n), default=(1, 1)
The lower and upper boundary of the range of n-values for different word n-grams or char n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used. For example an ngram_range of (1, 1) means only unigrams, (1, 2) means unigrams and bigrams, and (2, 2) means only bigrams. Only applies if analyzer is not callable.
analyzer{‘word’, ‘char’, ‘char_wb’} or callable, default=’word’
Whether the feature should be made of word n-grams or character n-grams. Option ‘char_wb’ creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input. Changed in version 0.21. Since v0.21, if input is filename or file, the data is first read from the file and then passed to the given callable analyzer.
max_dffloat in range [0.0, 1.0] or int, default=1.0
When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.
min_dffloat in range [0.0, 1.0] or int, default=1
When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.
max_featuresint, default=None
If not None, build a vocabulary that only consider the top max_features ordered by term frequency across the corpus. This parameter is ignored if vocabulary is not None.
vocabularyMapping or iterable, default=None
Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents. Indices in the mapping should not be repeated and should not have any gap between 0 and the largest index.
binarybool, default=False
If True, all non zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts.
dtypetype, default=np.int64
Type of the matrix returned by fit_transform() or transform(). Attributes
vocabulary_dict
A mapping of terms to feature indices. fixed_vocabulary_: boolean
True if a fixed vocabulary of term to indices mapping is provided by the user.
stop_words_set
Terms that were ignored because they either occurred in too many documents (max_df), occurred in too few documents (min_df), or were cut off by feature selection (max_features). This is only available if no vocabulary was given. See also
HashingVectorizer, TfidfVectorizer
Notes The stop_words_ attribute can get large and increase the model size when pickling. This attribute is provided only for introspection and can be safely removed using delattr or set to None before pickling. Examples >>> from sklearn.feature_extraction.text import CountVectorizer
>>> corpus = [
... 'This is the first document.',
... 'This document is the second document.',
... 'And this is the third one.',
... 'Is this the first document?',
... ]
>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
[0 2 0 1 0 1 1 0 1]
[1 0 0 1 1 0 1 1 1]
[0 1 1 1 0 0 1 0 1]]
>>> vectorizer2 = CountVectorizer(analyzer='word', ngram_range=(2, 2))
>>> X2 = vectorizer2.fit_transform(corpus)
>>> print(vectorizer2.get_feature_names())
['and this', 'document is', 'first document', 'is the', 'is this',
'second document', 'the first', 'the second', 'the third', 'third one',
'this document', 'this is', 'this the']
>>> print(X2.toarray())
[[0 0 1 1 0 0 1 0 0 0 0 1 0]
[0 1 0 1 0 1 0 1 0 0 1 0 0]
[1 0 0 1 0 0 0 0 1 1 0 1 0]
[0 0 1 0 1 0 1 0 0 0 0 0 1]]
Methods
build_analyzer() Return a callable that handles preprocessing, tokenization and n-grams generation.
build_preprocessor() Return a function to preprocess the text before tokenization.
build_tokenizer() Return a function that splits a string into a sequence of tokens.
decode(doc) Decode the input into a string of unicode symbols.
fit(raw_documents[, y]) Learn a vocabulary dictionary of all tokens in the raw documents.
fit_transform(raw_documents[, y]) Learn the vocabulary dictionary and return document-term matrix.
get_feature_names() Array mapping from feature integer indices to feature name.
get_params([deep]) Get parameters for this estimator.
get_stop_words() Build or fetch the effective stop words list.
inverse_transform(X) Return terms per document with nonzero entries in X.
set_params(**params) Set the parameters of this estimator.
transform(raw_documents) Transform documents to document-term matrix.
build_analyzer() [source]
Return a callable that handles preprocessing, tokenization and n-grams generation. Returns
analyzer: callable
A function to handle preprocessing, tokenization and n-grams generation.
build_preprocessor() [source]
Return a function to preprocess the text before tokenization. Returns
preprocessor: callable
A function to preprocess the text before tokenization.
build_tokenizer() [source]
Return a function that splits a string into a sequence of tokens. Returns
tokenizer: callable
A function to split a string into a sequence of tokens.
decode(doc) [source]
Decode the input into a string of unicode symbols. The decoding strategy depends on the vectorizer parameters. Parameters
docstr
The string to decode. Returns
doc: str
A string of unicode symbols.
fit(raw_documents, y=None) [source]
Learn a vocabulary dictionary of all tokens in the raw documents. Parameters
raw_documentsiterable
An iterable which yields either str, unicode or file objects. Returns
self
fit_transform(raw_documents, y=None) [source]
Learn the vocabulary dictionary and return document-term matrix. This is equivalent to fit followed by transform, but more efficiently implemented. Parameters
raw_documentsiterable
An iterable which yields either str, unicode or file objects. Returns
Xarray of shape (n_samples, n_features)
Document-term matrix.
get_feature_names() [source]
Array mapping from feature integer indices to feature name. Returns
feature_nameslist
A list of feature names.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_stop_words() [source]
Build or fetch the effective stop words list. Returns
stop_words: list or None
A list of stop words.
inverse_transform(X) [source]
Return terms per document with nonzero entries in X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document-term matrix. Returns
X_invlist of arrays of shape (n_samples,)
List of arrays of terms.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(raw_documents) [source]
Transform documents to document-term matrix. Extract token counts out of raw text documents using the vocabulary fitted with fit or the one provided to the constructor. Parameters
raw_documentsiterable
An iterable which yields either str, unicode or file objects. Returns
Xsparse matrix of shape (n_samples, n_features)
Document-term matrix. | sklearn.modules.generated.sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer |
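To supplement the examples above, a short sketch of the vocabulary and binary parameters (the corpus is invented for illustration): a fixed vocabulary pins the column order, and binary=True clips all nonzero counts to 1.

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["red red blue", "red green", "blue green green"]
# Columns are exactly the given terms, in the given order.
vec = CountVectorizer(vocabulary=["blue", "green", "red"], binary=True)
X = vec.fit_transform(corpus)
print(X.toarray())
# [[1 0 1]
#  [0 1 1]
#  [1 1 0]]
```

With binary=False (the default) the first row would instead record the two occurrences of "red" as a count of 2.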
sklearn.feature_extraction.text.CountVectorizer
class sklearn.feature_extraction.text.CountVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>) [source]
Convert a collection of text documents to a matrix of token counts This implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix. If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection then the number of features will be equal to the vocabulary size found by analyzing the data. Read more in the User Guide. Parameters
inputstring {‘filename’, ‘file’, ‘content’}, default=’content’
If ‘filename’, the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If ‘file’, the sequence items must have a ‘read’ method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of items that can be of type string or byte.
encodingstring, default=’utf-8’
If bytes or files are given to analyze, this encoding is used to decode.
decode_error{‘strict’, ‘ignore’, ‘replace’}, default=’strict’
Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default, it is ‘strict’, meaning that a UnicodeDecodeError will be raised. Other values are ‘ignore’ and ‘replace’.
strip_accents{‘ascii’, ‘unicode’}, default=None
Remove accents and perform other character normalization during the preprocessing step. ‘ascii’ is a fast method that only works on characters that have an direct ASCII mapping. ‘unicode’ is a slightly slower method that works on any characters. None (default) does nothing. Both ‘ascii’ and ‘unicode’ use NFKD normalization from unicodedata.normalize.
lowercasebool, default=True
Convert all characters to lowercase before tokenizing.
preprocessorcallable, default=None
Override the preprocessing (strip_accents and lowercase) stage while preserving the tokenizing and n-grams generation steps. Only applies if analyzer is not callable.
tokenizercallable, default=None
Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.
stop_wordsstring {‘english’}, list, default=None
If ‘english’, a built-in stop word list for English is used. There are several known issues with ‘english’ and you should consider an alternative (see Using stop words). If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'. If None, no stop words will be used. max_df can be set to a value in the range [0.7, 1.0) to automatically detect and filter stop words based on intra corpus document frequency of terms.
token_patternstr, default=r”(?u)\b\w\w+\b”
Regular expression denoting what constitutes a “token”, only used if analyzer == 'word'. The default regexp select tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator). If there is a capturing group in token_pattern then the captured group content, not the entire match, becomes the token. At most one capturing group is permitted.
ngram_rangetuple (min_n, max_n), default=(1, 1)
The lower and upper boundary of the range of n-values for different word n-grams or char n-grams to be extracted. All values of n such such that min_n <= n <= max_n will be used. For example an ngram_range of (1, 1) means only unigrams, (1, 2) means unigrams and bigrams, and (2, 2) means only bigrams. Only applies if analyzer is not callable.
analyzer{‘word’, ‘char’, ‘char_wb’} or callable, default=’word’
Whether the feature should be made of word n-gram or character n-grams. Option ‘char_wb’ creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input. Changed in version 0.21. Since v0.21, if input is filename or file, the data is first read from the file and then passed to the given callable analyzer.
max_dffloat in range [0.0, 1.0] or int, default=1.0
When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.
min_dffloat in range [0.0, 1.0] or int, default=1
When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.
max_featuresint, default=None
If not None, build a vocabulary that only consider the top max_features ordered by term frequency across the corpus. This parameter is ignored if vocabulary is not None.
vocabularyMapping or iterable, default=None
Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents. Indices in the mapping should not be repeated and should not have any gap between 0 and the largest index.
binarybool, default=False
If True, all non zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts.
dtypetype, default=np.int64
Type of the matrix returned by fit_transform() or transform(). Attributes
vocabulary_dict
A mapping of terms to feature indices. fixed_vocabulary_: boolean
True if a fixed vocabulary of term to indices mapping is provided by the user
stop_words_set
Terms that were ignored because they either: occurred in too many documents (max_df) occurred in too few documents (min_df) were cut off by feature selection (max_features). This is only available if no vocabulary was given. See also
HashingVectorizer, TfidfVectorizer
Notes The stop_words_ attribute can get large and increase the model size when pickling. This attribute is provided only for introspection and can be safely removed using delattr or set to None before pickling. Examples >>> from sklearn.feature_extraction.text import CountVectorizer
>>> corpus = [
... 'This is the first document.',
... 'This document is the second document.',
... 'And this is the third one.',
... 'Is this the first document?',
... ]
>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
[0 2 0 1 0 1 1 0 1]
[1 0 0 1 1 0 1 1 1]
[0 1 1 1 0 0 1 0 1]]
>>> vectorizer2 = CountVectorizer(analyzer='word', ngram_range=(2, 2))
>>> X2 = vectorizer2.fit_transform(corpus)
>>> print(vectorizer2.get_feature_names())
['and this', 'document is', 'first document', 'is the', 'is this',
'second document', 'the first', 'the second', 'the third', 'third one',
'this document', 'this is', 'this the']
>>> print(X2.toarray())
[[0 0 1 1 0 0 1 0 0 0 0 1 0]
[0 1 0 1 0 1 0 1 0 0 1 0 0]
[1 0 0 1 0 0 0 0 1 1 0 1 0]
[0 0 1 0 1 0 1 0 0 0 0 0 1]]
Methods
build_analyzer() Return a callable that handles preprocessing, tokenization and n-grams generation.
build_preprocessor() Return a function to preprocess the text before tokenization.
build_tokenizer() Return a function that splits a string into a sequence of tokens.
decode(doc) Decode the input into a string of unicode symbols.
fit(raw_documents[, y]) Learn a vocabulary dictionary of all tokens in the raw documents.
fit_transform(raw_documents[, y]) Learn the vocabulary dictionary and return document-term matrix.
get_feature_names() Array mapping from feature integer indices to feature name.
get_params([deep]) Get parameters for this estimator.
get_stop_words() Build or fetch the effective stop words list.
inverse_transform(X) Return terms per document with nonzero entries in X.
set_params(**params) Set the parameters of this estimator.
transform(raw_documents) Transform documents to document-term matrix.
build_analyzer() [source]
Return a callable that handles preprocessing, tokenization and n-grams generation. Returns
analyzer: callable
A function to handle preprocessing, tokenization and n-grams generation.
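As a minimal sketch, the returned analyzer can be applied directly to a single document to inspect which features would be extracted (the ngram_range setting here is purely illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Build the analyzer and apply it to one document to see the
# extracted features: lowercased unigrams followed by bigrams.
vectorizer = CountVectorizer(ngram_range=(1, 2))
analyzer = vectorizer.build_analyzer()
print(analyzer("Hello brave new world"))
```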
build_preprocessor() [source]
Return a function to preprocess the text before tokenization. Returns
preprocessor: callable
A function to preprocess the text before tokenization.
build_tokenizer() [source]
Return a function that splits a string into a sequence of tokens. Returns
tokenizer: callable
A function to split a string into a sequence of tokens.
decode(doc) [source]
Decode the input into a string of unicode symbols. The decoding strategy depends on the vectorizer parameters. Parameters
docstr
The string to decode. Returns
doc: str
A string of unicode symbols.
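For illustration, with the default encoding='utf-8' a bytes input is decoded while a str input passes through unchanged (a minimal sketch):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Default encoding is 'utf-8'; bytes are decoded, str is returned as-is.
vectorizer = CountVectorizer()
print(vectorizer.decode(b"caf\xc3\xa9"))
print(vectorizer.decode("already text"))
```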
fit(raw_documents, y=None) [source]
Learn a vocabulary dictionary of all tokens in the raw documents. Parameters
raw_documentsiterable
An iterable which yields either str, unicode or file objects. Returns
selfobject
Fitted vectorizer.
fit_transform(raw_documents, y=None) [source]
Learn the vocabulary dictionary and return document-term matrix. This is equivalent to fit followed by transform, but more efficiently implemented. Parameters
raw_documentsiterable
An iterable which yields either str, unicode or file objects. Returns
Xarray of shape (n_samples, n_features)
Document-term matrix.
get_feature_names() [source]
Array mapping from feature integer indices to feature name. Returns
feature_nameslist
A list of feature names.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_stop_words() [source]
Build or fetch the effective stop words list. Returns
stop_words: list or None
A list of stop words.
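As a small sketch of the "effective" list: with stop_words='english' the built-in English stop word list is returned, while with the default stop_words=None the method returns None:

```python
from sklearn.feature_extraction.text import CountVectorizer

# With stop_words='english', the built-in English list is returned;
# with the default stop_words=None, the method returns None.
print("the" in CountVectorizer(stop_words="english").get_stop_words())
print(CountVectorizer().get_stop_words())
```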
inverse_transform(X) [source]
Return terms per document with nonzero entries in X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document-term matrix. Returns
X_invlist of arrays of shape (n_samples,)
List of arrays of terms.
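A minimal sketch of the round trip: each row of the document-term matrix maps back to the set of terms with nonzero counts in that document (word order within the original document is not preserved):

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the cat sat", "the dog ran"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
# Each entry maps a row of X back to the terms with nonzero counts.
terms_per_doc = vectorizer.inverse_transform(X)
print([sorted(terms) for terms in terms_per_doc])
```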
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(raw_documents) [source]
Transform documents to document-term matrix. Extract token counts out of raw text documents using the vocabulary fitted with fit or the one provided to the constructor. Parameters
raw_documentsiterable
An iterable which yields either str, unicode or file objects. Returns
Xsparse matrix of shape (n_samples, n_features)
Document-term matrix.
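To illustrate transforming unseen documents with an already-fitted vocabulary (a minimal sketch): tokens that did not occur during fit are silently ignored rather than raising an error:

```python
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer()
vectorizer.fit(["the quick brown fox", "the lazy dog"])
# Vocabulary (alphabetical): brown, dog, fox, lazy, quick, the
# The unseen token 'red' is silently ignored.
X_new = vectorizer.transform(["the quick red fox"])
print(X_new.toarray())
```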
Examples using sklearn.feature_extraction.text.CountVectorizer
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
Sample pipeline for text feature extraction and evaluation
Semi-supervised Classification on a Text Dataset