| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def fit(self, X, y, **fit_params):
"""Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : ... | Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : array-like of shape (n_samples,)
T... | fit | python | scikit-learn/scikit-learn | sklearn/ensemble/_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_stacking.py | BSD-3-Clause |
def predict(self, X, **predict_params):
"""Predict target for X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
... | Predict target for X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
**predict_params : dict of str -> obj
... | predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_stacking.py | BSD-3-Clause |
def predict_proba(self, X):
"""Predict class probabilities for `X` using the final estimator.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is t... | Predict class probabilities for `X` using the final estimator.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
Returns... | predict_proba | python | scikit-learn/scikit-learn | sklearn/ensemble/_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_stacking.py | BSD-3-Clause |
def fit(self, X, y, **fit_params):
"""Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : ... | Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : array-like of shape (n_samples,)
T... | fit | python | scikit-learn/scikit-learn | sklearn/ensemble/_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_stacking.py | BSD-3-Clause |
def fit_transform(self, X, y, **fit_params):
"""Fit the estimators and return the predictions for X for each estimator.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
... | Fit the estimators and return the predictions for X for each estimator.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
... | fit_transform | python | scikit-learn/scikit-learn | sklearn/ensemble/_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_stacking.py | BSD-3-Clause |
def _weights_not_none(self):
"""Get the weights of not `None` estimators."""
if self.weights is None:
return None
return [w for est, w in zip(self.estimators, self.weights) if est[1] != "drop"] | Get the weights of not `None` estimators. | _weights_not_none | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
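The `_weights_not_none` helper above pairs each `(name, estimator)` tuple with its weight and skips entries whose estimator is the string `"drop"`. A standalone sketch (the estimator list and weights here are illustrative, not from scikit-learn):

```python
def weights_not_none(estimators, weights):
    """Return the weights of estimators that are not dropped.

    `estimators` is a list of (name, estimator_or_"drop") pairs, mirroring
    the `estimators` attribute of scikit-learn's voting ensembles.
    """
    if weights is None:
        return None
    return [w for est, w in zip(estimators, weights) if est[1] != "drop"]

# Hypothetical example: the middle estimator is dropped, so its weight is skipped.
ests = [("lr", object()), ("svc", "drop"), ("rf", object())]
print(weights_not_none(ests, [1.0, 2.0, 3.0]))  # [1.0, 3.0]
```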
def fit(self, X, y, **fit_params):
"""Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : ... | Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : array-like of shape (n_samples,)
T... | fit | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
def predict(self, X):
"""Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples.
Returns
-------
maj : array-like of shape (n_samples,)
Predicted class labels.
... | Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples.
Returns
-------
maj : array-like of shape (n_samples,)
Predicted class labels.
| predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
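Hard (majority) voting as described above can be sketched with a weighted `np.bincount` per sample; this is an illustrative re-implementation assuming integer-encoded labels, not the scikit-learn code itself:

```python
import numpy as np

def hard_vote(predictions, weights=None):
    """Weighted majority vote.

    predictions : array of shape (n_estimators, n_samples) with integer labels.
    weights : optional per-estimator weights.
    Returns the winning label per sample (ties broken by the lowest label).
    """
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    return np.apply_along_axis(
        # Each column holds the n_estimators votes for one sample.
        lambda col: np.argmax(np.bincount(col, weights=weights, minlength=n_classes)),
        axis=0,
        arr=predictions,
    )

preds = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 1]])
print(hard_vote(preds))                      # [0 0 1]
print(hard_vote(preds, weights=[1, 1, 3]))   # heavy third estimator flips sample 0
```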
def predict_proba(self, X):
"""Compute probabilities of possible outcomes for samples in X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples.
Returns
-------
avg : array-like of shape (n_samples... | Compute probabilities of possible outcomes for samples in X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples.
Returns
-------
avg : array-like of shape (n_samples, n_classes)
Weighted avera... | predict_proba | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
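The weighted average of per-estimator class probabilities that `predict_proba` returns can be sketched with `np.average`; the two estimators and their probabilities below are hypothetical:

```python
import numpy as np

def soft_vote(probas, weights=None):
    """Weighted average of per-estimator class probabilities.

    probas : array of shape (n_estimators, n_samples, n_classes).
    Returns (avg_proba, predicted_labels).
    """
    # np.average applies the weights along axis 0 (the estimator axis).
    avg = np.average(np.asarray(probas, dtype=np.float64), axis=0, weights=weights)
    return avg, np.argmax(avg, axis=1)

# Two hypothetical estimators, two samples, two classes.
p = [
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.3, 0.7], [0.2, 0.8]],
]
avg, labels = soft_vote(p, weights=[2, 1])
print(avg)     # each row still sums to 1
print(labels)  # [0 1]
```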
def transform(self, X):
"""Return class labels or probabilities for X for each estimator.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the n... | Return class labels or probabilities for X for each estimator.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
Returns... | transform | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
def get_feature_names_out(self, input_features=None):
"""Get output feature names for transformation.
Parameters
----------
input_features : array-like of str or None, default=None
Not used, present here for API consistency by convention.
Returns
-------
... | Get output feature names for transformation.
Parameters
----------
input_features : array-like of str or None, default=None
Not used, present here for API consistency by convention.
Returns
-------
feature_names_out : ndarray of str objects
Trans... | get_feature_names_out | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
def fit(self, X, y, **fit_params):
"""Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : ... | Fit the estimators.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : array-like of shape (n_samples,)
T... | fit | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
def predict(self, X):
"""Predict regression target for X.
The predicted regression target of an input sample is computed as the
mean predicted regression targets of the estimators in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples... | Predict regression target for X.
The predicted regression target of an input sample is computed as the
mean predicted regression targets of the estimators in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The inp... | predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_voting.py | BSD-3-Clause |
def fit(self, X, y, sample_weight=None):
"""Build a boosted classifier/regressor from the training set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO,
D... | Build a boosted classifier/regressor from the training set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. COO, DOK, and LIL are converted to CSR.
... | fit | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def staged_score(self, X, y, sample_weight=None):
"""Return staged scores for X, y.
This generator method yields the ensemble score after each iteration of
boosting and therefore allows monitoring, such as to determine the
score on a test set after each boost.
Parameters
... | Return staged scores for X, y.
This generator method yields the ensemble score after each iteration of
boosting and therefore allows monitoring, such as to determine the
score on a test set after each boost.
Parameters
----------
X : {array-like, sparse matrix} of shape... | staged_score | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def feature_importances_(self):
"""The impurity-based feature importances.
The higher, the more important the feature.
The importance of a feature is computed as the (normalized)
total reduction of the criterion brought by that feature. It is also
known as the Gini importance.
... | The impurity-based feature importances.
The higher, the more important the feature.
The importance of a feature is computed as the (normalized)
total reduction of the criterion brought by that feature. It is also
known as the Gini importance.
Warning: impurity-based feature im... | feature_importances_ | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def _samme_proba(estimator, n_classes, X):
"""Calculate algorithm 4, step 2, equation c) of Zhu et al [1].
References
----------
.. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.
"""
proba = estimator.predict_proba(X)
# Displace zero probabilities so the log is de... | Calculate algorithm 4, step 2, equation c) of Zhu et al [1].
References
----------
.. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.
| _samme_proba | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
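The truncated body above displaces zero probabilities before taking the log and then applies algorithm 4, step 2, equation (c) of Zhu et al., i.e. (K-1) * (log p_k - (1/K) * sum_j log p_j). A hedged sketch of that computation (not the verbatim scikit-learn code):

```python
import numpy as np

def samme_proba(proba, n_classes):
    """SAMME.R per-class contribution from predicted probabilities."""
    proba = np.asarray(proba, dtype=np.float64)
    # Displace zero probabilities so the log is defined.
    np.clip(proba, np.finfo(proba.dtype).eps, None, out=proba)
    log_proba = np.log(proba)
    # (K - 1) * (log p_k - mean over classes of log p_j); rows sum to zero.
    return (n_classes - 1) * (
        log_proba - (1.0 / n_classes) * log_proba.sum(axis=1)[:, np.newaxis]
    )

contrib = samme_proba([[0.5, 0.25, 0.25]], n_classes=3)
print(contrib)  # the row sums to ~0 by construction
```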
def _validate_estimator(self):
"""Check the estimator and set the estimator_ attribute."""
super()._validate_estimator(default=DecisionTreeClassifier(max_depth=1))
if self.algorithm != "deprecated":
warnings.warn(
"The parameter 'algorithm' is deprecated in 1.6 and h... | Check the estimator and set the estimator_ attribute. | _validate_estimator | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def _boost(self, iboost, X, y, sample_weight, random_state):
"""Implement a single boost.
Perform a single boost according to the discrete SAMME algorithm and return the
updated sample weights.
Parameters
----------
iboost : int
The index of the current boos... | Implement a single boost.
Perform a single boost according to the discrete SAMME algorithm and return the
updated sample weights.
Parameters
----------
iboost : int
The index of the current boost iteration.
X : {array-like, sparse matrix} of shape (n_sample... | _boost | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def predict(self, X):
"""Predict classes for X.
The predicted class of an input sample is computed as the weighted mean
prediction of the classifiers in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The ... | Predict classes for X.
The predicted class of an input sample is computed as the weighted mean
prediction of the classifiers in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse ma... | predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def staged_predict(self, X):
"""Return staged predictions for X.
The predicted class of an input sample is computed as the weighted mean
prediction of the classifiers in the ensemble.
This generator method yields the ensemble prediction after each
iteration of boosting and ther... | Return staged predictions for X.
The predicted class of an input sample is computed as the weighted mean
prediction of the classifiers in the ensemble.
This generator method yields the ensemble prediction after each
iteration of boosting and therefore allows monitoring, such as to
... | staged_predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def decision_function(self, X):
"""Compute the decision function of ``X``.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. COO, DOK, and LIL are co... | Compute the decision function of ``X``.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. COO, DOK, and LIL are converted to CSR.
Returns
--... | decision_function | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def staged_decision_function(self, X):
"""Compute decision function of ``X`` for each boosting iteration.
        This method allows monitoring (i.e., determining the error on a test set)
after each boosting iteration.
Parameters
----------
X : {array-like, sparse matrix} of shape (n... | Compute decision function of ``X`` for each boosting iteration.
This method allows monitoring (i.e., determining the error on a test set)
after each boosting iteration.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training inp... | staged_decision_function | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def _compute_proba_from_decision(decision, n_classes):
"""Compute probabilities from the decision function.
This is based eq. (15) of [1] where:
    p(y=c|X) = exp((1 / (K - 1)) f_c(X)) / sum_k(exp((1 / (K - 1)) f_k(X)))
             = softmax((1 / (K - 1)) * f(X))
References
-----... | Compute probabilities from the decision function.
This is based eq. (15) of [1] where:
    p(y=c|X) = exp((1 / (K - 1)) f_c(X)) / sum_k(exp((1 / (K - 1)) f_k(X)))
             = softmax((1 / (K - 1)) * f(X))
References
----------
.. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-... | _compute_proba_from_decision | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
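Eq. (15) above maps decision values to probabilities via a softmax scaled by 1 / (K - 1). A sketch of the multiclass branch (scikit-learn's binary special case, which stacks -f and f, is omitted here):

```python
import numpy as np

def proba_from_decision(decision, n_classes):
    """Probabilities from decision values: softmax(decision / (K - 1)).

    Numerically stabilised by subtracting the row max before exponentiating.
    """
    scaled = np.asarray(decision, dtype=np.float64) / (n_classes - 1)
    scaled -= scaled.max(axis=1, keepdims=True)
    e = np.exp(scaled)
    return e / e.sum(axis=1, keepdims=True)

p = proba_from_decision([[2.0, 0.0, -2.0]], n_classes=3)
print(p)  # the row sums to 1, with the largest mass on the first class
```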
def predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the weighted mean predicted class probabilities of the classifiers
in the ensemble.
Parameters
----------
X : {array-like, spars... | Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the weighted mean predicted class probabilities of the classifiers
in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_featur... | predict_proba | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def staged_predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the weighted mean predicted class probabilities of the classifiers
in the ensemble.
This generator method yields the ensemble predicted c... | Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the weighted mean predicted class probabilities of the classifiers
in the ensemble.
This generator method yields the ensemble predicted class probabilities
after each iteratio... | staged_predict_proba | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def _boost(self, iboost, X, y, sample_weight, random_state):
"""Implement a single boost for regression
Perform a single boost according to the AdaBoost.R2 algorithm and
return the updated sample weights.
Parameters
----------
iboost : int
The index of the c... | Implement a single boost for regression
Perform a single boost according to the AdaBoost.R2 algorithm and
return the updated sample weights.
Parameters
----------
iboost : int
The index of the current boost iteration.
X : {array-like, sparse matrix} of shap... | _boost | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def predict(self, X):
"""Predict regression value for X.
The predicted regression value of an input sample is computed
as the weighted median prediction of the regressors in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_featu... | Predict regression value for X.
The predicted regression value of an input sample is computed
as the weighted median prediction of the regressors in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training inp... | predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
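The weighted median combiner described above picks, per sample, the prediction at which the cumulative estimator weight first reaches half the total weight after sorting the predictions. A single-sample sketch (the values and weights are illustrative):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median of 1-D `values` with nonnegative `weights`.

    Sort the values, accumulate the weights, and return the first value whose
    cumulative weight reaches half the total (AdaBoost.R2-style combiner).
    """
    values = np.asarray(values, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return values[order[idx]]

# Hypothetical per-estimator predictions for one sample.
print(weighted_median([3.0, 1.0, 2.0], [1.0, 1.0, 1.0]))  # 2.0 (plain median)
print(weighted_median([3.0, 1.0, 2.0], [5.0, 1.0, 1.0]))  # 3.0 (heavy weight wins)
```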
def staged_predict(self, X):
"""Return staged predictions for X.
The predicted regression value of an input sample is computed
as the weighted median prediction of the regressors in the ensemble.
This generator method yields the ensemble prediction after each
iteration of boost... | Return staged predictions for X.
The predicted regression value of an input sample is computed
as the weighted median prediction of the regressors in the ensemble.
This generator method yields the ensemble prediction after each
iteration of boosting and therefore allows monitoring, suc... | staged_predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_weight_boosting.py | BSD-3-Clause |
def test_metadata_routing_with_dynamic_method_selection(sub_estimator, caller, callee):
"""Test that metadata routing works in `BaggingClassifier` with dynamic selection of
the sub-estimator's methods. Here we test only specific test cases, where
sub-estimator methods are not present and are not tested with... | Test that metadata routing works in `BaggingClassifier` with dynamic selection of
the sub-estimator's methods. Here we test only specific test cases, where
sub-estimator methods are not present and are not tested with `ConsumingClassifier`
(which possesses all the methods) in
sklearn/tests/test_metaesti... | test_metadata_routing_with_dynamic_method_selection | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_bagging.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_bagging.py | BSD-3-Clause |
def test_poisson_vs_mse():
"""Test that random forest with poisson criterion performs better than
mse for a poisson target.
There is a similar test for DecisionTreeRegressor.
"""
rng = np.random.RandomState(42)
n_train, n_test, n_features = 500, 500, 10
X = datasets.make_low_rank_matrix(
... | Test that random forest with poisson criterion performs better than
mse for a poisson target.
There is a similar test for DecisionTreeRegressor.
| test_poisson_vs_mse | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_balance_property_random_forest(criterion):
""" "Test that sum(y_pred)==sum(y_true) on the training set."""
rng = np.random.RandomState(42)
n_train, n_test, n_features = 500, 500, 10
X = datasets.make_low_rank_matrix(
n_samples=n_train + n_test, n_features=n_features, random_state=rng
... | "Test that sum(y_pred)==sum(y_true) on the training set. | test_balance_property_random_forest | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_forest_classifier_oob(
ForestClassifier, X, y, X_type, lower_bound_accuracy, oob_score
):
"""Check that OOB score is close to score on a test set."""
X = _convert_container(X, constructor_name=X_type)
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0... | Check that OOB score is close to score on a test set. | test_forest_classifier_oob | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_forest_regressor_oob(ForestRegressor, X, y, X_type, lower_bound_r2, oob_score):
"""Check that forest-based regressor provide an OOB score close to the
score on a test set."""
X = _convert_container(X, constructor_name=X_type)
X_train, X_test, y_train, y_test = train_test_split(
X,
    ... | Check that forest-based regressors provide an OOB score close to the
score on a test set. | test_forest_regressor_oob | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_forest_oob_warning(ForestEstimator):
"""Check that a warning is raised when not enough estimator and the OOB
estimates will be inaccurate."""
estimator = ForestEstimator(
n_estimators=1,
oob_score=True,
bootstrap=True,
random_state=0,
)
    with pytest.warns(User... | Check that a warning is raised when there are not enough estimators and the OOB
estimates will be inaccurate. | test_forest_oob_warning | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_forest_oob_score_requires_bootstrap(ForestEstimator):
"""Check that we raise an error if OOB score is requested without
activating bootstrapping.
"""
X = iris.data
y = iris.target
err_msg = "Out of bag estimation only available if bootstrap=True"
estimator = ForestEstimator(oob_scor... | Check that we raise an error if OOB score is requested without
activating bootstrapping.
| test_forest_oob_score_requires_bootstrap | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_classifier_error_oob_score_multiclass_multioutput(ForestClassifier):
"""Check that we raise an error with when requesting OOB score with
multiclass-multioutput classification target.
"""
rng = np.random.RandomState(42)
X = iris.data
    y = rng.randint(low=0, high=5, size=(iris.data.shape[0... | Check that we raise an error when requesting OOB score with a
multiclass-multioutput classification target.
| test_classifier_error_oob_score_multiclass_multioutput | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_forest_multioutput_integral_regression_target(ForestRegressor):
"""Check that multioutput regression with integral values is not interpreted
as a multiclass-multioutput target and OOB score can be computed.
"""
rng = np.random.RandomState(42)
X = iris.data
y = rng.randint(low=0, high=10... | Check that multioutput regression with integral values is not interpreted
as a multiclass-multioutput target and OOB score can be computed.
| test_forest_multioutput_integral_regression_target | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_random_trees_embedding_feature_names_out():
"""Check feature names out for Random Trees Embedding."""
random_state = np.random.RandomState(0)
X = np.abs(random_state.randn(100, 4))
hasher = RandomTreesEmbedding(
n_estimators=2, max_depth=2, sparse_output=False, random_state=0
).fit(... | Check feature names out for Random Trees Embedding. | test_random_trees_embedding_feature_names_out | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_read_only_buffer(csr_container, monkeypatch):
"""RandomForestClassifier must work on readonly sparse data.
Non-regression test for: https://github.com/scikit-learn/scikit-learn/issues/25333
"""
monkeypatch.setattr(
sklearn.ensemble._forest,
"Parallel",
partial(Parallel,... | RandomForestClassifier must work on readonly sparse data.
Non-regression test for: https://github.com/scikit-learn/scikit-learn/issues/25333
| test_read_only_buffer | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_round_samples_to_one_when_samples_too_low(class_weight):
"""Check low max_samples works and is rounded to one.
Non-regression test for gh-24037.
"""
X, y = datasets.load_wine(return_X_y=True)
forest = RandomForestClassifier(
n_estimators=10, max_samples=1e-4, class_weight=class_wei... | Check low max_samples works and is rounded to one.
Non-regression test for gh-24037.
| test_round_samples_to_one_when_samples_too_low | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_estimators_samples(ForestClass, bootstrap, seed):
"""Estimators_samples_ property should be consistent.
Tests consistency across fits and whether or not the seed for the random generator
is set.
"""
X, y = make_hastie_10_2(n_samples=200, random_state=1)
if bootstrap:
max_sampl... | Estimators_samples_ property should be consistent.
Tests consistency across fits and whether or not the seed for the random generator
is set.
| test_estimators_samples | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_missing_values_is_resilient(make_data, Forest):
"""Check that forest can deal with missing values and has decent performance."""
rng = np.random.RandomState(0)
n_samples, n_features = 1000, 10
X, y = make_data(n_samples=n_samples, n_features=n_features, random_state=rng)
# Create dataset ... | Check that forest can deal with missing values and has decent performance. | test_missing_values_is_resilient | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_missing_value_is_predictive(Forest):
"""Check that the forest learns when missing values are only present for
a predictive feature."""
rng = np.random.RandomState(0)
n_samples = 300
expected_score = 0.75
X_non_predictive = rng.standard_normal(size=(n_samples, 10))
y = rng.randint(0... | Check that the forest learns when missing values are only present for
a predictive feature. | test_missing_value_is_predictive | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_non_supported_criterion_raises_error_with_missing_values(Forest):
"""Raise error for unsupported criterion when there are missing values."""
X = np.array([[0, 1, 2], [np.nan, 0, 2.0]])
y = [0.5, 1.0]
forest = Forest(criterion="absolute_error")
msg = ".*does not accept missing values"
... | Raise error for unsupported criterion when there are missing values. | test_non_supported_criterion_raises_error_with_missing_values | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_forest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_forest.py | BSD-3-Clause |
def test_raise_if_init_has_no_predict_proba():
"""Test raise if init_ has no predict_proba method."""
clf = GradientBoostingClassifier(init=GradientBoostingRegressor)
msg = (
"The 'init' parameter of GradientBoostingClassifier must be a str among "
"{'zero'}, None or an object implementing '... | Test raise if init_ has no predict_proba method. | test_raise_if_init_has_no_predict_proba | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_feature_importance_regression(
fetch_california_housing_fxt, global_random_seed
):
"""Test that Gini importance is calculated correctly.
This test follows the example from [1]_ (pg. 373).
.. [1] Friedman, J., Hastie, T., & Tibshirani, R. (2001). The elements
of statistical learning. Ne... | Test that Gini importance is calculated correctly.
This test follows the example from [1]_ (pg. 373).
.. [1] Friedman, J., Hastie, T., & Tibshirani, R. (2001). The elements
of statistical learning. New York: Springer series in statistics.
| test_feature_importance_regression | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_oob_attributes_error(GradientBoostingEstimator, oob_attribute):
"""
Check that we raise an AttributeError when the OOB statistics were not computed.
"""
X, y = datasets.make_hastie_10_2(n_samples=100, random_state=1)
estimator = GradientBoostingEstimator(
n_estimators=100,
r... |
Check that we raise an AttributeError when the OOB statistics were not computed.
| test_oob_attributes_error | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_warm_start_state_oob_scores(GradientBoosting):
"""
Check that the states of the OOB scores are cleared when used with `warm_start`.
"""
X, y = datasets.make_hastie_10_2(n_samples=100, random_state=1)
n_estimators = 100
estimator = GradientBoosting(
n_estimators=n_estimators,
... |
Check that the states of the OOB scores are cleared when used with `warm_start`.
| test_warm_start_state_oob_scores | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_huber_vs_mean_and_median():
"""Check that huber lies between absolute and squared error."""
n_rep = 100
n_samples = 10
y = np.tile(np.arange(n_samples), n_rep)
x1 = np.minimum(y, n_samples / 2)
x2 = np.minimum(-y, -n_samples / 2)
X = np.c_[x1, x2]
rng = np.random.RandomState(42... | Check that huber lies between absolute and squared error. | test_huber_vs_mean_and_median | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
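The test above checks that Huber-loss GBT predictions fall between those of squared error (the mean) and absolute error (the median). As a rough sketch of why, the pointwise Huber loss blends the two regimes around a threshold `delta` (the function name here is illustrative, not sklearn's API):

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    """Pointwise Huber loss: quadratic for |r| <= delta, linear beyond it."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

# 0.5**2 / 2 = 0.125 in the quadratic regime; 1.0 * (3.0 - 0.5) = 2.5 in the linear tail.
losses = huber_loss(np.array([0.5, 3.0]))
```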
def test_safe_divide():
"""Test that _safe_divide handles division by zero."""
with warnings.catch_warnings():
warnings.simplefilter("error")
assert _safe_divide(np.float64(1e300), 0) == 0
assert _safe_divide(np.float64(0.0), np.float64(0.0)) == 0
with pytest.warns(RuntimeWarning, ma... | Test that _safe_divide handles division by zero. | test_safe_divide | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
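A minimal sketch of the guarded division the test above describes (an illustrative stand-in, not the actual `_safe_divide` implementation):

```python
import numpy as np

def safe_divide(numerator, denominator):
    """Divide, mapping division by zero (and 0/0) to 0 instead of inf/nan."""
    try:
        # Make numpy raise FloatingPointError instead of returning inf/nan.
        with np.errstate(divide="raise", invalid="raise"):
            return numerator / denominator
    except (ZeroDivisionError, FloatingPointError):
        return 0.0

print(safe_divide(np.float64(1.0), np.float64(0.0)))  # 0.0
print(safe_divide(np.float64(0.0), np.float64(0.0)))  # 0.0
```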
def test_squared_error_exact_backward_compat():
"""Test squared error GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
"""
n_samples = 10
y = np.arange(n_samples)
x1 = np.minimum(y, n_samples / 2)
x2 = np.minimum(-y, -n_samples / 2)... | Test squared error GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
| test_squared_error_exact_backward_compat | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_huber_exact_backward_compat():
"""Test huber GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
"""
n_samples = 10
y = np.arange(n_samples)
x1 = np.minimum(y, n_samples / 2)
x2 = np.minimum(-y, -n_samples / 2)
X = np.c_[x... | Test huber GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
| test_huber_exact_backward_compat | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_binomial_error_exact_backward_compat():
"""Test binary log_loss GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
"""
n_samples = 10
y = np.arange(n_samples) % 2
x1 = np.minimum(y, n_samples / 2)
x2 = np.minimum(-y, -n_sampl... | Test binary log_loss GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
| test_binomial_error_exact_backward_compat | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_multinomial_error_exact_backward_compat():
"""Test multiclass log_loss GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
"""
n_samples = 10
y = np.arange(n_samples) % 4
x1 = np.minimum(y, n_samples / 2)
x2 = np.minimum(-y, -... | Test multiclass log_loss GBT backward compat on a simple dataset.
The results to compare against are taken from scikit-learn v1.2.0.
| test_multinomial_error_exact_backward_compat | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause |
def test_gb_denominator_zero(global_random_seed):
"""Test _update_terminal_regions denominator is not zero.
For instance for log loss based binary classification, the line search step might
become nan/inf as denominator = hessian = prob * (1 - prob) and prob = 0 or 1 can
happen.
Here, we create a s... | Test _update_terminal_regions denominator is not zero.
For instance for log loss based binary classification, the line search step might
become nan/inf as denominator = hessian = prob * (1 - prob) and prob = 0 or 1 can
happen.
    Here, we create a situation where this happens (at least with roughly 80%) ba... | test_gb_denominator_zero | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_gradient_boosting.py | BSD-3-Clause
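The docstring above points at the core numerical hazard: the line-search denominator for binary log loss is the hessian p * (1 - p), which is exactly zero once the predicted probability saturates in float64. A small illustration (the guard shown is one possible fix, not sklearn's exact code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# At raw prediction 40 the probability rounds to exactly 1.0 in float64,
# so the hessian p * (1 - p) is exactly 0 and a naive Newton step divides by 0.
raw = np.array([0.0, 40.0])
p = sigmoid(raw)
hessian = p * (1.0 - p)  # 0.25 at raw=0, exactly 0.0 at raw=40

# Guarded leaf update: fall back to 0 when the denominator vanishes.
gradient = p - 1.0  # gradient of the log loss for true label y = 1
update = np.where(hessian == 0.0, 0.0,
                  -gradient / np.where(hessian == 0.0, 1.0, hessian))
assert np.all(np.isfinite(update))
```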
def test_iforest(global_random_seed):
"""Check Isolation Forest for various parameter settings."""
X_train = np.array([[0, 1], [1, 2]])
X_test = np.array([[2, 1], [1, 1]])
grid = ParameterGrid(
{"n_estimators": [3], "max_samples": [0.5, 1.0, 3], "bootstrap": [True, False]}
)
with ignor... | Check Isolation Forest for various parameter settings. | test_iforest | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_iforest_sparse(global_random_seed, sparse_container):
"""Check IForest for various parameter settings on sparse input."""
rng = check_random_state(global_random_seed)
X_train, X_test = train_test_split(diabetes.data[:50], random_state=rng)
grid = ParameterGrid({"max_samples": [0.5, 1.0], "boots... | Check IForest for various parameter settings on sparse input. | test_iforest_sparse | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_iforest_error():
"""Test that it gives proper exception on deficient input."""
X = iris.data
# The dataset has less than 256 samples, explicitly setting
# max_samples > n_samples should result in a warning. If not set
# explicitly there should be no warning
warn_msg = "max_samples will... | Test that it gives proper exception on deficient input. | test_iforest_error | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_recalculate_max_depth():
"""Check max_depth recalculation when max_samples is reset to n_samples"""
X = iris.data
clf = IsolationForest().fit(X)
for est in clf.estimators_:
assert est.max_depth == int(np.ceil(np.log2(X.shape[0]))) | Check max_depth recalculation when max_samples is reset to n_samples | test_recalculate_max_depth | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
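The formula asserted above can be checked directly: isolation trees are capped at ceil(log2(n_samples)), roughly the depth of a balanced binary tree over the sample, beyond which extra splits carry little isolation signal.

```python
import numpy as np

# With iris (150 samples), the expected depth cap is ceil(log2(150)) = 8.
n_samples = 150
max_depth = int(np.ceil(np.log2(n_samples)))
print(max_depth)  # 8
```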
def test_iforest_warm_start():
"""Test iterative addition of iTrees to an iForest"""
rng = check_random_state(0)
X = rng.randn(20, 2)
# fit first 10 trees
clf = IsolationForest(
n_estimators=10, max_samples=20, random_state=rng, warm_start=True
)
clf.fit(X)
# remember the 1st t... | Test iterative addition of iTrees to an iForest | test_iforest_warm_start | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_iforest_with_uniform_data():
"""Test whether iforest predicts inliers when using uniform data"""
# 2-d array of all 1s
X = np.ones((100, 10))
iforest = IsolationForest()
iforest.fit(X)
rng = np.random.RandomState(0)
assert all(iforest.predict(X) == 1)
assert all(iforest.predi... | Test whether iforest predicts inliers when using uniform data | test_iforest_with_uniform_data | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_iforest_with_n_jobs_does_not_segfault(csc_container):
"""Check that Isolation Forest does not segfault with n_jobs=2
Non-regression test for #23252
"""
X, _ = make_classification(n_samples=85_000, n_features=100, random_state=0)
X = csc_container(X)
IsolationForest(n_estimators=10, max... | Check that Isolation Forest does not segfault with n_jobs=2
Non-regression test for #23252
| test_iforest_with_n_jobs_does_not_segfault | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_iforest_preserve_feature_names():
"""Check that feature names are preserved when contamination is not "auto".
Feature names are required for consistency checks during scoring.
Non-regression test for Issue #25844
"""
pd = pytest.importorskip("pandas")
rng = np.random.RandomState(0)
... | Check that feature names are preserved when contamination is not "auto".
Feature names are required for consistency checks during scoring.
Non-regression test for Issue #25844
| test_iforest_preserve_feature_names | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_iforest_sparse_input_float_contamination(sparse_container):
"""Check that `IsolationForest` accepts sparse matrix input and float value for
contamination.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/27626
"""
X, _ = make_classification(n_samples=50, n_f... | Check that `IsolationForest` accepts sparse matrix input and float value for
contamination.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/27626
| test_iforest_sparse_input_float_contamination | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_iforest_predict_parallel(global_random_seed, contamination, n_jobs):
"""Check that `IsolationForest.predict` is parallelized."""
# toy sample (the last two samples are outliers)
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1], [7, 4], [-5, 9]]
# Test IsolationForest
clf = Isolati... | Check that `IsolationForest.predict` is parallelized. | test_iforest_predict_parallel | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_iforest.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_iforest.py | BSD-3-Clause |
def test_stacking_prefit(Stacker, Estimator, stack_method, final_estimator, X, y):
"""Check the behaviour of stacking when `cv='prefit'`"""
X_train1, X_train2, y_train1, y_train2 = train_test_split(
X, y, random_state=42, test_size=0.5
)
estimators = [
("d0", Estimator().fit(X_train1, y_... | Check the behaviour of stacking when `cv='prefit'` | test_stacking_prefit | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_stacking.py | BSD-3-Clause |
def test_stacking_classifier_multilabel_predict_proba(estimator):
"""Check the behaviour for the multilabel classification case and the
`predict_proba` stacking method.
    Estimators are not consistent in the output arrays they produce, and we need to ensure that
we handle all cases.
"""
X_train, X_test, y_tr... | Check the behaviour for the multilabel classification case and the
`predict_proba` stacking method.
    Estimators are not consistent in the output arrays they produce, and we need to ensure that
we handle all cases.
| test_stacking_classifier_multilabel_predict_proba | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_stacking.py | BSD-3-Clause |
def test_stacking_classifier_multilabel_decision_function():
"""Check the behaviour for the multilabel classification case and the
`decision_function` stacking method. Only `RidgeClassifier` supports this
case.
"""
X_train, X_test, y_train, y_test = train_test_split(
X_multilabel, y_multilab... | Check the behaviour for the multilabel classification case and the
`decision_function` stacking method. Only `RidgeClassifier` supports this
case.
| test_stacking_classifier_multilabel_decision_function | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_stacking.py | BSD-3-Clause |
def test_stacking_classifier_multilabel_auto_predict(stack_method, passthrough):
"""Check the behaviour for the multilabel classification case for stack methods
supported for all estimators or automatically picked up.
"""
X_train, X_test, y_train, y_test = train_test_split(
X_multilabel, y_multi... | Check the behaviour for the multilabel classification case for stack methods
supported for all estimators or automatically picked up.
| test_stacking_classifier_multilabel_auto_predict | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_stacking.py | BSD-3-Clause |
def test_stacking_classifier_base_regressor():
"""Check that a regressor can be used as the first layer in `StackingClassifier`."""
X_train, X_test, y_train, y_test = train_test_split(
scale(X_iris), y_iris, stratify=y_iris, random_state=42
)
clf = StackingClassifier(estimators=[("ridge", Ridge(... | Check that a regressor can be used as the first layer in `StackingClassifier`. | test_stacking_classifier_base_regressor | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_stacking.py | BSD-3-Clause |
def test_stacking_final_estimator_attribute_error():
"""Check that we raise the proper AttributeError when the final estimator
does not implement the `decision_function` method, which is decorated with
`available_if`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/2810... | Check that we raise the proper AttributeError when the final estimator
does not implement the `decision_function` method, which is decorated with
`available_if`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/28108
| test_stacking_final_estimator_attribute_error | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_stacking.py | BSD-3-Clause |
def test_metadata_routing_for_stacking_estimators(Estimator, Child, prop, prop_value):
"""Test that metadata is routed correctly for Stacking*."""
est = Estimator(
[
(
"sub_est1",
Child(registry=_Registry()).set_fit_request(**{prop: True}),
),
... | Test that metadata is routed correctly for Stacking*. | test_metadata_routing_for_stacking_estimators | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_stacking.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_stacking.py | BSD-3-Clause |
def test_majority_label_iris(global_random_seed):
"""Check classification by majority label on dataset iris."""
clf1 = LogisticRegression(random_state=global_random_seed)
clf2 = RandomForestClassifier(n_estimators=10, random_state=global_random_seed)
clf3 = GaussianNB()
eclf = VotingClassifier(
... | Check classification by majority label on dataset iris. | test_majority_label_iris | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_tie_situation():
"""Check voting classifier selects smaller class label in tie situation."""
clf1 = LogisticRegression(random_state=123)
clf2 = RandomForestClassifier(random_state=123)
eclf = VotingClassifier(estimators=[("lr", clf1), ("rf", clf2)], voting="hard")
assert clf1.fit(X, y).pred... | Check voting classifier selects smaller class label in tie situation. | test_tie_situation | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
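The tie-breaking behaviour tested above follows from how hard voting is commonly implemented: count votes per class and take `np.argmax`, which returns the first maximal index, i.e. the smallest class label. A sketch:

```python
import numpy as np

votes = np.array([0, 1])                  # two estimators disagree
counts = np.bincount(votes, minlength=2)  # [1, 1]: a tie
winner = np.argmax(counts)                # argmax picks the first max -> label 0
print(winner)  # 0
```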
def test_weights_iris(global_random_seed):
"""Check classification by average probabilities on dataset iris."""
clf1 = LogisticRegression(random_state=global_random_seed)
clf2 = RandomForestClassifier(n_estimators=10, random_state=global_random_seed)
clf3 = GaussianNB()
eclf = VotingClassifier(
... | Check classification by average probabilities on dataset iris. | test_weights_iris | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_weights_regressor():
"""Check weighted average regression prediction on diabetes dataset."""
reg1 = DummyRegressor(strategy="mean")
reg2 = DummyRegressor(strategy="median")
reg3 = DummyRegressor(strategy="quantile", quantile=0.2)
ereg = VotingRegressor(
[("mean", reg1), ("median", r... | Check weighted average regression prediction on diabetes dataset. | test_weights_regressor | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_predict_on_toy_problem(global_random_seed):
"""Manually check predicted class labels for toy dataset."""
clf1 = LogisticRegression(random_state=global_random_seed)
clf2 = RandomForestClassifier(n_estimators=10, random_state=global_random_seed)
clf3 = GaussianNB()
X = np.array(
[[-1... | Manually check predicted class labels for toy dataset. | test_predict_on_toy_problem | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_predict_proba_on_toy_problem():
"""Calculate predicted probabilities on toy dataset."""
clf1 = LogisticRegression(random_state=123)
clf2 = RandomForestClassifier(random_state=123)
clf3 = GaussianNB()
X = np.array([[-1.1, -1.5], [-1.2, -1.4], [-3.4, -2.2], [1.1, 1.2]])
y = np.array([1, 1... | Calculate predicted probabilities on toy dataset. | test_predict_proba_on_toy_problem | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_multilabel():
"""Check if error is raised for multilabel classification."""
X, y = make_multilabel_classification(
n_classes=2, n_labels=1, allow_unlabeled=False, random_state=123
)
clf = OneVsRestClassifier(SVC(kernel="linear"))
eclf = VotingClassifier(estimators=[("ovr", clf)], v... | Check if error is raised for multilabel classification. | test_multilabel | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_parallel_fit(global_random_seed):
"""Check parallel backend of VotingClassifier on toy dataset."""
clf1 = LogisticRegression(random_state=global_random_seed)
clf2 = RandomForestClassifier(n_estimators=10, random_state=global_random_seed)
clf3 = GaussianNB()
X = np.array([[-1.1, -1.5], [-1.2... | Check parallel backend of VotingClassifier on toy dataset. | test_parallel_fit | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_sample_weight_kwargs():
"""Check that VotingClassifier passes sample_weight as kwargs"""
class MockClassifier(ClassifierMixin, BaseEstimator):
"""Mock Classifier to check that sample_weight is received as kwargs"""
def fit(self, X, y, *args, **sample_weight):
assert "sampl... | Check that VotingClassifier passes sample_weight as kwargs | test_sample_weight_kwargs | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_transform(global_random_seed):
"""Check transform method of VotingClassifier on toy dataset."""
clf1 = LogisticRegression(random_state=global_random_seed)
clf2 = RandomForestClassifier(n_estimators=10, random_state=global_random_seed)
clf3 = GaussianNB()
X = np.array([[-1.1, -1.5], [-1.2, -... | Check transform method of VotingClassifier on toy dataset. | test_transform | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_get_features_names_out_classifier(kwargs, expected_names):
"""Check get_feature_names_out for classifier for different settings."""
X = [[1, 2], [3, 4], [5, 6], [1, 1.2]]
y = [0, 1, 2, 0]
voting = VotingClassifier(
estimators=[
("lr", LogisticRegression(random_state=0)),
... | Check get_feature_names_out for classifier for different settings. | test_get_features_names_out_classifier | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_get_features_names_out_classifier_error():
"""Check that error is raised when voting="soft" and flatten_transform=False."""
X = [[1, 2], [3, 4], [5, 6]]
y = [0, 1, 2]
voting = VotingClassifier(
estimators=[
("lr", LogisticRegression(random_state=0)),
("tree", De... | Check that error is raised when voting="soft" and flatten_transform=False. | test_get_features_names_out_classifier_error | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def test_metadata_routing_for_voting_estimators(Estimator, Child, prop):
"""Test that metadata is routed correctly for Voting*."""
X = np.array([[0, 1], [2, 2], [4, 6]])
y = [1, 2, 3]
sample_weight, metadata = [1, 1, 1], "a"
est = Estimator(
[
(
"sub_est1",
... | Test that metadata is routed correctly for Voting*. | test_metadata_routing_for_voting_estimators | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_voting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_voting.py | BSD-3-Clause |
def fit(self, X, y, sample_weight=None):
"""Modification on fit caries data type for later verification."""
super().fit(X, y, sample_weight=sample_weight)
self.data_type_ = type(X)
        return self | Modification on fit carries data type for later verification. | fit | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_weight_boosting.py | BSD-3-Clause
def test_sample_weight_adaboost_regressor():
"""
    AdaBoostRegressor should work without sample_weights in the base estimator.
The random weighted sampling is done internally in the _boost method in
AdaBoostRegressor.
"""
class DummyEstimator(BaseEstimator):
def fit(self, X, y):
... |
    AdaBoostRegressor should work without sample_weights in the base estimator.
The random weighted sampling is done internally in the _boost method in
AdaBoostRegressor.
| test_sample_weight_adaboost_regressor | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_weight_boosting.py | BSD-3-Clause |
def test_multidimensional_X():
"""
    Check that the AdaBoost estimators can work with an n-dimensional
    data matrix.
"""
rng = np.random.RandomState(0)
X = rng.randn(51, 3, 3)
yc = rng.choice([0, 1], 51)
yr = rng.randn(51)
boost = AdaBoostClassifier(DummyClassifier(strategy="most_frequent... |
    Check that the AdaBoost estimators can work with an n-dimensional
    data matrix.
| test_multidimensional_X | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_weight_boosting.py | BSD-3-Clause |
def test_adaboost_numerically_stable_feature_importance_with_small_weights():
"""Check that we don't create NaN feature importance with numerically
    unstable inputs.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/20320
"""
rng = np.random.RandomState(42)
X = rng... | Check that we don't create NaN feature importance with numerically
    unstable inputs.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/20320
| test_adaboost_numerically_stable_feature_importance_with_small_weights | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_weight_boosting.py | BSD-3-Clause |
def test_adaboost_decision_function(global_random_seed):
"""Check that the decision function respects the symmetric constraint for weak
learners.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/26520
"""
n_classes = 3
X, y = datasets.make_classification(
... | Check that the decision function respects the symmetric constraint for weak
learners.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/26520
| test_adaboost_decision_function | python | scikit-learn/scikit-learn | sklearn/ensemble/tests/test_weight_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/tests/test_weight_boosting.py | BSD-3-Clause |
def _find_binning_thresholds(col_data, max_bins):
"""Extract quantiles from a continuous feature.
Missing values are ignored for finding the thresholds.
Parameters
----------
col_data : array-like, shape (n_samples,)
The continuous feature to bin.
    max_bins : int
The maximum numb... | Extract quantiles from a continuous feature.
Missing values are ignored for finding the thresholds.
Parameters
----------
col_data : array-like, shape (n_samples,)
The continuous feature to bin.
    max_bins : int
The maximum number of bins to use for non-missing values. If for a
... | _find_binning_thresholds | python | scikit-learn/scikit-learn | sklearn/ensemble/_hist_gradient_boosting/binning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/binning.py | BSD-3-Clause |
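The quantile extraction described above can be sketched as follows: filter out NaNs, then take the interior percentiles so that `max_bins` bins need `max_bins - 1` thresholds. This is a simplified stand-in; the real helper also short-circuits on few distinct values and uses midpoints between values.

```python
import numpy as np

def find_binning_thresholds(col_data, max_bins):
    """Simplified quantile-based bin thresholds; NaNs are ignored."""
    col = col_data[~np.isnan(col_data)]
    # max_bins bins require max_bins - 1 interior cut points.
    percentiles = np.linspace(0, 100, num=max_bins + 1)[1:-1]
    return np.percentile(col, percentiles)

rng = np.random.RandomState(0)
data = rng.uniform(size=1000)
data[::10] = np.nan  # missing values must not influence the thresholds
thresholds = find_binning_thresholds(data, max_bins=4)
print(thresholds.shape)  # (3,)
```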
def fit(self, X, y=None):
"""Fit data X by computing the binning thresholds.
The last bin is reserved for missing values, whether missing values
are present in the data or not.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The data to... | Fit data X by computing the binning thresholds.
The last bin is reserved for missing values, whether missing values
are present in the data or not.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The data to bin.
y: None
Ign... | fit | python | scikit-learn/scikit-learn | sklearn/ensemble/_hist_gradient_boosting/binning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/binning.py | BSD-3-Clause |
def transform(self, X):
"""Bin data X.
Missing values will be mapped to the last bin.
For categorical features, the mapping will be incorrect for unknown
categories. Since the BinMapper is given known_categories of the
entire training data (i.e. before the call to train_test_sp... | Bin data X.
Missing values will be mapped to the last bin.
For categorical features, the mapping will be incorrect for unknown
categories. Since the BinMapper is given known_categories of the
entire training data (i.e. before the call to train_test_split() in
case of early-stop... | transform | python | scikit-learn/scikit-learn | sklearn/ensemble/_hist_gradient_boosting/binning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/binning.py | BSD-3-Clause |
def make_known_categories_bitsets(self):
"""Create bitsets of known categories.
Returns
-------
- known_cat_bitsets : ndarray of shape (n_categorical_features, 8)
Array of bitsets of known categories, for each categorical feature.
- f_idx_map : ndarray of shape (n_fe... | Create bitsets of known categories.
Returns
-------
- known_cat_bitsets : ndarray of shape (n_categorical_features, 8)
Array of bitsets of known categories, for each categorical feature.
- f_idx_map : ndarray of shape (n_features,)
Map from original feature index... | make_known_categories_bitsets | python | scikit-learn/scikit-learn | sklearn/ensemble/_hist_gradient_boosting/binning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/binning.py | BSD-3-Clause |
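The `(n_categorical_features, 8)` shape above encodes each feature's known categories as a 256-bit bitset: 8 uint32 words, one bit per raw category value in [0, 255]. A sketch of the bit arithmetic involved (helper names are illustrative, not sklearn's internals):

```python
import numpy as np

def set_bit(bitset, category):
    """Mark `category` (an int in [0, 255]) as known in an 8-word uint32 bitset."""
    bitset[category // 32] |= np.uint32(1) << np.uint32(category % 32)

def has_bit(bitset, category):
    return bool((bitset[category // 32] >> np.uint32(category % 32)) & np.uint32(1))

bitset = np.zeros(8, dtype=np.uint32)
for cat in (0, 5, 200):
    set_bit(bitset, cat)
print(has_bit(bitset, 5), has_bit(bitset, 6))  # True False
```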
def _update_leaves_values(loss, grower, y_true, raw_prediction, sample_weight):
"""Update the leaf values to be predicted by the tree.
Update equals:
loss.fit_intercept_only(y_true - raw_prediction)
This is only applied if loss.differentiable is False.
    Note: It only works if the loss is a fun... | Update the leaf values to be predicted by the tree.
Update equals:
loss.fit_intercept_only(y_true - raw_prediction)
This is only applied if loss.differentiable is False.
    Note: It only works if the loss is a function of the residual, as is the
case for AbsoluteError and PinballLoss. Otherwise,... | _update_leaves_values | python | scikit-learn/scikit-learn | sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py | BSD-3-Clause |
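For the absolute error loss, the refit described above (`loss.fit_intercept_only(y_true - raw_prediction)`) amounts to taking the median of the residuals in each leaf, since the median minimizes the sum of absolute deviations:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 10.0])
raw_prediction = np.zeros_like(y_true)
# The constant c minimizing sum(|y - (raw + c)|) is the residual median.
leaf_update = np.median(y_true - raw_prediction)
print(leaf_update)  # 2.0
```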
def _patch_raw_predict(estimator, raw_predictions):
"""Context manager that patches _raw_predict to return raw_predictions.
`raw_predictions` is typically a precomputed array to avoid redundant
    state-wise computations when fitting with early stopping enabled: in this case
`raw_predictions` is incrementally ... | Context manager that patches _raw_predict to return raw_predictions.
`raw_predictions` is typically a precomputed array to avoid redundant
state-wise computations fitting with early stopping enabled: in this case
`raw_predictions` is incrementally updated whenever we add a tree to the
boosted ensemble.... | _patch_raw_predict | python | scikit-learn/scikit-learn | sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py | BSD-3-Clause |
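The patching pattern described above can be sketched with `contextlib.contextmanager`: temporarily swap the method for one that returns the cached array, and restore the original on exit. The `Dummy` class and the lambda signature here are illustrative assumptions, not scikit-learn's actual internals:

```python
from contextlib import contextmanager

@contextmanager
def patch_raw_predict(estimator, raw_predictions):
    """Temporarily replace estimator._raw_predict so it returns the
    precomputed array, restoring the original method on exit."""
    orig = estimator._raw_predict
    estimator._raw_predict = lambda X, n_threads=None: raw_predictions
    try:
        yield estimator
    finally:
        estimator._raw_predict = orig

class Dummy:
    def _raw_predict(self, X, n_threads=None):
        return "expensive recomputation"

est = Dummy()
with patch_raw_predict(est, "cached"):
    print(est._raw_predict(None))  # cached
print(est._raw_predict(None))  # expensive recomputation
```

The `try`/`finally` matters: even if scoring raises, the estimator is left with its real `_raw_predict`.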
func_name: _validate_parameters
language: python
repo: scikit-learn/scikit-learn
path: sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
url: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
license: BSD-3-Clause
code:
    def _validate_parameters(self):
        """Validate parameters passed to __init__.

        The parameters that are directly passed to the grower are checked in
        TreeGrower."""
        if self.monotonic_cst is not None and self.n_trees_per_iteration_ != 1:
            raise ValueError(
                "monotonic...

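The check shown in the truncated body guards against combining monotonic constraints with multiclass boosting, where several trees are grown per iteration and a per-feature constraint on the summed raw predictions is not well defined. A standalone sketch of that guard (function name and message are illustrative):

```python
def validate_parameters(monotonic_cst, n_trees_per_iteration):
    """Reject monotonic constraints for multi-output boosting: multiclass
    models grow n_classes trees per iteration, one per class."""
    if monotonic_cst is not None and n_trees_per_iteration != 1:
        raise ValueError(
            "monotonic constraints are not supported for multiclass "
            "classification."
        )

validate_parameters(None, 3)      # fine: no constraints requested
validate_parameters([1, -1], 1)   # fine: single tree per iteration
```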
func_name: _preprocess_X
language: python
repo: scikit-learn/scikit-learn
path: sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
url: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
license: BSD-3-Clause
code:
    def _preprocess_X(self, X, *, reset):
        """Preprocess and validate X.

        Parameters
        ----------
        X : {array-like, pandas DataFrame} of shape (n_samples, n_features)
            Input data.
        reset : bool
            Whether to reset the `n_features_in_` and `feature_names_in_` attributes.

        Returns
        -------
        X : nd...

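The role of the `reset` flag can be sketched as follows: at fit time (`reset=True`) the expected number of features is recorded, while at predict time (`reset=False`) the incoming data is checked against it. `EstimatorSketch` is a hypothetical stand-in, not the real estimator:

```python
import numpy as np

class EstimatorSketch:
    """Illustrative stand-in for an estimator with fit-time feature tracking."""

    def _preprocess_X(self, X, *, reset):
        # Coerce to a contiguous float array, as the histogram code expects.
        X = np.ascontiguousarray(X, dtype=np.float64)
        if reset:
            # Fit time: record the expected number of features.
            self.n_features_in_ = X.shape[1]
        elif X.shape[1] != self.n_features_in_:
            # Predict time: enforce consistency with what was seen at fit.
            raise ValueError(
                f"X has {X.shape[1]} features, expected {self.n_features_in_}."
            )
        return X

est = EstimatorSketch()
est._preprocess_X([[1.0, 2.0]], reset=True)   # fit: records 2 features
est._preprocess_X([[3.0, 4.0]], reset=False)  # predict: shapes match
```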
func_name: _check_categories
language: python
repo: scikit-learn/scikit-learn
path: sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
url: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
license: BSD-3-Clause
code:
    def _check_categories(self):
        """Check categories found by the preprocessor and return their encoded values.

        Returns a list of length ``self.n_features_in_``, with one entry per
        input feature.
        For non-categorical features, the corresponding entry is ``None``.
        For categorical features, the corresponding entry is an ar...

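The return structure described above (one entry per feature, `None` for numerical ones) can be sketched with a free function; the name and parameters here are illustrative, not the method's real signature:

```python
import numpy as np

def check_categories(n_features, categorical_features, known_categories):
    """Return a list with one entry per feature: None for numerical features,
    and the sorted array of encoded category values for categorical ones."""
    categories = [None] * n_features
    for f_idx, cats in zip(categorical_features, known_categories):
        categories[f_idx] = np.sort(np.asarray(cats))
    return categories

# Three features, of which only feature 1 is categorical:
cats = check_categories(3, [1], [[2, 0, 1]])
print(cats)  # [None, array([0, 1, 2]), None]
```

Keeping `None` placeholders for numerical features lets downstream code index the list by feature position without a separate lookup table.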