| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def test_spline_transformer_periodic_splines_smoothness(degree):
"""Test that spline transformation is smooth at first / last knot."""
X = np.linspace(-2, 10, 10_000)[:, None]
transformer = SplineTransformer(
degree=degree,
extrapolation="periodic",
knots=[[0.0], [1.0], [3.0], [4.0]... | Test that spline transformation is smooth at first / last knot. | test_spline_transformer_periodic_splines_smoothness | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def test_spline_transformer_extrapolation(bias, intercept, degree):
"""Test that B-spline extrapolation works correctly."""
# we use a straight line for that
X = np.linspace(-1, 1, 100)[:, None]
y = X.squeeze()
# 'constant'
pipe = Pipeline(
[
[
"spline",
... | Test that B-spline extrapolation works correctly. | test_spline_transformer_extrapolation | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def test_spline_transformer_kbindiscretizer(global_random_seed):
"""Test that a B-spline of degree=0 is equivalent to KBinsDiscretizer."""
rng = np.random.RandomState(global_random_seed)
X = rng.randn(200).reshape(200, 1)
n_bins = 5
n_knots = n_bins + 1
splt = SplineTransformer(
n_knots... | Test that a B-spline of degree=0 is equivalent to KBinsDiscretizer. | test_spline_transformer_kbindiscretizer | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
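The equivalence this row tests can be sketched without scikit-learn: a degree-0 B-spline basis is just an indicator of which bin a sample falls into, which is exactly one-hot bin encoding. A minimal numpy sketch, where `onehot_bins` is a hypothetical helper (not the scikit-learn API) mimicking `KBinsDiscretizer(encode="onehot-dense", strategy="quantile")`:

```python
import numpy as np

def onehot_bins(X, n_bins):
    """One-hot encode which of `n_bins` quantile bins each value falls in.

    A degree-0 B-spline basis built on the same knots yields the same
    matrix: each basis function is 1 on its own bin and 0 elsewhere.
    """
    x = np.asarray(X).ravel()
    # interior bin edges at quantiles, like KBinsDiscretizer(strategy="quantile")
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))[1:-1]
    idx = np.digitize(x, edges)               # bin index in [0, n_bins)
    onehot = np.zeros((x.size, n_bins))
    onehot[np.arange(x.size), idx] = 1.0
    return onehot

rng = np.random.RandomState(0)
X = rng.randn(200).reshape(200, 1)
B = onehot_bins(X, n_bins=5)                  # shape (200, 5), one 1 per row
```

Each row of `B` has exactly one nonzero entry, which is what makes the comparison against `SplineTransformer(degree=0)` well defined.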
def test_spline_transformer_n_features_out(
n_knots, include_bias, degree, extrapolation, sparse_output
):
"""Test that transform results in n_features_out_ features."""
splt = SplineTransformer(
n_knots=n_knots,
degree=degree,
include_bias=include_bias,
extrapolation=extrapo... | Test that transform results in n_features_out_ features. | test_spline_transformer_n_features_out | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def test_polynomial_features_input_validation(params, err_msg):
"""Test that we raise errors for invalid input in PolynomialFeatures."""
X = [[1], [2]]
with pytest.raises(ValueError, match=err_msg):
PolynomialFeatures(**params).fit(X) | Test that we raise errors for invalid input in PolynomialFeatures. | test_polynomial_features_input_validation | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def test_polynomial_features_one_feature(
single_feature_degree3,
degree,
include_bias,
interaction_only,
indices,
X_container,
):
"""Test PolynomialFeatures on single feature up to degree 3."""
X, P = single_feature_degree3
if X_container is not None:
X = X_container(X)
... | Test PolynomialFeatures on single feature up to degree 3. | test_polynomial_features_one_feature | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def test_polynomial_features_two_features(
two_features_degree3,
degree,
include_bias,
interaction_only,
indices,
X_container,
):
"""Test PolynomialFeatures on 2 features up to degree 3."""
X, P = two_features_degree3
if X_container is not None:
X = X_container(X)
tf = Po... | Test PolynomialFeatures on 2 features up to degree 3. | test_polynomial_features_two_features | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
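The expansion these rows exercise can be reproduced by hand. A hedged sketch (the helper `poly_features` is illustrative, not the scikit-learn implementation) that follows the same column ordering PolynomialFeatures uses for dense input:

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

def poly_features(X, degree, include_bias=True, interaction_only=False):
    """Expand the columns of X into all monomials up to `degree`.

    For two features (a, b) and degree=2 this yields the columns
    [1, a, b, a^2, a*b, b^2] (bias first, then degree-major order).
    """
    X = np.asarray(X, dtype=float)
    n_samples, n_features = X.shape
    cols = []
    for d in range(0 if include_bias else 1, degree + 1):
        combos = (combinations(range(n_features), d) if interaction_only
                  else combinations_with_replacement(range(n_features), d))
        for combo in combos:
            if combo:
                cols.append(X[:, list(combo)].prod(axis=1))
            else:
                cols.append(np.ones(n_samples))   # bias column
    return np.column_stack(cols)

X = np.array([[2.0, 3.0]])
P = poly_features(X, degree=2)      # columns [1, a, b, a^2, a*b, b^2]
```

With `interaction_only=True` the repeated-index combinations are dropped, leaving only `[1, a, b, a*b]`.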
def test_csr_polynomial_expansion_index_overflow_non_regression(
interaction_only, include_bias, csr_container
):
"""Check the automatic index dtype promotion to `np.int64` when needed.
This ensures that sufficiently large input configurations get
properly promoted to use `np.int64` for index and indpt... | Check the automatic index dtype promotion to `np.int64` when needed.
This ensures that sufficiently large input configurations get
properly promoted to use `np.int64` for index and indptr representation
while preserving data integrity. Non-regression test for gh-16803.
Note that this is only possible ... | test_csr_polynomial_expansion_index_overflow_non_regression | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def test_csr_polynomial_expansion_index_overflow(
degree, n_features, interaction_only, include_bias, csr_container
):
"""Tests known edge-cases to the dtype promotion strategy and custom
Cython code, including a current bug in the upstream
`scipy.sparse.hstack`.
"""
data = [1.0]
# Use int32... | Tests known edge cases of the dtype promotion strategy and the custom
Cython code, including a current bug in the upstream
`scipy.sparse.hstack`.
| test_csr_polynomial_expansion_index_overflow | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def test_polynomial_features_behaviour_on_zero_degree(sparse_container):
"""Check that PolynomialFeatures raises error when degree=0 and include_bias=False,
and output a single constant column when include_bias=True
"""
X = np.ones((10, 2))
poly = PolynomialFeatures(degree=0, include_bias=False)
... | Check that PolynomialFeatures raises an error when degree=0 and include_bias=False,
and outputs a single constant column when include_bias=True
| test_polynomial_features_behaviour_on_zero_degree | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_polynomial.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_polynomial.py | BSD-3-Clause |
def _encode_target(X_ordinal, y_numeric, n_categories, smooth):
"""Simple Python implementation of target encoding."""
cur_encodings = np.zeros(n_categories, dtype=np.float64)
y_mean = np.mean(y_numeric)
if smooth == "auto":
y_variance = np.var(y_numeric)
for c in range(n_categories):
... | Simple Python implementation of target encoding. | _encode_target | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def test_encoding(categories, unknown_value, global_random_seed, smooth, target_type):
"""Check encoding for binary and continuous targets.
Compare the values returned by `TargetEncoder.fit_transform` against the
expected encodings for cv splits from a naive reference Python
implementation in _encode_t... | Check encoding for binary and continuous targets.
Compare the values returned by `TargetEncoder.fit_transform` against the
expected encodings for cv splits from a naive reference Python
implementation in _encode_target.
| test_encoding | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def test_custom_categories(X, categories, smooth):
"""Custom categories with unknown categories that are not in training data."""
rng = np.random.RandomState(0)
y = rng.uniform(low=-10, high=20, size=X.shape[0])
enc = TargetEncoder(categories=categories, smooth=smooth, random_state=0).fit(X, y)
# T... | Custom categories with unknown categories that are not in training data. | test_custom_categories | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def test_use_regression_target():
"""Check inferred and specified `target_type` on regression target."""
X = np.array([[0, 1, 0, 1, 0, 1]]).T
y = np.array([1.0, 2.0, 3.0, 2.0, 3.0, 4.0])
enc = TargetEncoder(cv=2)
with pytest.warns(
UserWarning,
match=re.escape(
"The leas... | Check inferred and specified `target_type` on regression target. | test_use_regression_target | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def test_multiple_features_quick(to_pandas, smooth, target_type):
"""Check target encoder with multiple features."""
X_ordinal = np.array(
[[1, 1], [0, 1], [1, 1], [2, 1], [1, 0], [0, 1], [1, 0], [0, 0]], dtype=np.int64
)
if target_type == "binary-str":
y_train = np.array(["a", "b", "a",... | Check target encoder with multiple features. | test_multiple_features_quick | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def test_constant_target_and_feature(y, y_mean, smooth):
"""Check edge case where feature and target is constant."""
X = np.array([[1] * 20]).T
n_samples = X.shape[0]
enc = TargetEncoder(cv=2, smooth=smooth, random_state=0)
X_trans = enc.fit_transform(X, y)
assert_allclose(X_trans, np.repeat([[... | Check edge case where the feature and target are constant. | test_constant_target_and_feature | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def test_smooth_zero():
"""Check edge case with zero smoothing and cv does not contain category."""
X = np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1]]).T
y = np.array([2.1, 4.3, 1.2, 3.1, 1.0, 9.0, 10.3, 14.2, 13.3, 15.0])
enc = TargetEncoder(smooth=0.0, shuffle=False, cv=2)
X_trans = enc.fit_transform(... | Check edge case with zero smoothing when a cv split does not contain a category. | test_smooth_zero | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def test_pandas_copy_on_write():
"""
Test target-encoder cython code when y is read-only.
The numpy array underlying df["y"] is read-only when copy-on-write is enabled.
Non-regression test for gh-27879.
"""
pd = pytest.importorskip("pandas", minversion="2.0")
with pd.option_context("mode.co... |
Test target-encoder cython code when y is read-only.
The numpy array underlying df["y"] is read-only when copy-on-write is enabled.
Non-regression test for gh-27879.
| test_pandas_copy_on_write | python | scikit-learn/scikit-learn | sklearn/preprocessing/tests/test_target_encoder.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/tests/test_target_encoder.py | BSD-3-Clause |
def predict(self, X):
"""Perform inductive inference across the model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The data matrix.
Returns
-------
y : ndarray of shape (n_samples,)
Predictions for input data.
... | Perform inductive inference across the model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The data matrix.
Returns
-------
y : ndarray of shape (n_samples,)
Predictions for input data.
| predict | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_label_propagation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_label_propagation.py | BSD-3-Clause |
def predict_proba(self, X):
"""Predict probability for each possible outcome.
Compute the probability estimates for each single sample in X
and each possible outcome seen during training (categorical
distribution).
Parameters
----------
X : array-like of shape (... | Predict probability for each possible outcome.
Compute the probability estimates for each single sample in X
and each possible outcome seen during training (categorical
distribution).
Parameters
----------
X : array-like of shape (n_samples, n_features)
The ... | predict_proba | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_label_propagation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_label_propagation.py | BSD-3-Clause |
def fit(self, X, y):
"""Fit a semi-supervised label propagation model to X.
The input samples (labeled and unlabeled) are provided by matrix X,
and target labels are provided by matrix y. We conventionally apply the
label -1 to unlabeled samples in matrix y in a semi-supervised
... | Fit a semi-supervised label propagation model to X.
The input samples (labeled and unlabeled) are provided by matrix X,
and target labels are provided by matrix y. We conventionally apply the
label -1 to unlabeled samples in matrix y in a semi-supervised
classification.
Paramet... | fit | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_label_propagation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_label_propagation.py | BSD-3-Clause |
def _build_graph(self):
"""Matrix representing a fully connected graph between each sample
This basic implementation creates a non-stochastic affinity matrix, so
class distributions will exceed 1 (normalization may be desired).
"""
if self.kernel == "knn":
self.nn_fi... | Matrix representing a fully connected graph over all samples.
This basic implementation creates a non-stochastic affinity matrix, so
class distributions will exceed 1 (normalization may be desired).
| _build_graph | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_label_propagation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_label_propagation.py | BSD-3-Clause |
def _build_graph(self):
"""Graph matrix for Label Spreading computes the graph laplacian"""
# compute affinity matrix (or gram matrix)
if self.kernel == "knn":
self.nn_fit = None
n_samples = self.X_.shape[0]
affinity_matrix = self._get_kernel(self.X_)
laplacia... | Graph matrix for Label Spreading: compute the graph Laplacian. | _build_graph | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_label_propagation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_label_propagation.py | BSD-3-Clause |
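The normalized-graph iteration behind Label Spreading can be sketched in a few lines of numpy: symmetrically normalize the affinity matrix, then repeatedly blend propagated labels with the clamped initial labels. This is a hedged illustration of the standard algorithm (Zhou et al.), not the scikit-learn implementation; `label_spreading` is a hypothetical helper:

```python
import numpy as np

def label_spreading(W, Y, alpha=0.2, n_iter=200):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y, where
    S = D^{-1/2} W D^{-1/2} is the symmetrically normalized affinity
    (I - S being the normalized graph Laplacian).

    W : (n, n) symmetric affinity matrix
    Y : (n, n_classes) one-hot labels; all-zero rows mark unlabeled points
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# two clusters on a line; one labeled point per cluster
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
W = np.exp(-(X[:, None] - X[None, :]) ** 2)   # RBF affinity, gamma=1
Y = np.zeros((6, 2))
Y[0, 0] = 1.0   # sample 0 labeled class 0
Y[3, 1] = 1.0   # sample 3 labeled class 1
labels = label_spreading(W, Y)
```

Labels diffuse within each tight cluster while the `(1 - alpha)` term keeps the labeled points anchored to their given classes.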
def _get_estimator(self):
"""Get the estimator.
Returns
-------
estimator_ : estimator object
The cloned estimator object.
"""
# TODO(1.8): remove and only keep clone(self.estimator)
if self.estimator is None and self.base_estimator != "deprecated":
... | Get the estimator.
Returns
-------
estimator_ : estimator object
The cloned estimator object.
| _get_estimator | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_self_training.py | BSD-3-Clause |
def fit(self, X, y, **params):
"""
Fit self-training classifier using `X`, `y` as training data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
y : {array-like, sparse matrix} of shape (n_s... |
Fit self-training classifier using `X`, `y` as training data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
y : {array-like, sparse matrix} of shape (n_samples,)
Array representing th... | fit | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_self_training.py | BSD-3-Clause |
def predict(self, X, **params):
"""Predict the classes of `X`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameters to pass to the underlying estim... | Predict the classes of `X`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameters to pass to the underlying estimator's ``predict`` method.
.. ... | predict | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_self_training.py | BSD-3-Clause |
def predict_proba(self, X, **params):
"""Predict probability for each possible outcome.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameters to pas... | Predict probability for each possible outcome.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameters to pass to the underlying estimator's
``pre... | predict_proba | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_self_training.py | BSD-3-Clause |
def decision_function(self, X, **params):
"""Call decision function of the `estimator`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameters to pas... | Call decision function of the `estimator`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameters to pass to the underlying estimator's
``decisio... | decision_function | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_self_training.py | BSD-3-Clause |
def predict_log_proba(self, X, **params):
"""Predict log probability for each possible outcome.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameter... | Predict log probability for each possible outcome.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
**params : dict of str -> object
Parameters to pass to the underlying estimator's
`... | predict_log_proba | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_self_training.py | BSD-3-Clause |
def score(self, X, y, **params):
"""Call score on the `estimator`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
y : array-like of shape (n_samples,)
Array representing the labels.
... | Call score on the `estimator`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
y : array-like of shape (n_samples,)
Array representing the labels.
**params : dict of str -> object
... | score | python | scikit-learn/scikit-learn | sklearn/semi_supervised/_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/_self_training.py | BSD-3-Clause |
def test_self_training_estimator_attribute_error():
"""Check that we raise the proper AttributeErrors when the `estimator`
does not implement the `predict_proba` method, which is called from within
`fit`, or `decision_function`, which is decorated with `available_if`.
Non-regression test for:
https... | Check that we raise the proper AttributeErrors when the `estimator`
does not implement the `predict_proba` method, which is called from within
`fit`, or `decision_function`, which is decorated with `available_if`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/28108
| test_self_training_estimator_attribute_error | python | scikit-learn/scikit-learn | sklearn/semi_supervised/tests/test_self_training.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/semi_supervised/tests/test_self_training.py | BSD-3-Clause |
def _one_vs_one_coef(dual_coef, n_support, support_vectors):
"""Generate primal coefficients from dual coefficients
for the one-vs-one multi class LibSVM in the case
of a linear kernel."""
# get 1vs1 weights for all n*(n-1) classifiers.
# this is somewhat messy.
# shape of dual_coef_ is nSV * (... | Generate primal coefficients from dual coefficients
for the one-vs-one multi class LibSVM in the case
of a linear kernel. | _one_vs_one_coef | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def fit(self, X, y, sample_weight=None):
"""Fit the SVM model according to the given training data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) \
or (n_samples, n_samples)
Training vectors, where `n_samples` is the n... | Fit the SVM model according to the given training data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
Training vectors, where `n_samples` is the number of samples
and `n_features` is the n... | fit | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def predict(self, X):
"""Perform regression on samples in X.
For a one-class model, +1 (inlier) or -1 (outlier) is returned.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
For kernel="precomputed", the expected shape of X is
... | Perform regression on samples in X.
For a one-class model, +1 (inlier) or -1 (outlier) is returned.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
For kernel="precomputed", the expected shape of X is
(n_samples_test, n_sa... | predict | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def _compute_kernel(self, X):
"""Return the data transformed by a callable kernel"""
if callable(self.kernel):
# in the case of precomputed kernel given as a function, we
# have to compute explicitly the kernel matrix
kernel = self.kernel(X, self.__Xfit)
i... | Return the data transformed by a callable kernel | _compute_kernel | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
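When `kernel` is a callable, the estimator explicitly evaluates it between the new samples and the training data stored at fit time, producing the same kind of Gram matrix that `kernel="precomputed"` expects. A minimal sketch of that computation for an RBF kernel (`rbf_gram` is a hypothetical helper, not part of the library):

```python
import numpy as np

def rbf_gram(X, X_fit, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - X_fit[j]||^2).

    This is what a callable RBF kernel would return; the estimator then
    consumes it like kernel="precomputed" input of shape
    (n_samples_test, n_samples_train).
    """
    sq_dists = ((X[:, None, :] - X_fit[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

X_fit = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_gram(X_fit, X_fit)    # symmetric, ones on the diagonal
```

At predict time the first argument would be the test samples while `X_fit` stays fixed, which is why the kernel matrix is generally rectangular.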
def _decision_function(self, X):
"""Evaluates the decision function for the samples in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
X : array-like of shape (n_samples, n_class * (n_class-1) / 2)
Returns the... | Evaluates the decision function for the samples in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
X : array-like of shape (n_samples, n_class * (n_class-1) / 2)
Returns the decision function of the sample for each cl... | _decision_function | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def coef_(self):
"""Weights assigned to the features when `kernel="linear"`.
Returns
-------
ndarray of shape (n_features, n_classes)
"""
if self.kernel != "linear":
raise AttributeError("coef_ is only available when using a linear kernel")
coef = se... | Weights assigned to the features when `kernel="linear"`.
Returns
-------
ndarray of shape (n_features, n_classes)
| coef_ | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def n_support_(self):
"""Number of support vectors for each class."""
try:
check_is_fitted(self)
except NotFittedError:
raise AttributeError
svm_type = LIBSVM_IMPL.index(self._impl)
if svm_type in (0, 1):
return self._n_support
else:
... | Number of support vectors for each class. | n_support_ | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def decision_function(self, X):
"""Evaluate the decision function for the samples in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The input samples.
Returns
-------
X : ndarray of shape (n_samples, n_classes * (n_classes-1... | Evaluate the decision function for the samples in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The input samples.
Returns
-------
X : ndarray of shape (n_samples, n_classes * (n_classes-1) / 2)
Returns the decision fun... | decision_function | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
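The `n_classes * (n_classes - 1) / 2` columns returned here are one-vs-one pairwise scores; turning them into a per-class prediction amounts to counting pairwise votes. A hedged sketch of that aggregation (`ovo_votes` is a hypothetical helper, and the sign convention — positive favors the first class of each pair — is an assumption that may differ from libsvm's internals):

```python
import numpy as np
from itertools import combinations

def ovo_votes(decisions, n_classes):
    """Aggregate one-vs-one decision values into per-class vote counts.

    `decisions` has one column per class pair (i, j) in lexicographic
    order; here a positive value is read as a vote for class i and a
    negative one as a vote for class j.
    """
    pairs = list(combinations(range(n_classes), 2))
    assert decisions.shape[1] == len(pairs)
    votes = np.zeros((decisions.shape[0], n_classes))
    rows = np.arange(decisions.shape[0])
    for k, (i, j) in enumerate(pairs):
        votes[rows, np.where(decisions[:, k] > 0, i, j)] += 1
    return votes

# 3 classes -> 3 pairwise columns: (0,1), (0,2), (1,2)
dec = np.array([[+1.0, +1.0, -1.0]])   # beats class 1, beats class 2
votes = ovo_votes(dec, n_classes=3)
```

The predicted class is then `votes.argmax(axis=1)`; `decision_function_shape="ovr"` additionally folds the raw confidences in to break vote ties.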
def predict(self, X):
"""Perform classification on samples in X.
For a one-class model, +1 or -1 is returned.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) or \
(n_samples_test, n_samples_train)
For kernel="p... | Perform classification on samples in X.
For a one-class model, +1 or -1 is returned.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel="precomputed", the expected shape of ... | predict | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def predict_proba(self, X):
"""Compute probabilities of possible outcomes for samples in X.
The model needs to have probability information computed at training
time: fit with attribute `probability` set to True.
Parameters
----------
X : array-like of shape (n_samples,... | Compute probabilities of possible outcomes for samples in X.
The model needs to have probability information computed at training
time: fit with attribute `probability` set to True.
Parameters
----------
X : array-like of shape (n_samples, n_features)
For kernel="pr... | predict_proba | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def _get_liblinear_solver_type(multi_class, penalty, loss, dual):
"""Find the liblinear magic number for the solver.
This number depends on the values of the following attributes:
- multi_class
- penalty
- loss
- dual
The same number is also internally used by LibLinear to determin... | Find the liblinear magic number for the solver.
This number depends on the values of the following attributes:
- multi_class
- penalty
- loss
- dual
The same number is also internally used by LibLinear to determine
which solver to use.
| _get_liblinear_solver_type | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def _fit_liblinear(
X,
y,
C,
fit_intercept,
intercept_scaling,
class_weight,
penalty,
dual,
verbose,
max_iter,
tol,
random_state=None,
multi_class="ovr",
loss="logistic_regression",
epsilon=0.1,
sample_weight=None,
):
"""Used by Logistic Regression (an... | Used by Logistic Regression (and CV) and LinearSVC/LinearSVR.
Preprocessing is done in this function before supplying it to liblinear.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples and
... | _fit_liblinear | python | scikit-learn/scikit-learn | sklearn/svm/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_base.py | BSD-3-Clause |
def l1_min_c(X, y, *, loss="squared_hinge", fit_intercept=True, intercept_scaling=1.0):
"""Return the lowest bound for `C`.
The lower bound for `C` is computed such that for `C` in `(l1_min_C, infinity)`
the model is guaranteed not to be empty. This applies to l1 penalized
classifiers, such as :class:`... | Return the lowest bound for `C`.
The lower bound for `C` is computed such that for `C` in `(l1_min_C, infinity)`
the model is guaranteed not to be empty. This applies to l1 penalized
classifiers, such as :class:`sklearn.svm.LinearSVC` with penalty='l1' and
:class:`sklearn.linear_model.LogisticRegressio... | l1_min_c | python | scikit-learn/scikit-learn | sklearn/svm/_bounds.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_bounds.py | BSD-3-Clause |
def _validate_dual_parameter(dual, loss, penalty, multi_class, X):
"""Helper function to assign the value of dual parameter."""
if dual == "auto":
if X.shape[0] < X.shape[1]:
try:
_get_liblinear_solver_type(multi_class, penalty, loss, True)
return True
... | Helper function to assign the value of dual parameter. | _validate_dual_parameter | python | scikit-learn/scikit-learn | sklearn/svm/_classes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_classes.py | BSD-3-Clause |
def fit(self, X, y, sample_weight=None):
"""Fit the model according to the given training data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples and
`n_features` is ... | Fit the model according to the given training data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : array-like of s... | fit | python | scikit-learn/scikit-learn | sklearn/svm/_classes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_classes.py | BSD-3-Clause |
def fit(self, X, y=None, sample_weight=None):
"""Detect the soft boundary of the set of samples X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Set of samples, where `n_samples` is the number of samples and
`n_features` i... | Detect the soft boundary of the set of samples X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Set of samples, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : Ignored
... | fit | python | scikit-learn/scikit-learn | sklearn/svm/_classes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_classes.py | BSD-3-Clause |
def predict(self, X):
"""Perform classification on samples in X.
For a one-class model, +1 or -1 is returned.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) or \
(n_samples_test, n_samples_train)
For kernel="pr... | Perform classification on samples in X.
For a one-class model, +1 or -1 is returned.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel="precomputed", the expected shape of X... | predict | python | scikit-learn/scikit-learn | sklearn/svm/_classes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/_classes.py | BSD-3-Clause |
def test_newrand_default():
"""Test that bounded_rand_int_wrap without seeding respects the range
Note this test should pass either if executed alone, or in conjunctions
with other tests that call set_seed explicit in any order: it checks
invariants on the RNG instead of specific values.
"""
ge... | Test that bounded_rand_int_wrap without seeding respects the range
Note this test should pass either if executed alone, or in conjunctions
with other tests that call set_seed explicit in any order: it checks
invariants on the RNG instead of specific values.
| test_newrand_default | python | scikit-learn/scikit-learn | sklearn/svm/tests/test_bounds.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/tests/test_bounds.py | BSD-3-Clause |
def test_svc(X_train, y_train, X_test, kernel, sparse_container):
"""Check that sparse SVC gives the same result as SVC."""
X_train = sparse_container(X_train)
clf = svm.SVC(
gamma=1,
kernel=kernel,
probability=True,
random_state=0,
decision_function_shape="ovo",
... | Check that sparse SVC gives the same result as SVC. | test_svc | python | scikit-learn/scikit-learn | sklearn/svm/tests/test_sparse.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/tests/test_sparse.py | BSD-3-Clause |
def test_svc_ovr_tie_breaking(SVCClass):
"""Test if predict breaks ties in OVR mode.
Related issue: https://github.com/scikit-learn/scikit-learn/issues/8277
"""
if SVCClass.__name__ == "NuSVC" and _IS_32BIT:
# XXX: known failure to be investigated. Either the code needs to be
# fixed or ... | Test if predict breaks ties in OVR mode.
Related issue: https://github.com/scikit-learn/scikit-learn/issues/8277
| test_svc_ovr_tie_breaking | python | scikit-learn/scikit-learn | sklearn/svm/tests/test_svm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/tests/test_svm.py | BSD-3-Clause |
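The tie-breaking behaviour this test exercises can be illustrated with a minimal sketch: when two classes receive the same number of pairwise votes, summed decision-function confidences decide the winner. The values below are hypothetical and this is not scikit-learn's actual implementation.

```python
# Minimal sketch of vote tie breaking in a one-vs-one aggregation:
# pick the class maximising (vote count, summed confidence), so that
# confidences only matter when vote counts are tied.
def break_ties(votes, confidences):
    return max(range(len(votes)), key=lambda c: (votes[c], confidences[c]))

votes = [2, 2, 1]                 # classes 0 and 1 are tied on votes
confidences = [0.3, 0.9, -0.5]    # hypothetical summed decision values
print(break_ties(votes, confidences))  # class 1 wins the tie
```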
def test_custom_kernel_not_array_input(Estimator):
"""Test using a custom kernel that is not fed with array-like for floats"""
data = ["A A", "A", "B", "B B", "A B"]
X = np.array([[2, 0], [1, 0], [0, 1], [0, 2], [1, 1]]) # count encoding
y = np.array([1, 1, 2, 2, 1])
def string_kernel(X1, X2):
... | Test using a custom kernel that is not fed with array-like for floats | test_custom_kernel_not_array_input | python | scikit-learn/scikit-learn | sklearn/svm/tests/test_svm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/tests/test_svm.py | BSD-3-Clause |
def test_svc_raises_error_internal_representation():
"""Check that SVC raises error when internal representation is altered.
Non-regression test for #18891 and https://nvd.nist.gov/vuln/detail/CVE-2020-28975
"""
clf = svm.SVC(kernel="linear").fit(X, Y)
clf._n_support[0] = 1000000
msg = "The in... | Check that SVC raises error when internal representation is altered.
Non-regression test for #18891 and https://nvd.nist.gov/vuln/detail/CVE-2020-28975
| test_svc_raises_error_internal_representation | python | scikit-learn/scikit-learn | sklearn/svm/tests/test_svm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/tests/test_svm.py | BSD-3-Clause |
def test_svm_with_infinite_C(Estimator, make_dataset, C_inf, global_random_seed):
"""Check that we can pass `C=inf` that is equivalent to a very large C value.
Non-regression test for
https://github.com/scikit-learn/scikit-learn/issues/29772
"""
X, y = make_dataset(random_state=global_random_seed)
... | Check that we can pass `C=inf` that is equivalent to a very large C value.
Non-regression test for
https://github.com/scikit-learn/scikit-learn/issues/29772
| test_svm_with_infinite_C | python | scikit-learn/scikit-learn | sklearn/svm/tests/test_svm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/tests/test_svm.py | BSD-3-Clause |
def record_metadata(obj, record_default=True, **kwargs):
"""Utility function to store passed metadata to a method of obj.
If record_default is False, kwargs whose values are "default" are skipped.
This is so that checks on keyword arguments whose default was not changed
are skipped.
"""
stack ... | Utility function to store passed metadata to a method of obj.
If record_default is False, kwargs whose values are "default" are skipped.
This is so that checks on keyword arguments whose default was not changed
are skipped.
| record_metadata | python | scikit-learn/scikit-learn | sklearn/tests/metadata_routing_common.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/metadata_routing_common.py | BSD-3-Clause |
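The idea behind `record_metadata` can be sketched in a few lines: store the kwargs a method received, optionally skipping the ones left at the sentinel value `"default"`. This is a hypothetical re-implementation for illustration, not the helper's actual code.

```python
# Store the metadata a method received in a dict keyed by method name.
# With record_default=False, kwargs still at "default" are dropped so
# later checks can ignore unchanged keyword arguments.
def record_metadata(store, method, record_default=True, **kwargs):
    if not record_default:
        kwargs = {k: v for k, v in kwargs.items() if v != "default"}
    store[method] = kwargs

store = {}
record_metadata(store, "fit", record_default=False,
                sample_weight=[1, 2], groups="default")
print(store)  # only the explicitly-set kwarg survives
```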
def check_recorded_metadata(obj, method, parent, split_params=tuple(), **kwargs):
"""Check whether the expected metadata is passed to the object's method.
Parameters
----------
obj : estimator object
sub-estimator to check routed params for
method : str
sub-estimator's method where ... | Check whether the expected metadata is passed to the object's method.
Parameters
----------
obj : estimator object
sub-estimator to check routed params for
method : str
sub-estimator's method where metadata is routed to, or otherwise in
the context of metadata routing referred t... | check_recorded_metadata | python | scikit-learn/scikit-learn | sklearn/tests/metadata_routing_common.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/metadata_routing_common.py | BSD-3-Clause |
def assert_request_is_empty(metadata_request, exclude=None):
"""Check if a metadata request dict is empty.
One can exclude a method or a list of methods from the check using the
``exclude`` parameter. If metadata_request is a MetadataRouter, then
``exclude`` can be of the form ``{"object" : [method, ..... | Check if a metadata request dict is empty.
One can exclude a method or a list of methods from the check using the
``exclude`` parameter. If metadata_request is a MetadataRouter, then
``exclude`` can be of the form ``{"object" : [method, ...]}``.
| assert_request_is_empty | python | scikit-learn/scikit-learn | sklearn/tests/metadata_routing_common.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/metadata_routing_common.py | BSD-3-Clause |
def test_clone_protocol():
"""Checks that clone works with `__sklearn_clone__` protocol."""
class FrozenEstimator(BaseEstimator):
def __init__(self, fitted_estimator):
self.fitted_estimator = fitted_estimator
def __getattr__(self, name):
return getattr(self.fitted_estim... | Checks that clone works with `__sklearn_clone__` protocol. | test_clone_protocol | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
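The dispatch that the `__sklearn_clone__` protocol enables can be sketched as follows. This is a simplified stand-in for `sklearn.clone` (the real function also re-creates the estimator from its constructor parameters); only the hook lookup is shown.

```python
# A minimal clone() that defers to the object's own __sklearn_clone__
# hook when one is defined, falling back to plain re-construction.
def clone(est):
    hook = getattr(type(est), "__sklearn_clone__", None)
    if hook is not None:
        return hook(est)
    return type(est)()

class Frozen:
    def __sklearn_clone__(self):
        return self  # a frozen estimator clones to itself

f = Frozen()
print(clone(f) is f)
```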
def test_n_features_in_validation():
"""Check that `_check_n_features` validates data when reset=False"""
est = MyEstimator()
X_train = [[1, 2, 3], [4, 5, 6]]
_check_n_features(est, X_train, reset=True)
assert est.n_features_in_ == 3
msg = "X does not contain any features, but MyEstimator is e... | Check that `_check_n_features` validates data when reset=False | test_n_features_in_validation | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_n_features_in_no_validation():
"""Check that `_check_n_features` does not validate data when
n_features_in_ is not defined."""
est = MyEstimator()
_check_n_features(est, "invalid X", reset=True)
assert not hasattr(est, "n_features_in_")
# does not raise
_check_n_features(est, "inv... | Check that `_check_n_features` does not validate data when
n_features_in_ is not defined. | test_n_features_in_no_validation | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_clone_keeps_output_config():
"""Check that clone keeps the set_output config."""
ss = StandardScaler().set_output(transform="pandas")
config = _get_output_config("transform", ss)
ss_clone = clone(ss)
config_clone = _get_output_config("transform", ss_clone)
assert config == config_clon... | Check that clone keeps the set_output config. | test_clone_keeps_output_config | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_estimator_empty_instance_dict(estimator):
"""Check that ``__getstate__`` returns an empty ``dict`` with an empty
instance.
Python 3.11+ changed behaviour by returning ``None`` instead of raising an
``AttributeError``. Non-regression test for gh-25188.
"""
state = estimator.__getstate__... | Check that ``__getstate__`` returns an empty ``dict`` with an empty
instance.
Python 3.11+ changed behaviour by returning ``None`` instead of raising an
``AttributeError``. Non-regression test for gh-25188.
| test_estimator_empty_instance_dict | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_estimator_getstate_using_slots_error_message():
"""Using a `BaseEstimator` with `__slots__` is not supported."""
class WithSlots:
__slots__ = ("x",)
class Estimator(BaseEstimator, WithSlots):
pass
msg = (
"You cannot use `__slots__` in objects inheriting from "
... | Using a `BaseEstimator` with `__slots__` is not supported. | test_estimator_getstate_using_slots_error_message | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_dataframe_protocol(constructor_name, minversion):
"""Uses the dataframe exchange protocol to get feature names."""
data = [[1, 4, 2], [3, 3, 6]]
columns = ["col_0", "col_1", "col_2"]
df = _convert_container(
data, constructor_name, columns_name=columns, minversion=minversion
)
... | Uses the dataframe exchange protocol to get feature names. | test_dataframe_protocol | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
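The dataframe interchange protocol used by this test can be duck-typed without importing pandas or polars: any object exposing `__dataframe__()` whose exchange object offers `column_names()` works. `TinyFrame` below is a made-up stand-in for illustration.

```python
# Duck-typed sketch of reading feature names through the dataframe
# interchange protocol.
class TinyFrame:
    def __init__(self, columns):
        self._columns = list(columns)
    def __dataframe__(self):
        return self  # serve as our own exchange object for simplicity
    def column_names(self):
        return self._columns

def get_feature_names(X):
    if hasattr(X, "__dataframe__"):
        return list(X.__dataframe__().column_names())
    return None  # plain arrays carry no feature names

print(get_feature_names(TinyFrame(["col_0", "col_1", "col_2"])))
```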
def test_transformer_fit_transform_with_metadata_in_transform():
"""Test that having a transformer with metadata for transform raises a
warning when calling fit_transform."""
class CustomTransformer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None, prop=None):
return self
... | Test that having a transformer with metadata for transform raises a
warning when calling fit_transform. | test_transformer_fit_transform_with_metadata_in_transform | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_outlier_mixin_fit_predict_with_metadata_in_predict():
"""Test that having an OutlierMixin with metadata for predict raises a
warning when calling fit_predict."""
class CustomOutlierDetector(BaseEstimator, OutlierMixin):
def fit(self, X, y=None, prop=None):
return self
... | Test that having an OutlierMixin with metadata for predict raises a
warning when calling fit_predict. | test_outlier_mixin_fit_predict_with_metadata_in_predict | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_get_params_html():
"""Check the behaviour of the `_get_params_html` method."""
est = MyEstimator(empty="test")
assert est._get_params_html() == {"l1": 0, "empty": "test"}
assert est._get_params_html().non_default == ("empty",) | Check the behaviour of the `_get_params_html` method. | test_get_params_html | python | scikit-learn/scikit-learn | sklearn/tests/test_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_base.py | BSD-3-Clause |
def test_sigmoid_calibration():
"""Test calibration values with Platt sigmoid model"""
exF = np.array([5, -4, 1.0])
exY = np.array([1, -1, -1])
# computed from my python port of the C++ code in LibSVM
AB_lin_libsvm = np.array([-0.20261354391187855, 0.65236314980010512])
assert_array_almost_equal... | Test calibration values with Platt sigmoid model | test_sigmoid_calibration | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
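Applying Platt scaling coefficients is a one-liner: `p = 1 / (1 + exp(A * f + B))`. The sketch below reuses the `(A, B)` pair and decision values from the test above; the coefficients are taken as given, not refitted.

```python
import math

# Map decision values to probabilities with fitted Platt coefficients.
A, B = -0.20261354391187855, 0.65236314980010512
decision_values = [5.0, -4.0, 1.0]
probs = [1.0 / (1.0 + math.exp(A * f + B)) for f in decision_values]
print([round(p, 3) for p in probs])
```

Since `A` is negative, larger decision values map to larger probabilities, as expected for the positive class.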
def test_calibration_nan_imputer(ensemble):
"""Test that calibration can accept nan"""
X, y = make_classification(
n_samples=10, n_features=2, n_informative=2, n_redundant=0, random_state=42
)
X[0, 0] = np.nan
clf = Pipeline(
[("imputer", SimpleImputer()), ("rf", RandomForestClassifi... | Test that calibration can accept nan | test_calibration_nan_imputer | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_accepts_ndarray(X):
"""Test that calibration accepts n-dimensional arrays as input"""
y = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
class MockTensorClassifier(ClassifierMixin, BaseEstimator):
"""A toy estimator that accepts tensor inputs"""
def fit(self, X, y):
... | Test that calibration accepts n-dimensional arrays as input | test_calibration_accepts_ndarray | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_dict_pipeline(dict_data, dict_data_pipeline):
"""Test that calibration works in prefit pipeline with transformer
`X` is not array-like, sparse matrix or dataframe at the start.
See https://github.com/scikit-learn/scikit-learn/issues/8710
Also test it can predict without running in... | Test that calibration works in prefit pipeline with transformer
`X` is not array-like, sparse matrix or dataframe at the start.
See https://github.com/scikit-learn/scikit-learn/issues/8710
Also test it can predict without running into validation errors.
See https://github.com/scikit-learn/scikit-learn... | test_calibration_dict_pipeline | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_curve_pos_label_error_str(dtype_y_str):
"""Check error message when a `pos_label` is not specified with `str` targets."""
rng = np.random.RandomState(42)
y1 = np.array(["spam"] * 3 + ["eggs"] * 2, dtype=dtype_y_str)
y2 = rng.randint(0, 2, size=y1.size)
err_msg = (
"y_tr... | Check error message when a `pos_label` is not specified with `str` targets. | test_calibration_curve_pos_label_error_str | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_curve_pos_label(dtype_y_str):
"""Check the behaviour when passing explicitly `pos_label`."""
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1])
classes = np.array(["spam", "egg"], dtype=dtype_y_str)
y_true_str = classes[y_true]
y_pred = np.array([0.1, 0.2, 0.3, 0.4, 0.65, 0.7, 0.8, ... | Check the behaviour when passing explicitly `pos_label`. | test_calibration_curve_pos_label | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_display_kwargs(pyplot, iris_data_binary, kwargs):
"""Check that matplotlib aliases are handled."""
X, y = iris_data_binary
lr = LogisticRegression().fit(X, y)
viz = CalibrationDisplay.from_estimator(lr, X, y, **kwargs)
assert viz.line_.get_color() == "red"
assert viz.line_... | Check that matplotlib aliases are handled. | test_calibration_display_kwargs | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_display_pos_label(
pyplot, iris_data_binary, pos_label, expected_pos_label
):
"""Check the behaviour of `pos_label` in the `CalibrationDisplay`."""
X, y = iris_data_binary
lr = LogisticRegression().fit(X, y)
viz = CalibrationDisplay.from_estimator(lr, X, y, pos_label=pos_label)... | Check the behaviour of `pos_label` in the `CalibrationDisplay`. | test_calibration_display_pos_label | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibrated_classifier_cv_double_sample_weights_equivalence(method, ensemble):
"""Check that passing repeating twice the dataset `X` is equivalent to
passing a `sample_weight` with a factor 2."""
X, y = load_iris(return_X_y=True)
# Scale the data to avoid any convergence issue
X = StandardSc... | Check that passing repeating twice the dataset `X` is equivalent to
passing a `sample_weight` with a factor 2. | test_calibrated_classifier_cv_double_sample_weights_equivalence | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_with_fit_params(fit_params_type, data):
"""Tests that fit_params are passed to the underlying base estimator.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/12384
"""
X, y = data
fit_params = {
"a": _convert_container(y, fit_params_type... | Tests that fit_params are passed to the underlying base estimator.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/12384
| test_calibration_with_fit_params | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_with_sample_weight_estimator(sample_weight, data):
"""Tests that sample_weight is passed to the underlying base
estimator.
"""
X, y = data
clf = CheckingClassifier(expected_sample_weight=True)
pc_clf = CalibratedClassifierCV(clf)
pc_clf.fit(X, y, sample_weight=sample_we... | Tests that sample_weight is passed to the underlying base
estimator.
| test_calibration_with_sample_weight_estimator | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibration_without_sample_weight_estimator(data):
"""Check that even if the estimator doesn't support
sample_weight, fitting with sample_weight still works.
There should be a warning, since the sample_weight is not passed
on to the estimator.
"""
X, y = data
sample_weight = np.one... | Check that even if the estimator doesn't support
sample_weight, fitting with sample_weight still works.
There should be a warning, since the sample_weight is not passed
on to the estimator.
| test_calibration_without_sample_weight_estimator | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
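The fallback this test describes can be sketched by inspecting the inner estimator's `fit` signature: pass `sample_weight` through only when it is accepted, otherwise warn and drop it. This is an illustrative pattern, not scikit-learn's actual routing code.

```python
import inspect
import warnings

# A toy estimator whose fit() does not accept sample_weight.
class NoWeightEstimator:
    def fit(self, X, y):
        return self

def fit_with_weights(est, X, y, sample_weight):
    if "sample_weight" in inspect.signature(est.fit).parameters:
        return est.fit(X, y, sample_weight=sample_weight)
    warnings.warn("estimator ignores sample_weight")
    return est.fit(X, y)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    fit_with_weights(NoWeightEstimator(), [[0]], [0], [1.0])
print(len(caught))  # exactly one warning was recorded
```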
def test_calibration_with_non_sample_aligned_fit_param(data):
"""Check that CalibratedClassifierCV does not enforce sample alignment
for fit parameters."""
class TestClassifier(LogisticRegression):
def fit(self, X, y, sample_weight=None, fit_param=None):
assert fit_param is not None
... | Check that CalibratedClassifierCV does not enforce sample alignment
for fit parameters. | test_calibration_with_non_sample_aligned_fit_param | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_calibrated_classifier_cv_works_with_large_confidence_scores(
global_random_seed,
):
"""Test that :class:`CalibratedClassifierCV` works with large confidence
scores when using the `sigmoid` method, particularly with the
:class:`SGDClassifier`.
Non-regression test for issue #26766.
"""
... | Test that :class:`CalibratedClassifierCV` works with large confidence
scores when using the `sigmoid` method, particularly with the
:class:`SGDClassifier`.
Non-regression test for issue #26766.
| test_calibrated_classifier_cv_works_with_large_confidence_scores | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
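Large confidence scores break a naive sigmoid because `math.exp` overflows near 710. A numerically stable variant branches on the sign so the exponent passed to `exp()` is always non-positive; this is the general pattern, not scikit-learn's actual code.

```python
import math

# Overflow-safe logistic function: exp() is only ever called with a
# non-positive argument, so huge |x| underflows to 0.0 instead of raising.
def stable_sigmoid(x):
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

print(stable_sigmoid(10_000.0), stable_sigmoid(-10_000.0))
```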
def test_float32_predict_proba(data, use_sample_weight, method):
"""Check that CalibratedClassifierCV works with float32 predict proba.
Non-regression test for gh-28245 and gh-28247.
"""
if use_sample_weight:
# Use dtype=np.float64 to check that this does not trigger an
# unintentional ... | Check that CalibratedClassifierCV works with float32 predict proba.
Non-regression test for gh-28245 and gh-28247.
| test_float32_predict_proba | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_error_less_class_samples_than_folds():
"""Check that CalibratedClassifierCV works with string targets.
Non-regression test for issue #28841.
"""
X = np.random.normal(size=(20, 3))
y = ["a"] * 10 + ["b"] * 10
CalibratedClassifierCV(cv=3).fit(X, y) | Check that CalibratedClassifierCV works with string targets.
Non-regression test for issue #28841.
| test_error_less_class_samples_than_folds | python | scikit-learn/scikit-learn | sklearn/tests/test_calibration.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_calibration.py | BSD-3-Clause |
def test_check_estimator_generate_only_deprecation():
"""Check that check_estimator with generate_only=True raises a deprecation
warning."""
with pytest.warns(FutureWarning, match="`generate_only` is deprecated in 1.6"):
all_instance_gen_checks = check_estimator(
LogisticRegression(), ge... | Check that check_estimator with generate_only=True raises a deprecation
warning. | test_check_estimator_generate_only_deprecation | python | scikit-learn/scikit-learn | sklearn/tests/test_common.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_common.py | BSD-3-Clause |
def set_assume_finite(assume_finite, sleep_duration):
"""Return the value of assume_finite after waiting `sleep_duration`."""
with config_context(assume_finite=assume_finite):
time.sleep(sleep_duration)
return get_config()["assume_finite"] | Return the value of assume_finite after waiting `sleep_duration`. | set_assume_finite | python | scikit-learn/scikit-learn | sklearn/tests/test_config.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_config.py | BSD-3-Clause |
def test_config_threadsafe_joblib(backend):
"""Test that the global config is threadsafe with all joblib backends.
Two jobs are spawned and set assume_finite to two different values.
When the job with a duration of 0.1s completes, the assume_finite value
should be the same as the value passed to the funct... | Test that the global config is threadsafe with all joblib backends.
Two jobs are spawned and set assume_finite to two different values.
When the job with a duration of 0.1s completes, the assume_finite value
should be the same as the value passed to the function. In other words,
it is not influenced by th... | test_config_threadsafe_joblib | python | scikit-learn/scikit-learn | sklearn/tests/test_config.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_config.py | BSD-3-Clause |
def test_config_threadsafe():
"""Uses threads directly to test that the global config does not change
between threads. Same test as `test_config_threadsafe_joblib` but with
`ThreadPoolExecutor`."""
assume_finites = [False, True, False, True]
sleep_durations = [0.1, 0.2, 0.1, 0.2]
with ThreadPo... | Uses threads directly to test that the global config does not change
between threads. Same test as `test_config_threadsafe_joblib` but with
`ThreadPoolExecutor`. | test_config_threadsafe | python | scikit-learn/scikit-learn | sklearn/tests/test_config.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_config.py | BSD-3-Clause |
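The per-thread isolation these tests check is what `threading.local` provides: each worker thread sees only the value it set itself. The sketch below is a minimal stand-in for a thread-local config, not scikit-learn's `get_config`/`config_context` machinery.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Thread-local storage: attribute writes are private to each thread.
_config = threading.local()

def set_assume_finite(value):
    _config.assume_finite = value

def get_assume_finite():
    return getattr(_config, "assume_finite", False)

def job(value):
    set_assume_finite(value)
    return get_assume_finite()  # reads back this thread's own value

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(job, [False, True, False, True]))
print(results)
```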
def test_config_array_api_dispatch_error_scipy(monkeypatch):
"""Check error when SciPy is too old"""
monkeypatch.setattr(sklearn.utils._array_api.scipy, "__version__", "1.13.0")
with pytest.raises(ImportError, match="SciPy must be 1.14.0 or newer"):
with config_context(array_api_dispatch=True):
... | Check error when SciPy is too old | test_config_array_api_dispatch_error_scipy | python | scikit-learn/scikit-learn | sklearn/tests/test_config.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_config.py | BSD-3-Clause |
def generate_dataset(n_samples, centers, covariances, random_state=None):
"""Generate a multivariate normal data given some centers and
covariances"""
rng = check_random_state(random_state)
X = np.vstack(
[
rng.multivariate_normal(mean, cov, size=n_samples // ... | Generate a multivariate normal data given some centers and
covariances | generate_dataset | python | scikit-learn/scikit-learn | sklearn/tests/test_discriminant_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_discriminant_analysis.py | BSD-3-Clause |
def test_qda_prior_type(priors_type):
"""Check that priors accept array-like."""
priors = [0.5, 0.5]
clf = QuadraticDiscriminantAnalysis(
priors=_convert_container([0.5, 0.5], priors_type)
).fit(X6, y6)
assert isinstance(clf.priors_, np.ndarray)
assert_array_equal(clf.priors_, priors) | Check that priors accept array-like. | test_qda_prior_type | python | scikit-learn/scikit-learn | sklearn/tests/test_discriminant_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_discriminant_analysis.py | BSD-3-Clause |
def test_qda_prior_copy():
"""Check that altering `priors` without `fit` doesn't change `priors_`"""
priors = np.array([0.5, 0.5])
qda = QuadraticDiscriminantAnalysis(priors=priors).fit(X, y)
# we expect the following
assert_array_equal(qda.priors_, qda.priors)
# altering `priors` without `fit... | Check that altering `priors` without `fit` doesn't change `priors_` | test_qda_prior_copy | python | scikit-learn/scikit-learn | sklearn/tests/test_discriminant_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_discriminant_analysis.py | BSD-3-Clause |
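The invariant this test checks is the defensive-copy pattern: `fit` stores its own copy of `priors`, so mutating the user's array afterwards cannot change the fitted state. `Model` below is a toy class for illustration, not QDA itself.

```python
# fit() copies the constructor parameter into the fitted attribute, so
# later mutation of the original object leaves priors_ untouched.
class Model:
    def __init__(self, priors):
        self.priors = priors
    def fit(self):
        self.priors_ = list(self.priors)  # copy at fit time
        return self

priors = [0.5, 0.5]
m = Model(priors).fit()
priors[0] = 0.9  # user mutates the original afterwards
print(m.priors_)  # fitted copy is unchanged
```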
def test_raises_value_error_on_same_number_of_classes_and_samples(solver):
"""
Tests that if the number of samples equals the number
of classes, a ValueError is raised.
"""
X = np.array([[0.5, 0.6], [0.6, 0.5]])
y = np.array(["a", "b"])
clf = LinearDiscriminantAnalysis(solver=solver)
wit... |
Tests that if the number of samples equals the number
of classes, a ValueError is raised.
| test_raises_value_error_on_same_number_of_classes_and_samples | python | scikit-learn/scikit-learn | sklearn/tests/test_discriminant_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_discriminant_analysis.py | BSD-3-Clause |
def test_get_feature_names_out():
"""Check get_feature_names_out uses class name as prefix."""
est = LinearDiscriminantAnalysis().fit(X, y)
names_out = est.get_feature_names_out()
class_name_lower = "LinearDiscriminantAnalysis".lower()
expected_names_out = np.array(
[
f"{class_... | Check get_feature_names_out uses class name as prefix. | test_get_feature_names_out | python | scikit-learn/scikit-learn | sklearn/tests/test_discriminant_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_discriminant_analysis.py | BSD-3-Clause |
def filter_errors(errors, method, Klass=None):
"""
Ignore some errors based on the method type.
These rules are specific for scikit-learn."""
for code, message in errors:
# We ignore the following error code,
# - RT02: The first line of the Returns section
# should contain only ... |
Ignore some errors based on the method type.
These rules are specific for scikit-learn. | filter_errors | python | scikit-learn/scikit-learn | sklearn/tests/test_docstrings.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_docstrings.py | BSD-3-Clause |
def repr_errors(res, Klass=None, method: Optional[str] = None) -> str:
"""Pretty print original docstring and the obtained errors
Parameters
----------
res : dict
result of numpydoc.validate.validate
Klass : {Estimator, Display, None}
estimator object or None
method : str
... | Pretty print original docstring and the obtained errors
Parameters
----------
res : dict
result of numpydoc.validate.validate
Klass : {Estimator, Display, None}
estimator object or None
method : str
if estimator is not None, either the method name or None.
Returns
-... | repr_errors | python | scikit-learn/scikit-learn | sklearn/tests/test_docstrings.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_docstrings.py | BSD-3-Clause |
def _get_all_fitted_attributes(estimator):
"Get all the fitted attributes of an estimator including properties"
# attributes
fit_attr = list(estimator.__dict__.keys())
# properties
with warnings.catch_warnings():
warnings.filterwarnings("error", category=FutureWarning)
for name in ... | Get all the fitted attributes of an estimator including properties | _get_all_fitted_attributes | python | scikit-learn/scikit-learn | sklearn/tests/test_docstring_parameters.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_docstring_parameters.py | BSD-3-Clause |
def test_isotonic_regression_ties_secondary_():
"""
Test isotonic regression fit, transform and fit_transform
against the "secondary" ties method and "pituitary" data from R
"isotone" package, as detailed in: J. d. Leeuw, K. Hornik, P. Mair,
Isotone Optimization in R: Pool-Adjacent-Violators Algo... |
Test isotonic regression fit, transform and fit_transform
against the "secondary" ties method and "pituitary" data from R
"isotone" package, as detailed in: J. d. Leeuw, K. Hornik, P. Mair,
Isotone Optimization in R: Pool-Adjacent-Violators Algorithm
(PAVA) and Active Set Methods
Set values... | test_isotonic_regression_ties_secondary_ | python | scikit-learn/scikit-learn | sklearn/tests/test_isotonic.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_isotonic.py | BSD-3-Clause |
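The pool-adjacent-violators algorithm these isotonic tests exercise can be sketched in pure Python. This is a minimal unweighted version for illustration; scikit-learn's `IsotonicRegression` additionally handles sample weights and tie strategies.

```python
# Unweighted PAVA: maintain blocks of (mean, count) and merge adjacent
# blocks while the fitted values would decrease.
def pava(y):
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            n = n1 + n2
            blocks.append([(m1 * n1 + m2 * n2) / n, n])
    out = []
    for m, n in blocks:
        out.extend([m] * n)
    return out

print(pava([3, 1, 2]))  # violating values pooled into one block
```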
def test_isotonic_regression_with_ties_in_differently_sized_groups():
"""
Non-regression test to handle issue 9432:
https://github.com/scikit-learn/scikit-learn/issues/9432
Compare against output in R:
> library("isotone")
> x <- c(0, 1, 1, 2, 3, 4)
> y <- c(0, 0, 1, 0, 0, 1)
> res1 <- ... |
Non-regression test to handle issue 9432:
https://github.com/scikit-learn/scikit-learn/issues/9432
Compare against output in R:
> library("isotone")
> x <- c(0, 1, 1, 2, 3, 4)
> y <- c(0, 0, 1, 0, 0, 1)
> res1 <- gpava(x, y, ties="secondary")
> res1$x
`isotone` version: 1.1-0, 201... | test_isotonic_regression_with_ties_in_differently_sized_groups | python | scikit-learn/scikit-learn | sklearn/tests/test_isotonic.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_isotonic.py | BSD-3-Clause |
def test_isotonic_regression_sample_weight_not_overwritten():
"""Check that calling fitting function of isotonic regression will not
overwrite `sample_weight`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/20508
"""
X, y = make_regression(n_samples=10, n_features=1... | Check that calling fitting function of isotonic regression will not
overwrite `sample_weight`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/20508
| test_isotonic_regression_sample_weight_not_overwritten | python | scikit-learn/scikit-learn | sklearn/tests/test_isotonic.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_isotonic.py | BSD-3-Clause |
def test_isotonic_regression_output_predict():
"""Check that `predict` does return the expected output type.
We need to check that `transform` will output a DataFrame and a NumPy array
when we set `transform_output` to `pandas`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn... | Check that `predict` does return the expected output type.
We need to check that `transform` will output a DataFrame and a NumPy array
when we set `transform_output` to `pandas`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/25499
| test_isotonic_regression_output_predict | python | scikit-learn/scikit-learn | sklearn/tests/test_isotonic.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tests/test_isotonic.py | BSD-3-Clause |