| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def test_sgd_one_class_svm_estimator_type():
"""Check that SGDOneClassSVM has the correct estimator type.
Non-regression test for the case where the mixin was not on the left.
"""
sgd_ocsvm = SGDOneClassSVM()
assert get_tags(sgd_ocsvm).estimator_type == "outlier_detector" | Check that SGDOneClassSVM has the correct estimator type.
Non-regression test for the case where the mixin was not on the left.
| test_sgd_one_class_svm_estimator_type | python | scikit-learn/scikit-learn | sklearn/linear_model/tests/test_sgd.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/tests/test_sgd.py | BSD-3-Clause |
def fit(self, X, y, sample_weight=None):
"""Fit a Generalized Linear Model.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data.
y : array-like of shape (n_samples,)
Target values.
sample_weight :... | Fit a Generalized Linear Model.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
... | fit | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/glm.py | BSD-3-Clause |
def _linear_predictor(self, X):
"""Compute the linear_predictor = `X @ coef_ + intercept_`.
Note that we often use the term raw_prediction instead of linear predictor.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Samples.
... | Compute the linear_predictor = `X @ coef_ + intercept_`.
Note that we often use the term raw_prediction instead of linear predictor.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Samples.
Returns
-------
y_pr... | _linear_predictor | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/glm.py | BSD-3-Clause |
def predict(self, X):
"""Predict using GLM with feature matrix X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Samples.
Returns
-------
y_pred : array of shape (n_samples,)
Returns predicted value... | Predict using GLM with feature matrix X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Samples.
Returns
-------
y_pred : array of shape (n_samples,)
Returns predicted values.
| predict | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/glm.py | BSD-3-Clause |
def score(self, X, y, sample_weight=None):
"""Compute D^2, the percentage of deviance explained.
D^2 is a generalization of the coefficient of determination R^2.
R^2 uses squared error and D^2 uses the deviance of this GLM, see the
:ref:`User Guide <regression_metrics>`.
D^2 is... | Compute D^2, the percentage of deviance explained.
D^2 is a generalization of the coefficient of determination R^2.
R^2 uses squared error and D^2 uses the deviance of this GLM, see the
:ref:`User Guide <regression_metrics>`.
D^2 is defined as
:math:`D^2 = 1-\frac{D(y_{true},y_... | score | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/glm.py | BSD-3-Clause |
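The `score` row defines D^2 = 1 - D(y, y_pred) / D(y, y_mean). A small sketch (assuming a generic deviance callable; with the squared-error deviance D^2 reduces to R^2):

```python
import numpy as np

def d2_score(y_true, y_pred, deviance):
    """Sketch of D^2 = 1 - D(y_true, y_pred) / D(y_true, y_mean).

    `deviance` is any deviance summed over samples; the null model
    predicts the mean of y_true everywhere.
    """
    d_model = deviance(y_true, y_pred)
    d_null = deviance(y_true, np.full_like(y_true, y_true.mean()))
    return 1.0 - d_model / d_null

squared_deviance = lambda y, mu: ((y - mu) ** 2).sum()

y_true = np.array([1.0, 2.0, 3.0, 4.0])
perfect = d2_score(y_true, y_true, squared_deviance)                 # 1.0
null = d2_score(y_true, np.full(4, y_true.mean()), squared_deviance)  # 0.0
```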
def setup(self, X, y, sample_weight):
"""Precomputations
If None, initializes:
- self.coef
Sets:
- self.raw_prediction
- self.loss_value
"""
_, _, self.raw_prediction = self.linear_loss.weight_intercept_raw(self.coef, X)
self.loss_valu... | Precomputations
If None, initializes:
- self.coef
Sets:
- self.raw_prediction
- self.loss_value
| setup | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/_newton_solver.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/_newton_solver.py | BSD-3-Clause |
def inner_solve(self, X, y, sample_weight):
"""Compute Newton step.
Sets:
- self.coef_newton
- self.gradient_times_newton
""" | Compute Newton step.
Sets:
- self.coef_newton
- self.gradient_times_newton
| inner_solve | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/_newton_solver.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/_newton_solver.py | BSD-3-Clause |
def fallback_lbfgs_solve(self, X, y, sample_weight):
"""Fallback solver in case of emergency.
If a solver detects convergence problems, it may fall back to this method in
the hope of exiting with success instead of raising an error.
Sets:
- self.coef
- self.conver... | Fallback solver in case of emergency.
If a solver detects convergence problems, it may fall back to this method in
the hope of exiting with success instead of raising an error.
Sets:
- self.coef
- self.converged
| fallback_lbfgs_solve | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/_newton_solver.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/_newton_solver.py | BSD-3-Clause |
def line_search(self, X, y, sample_weight):
"""Backtracking line search.
Sets:
- self.coef_old
- self.coef
- self.loss_value_old
- self.loss_value
- self.gradient_old
- self.gradient
- self.raw_prediction
"""
... | Backtracking line search.
Sets:
- self.coef_old
- self.coef
- self.loss_value_old
- self.loss_value
- self.gradient_old
- self.gradient
- self.raw_prediction
| line_search | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/_newton_solver.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/_newton_solver.py | BSD-3-Clause |
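The `line_search` row describes a backtracking line search. A minimal sketch of the Armijo (sufficient-decrease) variant, not the solver's actual implementation:

```python
import numpy as np

def backtracking_line_search(loss, x, direction, grad,
                             t=1.0, beta=0.5, c=1e-4, max_iter=30):
    """Sketch of a backtracking (Armijo) line search.

    Shrink the step size t until
    loss(x + t * d) <= loss(x) + c * t * grad.dot(d) holds.
    """
    f0 = loss(x)
    slope = grad @ direction  # negative for a descent direction
    for _ in range(max_iter):
        if loss(x + t * direction) <= f0 + c * t * slope:
            break
        t *= beta
    return x + t * direction

# Minimize f(x) = x @ x from x = 3 along the Newton direction -x.
loss = lambda x: float(x @ x)
x = np.array([3.0])
step = backtracking_line_search(loss, x, -x, 2 * x)
```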
def check_convergence(self, X, y, sample_weight):
"""Check for convergence.
Sets self.converged.
"""
if self.verbose:
print(" Check Convergence")
# Note: Checking maximum relative change of coefficient <= tol is a bad
# convergence criterion because even a l... | Check for convergence.
Sets self.converged.
| check_convergence | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/_newton_solver.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/_newton_solver.py | BSD-3-Clause |
def solve(self, X, y, sample_weight):
"""Solve the optimization problem.
This is the main routine.
Order of calls:
self.setup()
while iteration:
self.update_gradient_hessian()
self.inner_solve()
self.line_search()
... | Solve the optimization problem.
This is the main routine.
Order of calls:
self.setup()
while iteration:
self.update_gradient_hessian()
self.inner_solve()
self.line_search()
self.check_convergence()
self... | solve | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/_newton_solver.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/_newton_solver.py | BSD-3-Clause |
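The `solve` row lists the order of calls in the Newton loop. A compressed sketch of that structure (gradient, inner solve for the Newton step, convergence check) on a quadratic, with the line search and fallbacks omitted; names are illustrative:

```python
import numpy as np

def newton_solve(grad, hess, x0, tol=1e-8, max_iter=100):
    """Sketch of the Newton loop: per iteration, update the gradient and
    Hessian, solve H @ step = -g for the Newton step, take the step, and
    check convergence on the gradient norm."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            return x, True
        step = np.linalg.solve(hess(x), -g)  # inner solve
        x = x + step                          # full step, no line search
    return x, False

# Minimize f(x) = 0.5 x' A x - b' x  (gradient A x - b, Hessian A).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_opt, converged = newton_solve(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```

On a quadratic, Newton converges in a single step to the solution of A x = b.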
def glm_dataset(global_random_seed, request):
"""Dataset with GLM solutions, well conditioned X.
This is inspired by ols_ridge_dataset in test_ridge.py.
The construction is based on the SVD decomposition of X = U S V'.
Parameters
----------
type : {"long", "wide"}
If "long", then n_sa... | Dataset with GLM solutions, well conditioned X.
This is inspired by ols_ridge_dataset in test_ridge.py.
The construction is based on the SVD decomposition of X = U S V'.
Parameters
----------
type : {"long", "wide"}
If "long", then n_samples > n_features.
If "wide", then n_feature... | glm_dataset | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_regression(solver, fit_intercept, glm_dataset):
"""Test that GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
"""
model, X, y, _, coef_with_intercept, coef_without_intercept, alpha = glm_dataset
params = dict(
al... | Test that GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
| test_glm_regression | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_regression_hstacked_X(solver, fit_intercept, glm_dataset):
"""Test that GLM converges for all solvers to correct solution on hstacked data.
We work with a simple constructed data set with known solution.
Fit on [X] with alpha is the same as fit on [X, X]/2 with alpha/2.
For long X, [X, X] ... | Test that GLM converges for all solvers to correct solution on hstacked data.
We work with a simple constructed data set with known solution.
Fit on [X] with alpha is the same as fit on [X, X]/2 with alpha/2.
For long X, [X, X] is still a long but singular matrix.
| test_glm_regression_hstacked_X | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
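The hstack equivalence in the row above (fit on [X] with alpha equals fit on [X, X]/2 with alpha/2) can be checked numerically. A sketch using closed-form ridge (squared-error loss as a stand-in for the GLM deviance; the scaling argument is the same):

```python
import numpy as np

def ridge(X, y, alpha):
    """Closed-form ridge: minimize ||y - X w||^2 + alpha * ||w||^2."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.RandomState(0)
X = rng.rand(20, 3)
y = rng.rand(20)
alpha = 1.0

w = ridge(X, y, alpha)
# Fit on [X, X] / 2 with alpha / 2: both coefficient halves equal w.
v = ridge(np.hstack([X, X]) / 2, y, alpha / 2)
```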
def test_glm_regression_vstacked_X(solver, fit_intercept, glm_dataset):
"""Test that GLM converges for all solvers to correct solution on vstacked data.
We work with a simple constructed data set with known solution.
Fit on [X] with alpha is the same as fit on [X], [y]
... | Test that GLM converges for all solvers to correct solution on vstacked data.
We work with a simple constructed data set with known solution.
Fit on [X] with alpha is the same as fit on [X], [y]
[X], [y] with 1 * alpha.
It is the same alpha as the average los... | test_glm_regression_vstacked_X | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_regression_unpenalized(solver, fit_intercept, glm_dataset):
"""Test that unpenalized GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
Note: This checks the minimum norm solution for wide X, i.e.
n_samples < n_features:
... | Test that unpenalized GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
Note: This checks the minimum norm solution for wide X, i.e.
n_samples < n_features:
min ||w||_2 subject to w = argmin deviance(X, y, w)
| test_glm_regression_unpenalized | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
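The minimum-norm solution referenced above (min ||w||_2 subject to w minimizing the loss) can be sketched for the unpenalized squared loss on wide X, where the pseudoinverse and `lstsq` both return it:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(3, 6)  # wide: n_samples < n_features, loss can reach 0
y = rng.rand(3)

# Both return the minimum-norm interpolating solution:
# min ||w||_2 subject to X w = y.
w_min_norm = np.linalg.pinv(X) @ y
w_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
```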
def test_glm_regression_unpenalized_hstacked_X(solver, fit_intercept, glm_dataset):
"""Test that unpenalized GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
GLM fit on [X] is the same as fit on [X, X]/2.
For long X, [X, X] is a singular... | Test that unpenalized GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
GLM fit on [X] is the same as fit on [X, X]/2.
For long X, [X, X] is a singular matrix and we check against the minimum norm
solution:
min ||w||_2 subject to ... | test_glm_regression_unpenalized_hstacked_X | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_regression_unpenalized_vstacked_X(solver, fit_intercept, glm_dataset):
"""Test that unpenalized GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
GLM fit on [X] is the same as fit on [X], [y]
... | Test that unpenalized GLM converges for all solvers to correct solution.
We work with a simple constructed data set with known solution.
GLM fit on [X] is the same as fit on [X], [y]
[X], [y].
For wide X, [X', X'] is a singular matrix and we check against the minimu... | test_glm_regression_unpenalized_vstacked_X | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_sample_weights_validation():
"""Test the raised errors in the validation of sample_weight."""
# scalar value but not positive
X = [[1]]
y = [1]
weights = 0
glm = _GeneralizedLinearRegressor()
# Positive weights are accepted
glm.fit(X, y, sample_weight=1)
# 2d array
wei... | Test the raised errors in the validation of sample_weight. | test_sample_weights_validation | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_wrong_y_range(glm):
"""
Test that fitting a GLM model raises a ValueError when `y` contains
values outside the valid range for the given distribution.
Generalized Linear Models (GLMs) with certain distributions, such as
Poisson, Gamma, and Tweedie (with power > 1), require `y` to be
... |
Test that fitting a GLM model raises a ValueError when `y` contains
values outside the valid range for the given distribution.
Generalized Linear Models (GLMs) with certain distributions, such as
Poisson, Gamma, and Tweedie (with power > 1), require `y` to be
non-negative. This test ensures that p... | test_glm_wrong_y_range | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_identity_regression(fit_intercept):
"""Test GLM regression with identity link on a simple dataset."""
coef = [1.0, 2.0]
X = np.array([[1, 1, 1, 1, 1], [0, 1, 2, 3, 4]]).T
y = np.dot(X, coef)
glm = _GeneralizedLinearRegressor(
alpha=0,
fit_intercept=fit_intercept,
... | Test GLM regression with identity link on a simple dataset. | test_glm_identity_regression | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_sample_weight_consistency(fit_intercept, alpha, GLMEstimator):
"""Test that the impact of sample_weight is consistent"""
rng = np.random.RandomState(0)
n_samples, n_features = 10, 5
X = rng.rand(n_samples, n_features)
y = rng.rand(n_samples)
glm_params = dict(alpha=alpha, fit_inter... | Test that the impact of sample_weight is consistent | test_glm_sample_weight_consistency | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_glm_log_regression(solver, fit_intercept, estimator):
"""Test GLM regression with log link on a simple dataset."""
coef = [0.2, -0.1]
X = np.array([[0, 1, 2, 3, 4], [1, 1, 1, 1, 1]]).T
y = np.exp(np.dot(X, coef))
glm = clone(estimator).set_params(
alpha=0,
fit_intercept=fit_... | Test GLM regression with log link on a simple dataset. | test_glm_log_regression | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_warm_start(solver, fit_intercept, global_random_seed):
"""
Test that `warm_start=True` enables incremental fitting in PoissonRegressor.
This test verifies that when using `warm_start=True`, the model continues
optimizing from previous coefficients instead of restarting from scratch.
It ens... |
Test that `warm_start=True` enables incremental fitting in PoissonRegressor.
This test verifies that when using `warm_start=True`, the model continues
optimizing from previous coefficients instead of restarting from scratch.
It ensures that after an initial fit with `max_iter=1`, the model has a
h... | test_warm_start | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_normal_ridge_comparison(
n_samples, n_features, fit_intercept, sample_weight, request
):
"""Compare with Ridge regression for Normal distributions."""
test_size = 10
X, y = make_regression(
n_samples=n_samples + test_size,
n_features=n_features,
n_informative=n_features ... | Compare with Ridge regression for Normal distributions. | test_normal_ridge_comparison | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_poisson_glmnet(solver):
"""Compare Poisson regression with L2 regularization and LogLink to glmnet"""
# library("glmnet")
# options(digits=10)
# df <- data.frame(a=c(-2,-1,1,2), b=c(0,0,1,1), y=c(0,1,1,2))
# x <- data.matrix(df[,c("a", "b")])
# y <- df$y
# fit <- glmnet(x=x, y=y, al... | Compare Poisson regression with L2 regularization and LogLink to glmnet | test_poisson_glmnet | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_tweedie_link_argument(name, link_class):
"""Test GLM link argument set as string."""
y = np.array([0.1, 0.5]) # in range of all distributions
X = np.array([[1], [2]])
glm = TweedieRegressor(power=1, link=name).fit(X, y)
assert isinstance(glm._base_loss.link, link_class) | Test GLM link argument set as string. | test_tweedie_link_argument | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_tweedie_link_auto(power, expected_link_class):
"""Test that link='auto' delivers the expected link function"""
y = np.array([0.1, 0.5]) # in range of all distributions
X = np.array([[1], [2]])
glm = TweedieRegressor(link="auto", power=power).fit(X, y)
assert isinstance(glm._base_loss.link,... | Test that link='auto' delivers the expected link function | test_tweedie_link_auto | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_tweedie_score(regression_data, power, link):
"""Test that GLM score equals d2_tweedie_score for Tweedie losses."""
X, y = regression_data
# make y positive
y = np.abs(y) + 1.0
glm = TweedieRegressor(power=power, link=link).fit(X, y)
assert glm.score(X, y) == pytest.approx(
d2_tw... | Test that GLM score equals d2_tweedie_score for Tweedie losses. | test_tweedie_score | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_linalg_warning_with_newton_solver(global_random_seed):
"""
Test that the Newton solver raises a warning and falls back to LBFGS when
encountering a singular or ill-conditioned Hessian matrix.
This test assesses the behavior of `PoissonRegressor` with the "newton-cholesky"
solver.
It veri... |
Test that the Newton solver raises a warning and falls back to LBFGS when
encountering a singular or ill-conditioned Hessian matrix.
This test assesses the behavior of `PoissonRegressor` with the "newton-cholesky"
solver.
It verifies the following:
- The model significantly improves upon the co... | test_linalg_warning_with_newton_solver | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def test_newton_solver_verbosity(capsys, verbose):
"""Test the std output of verbose newton solvers."""
y = np.array([1, 2], dtype=float)
X = np.array([[1.0, 0], [0, 1]], dtype=float)
linear_loss = LinearModelLoss(base_loss=HalfPoissonLoss(), fit_intercept=False)
sol = NewtonCholeskySolver(
... | Test the std output of verbose newton solvers. | test_newton_solver_verbosity | python | scikit-learn/scikit-learn | sklearn/linear_model/_glm/tests/test_glm.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_glm/tests/test_glm.py | BSD-3-Clause |
def reconstruction_error(self):
"""Compute the reconstruction error for the embedding.
Returns
-------
reconstruction_error : float
Reconstruction error.
Notes
-----
The cost function of an isomap embedding is
``E = frobenius_norm[K(D) - K(D... | Compute the reconstruction error for the embedding.
Returns
-------
reconstruction_error : float
Reconstruction error.
Notes
-----
The cost function of an isomap embedding is
``E = frobenius_norm[K(D) - K(D_fit)] / n_samples``
Where D is th... | reconstruction_error | python | scikit-learn/scikit-learn | sklearn/manifold/_isomap.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_isomap.py | BSD-3-Clause |
def transform(self, X):
"""Transform X.
This is implemented by linking the points X into the graph of geodesic
distances of the training data. First the `n_neighbors` nearest
neighbors of X are found in the training data, and from these the
shortest geodesic distances from each ... | Transform X.
This is implemented by linking the points X into the graph of geodesic
distances of the training data. First the `n_neighbors` nearest
neighbors of X are found in the training data, and from these the
shortest geodesic distances from each point in X to each point in
... | transform | python | scikit-learn/scikit-learn | sklearn/manifold/_isomap.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_isomap.py | BSD-3-Clause |
def barycenter_weights(X, Y, indices, reg=1e-3):
"""Compute barycenter weights of X from Y along the first axis
We estimate the weights to assign to each point in Y[indices] to recover
the point X[i]. The barycenter weights sum to 1.
Parameters
----------
X : array-like, shape (n_samples, n_di... | Compute barycenter weights of X from Y along the first axis
We estimate the weights to assign to each point in Y[indices] to recover
the point X[i]. The barycenter weights sum to 1.
Parameters
----------
X : array-like, shape (n_samples, n_dim)
Y : array-like, shape (n_samples, n_dim)
in... | barycenter_weights | python | scikit-learn/scikit-learn | sklearn/manifold/_locally_linear.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_locally_linear.py | BSD-3-Clause |
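The `barycenter_weights` row describes solving for weights that sum to 1 and reconstruct each point from its neighbors. A single-point numpy sketch of the regularized local Gram system used in LLE (illustrative, not the vectorized scikit-learn code):

```python
import numpy as np

def barycenter_weights_sketch(x, neighbors, reg=1e-3):
    """Sketch: weights w minimizing ||x - w @ neighbors||^2 with sum(w) = 1.

    Center the neighbors on x, solve the regularized Gram system,
    then normalize so the weights sum to 1.
    """
    G = neighbors - x                # centered neighbors
    C = G @ G.T                      # local Gram (covariance) matrix
    trace = np.trace(C)
    C.flat[:: C.shape[0] + 1] += reg * (trace if trace > 0 else 1.0)
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()

x = np.array([0.5, 0.5])
neighbors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = barycenter_weights_sketch(x, neighbors)
```

By symmetry the four corners get equal weights and reconstruct x exactly.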
def barycenter_kneighbors_graph(X, n_neighbors, reg=1e-3, n_jobs=None):
"""Computes the barycenter weighted graph of k-Neighbors for points in X
Parameters
----------
X : {array-like, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a
numpy array or a Neare... | Computes the barycenter weighted graph of k-Neighbors for points in X
Parameters
----------
X : {array-like, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a
numpy array or a NearestNeighbors object.
n_neighbors : int
Number of neighbors for each... | barycenter_kneighbors_graph | python | scikit-learn/scikit-learn | sklearn/manifold/_locally_linear.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_locally_linear.py | BSD-3-Clause |
def null_space(
M, k, k_skip=1, eigen_solver="arpack", tol=1e-6, max_iter=100, random_state=None
):
"""
Find the null space of a matrix M.
Parameters
----------
M : {array, matrix, sparse matrix, LinearOperator}
Input covariance matrix: should be symmetric positive semi-definite
k ... |
Find the null space of a matrix M.
Parameters
----------
M : {array, matrix, sparse matrix, LinearOperator}
Input covariance matrix: should be symmetric positive semi-definite
k : int
Number of eigenvalues/vectors to return
k_skip : int, default=1
Number of low eigenv... | null_space | python | scikit-learn/scikit-learn | sklearn/manifold/_locally_linear.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_locally_linear.py | BSD-3-Clause |
def locally_linear_embedding(
X,
*,
n_neighbors,
n_components,
reg=1e-3,
eigen_solver="auto",
tol=1e-6,
max_iter=100,
method="standard",
hessian_tol=1e-4,
modified_tol=1e-12,
random_state=None,
n_jobs=None,
):
"""Perform a Locally Linear Embedding analysis on the ... | Perform a Locally Linear Embedding analysis on the data.
Read more in the :ref:`User Guide <locally_linear_embedding>`.
Parameters
----------
X : {array-like, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a
numpy array or a NearestNeighbors object.
... | locally_linear_embedding | python | scikit-learn/scikit-learn | sklearn/manifold/_locally_linear.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_locally_linear.py | BSD-3-Clause |
def transform(self, X):
"""
Transform new points into embedding space.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training set.
Returns
-------
X_new : ndarray of shape (n_samples, n_components)
Returns ... |
Transform new points into embedding space.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training set.
Returns
-------
X_new : ndarray of shape (n_samples, n_components)
Returns the instance itself.
Notes... | transform | python | scikit-learn/scikit-learn | sklearn/manifold/_locally_linear.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_locally_linear.py | BSD-3-Clause |
def _smacof_single(
dissimilarities,
metric=True,
n_components=2,
init=None,
max_iter=300,
verbose=0,
eps=1e-6,
random_state=None,
normalized_stress=False,
):
"""Computes multidimensional scaling using SMACOF algorithm.
Parameters
----------
dissimilarities : ndarray... | Computes multidimensional scaling using SMACOF algorithm.
Parameters
----------
dissimilarities : ndarray of shape (n_samples, n_samples)
Pairwise dissimilarities between the points. Must be symmetric.
metric : bool, default=True
Compute metric or nonmetric SMACOF algorithm.
Wh... | _smacof_single | python | scikit-learn/scikit-learn | sklearn/manifold/_mds.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_mds.py | BSD-3-Clause |
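The SMACOF row above minimizes the *stress* of an embedding. A sketch of the raw metric stress (each pair counted once); a configuration that reproduces its dissimilarities exactly has zero stress:

```python
import numpy as np

def raw_stress(dissimilarities, embedding):
    """Sketch of the (metric) raw stress minimized by SMACOF:
    sum over i < j of (delta_ij - d_ij)^2, with d_ij the pairwise
    Euclidean distances of the embedding."""
    diff = embedding[:, None, :] - embedding[None, :, :]
    distances = np.sqrt((diff ** 2).sum(-1))
    return ((dissimilarities - distances) ** 2).sum() / 2  # each pair once

points = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
diff = points[:, None, :] - points[None, :, :]
D = np.sqrt((diff ** 2).sum(-1))  # dissimilarities realized by `points`
zero = raw_stress(D, points)
```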
def smacof(
dissimilarities,
*,
metric=True,
n_components=2,
init=None,
n_init="warn",
n_jobs=None,
max_iter=300,
verbose=0,
eps=1e-6,
random_state=None,
return_n_iter=False,
normalized_stress="auto",
):
"""Compute multidimensional scaling using the SMACOF algorit... | Compute multidimensional scaling using the SMACOF algorithm.
The SMACOF (Scaling by MAjorizing a COmplicated Function) algorithm is a
multidimensional scaling algorithm which minimizes an objective function
(the *stress*) using a majorization technique. Stress majorization, also
known as the Guttman Tr... | smacof | python | scikit-learn/scikit-learn | sklearn/manifold/_mds.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_mds.py | BSD-3-Clause |
def fit_transform(self, X, y=None, init=None):
"""
Fit the data from `X`, and returns the embedded coordinates.
Parameters
----------
X : array-like of shape (n_samples, n_features) or \
(n_samples, n_samples)
Input data. If ``dissimilarity=='precompu... |
Fit the data from `X`, and returns the embedded coordinates.
Parameters
----------
X : array-like of shape (n_samples, n_features) or (n_samples, n_samples)
Input data. If ``dissimilarity=='precomputed'``, the input should
be the dissimilarity ma... | fit_transform | python | scikit-learn/scikit-learn | sklearn/manifold/_mds.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_mds.py | BSD-3-Clause |
def _graph_connected_component(graph, node_id):
"""Find the largest connected component of the graph that contains one
given node.
Parameters
----------
graph : array-like of shape (n_samples, n_samples)
Adjacency matrix of the graph, non-zero weight means an edge
between the nodes.
... | Find the largest connected component of the graph that contains one
given node.
Parameters
----------
graph : array-like of shape (n_samples, n_samples)
Adjacency matrix of the graph, non-zero weight means an edge
between the nodes.
node_id : int
The index of the query node of th... | _graph_connected_component | python | scikit-learn/scikit-learn | sklearn/manifold/_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_spectral_embedding.py | BSD-3-Clause |
def _graph_is_connected(graph):
"""Return whether the graph is connected (True) or not (False).
Parameters
----------
graph : {array-like, sparse matrix} of shape (n_samples, n_samples)
Adjacency matrix of the graph, non-zero weight means an edge
between the nodes.
Returns
----... | Return whether the graph is connected (True) or not (False).
Parameters
----------
graph : {array-like, sparse matrix} of shape (n_samples, n_samples)
Adjacency matrix of the graph, non-zero weight means an edge
between the nodes.
Returns
-------
is_connected : bool
Tru... | _graph_is_connected | python | scikit-learn/scikit-learn | sklearn/manifold/_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_spectral_embedding.py | BSD-3-Clause |
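The connectivity check above can be sketched with a plain graph traversal from node 0 (scikit-learn uses sparse-graph utilities; this dense version is illustrative):

```python
import numpy as np

def graph_is_connected_sketch(adjacency):
    """Sketch: traverse from node 0; the graph is connected iff every
    node is reached (non-zero weight means an edge)."""
    n = adjacency.shape[0]
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    stack = [0]
    while stack:
        node = stack.pop()
        for neighbor in np.flatnonzero(adjacency[node]):
            if not visited[neighbor]:
                visited[neighbor] = True
                stack.append(neighbor)
    return bool(visited.all())

path_graph = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
split_graph = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
```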
def _set_diag(laplacian, value, norm_laplacian):
"""Set the diagonal of the laplacian matrix and convert it to a
sparse format well suited for eigenvalue decomposition.
Parameters
----------
laplacian : {ndarray, sparse matrix}
The graph laplacian.
value : float
The value of th... | Set the diagonal of the laplacian matrix and convert it to a
sparse format well suited for eigenvalue decomposition.
Parameters
----------
laplacian : {ndarray, sparse matrix}
The graph laplacian.
value : float
The value of the diagonal.
norm_laplacian : bool
Whether t... | _set_diag | python | scikit-learn/scikit-learn | sklearn/manifold/_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_spectral_embedding.py | BSD-3-Clause |
def spectral_embedding(
adjacency,
*,
n_components=8,
eigen_solver=None,
random_state=None,
eigen_tol="auto",
norm_laplacian=True,
drop_first=True,
):
"""Project the sample on the first eigenvectors of the graph Laplacian.
The adjacency matrix is used to compute a normalized gra... | Project the sample on the first eigenvectors of the graph Laplacian.
The adjacency matrix is used to compute a normalized graph Laplacian
whose spectrum (especially the eigenvectors associated to the
smallest eigenvalues) has an interpretation in terms of minimal
number of cuts necessary to split the g... | spectral_embedding | python | scikit-learn/scikit-learn | sklearn/manifold/_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_spectral_embedding.py | BSD-3-Clause |
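The `spectral_embedding` row relates the small-eigenvalue eigenvectors of the graph Laplacian to graph cuts. A dense sketch with the unnormalized Laplacian L = D - A (scikit-learn uses the normalized Laplacian and sparse solvers); the first, constant eigenvector is dropped as with `drop_first=True`:

```python
import numpy as np

def spectral_embedding_sketch(adjacency, n_components=1):
    """Sketch: eigenvectors of L = D - A for the smallest non-trivial
    eigenvalues, skipping the constant eigenvector at eigenvalue 0."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    eigenvalues, eigenvectors = np.linalg.eigh(laplacian)
    return eigenvectors[:, 1 : 1 + n_components]

# Two tightly coupled pairs joined by one weak edge: the Fiedler vector
# takes opposite signs on the two sides of the weak cut.
A = np.array(
    [
        [0.0, 1.0, 0.1, 0.0],
        [1.0, 0.0, 0.0, 0.0],
        [0.1, 0.0, 0.0, 1.0],
        [0.0, 0.0, 1.0, 0.0],
    ]
)
fiedler = spectral_embedding_sketch(A)[:, 0]
```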
def _get_affinity_matrix(self, X, Y=None):
"""Calculate the affinity matrix from data
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
... | Calculate the affinity matrix from data
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
If affinity is "precomputed"
X : ... | _get_affinity_matrix | python | scikit-learn/scikit-learn | sklearn/manifold/_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_spectral_embedding.py | BSD-3-Clause |
def fit(self, X, y=None):
"""Fit the model from data in X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
... | Fit the model from data in X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
If affinity is "precomputed"
... | fit | python | scikit-learn/scikit-learn | sklearn/manifold/_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_spectral_embedding.py | BSD-3-Clause |
def _kl_divergence(
params,
P,
degrees_of_freedom,
n_samples,
n_components,
skip_num_points=0,
compute_error=True,
):
"""t-SNE objective function: gradient of the KL divergence
of p_ijs and q_ijs and the absolute error.
Parameters
----------
params : ndarray of shape (n_... | t-SNE objective function: gradient of the KL divergence
of p_ijs and q_ijs and the absolute error.
Parameters
----------
params : ndarray of shape (n_params,)
Unraveled embedding.
P : ndarray of shape (n_samples * (n_samples-1) / 2,)
Condensed joint probability matrix.
degrees... | _kl_divergence | python | scikit-learn/scikit-learn | sklearn/manifold/_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_t_sne.py | BSD-3-Clause |
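The Student-t low-dimensional similarities behind `_kl_divergence` can be sketched for a toy embedding. This is an illustrative rendition, not the scikit-learn implementation: here `P` is passed as a dict of pair probabilities (over pairs `i < j`, summing to 1), a sketch convention rather than the condensed ndarray the real function takes.

```python
import math

def tsne_kl(P, Y, degrees_of_freedom=1.0):
    # KL(P || Q) with t-SNE's Student-t kernel:
    #   q_ij proportional to (1 + ||y_i - y_j||^2 / dof) ** (-(dof + 1) / 2)
    n = len(Y)
    dof = degrees_of_freedom
    weights = {}
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(Y[i], Y[j]))
            weights[i, j] = (1.0 + d2 / dof) ** (-(dof + 1.0) / 2.0)
    Z = sum(weights.values())
    kl = 0.0
    for (i, j), w in weights.items():
        p = P[i, j]
        if p > 0.0:
            kl += p * math.log(p / (w / Z))
    return kl
```

When the embedding makes all pairs equidistant and P is uniform, Q matches P and the divergence is zero; any distortion makes it strictly positive.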
def _kl_divergence_bh(
params,
P,
degrees_of_freedom,
n_samples,
n_components,
angle=0.5,
skip_num_points=0,
verbose=False,
compute_error=True,
num_threads=1,
):
"""t-SNE objective function: KL divergence of p_ijs and q_ijs.
Uses Barnes-Hut tree methods to calculate the ... | t-SNE objective function: KL divergence of p_ijs and q_ijs.
Uses Barnes-Hut tree methods to calculate the gradient that
runs in O(NlogN) instead of O(N^2).
Parameters
----------
params : ndarray of shape (n_params,)
Unraveled embedding.
P : sparse matrix of shape (n_samples, n_sample)... | _kl_divergence_bh | python | scikit-learn/scikit-learn | sklearn/manifold/_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_t_sne.py | BSD-3-Clause |
def _gradient_descent(
objective,
p0,
it,
max_iter,
n_iter_check=1,
n_iter_without_progress=300,
momentum=0.8,
learning_rate=200.0,
min_gain=0.01,
min_grad_norm=1e-7,
verbose=0,
args=None,
kwargs=None,
):
"""Batch gradient descent with momentum and individual gain... | Batch gradient descent with momentum and individual gains.
Parameters
----------
objective : callable
Should return a tuple of cost and gradient for a given parameter
vector. When expensive to compute, the cost can optionally
be None and can be computed every n_iter_check steps usin... | _gradient_descent | python | scikit-learn/scikit-learn | sklearn/manifold/_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_t_sne.py | BSD-3-Clause |
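The momentum-with-gains update used by `_gradient_descent` can be sketched in pure Python. This is an illustrative sketch of the scheme, not the vectorized scikit-learn code; the 0.2 / 0.8 gain constants follow the common t-SNE convention and are an assumption here.

```python
def gradient_step(params, grad, update, gains, momentum=0.8,
                  learning_rate=200.0, min_gain=0.01):
    """One in-place momentum step with per-parameter gains (sketch)."""
    for i in range(len(params)):
        # A gain grows while the running update keeps moving against the
        # gradient (consistent descent) and shrinks when the direction
        # flips; it is clipped from below at min_gain.
        if update[i] * grad[i] < 0.0:
            gains[i] += 0.2
        else:
            gains[i] *= 0.8
        gains[i] = max(gains[i], min_gain)
        update[i] = momentum * update[i] - learning_rate * gains[i] * grad[i]
        params[i] += update[i]
    return params
```

`params`, `update`, and `gains` are all modified in place, mirroring how the optimizer carries state between iterations.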
def trustworthiness(X, X_embedded, *, n_neighbors=5, metric="euclidean"):
r"""Indicate to what extent the local structure is retained.
The trustworthiness is within [0, 1]. It is defined as
.. math::
T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1}
\sum_{j \in \mathcal{N}_{i}^{k}} \... | Indicate to what extent the local structure is retained.
The trustworthiness is within [0, 1]. It is defined as
.. math::
T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1}
\sum_{j \in \mathcal{N}_{i}^{k}} \max(0, (r(i, j) - k))
where for each sample i, :math:`\mathcal{N}_{i}^{k}` ar... | trustworthiness | python | scikit-learn/scikit-learn | sklearn/manifold/_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_t_sne.py | BSD-3-Clause |
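The formula above can be sketched directly in pure Python. This is an illustrative sketch (my helper name, squared-Euclidean ranks, and it assumes `n_neighbors < n_samples / 2` as the real function requires), not scikit-learn's vectorized implementation.

```python
def trustworthiness_sketch(X, X_embedded, n_neighbors=5):
    # T(k) = 1 - 2 / (n k (2n - 3k - 1)) * sum of max(0, r(i, j) - k)
    # over the k nearest embedded neighbors j of each sample i, where
    # r(i, j) is j's 1-based rank among i's neighbors in input space.
    n, k = len(X), n_neighbors

    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    def neighbor_order(data, i):
        # all other points, sorted by distance to point i
        return sorted((j for j in range(n) if j != i),
                      key=lambda j: sq_dist(data[i], data[j]))

    penalty = 0.0
    for i in range(n):
        rank = {j: r + 1 for r, j in enumerate(neighbor_order(X, i))}
        for j in neighbor_order(X_embedded, i)[:k]:
            penalty += max(0.0, rank[j] - k)
    return 1.0 - 2.0 / (n * k * (2 * n - 3 * k - 1)) * penalty
```

A perfect embedding (identical neighbor orderings) scores exactly 1; pulling a point next to a far-away neighbor lowers the score.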
def _fit(self, X, skip_num_points=0):
"""Private function to fit the model using X as training data."""
if isinstance(self.init, str) and self.init == "pca" and issparse(X):
raise TypeError(
"PCA initialization is currently not supported "
"with the sparse in... | Private function to fit the model using X as training data. | _fit | python | scikit-learn/scikit-learn | sklearn/manifold/_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_t_sne.py | BSD-3-Clause |
def fit_transform(self, X, y=None):
"""Fit X into an embedded space and return that transformed output.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) or \
(n_samples, n_samples)
If the metric is 'precomputed' X must be a s... | Fit X into an embedded space and return that transformed output.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
If the metric is 'precomputed' X must be a square distance
matrix. Otherwise it c... | fit_transform | python | scikit-learn/scikit-learn | sklearn/manifold/_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/_t_sne.py | BSD-3-Clause |
def test_isomap_fitted_attributes_dtype(global_dtype):
"""Check that the fitted attributes are stored according to the
data type of X."""
iso = manifold.Isomap(n_neighbors=2)
X = np.array([[1, 2], [3, 4], [5, 6]], dtype=global_dtype)
iso.fit(X)
assert iso.dist_matrix_.dtype == global_dtype
... | Check that the fitted attributes are stored according to the
data type of X. | test_isomap_fitted_attributes_dtype | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_isomap.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_isomap.py | BSD-3-Clause |
def test_isomap_dtype_equivalence():
"""Check the equivalence of the results with 32 and 64 bits input."""
iso_32 = manifold.Isomap(n_neighbors=2)
X_32 = np.array([[1, 2], [3, 4], [5, 6]], dtype=np.float32)
iso_32.fit(X_32)
iso_64 = manifold.Isomap(n_neighbors=2)
X_64 = np.array([[1, 2], [3, 4]... | Check the equivalence of the results with 32 and 64 bits input. | test_isomap_dtype_equivalence | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_isomap.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_isomap.py | BSD-3-Clause |
def test_normed_stress(k):
"""Test that non-metric MDS normalized stress is scale-invariant."""
sim = np.array([[0, 5, 3, 4], [5, 0, 2, 2], [3, 2, 0, 1], [4, 2, 1, 0]])
X1, stress1 = mds.smacof(sim, metric=False, max_iter=5, random_state=0)
X2, stress2 = mds.smacof(k * sim, metric=False, max_iter=5, ra... | Test that non-metric MDS normalized stress is scale-invariant. | test_normed_stress | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_mds.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_mds.py | BSD-3-Clause |
def _assert_equal_with_sign_flipping(A, B, tol=0.0):
"""Check arrays A and B are equal with possible sign flipping on
each column"""
tol_squared = tol**2
for A_col, B_col in zip(A.T, B.T):
assert (
np.max((A_col - B_col) ** 2) <= tol_squared
or np.max((A_col + B_col) ** 2)... | Check arrays A and B are equal with possible sign flipping on
each column | _assert_equal_with_sign_flipping | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_spectral_embedding.py | BSD-3-Clause |
def test_spectral_embedding_preserves_dtype(eigen_solver, dtype):
"""Check that `SpectralEmbedding` is preserving the dtype of the fitted
attribute and transformed data.
Ideally, this test should be covered by the common test
`check_transformer_preserve_dtypes`. However, this test only runs
with tran... | Check that `SpectralEmbedding` is preserving the dtype of the fitted
attribute and transformed data.
Ideally, this test should be covered by the common test
`check_transformer_preserve_dtypes`. However, this test only runs
with transformers implementing `transform` while `SpectralEmbedding`
implement... | test_spectral_embedding_preserves_dtype | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_spectral_embedding.py | BSD-3-Clause |
def test_spectral_eigen_tol_auto(monkeypatch, solver, csr_container):
"""Test that `eigen_tol="auto"` is resolved correctly"""
if solver == "amg" and not pyamg_available:
pytest.skip("PyAMG is not available.")
X, _ = make_blobs(
n_samples=200, random_state=0, centers=[[1, 1], [-1, -1]], clus... | Test that `eigen_tol="auto"` is resolved correctly | test_spectral_eigen_tol_auto | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_spectral_embedding.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_spectral_embedding.py | BSD-3-Clause |
def test_trustworthiness_n_neighbors_error():
"""Raise an error when n_neighbors >= n_samples / 2.
Non-regression test for #18567.
"""
regex = "n_neighbors .+ should be less than .+"
rng = np.random.RandomState(42)
X = rng.rand(7, 4)
X_embedded = rng.rand(7, 2)
with pytest.raises(ValueE... | Raise an error when n_neighbors >= n_samples / 2.
Non-regression test for #18567.
| test_trustworthiness_n_neighbors_error | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_optimization_minimizes_kl_divergence():
"""t-SNE should give a lower KL divergence with more iterations."""
random_state = check_random_state(0)
X, _ = make_blobs(n_features=3, random_state=random_state)
kl_divergences = []
for max_iter in [250, 300, 350]:
tsne = TSNE(
n... | t-SNE should give a lower KL divergence with more iterations. | test_optimization_minimizes_kl_divergence | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_sparse_precomputed_distance(sparse_container):
"""Make sure that TSNE works identically for sparse and dense matrix"""
random_state = check_random_state(0)
X = random_state.randn(100, 2)
D_sparse = kneighbors_graph(X, n_neighbors=100, mode="distance", include_self=True)
D = pairwise_distan... | Make sure that TSNE works identically for sparse and dense matrix | test_sparse_precomputed_distance | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_uniform_grid(method):
"""Make sure that TSNE can approximately recover a uniform 2D grid
Due to ties in distances between points in X_2d_grid, this test is platform
dependent for ``method='barnes_hut'`` due to numerical imprecision.
Also, t-SNE is not assured to converge to the right solution ... | Make sure that TSNE can approximately recover a uniform 2D grid
Due to ties in distances between points in X_2d_grid, this test is platform
dependent for ``method='barnes_hut'`` due to numerical imprecision.
Also, t-SNE is not assured to converge to the right solution because bad
initialization can lea... | test_uniform_grid | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_tsne_with_different_distance_metrics(metric, dist_func, method):
"""Make sure that TSNE works for different distance metrics"""
if method == "barnes_hut" and metric == "manhattan":
# The distances computed by `manhattan_distances` differ slightly from those
# computed internally by Nea... | Make sure that TSNE works for different distance metrics | test_tsne_with_different_distance_metrics | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_tsne_n_jobs(method):
"""Make sure that the n_jobs parameter doesn't impact the output"""
random_state = check_random_state(0)
n_features = 10
X = random_state.randn(30, n_features)
X_tr_ref = TSNE(
n_components=2,
method=method,
perplexity=25.0,
angle=0,
... | Make sure that the n_jobs parameter doesn't impact the output | test_tsne_n_jobs | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_tsne_with_mahalanobis_distance():
"""Make sure that method_parameters works with mahalanobis distance."""
random_state = check_random_state(0)
n_samples, n_features = 300, 10
X = random_state.randn(n_samples, n_features)
default_params = {
"perplexity": 40,
"max_iter": 250,
... | Make sure that method_parameters works with mahalanobis distance. | test_tsne_with_mahalanobis_distance | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_tsne_perplexity_validation(perplexity):
"""Make sure that perplexity > n_samples results in a ValueError"""
random_state = check_random_state(0)
X = random_state.randn(20, 2)
est = TSNE(
learning_rate="auto",
init="pca",
perplexity=perplexity,
random_state=rando... | Make sure that perplexity > n_samples results in a ValueError | test_tsne_perplexity_validation | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def test_tsne_works_with_pandas_output():
"""Make sure that TSNE works when the output is set to "pandas".
Non-regression test for gh-25365.
"""
pytest.importorskip("pandas")
with config_context(transform_output="pandas"):
arr = np.arange(35 * 4).reshape(35, 4)
TSNE(n_components=2).... | Make sure that TSNE works when the output is set to "pandas".
Non-regression test for gh-25365.
| test_tsne_works_with_pandas_output | python | scikit-learn/scikit-learn | sklearn/manifold/tests/test_t_sne.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/tests/test_t_sne.py | BSD-3-Clause |
def _return_float_dtype(X, Y):
"""
1. If dtype of X and Y is float32, then dtype float32 is returned.
2. Else dtype float is returned.
"""
if not issparse(X) and not isinstance(X, np.ndarray):
X = np.asarray(X)
if Y is None:
Y_dtype = X.dtype
elif not issparse(Y) and not isi... |
1. If dtype of X and Y is float32, then dtype float32 is returned.
2. Else dtype float is returned.
| _return_float_dtype | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def check_pairwise_arrays(
X,
Y,
*,
precomputed=False,
dtype="infer_float",
accept_sparse="csr",
force_all_finite="deprecated",
ensure_all_finite=None,
ensure_2d=True,
copy=False,
):
"""Set X and Y appropriately and checks inputs.
If Y is None, it is set as a pointer to ... | Set X and Y appropriately and checks inputs.
If Y is None, it is set as a pointer to X (i.e. not a copy).
If Y is given, this does not happen.
All distance metrics should use this function first to assert that the
given parameters are correct and safe to use.
Specifically, this function first ensu... | check_pairwise_arrays | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def check_paired_arrays(X, Y):
"""Set X and Y appropriately and checks inputs for paired distances.
All paired distance metrics should use this function first to assert that
the given parameters are correct and safe to use.
Specifically, this function first ensures that both X and Y are arrays,
th... | Set X and Y appropriately and checks inputs for paired distances.
All paired distance metrics should use this function first to assert that
the given parameters are correct and safe to use.
Specifically, this function first ensures that both X and Y are arrays,
then checks that they are at least two d... | check_paired_arrays | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def euclidean_distances(
X, Y=None, *, Y_norm_squared=None, squared=False, X_norm_squared=None
):
"""
Compute the distance matrix between each pair from a feature array X and Y.
For efficiency reasons, the euclidean distance between a pair of row
vectors x and y is computed as::
dist(x, y) ... |
Compute the distance matrix between each pair from a feature array X and Y.
For efficiency reasons, the euclidean distance between a pair of row
vectors x and y is computed as::
dist(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y))
This formulation has two advantages over other ways of com... | euclidean_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
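The expansion `dist(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y))` can be sketched for a single pair of vectors in pure Python. This is an illustrative sketch under my own helper name; the real function operates on whole (possibly sparse) matrices.

```python
import math

def euclidean_distance(x, y):
    # dist(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y))
    xx = sum(a * a for a in x)
    yy = sum(b * b for b in y)
    xy = sum(a * b for a, b in zip(x, y))
    # Guard against tiny negative values caused by floating-point
    # cancellation -- the same caveat the docstring warns about.
    return math.sqrt(max(xx - 2.0 * xy + yy, 0.0))
```

The `max(..., 0.0)` guard is exactly why the docstring notes this formulation can be less numerically precise than the naive `sqrt(sum((x - y) ** 2))`.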
def _euclidean_distances(X, Y, X_norm_squared=None, Y_norm_squared=None, squared=False):
"""Computational part of euclidean_distances
Assumes inputs are already checked.
If norms are passed as float32, they are unused. If arrays are passed as
float32, norms need to be recomputed on upcast chunks.
... | Computational part of euclidean_distances
Assumes inputs are already checked.
If norms are passed as float32, they are unused. If arrays are passed as
float32, norms need to be recomputed on upcast chunks.
TODO: use a float64 accumulator in row_norms to avoid the latter.
| _euclidean_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def nan_euclidean_distances(
X, Y=None, *, squared=False, missing_values=np.nan, copy=True
):
"""Calculate the euclidean distances in the presence of missing values.
Compute the euclidean distance between each pair of samples in X and Y,
where Y=X is assumed if Y=None. When calculating the distance bet... | Calculate the euclidean distances in the presence of missing values.
Compute the euclidean distance between each pair of samples in X and Y,
where Y=X is assumed if Y=None. When calculating the distance between a
pair of samples, this formulation ignores feature coordinates with a
missing value in eith... | nan_euclidean_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
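The missing-value handling can be sketched for one pair of vectors: coordinates that are NaN in either vector are skipped, and the squared distance over the remaining coordinates is rescaled by `n_total / n_present` so magnitudes stay comparable. Illustrative sketch under my own helper name, not the scikit-learn implementation.

```python
import math

def nan_euclidean_pair(x, y):
    # dist = sqrt(weight * squared distance over present coordinates),
    # with weight = total number of coordinates / number present in both.
    sq, present = 0.0, 0
    for a, b in zip(x, y):
        if not (math.isnan(a) or math.isnan(b)):
            sq += (a - b) ** 2
            present += 1
    if present == 0:
        return float("nan")
    weight = len(x) / present
    return math.sqrt(weight * sq)
```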
def pairwise_distances_argmin_min(
X, Y, *, axis=1, metric="euclidean", metric_kwargs=None
):
"""Compute minimum distances between one point and a set of points.
This function computes for each row in X, the index of the row of Y which
is closest (according to the specified distance). The minimal dista... | Compute minimum distances between one point and a set of points.
This function computes for each row in X, the index of the row of Y which
is closest (according to the specified distance). The minimal distances are
also returned.
This is mostly equivalent to calling::
(pairwise_distances(X, Y... | pairwise_distances_argmin_min | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def pairwise_distances_argmin(X, Y, *, axis=1, metric="euclidean", metric_kwargs=None):
"""Compute minimum distances between one point and a set of points.
This function computes for each row in X, the index of the row of Y which
is closest (according to the specified distance).
This is mostly equival... | Compute minimum distances between one point and a set of points.
This function computes for each row in X, the index of the row of Y which
is closest (according to the specified distance).
This is mostly equivalent to calling::
pairwise_distances(X, Y=Y, metric=metric).argmin(axis=axis)
but ... | pairwise_distances_argmin | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def haversine_distances(X, Y=None):
"""Compute the Haversine distance between samples in X and Y.
The Haversine (or great circle) distance is the angular distance between
two points on the surface of a sphere. The first coordinate of each point
is assumed to be the latitude, the second is the longitude... | Compute the Haversine distance between samples in X and Y.
The Haversine (or great circle) distance is the angular distance between
two points on the surface of a sphere. The first coordinate of each point
is assumed to be the latitude, the second is the longitude, given
in radians. The dimension of th... | haversine_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
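The underlying haversine formula for one pair of `(latitude, longitude)` points in radians can be sketched as follows (illustrative, my own helper name):

```python
import math

def haversine_distance(p1, p2):
    # Great-circle (angular) distance between two (lat, lon) points,
    # both given in radians:
    #   2 * arcsin(sqrt(sin^2(dlat/2) + cos(lat1) cos(lat2) sin^2(dlon/2)))
    lat1, lon1 = p1
    lat2, lon2 = p2
    a = (math.sin((lat2 - lat1) / 2.0) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * math.asin(math.sqrt(a))
```

Multiplying the result by the Earth's radius gives a physical distance; the function itself returns the angle.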
def manhattan_distances(X, Y=None):
"""Compute the L1 distances between the vectors in X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
An array where each row is a sample and each column is a fe... | Compute the L1 distances between the vectors in X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
An array where each row is a sample and each column is a feature.
Y : {array-like, sparse matrix}... | manhattan_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def cosine_distances(X, Y=None):
"""Compute cosine distance between samples in X and Y.
Cosine distance is defined as 1.0 minus the cosine similarity.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
... | Compute cosine distance between samples in X and Y.
Cosine distance is defined as 1.0 minus the cosine similarity.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
Matrix `X`.
Y : {array-like, spars... | cosine_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def paired_euclidean_distances(X, Y):
"""Compute the paired euclidean distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input array/matrix X.
Y : {array-like, sparse matrix} o... | Compute the paired euclidean distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input array/matrix X.
Y : {array-like, sparse matrix} of shape (n_samples, n_features)
Input... | paired_euclidean_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def paired_manhattan_distances(X, Y):
"""Compute the paired L1 distances between X and Y.
Distances are calculated between (X[0], Y[0]), (X[1], Y[1]), ...,
(X[n_samples], Y[n_samples]).
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of ... | Compute the paired L1 distances between X and Y.
Distances are calculated between (X[0], Y[0]), (X[1], Y[1]), ...,
(X[n_samples], Y[n_samples]).
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
An arra... | paired_manhattan_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
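The "paired" convention is worth making concrete: distances are taken only between corresponding rows, returning a vector of `n_samples` values rather than the `n_x * n_y` matrix that the pairwise functions produce. A minimal sketch (my own helper name):

```python
def paired_manhattan(X, Y):
    # One L1 distance per row pair (X[i], Y[i]).
    return [sum(abs(a - b) for a, b in zip(x, y)) for x, y in zip(X, Y)]
```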
def paired_cosine_distances(X, Y):
"""
Compute the paired cosine distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
An array where each row is a sample and each column is a feat... |
Compute the paired cosine distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
An array where each row is a sample and each column is a feature.
Y : {array-like, sparse matrix} ... | paired_cosine_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def paired_distances(X, Y, *, metric="euclidean", **kwds):
"""
Compute the paired distances between X and Y.
Compute the distances between (X[0], Y[0]), (X[1], Y[1]), etc...
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
... |
Compute the paired distances between X and Y.
Compute the distances between (X[0], Y[0]), (X[1], Y[1]), etc...
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Array 1 for distance computation.
Y : ndarray of shape ... | paired_distances | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def linear_kernel(X, Y=None, dense_output=True):
"""
Compute the linear kernel between X and Y.
Read more in the :ref:`User Guide <linear_kernel>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
A feature array.
Y : {array-like, sparse mat... |
Compute the linear kernel between X and Y.
Read more in the :ref:`User Guide <linear_kernel>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
A feature array.
Y : {array-like, sparse matrix} of shape (n_samples_Y, n_features), default=None
... | linear_kernel | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def polynomial_kernel(X, Y=None, degree=3, gamma=None, coef0=1):
"""
Compute the polynomial kernel between X and Y.
.. code-block:: text
K(X, Y) = (gamma <X, Y> + coef0) ^ degree
Read more in the :ref:`User Guide <polynomial_kernel>`.
Parameters
----------
X : {array-like, sparse... |
Compute the polynomial kernel between X and Y.
.. code-block:: text
K(X, Y) = (gamma <X, Y> + coef0) ^ degree
Read more in the :ref:`User Guide <polynomial_kernel>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
A feature array.
... | polynomial_kernel | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
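The kernel formula `K(X, Y) = (gamma <X, Y> + coef0) ^ degree` can be sketched for a single pair of vectors; the `gamma = 1 / n_features` fallback mirrors scikit-learn's documented default when `gamma=None`. Illustrative sketch, not the matrix implementation.

```python
def polynomial_kernel_pair(x, y, degree=3, gamma=None, coef0=1.0):
    # K(x, y) = (gamma * <x, y> + coef0) ** degree
    if gamma is None:
        gamma = 1.0 / len(x)  # scikit-learn's default: 1 / n_features
    dot = sum(a * b for a, b in zip(x, y))
    return (gamma * dot + coef0) ** degree
```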
def sigmoid_kernel(X, Y=None, gamma=None, coef0=1):
"""Compute the sigmoid kernel between X and Y.
.. code-block:: text
K(X, Y) = tanh(gamma <X, Y> + coef0)
Read more in the :ref:`User Guide <sigmoid_kernel>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_... | Compute the sigmoid kernel between X and Y.
.. code-block:: text
K(X, Y) = tanh(gamma <X, Y> + coef0)
Read more in the :ref:`User Guide <sigmoid_kernel>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
A feature array.
Y : {array-lik... | sigmoid_kernel | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
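The formula `K(X, Y) = tanh(gamma <X, Y> + coef0)` for one pair of vectors, as an illustrative sketch (again assuming the `1 / n_features` default for `gamma`):

```python
import math

def sigmoid_kernel_pair(x, y, gamma=None, coef0=1.0):
    # K(x, y) = tanh(gamma * <x, y> + coef0)
    if gamma is None:
        gamma = 1.0 / len(x)  # scikit-learn's default: 1 / n_features
    dot = sum(a * b for a, b in zip(x, y))
    return math.tanh(gamma * dot + coef0)
```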
def rbf_kernel(X, Y=None, gamma=None):
"""Compute the rbf (gaussian) kernel between X and Y.
.. code-block:: text
K(x, y) = exp(-gamma ||x-y||^2)
for each pair of rows x in X and y in Y.
Read more in the :ref:`User Guide <rbf_kernel>`.
Parameters
----------
X : {array-like, spar... | Compute the rbf (gaussian) kernel between X and Y.
.. code-block:: text
K(x, y) = exp(-gamma ||x-y||^2)
for each pair of rows x in X and y in Y.
Read more in the :ref:`User Guide <rbf_kernel>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
... | rbf_kernel | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
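The RBF formula `K(x, y) = exp(-gamma ||x - y||^2)` for a single pair, as an illustrative sketch with the `1 / n_features` default for `gamma`:

```python
import math

def rbf_kernel_pair(x, y, gamma=None):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    if gamma is None:
        gamma = 1.0 / len(x)  # scikit-learn's default: 1 / n_features
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)
```

Identical vectors always map to 1, and the value decays toward 0 as the squared distance grows.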
def laplacian_kernel(X, Y=None, gamma=None):
"""Compute the laplacian kernel between X and Y.
The laplacian kernel is defined as:
.. code-block:: text
K(x, y) = exp(-gamma ||x-y||_1)
for each pair of rows x in X and y in Y.
Read more in the :ref:`User Guide <laplacian_kernel>`.
.. v... | Compute the laplacian kernel between X and Y.
The laplacian kernel is defined as:
.. code-block:: text
K(x, y) = exp(-gamma ||x-y||_1)
for each pair of rows x in X and y in Y.
Read more in the :ref:`User Guide <laplacian_kernel>`.
.. versionadded:: 0.17
Parameters
----------
... | laplacian_kernel | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
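The Laplacian kernel swaps the squared Euclidean norm of the RBF kernel for the L1 norm: `K(x, y) = exp(-gamma ||x - y||_1)`. An illustrative single-pair sketch:

```python
import math

def laplacian_kernel_pair(x, y, gamma=None):
    # K(x, y) = exp(-gamma * ||x - y||_1)
    if gamma is None:
        gamma = 1.0 / len(x)  # scikit-learn's default: 1 / n_features
    l1 = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-gamma * l1)
```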
def cosine_similarity(X, Y=None, dense_output=True):
"""Compute cosine similarity between samples in X and Y.
Cosine similarity, or the cosine kernel, computes similarity as the
normalized dot product of X and Y:
.. code-block:: text
K(X, Y) = <X, Y> / (||X||*||Y||)
On L2-normalized data... | Compute cosine similarity between samples in X and Y.
Cosine similarity, or the cosine kernel, computes similarity as the
normalized dot product of X and Y:
.. code-block:: text
K(X, Y) = <X, Y> / (||X||*||Y||)
On L2-normalized data, this function is equivalent to linear_kernel.
Read mo... | cosine_similarity | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
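The normalized dot product `K(X, Y) = <X, Y> / (||X|| * ||Y||)` for one pair of vectors, as an illustrative sketch; cosine distance is then simply one minus this value:

```python
import math

def cosine_similarity_pair(x, y):
    # K(x, y) = <x, y> / (||x|| * ||y||)
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def cosine_distance_pair(x, y):
    # Cosine distance = 1 - cosine similarity.
    return 1.0 - cosine_similarity_pair(x, y)
```

On L2-normalized vectors the denominator is 1, which is why the docstring notes this reduces to `linear_kernel`.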
def additive_chi2_kernel(X, Y=None):
"""Compute the additive chi-squared kernel between observations in X and Y.
The chi-squared kernel is computed between each pair of rows in X and Y. X
and Y have to be non-negative. This kernel is most commonly applied to
histograms.
The chi-squared kernel is ... | Compute the additive chi-squared kernel between observations in X and Y.
The chi-squared kernel is computed between each pair of rows in X and Y. X
and Y have to be non-negative. This kernel is most commonly applied to
histograms.
The chi-squared kernel is given by:
.. code-block:: text
... | additive_chi2_kernel | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def chi2_kernel(X, Y=None, gamma=1.0):
"""Compute the exponential chi-squared kernel between X and Y.
The chi-squared kernel is computed between each pair of rows in X and Y. X
and Y have to be non-negative. This kernel is most commonly applied to
histograms.
The chi-squared kernel is given by:
... | Compute the exponential chi-squared kernel between X and Y.
The chi-squared kernel is computed between each pair of rows in X and Y. X
and Y have to be non-negative. This kernel is most commonly applied to
histograms.
The chi-squared kernel is given by:
.. code-block:: text
k(x, y) = ex... | chi2_kernel | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
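Both chi-squared kernels can be sketched together for a single pair of non-negative vectors: the additive form is `-sum((x_i - y_i)^2 / (x_i + y_i))`, and the exponential form wraps it as `exp(gamma * additive)`. Illustrative sketch; bins where `x_i + y_i == 0` are taken to contribute 0.

```python
import math

def additive_chi2_pair(x, y):
    # k(x, y) = -sum_i (x_i - y_i)^2 / (x_i + y_i), skipping empty bins
    total = 0.0
    for a, b in zip(x, y):
        if a + b > 0:
            total += (a - b) ** 2 / (a + b)
    return -total

def chi2_pair(x, y, gamma=1.0):
    # k(x, y) = exp(gamma * additive_chi2(x, y))
    return math.exp(gamma * additive_chi2_pair(x, y))
```

Identical histograms give 0 for the additive kernel and therefore 1 for the exponential one.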
def _parallel_pairwise(X, Y, func, n_jobs, **kwds):
"""Break the pairwise matrix in n_jobs even slices
and compute them using multithreading."""
if Y is None:
Y = X
X, Y, dtype = _return_float_dtype(X, Y)
if effective_n_jobs(n_jobs) == 1:
return func(X, Y, **kwds)
# enforce a ... | Break the pairwise matrix in n_jobs even slices
and compute them using multithreading. | _parallel_pairwise | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
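A simplified sketch of the slicing idea (scikit-learn itself uses joblib and writes each slice into a preallocated output array; the thread-pool version below is our approximation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_pairwise_sketch(X, Y, func, n_jobs=2):
    # split Y into n_jobs roughly even row blocks, compute each column
    # slice of the pairwise matrix in a thread, then stitch them together
    blocks = np.array_split(np.arange(Y.shape[0]), n_jobs)
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        parts = list(pool.map(lambda idx: func(X, Y[idx]), blocks))
    return np.hstack(parts)

rng = np.random.RandomState(0)
X = rng.rand(5, 3)
Y = rng.rand(4, 3)
K = parallel_pairwise_sketch(X, Y, lambda A, B: A @ B.T, n_jobs=2)
```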
def _pairwise_callable(X, Y, metric, ensure_all_finite=True, **kwds):
"""Handle the callable case for pairwise_{distances,kernels}."""
X, Y = check_pairwise_arrays(
X,
Y,
dtype=None,
ensure_all_finite=ensure_all_finite,
# No input dimension checking done for custom metric... | Handle the callable case for pairwise_{distances,kernels}. | _pairwise_callable | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
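This helper is what runs when you pass a Python callable as `metric`; the callable receives one row from each array per pair. A usage sketch through the public `pairwise_distances` entry point:

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def manhattan(u, v):
    # a custom callable metric, applied to every pair of rows
    return np.abs(u - v).sum()

rng = np.random.RandomState(0)
X = rng.rand(4, 2)
D = pairwise_distances(X, metric=manhattan)

# dense reference computation for comparison
D_expected = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=-1)
```

Note that the callable path is much slower than a named metric, since it makes one Python call per row pair.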
def _check_chunk_size(reduced, chunk_size):
"""Checks chunk is a sequence of expected size or a tuple of same."""
if reduced is None:
return
is_tuple = isinstance(reduced, tuple)
if not is_tuple:
reduced = (reduced,)
if any(isinstance(r, tuple) or not hasattr(r, "__iter__") for r in ... | Checks chunk is a sequence of expected size or a tuple of same. | _check_chunk_size | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def _precompute_metric_params(X, Y, metric=None, **kwds):
"""Precompute data-derived metric parameters if not provided."""
if metric == "seuclidean" and "V" not in kwds:
if X is Y:
V = np.var(X, axis=0, ddof=1)
else:
raise ValueError(
"The 'V' parameter is... | Precompute data-derived metric parameters if not provided. | _precompute_metric_params | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
def pairwise_distances_chunked(
X,
Y=None,
*,
reduce_func=None,
metric="euclidean",
n_jobs=None,
working_memory=None,
**kwds,
):
"""Generate a distance matrix chunk by chunk with optional reduction.
In cases where not all of a pairwise distance matrix needs to be
stored at o... | Generate a distance matrix chunk by chunk with optional reduction.
In cases where not all of a pairwise distance matrix needs to be
stored at once, this is used to calculate pairwise distances in
``working_memory``-sized chunks. If ``reduce_func`` is given, it is
run on each chunk and its return value... | pairwise_distances_chunked | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
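A usage sketch of the public API: the generator yields one distance chunk at a time, and `reduce_func(D_chunk, start)` collapses each chunk before the next is computed, so the full matrix never has to exist in memory. Here the reduction keeps only each row's nearest other neighbour:

```python
import numpy as np
from sklearn.metrics import pairwise_distances_chunked

rng = np.random.RandomState(0)
X = rng.rand(50, 4)

def reduce_func(D_chunk, start):
    # 'start' is the row offset of this chunk in the full matrix;
    # mask each row's self-distance, then keep the nearest neighbour index
    D_chunk = np.asarray(D_chunk).copy()
    rows = np.arange(D_chunk.shape[0])
    D_chunk[rows, start + rows] = np.inf
    return D_chunk.argmin(axis=1)

neighbours = np.concatenate(
    list(pairwise_distances_chunked(X, reduce_func=reduce_func))
)

# dense reference computation for comparison
D_full = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
np.fill_diagonal(D_full, np.inf)
expected = D_full.argmin(axis=1)
```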
def pairwise_kernels(
X, Y=None, metric="linear", *, filter_params=False, n_jobs=None, **kwds
):
"""Compute the kernel between arrays X and optional array Y.
This function takes one or two feature arrays or a kernel matrix, and returns
a kernel matrix.
- If `X` is a feature array, of shape (n_samp... | Compute the kernel between arrays X and optional array Y.
This function takes one or two feature arrays or a kernel matrix, and returns
a kernel matrix.
- If `X` is a feature array, of shape (n_samples_X, n_features), and:
- `Y` is `None` and `metric` is not 'precomputed', the pairwise kernels
... | pairwise_kernels | python | scikit-learn/scikit-learn | sklearn/metrics/pairwise.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/pairwise.py | BSD-3-Clause |
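A short usage sketch of the feature-array case: `metric` selects the kernel by name, and kernel parameters such as `gamma` are forwarded through `**kwds`:

```python
import numpy as np
from sklearn.metrics import pairwise_kernels

rng = np.random.RandomState(0)
X = rng.rand(5, 3)

K_lin = pairwise_kernels(X, metric="linear")          # equals X @ X.T
K_rbf = pairwise_kernels(X, metric="rbf", gamma=0.5)  # extra params via **kwds
```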
def _average_binary_score(binary_metric, y_true, y_score, average, sample_weight=None):
"""Average a binary metric for multilabel classification.
Parameters
----------
y_true : array, shape = [n_samples] or [n_samples, n_classes]
True binary labels in binary label indicators.
y_score : arr... | Average a binary metric for multilabel classification.
Parameters
----------
y_true : array, shape = [n_samples] or [n_samples, n_classes]
True binary labels in binary label indicators.
y_score : array, shape = [n_samples] or [n_samples, n_classes]
Target scores, can either be probabil... | _average_binary_score | python | scikit-learn/scikit-learn | sklearn/metrics/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_base.py | BSD-3-Clause |
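A sketch of what "average a binary metric" means in the multilabel case, using the public `roc_auc_score` (which routes through this helper): with `average="macro"`, the score is the unweighted mean of the binary metric applied to each label column separately:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# multilabel indicator targets: one binary column per label
y_true = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
rng = np.random.RandomState(0)
y_score = rng.rand(4, 3)

# per-column binary AUCs, then their unweighted mean
per_label = [roc_auc_score(y_true[:, c], y_score[:, c]) for c in range(3)]
macro = roc_auc_score(y_true, y_score, average="macro")
```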
def _average_multiclass_ovo_score(binary_metric, y_true, y_score, average="macro"):
"""Average one-versus-one scores for multiclass classification.
Uses the binary metric for one-vs-one multiclass classification,
where the score is computed according to the Hand & Till (2001) algorithm.
Parameters
... | Average one-versus-one scores for multiclass classification.
Uses the binary metric for one-vs-one multiclass classification,
where the score is computed according to the Hand & Till (2001) algorithm.
Parameters
----------
binary_metric : callable
The binary metric function to use that acc... | _average_multiclass_ovo_score | python | scikit-learn/scikit-learn | sklearn/metrics/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_base.py | BSD-3-Clause |
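A sketch of the Hand & Till (2001) scheme as we understand it: for every unordered class pair (a, b), restrict to samples of those two classes, compute the binary AUC in each direction, average the two, then macro-average over pairs. The result should match the public `roc_auc_score(..., multi_class="ovo")`:

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
y_true = np.array([0, 1, 2, 0, 1, 2, 0, 1])
y_score = rng.rand(8, 3)
y_score /= y_score.sum(axis=1, keepdims=True)  # rows must sum to 1

pair_aucs = []
for a, b in combinations(range(3), 2):
    mask = np.isin(y_true, [a, b])          # only samples of classes a, b
    auc_a = roc_auc_score(y_true[mask] == a, y_score[mask, a])
    auc_b = roc_auc_score(y_true[mask] == b, y_score[mask, b])
    pair_aucs.append((auc_a + auc_b) / 2)   # symmetrize the pair

ovo_macro = roc_auc_score(y_true, y_score, multi_class="ovo", average="macro")
```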
def _check_targets(y_true, y_pred):
"""Check that y_true and y_pred belong to the same classification task.
This converts multiclass or binary types to a common shape, and raises a
ValueError for a mix of multilabel and multiclass targets, a mix of
multilabel formats, for the presence of continuous-val... | Check that y_true and y_pred belong to the same classification task.
This converts multiclass or binary types to a common shape, and raises a
ValueError for a mix of multilabel and multiclass targets, a mix of
multilabel formats, for the presence of continuous-valued or multioutput
targets, or for targ... | _check_targets | python | scikit-learn/scikit-learn | sklearn/metrics/_classification.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py | BSD-3-Clause |
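The classification described above is driven by `type_of_target`, which labels each array before `_check_targets` decides whether the two labels are compatible. A short sketch of the labels it produces:

```python
from sklearn.utils.multiclass import type_of_target

# _check_targets first classifies each argument with type_of_target,
# then raises if the two types cannot be compared
kinds = {
    "binary": type_of_target([0, 1, 1, 0]),
    "multiclass": type_of_target([0, 1, 2]),
    "multilabel": type_of_target([[0, 1], [1, 1]]),
    "continuous": type_of_target([0.1, 0.6]),
}
```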