| code (string, 66–870k chars) | docstring (string, 19–26.7k chars) | func_name (string, 1–138 chars) | language (1 value) | repo (string, 7–68 chars) | path (string, 5–324 chars) | url (string, 46–389 chars) | license (7 values) |
|---|---|---|---|---|---|---|---|
def transform(self, X):
"""Apply dimensionality reduction to X.
X is projected on the first principal components previously extracted
from a training set.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data, where ... | Apply dimensionality reduction to X.
X is projected on the first principal components previously extracted
from a training set.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data, where `n_samples` is the number of sample... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_base.py | BSD-3-Clause |
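The truncated `transform` docstring above describes projecting new data onto principal components extracted during fitting. A minimal sketch with the public `PCA` estimator (the random data is illustrative only); with the default `whiten=False`, the projection is simply the centered data times `components_.T`:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 5)

# Fit on a training set, then project X onto the first 2 principal
# components; with whiten=False this is (X - mean_) @ components_.T.
pca = PCA(n_components=2).fit(X)
X_new = pca.transform(X)

assert X_new.shape == (100, 2)
assert np.allclose(X_new, (X - pca.mean_) @ pca.components_.T)
```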
def inverse_transform(self, X):
"""Transform data back to its original space.
In other words, return an input `X_original` whose transform would be X.
Parameters
----------
X : array-like of shape (n_samples, n_components)
New data, where `n_samples` is the number o... | Transform data back to its original space.
In other words, return an input `X_original` whose transform would be X.
Parameters
----------
X : array-like of shape (n_samples, n_components)
New data, where `n_samples` is the number of samples
and `n_components` is... | inverse_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_base.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_base.py | BSD-3-Clause |
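`inverse_transform` returns an input whose transform would be `X`, so the round trip `inverse_transform(transform(X))` recovers the original data exactly when no components are discarded. A hedged sketch (the lossless round trip holds here only because `n_components == n_features`):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(50, 4)

# Keeping all 4 components makes the projection lossless, so the
# round trip recovers X up to floating-point error.
pca = PCA(n_components=4).fit(X)
X_back = pca.inverse_transform(pca.transform(X))

assert np.allclose(X_back, X)
```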
def _sparse_encode_precomputed(
X,
dictionary,
*,
gram=None,
cov=None,
algorithm="lasso_lars",
regularization=None,
copy_cov=True,
init=None,
max_iter=1000,
verbose=0,
positive=False,
):
"""Generic sparse coding with precomputed Gram and/or covariance matrices.
E... | Generic sparse coding with precomputed Gram and/or covariance matrices.
Each row of the result is the solution to a Lasso problem.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Data matrix.
dictionary : ndarray of shape (n_components, n_features)
The dictionar... | _sparse_encode_precomputed | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def sparse_encode(
X,
dictionary,
*,
gram=None,
cov=None,
algorithm="lasso_lars",
n_nonzero_coefs=None,
alpha=None,
copy_cov=True,
init=None,
max_iter=1000,
n_jobs=None,
check_input=True,
verbose=0,
positive=False,
):
"""Sparse coding.
Each row of the... | Sparse coding.
Each row of the result is the solution to a sparse coding problem.
The goal is to find a sparse array `code` such that::
X ~= code * dictionary
Read more in the :ref:`User Guide <SparseCoder>`.
Parameters
----------
X : array-like of shape (n_samples, n_features)
... | sparse_encode | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
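The public `sparse_encode` function wraps the precomputed-Gram helper above: each row of the returned code solves a sparse coding problem so that `X ~= code @ dictionary`. A small sketch with random data (the dictionary and sizes are arbitrary):

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.RandomState(0)
dictionary = rng.randn(8, 10)   # (n_components, n_features)
X = rng.randn(5, 10)            # (n_samples, n_features)

# Each row of `code` solves a Lasso problem so that X ~= code @ dictionary.
code = sparse_encode(X, dictionary, algorithm="lasso_lars", alpha=1e-2)

assert code.shape == (5, 8)
```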
def _sparse_encode(
X,
dictionary,
*,
gram=None,
cov=None,
algorithm="lasso_lars",
n_nonzero_coefs=None,
alpha=None,
copy_cov=True,
init=None,
max_iter=1000,
n_jobs=None,
verbose=0,
positive=False,
):
"""Sparse coding without input/parameter validation."""
... | Sparse coding without input/parameter validation. | _sparse_encode | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def _update_dict(
dictionary,
Y,
code,
A=None,
B=None,
verbose=False,
random_state=None,
positive=False,
):
"""Update the dense dictionary factor in place.
Parameters
----------
dictionary : ndarray of shape (n_components, n_features)
Value of the dictionary at t... | Update the dense dictionary factor in place.
Parameters
----------
dictionary : ndarray of shape (n_components, n_features)
Value of the dictionary at the previous iteration.
Y : ndarray of shape (n_samples, n_features)
Data matrix.
code : ndarray of shape (n_samples, n_components... | _update_dict | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def dict_learning_online(
X,
n_components=2,
*,
alpha=1,
max_iter=100,
return_code=True,
dict_init=None,
callback=None,
batch_size=256,
verbose=False,
shuffle=True,
n_jobs=None,
method="lars",
random_state=None,
positive_dict=False,
positive_code=False,
... | Solve a dictionary learning matrix factorization problem online.
Finds the best dictionary and the corresponding sparse code for
approximating the data matrix X by solving::
(U^*, V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
(U,V)
with || V_k ||... | dict_learning_online | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
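The objective quoted in the `dict_learning_online` docstring is minimized over minibatches rather than the full data. A hedged usage sketch (sizes and hyperparameters are arbitrary; with `return_code=True`, the default, the function returns the sparse code `U` and the dictionary `V`):

```python
import numpy as np
from sklearn.decomposition import dict_learning_online

rng = np.random.RandomState(42)
X = rng.randn(60, 12)

# Minibatch solver for the objective above; with return_code=True
# (the default) it returns the sparse code U and the dictionary V.
code, dictionary = dict_learning_online(
    X, n_components=4, alpha=1.0, batch_size=16, random_state=42
)

assert code.shape == (60, 4)
assert dictionary.shape == (4, 12)
```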
def dict_learning(
X,
n_components,
*,
alpha,
max_iter=100,
tol=1e-8,
method="lars",
n_jobs=None,
dict_init=None,
code_init=None,
callback=None,
verbose=False,
random_state=None,
return_n_iter=False,
positive_dict=False,
positive_code=False,
method_max... | Solve a dictionary learning matrix factorization problem.
Finds the best dictionary and the corresponding sparse code for
approximating the data matrix X by solving::
(U^*, V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
(U,V)
with || V_k ||_2 = 1 f... | dict_learning | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
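The batch `dict_learning` solver alternates sparse coding (dictionary fixed) with dictionary updates (code fixed), recording the objective value at each iteration. A sketch on random data (hyperparameters are arbitrary):

```python
import numpy as np
from sklearn.decomposition import dict_learning

rng = np.random.RandomState(0)
X = rng.randn(30, 8)

# `errors` traces the objective per iteration; alternating minimization
# should not increase it.
code, dictionary, errors = dict_learning(
    X, n_components=3, alpha=1.0, random_state=0
)

assert code.shape == (30, 3)
assert dictionary.shape == (3, 8)
assert errors[-1] <= errors[0]
```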
def _transform(self, X, dictionary):
"""Private method to accommodate both DictionaryLearning and
SparseCoder."""
X = validate_data(self, X, reset=False)
if hasattr(self, "alpha") and self.transform_alpha is None:
transform_alpha = self.alpha
else:
... | Private method allowing to accommodate both DictionaryLearning and
SparseCoder. | _transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def transform(self, X):
"""Encode the data as a sparse combination of the dictionary atoms.
Coding method is determined by the object parameter
`transform_algorithm`.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Test data to be transfor... | Encode the data as a sparse combination of the dictionary atoms.
Coding method is determined by the object parameter
`transform_algorithm`.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of
... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def inverse_transform(self, X):
"""Transform data back to its original space.
Parameters
----------
X : array-like of shape (n_samples, n_components)
Data to be transformed back. Must have the same number of
components as the data used to train the model.
... | Transform data back to its original space.
Parameters
----------
X : array-like of shape (n_samples, n_components)
Data to be transformed back. Must have the same number of
components as the data used to train the model.
Returns
-------
X_origina... | inverse_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def fit_transform(self, X, y=None):
"""Fit the model from data in X and return the transformed data.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number o... | Fit the model from data in X and return the transformed data.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
No... | fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def _minibatch_step(self, X, dictionary, random_state, step):
"""Perform the update on the dictionary for one minibatch."""
batch_size = X.shape[0]
# Compute code for this batch
code = _sparse_encode(
X,
dictionary,
algorithm=self._fit_algorithm,
... | Perform the update on the dictionary for one minibatch. | _minibatch_step | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def _check_convergence(
self, X, batch_cost, new_dict, old_dict, n_samples, step, n_steps
):
"""Helper function to encapsulate the early stopping logic.
Early stopping is based on two factors:
- A small change of the dictionary between two minibatch updates. This is
contro... | Helper function to encapsulate the early stopping logic.
Early stopping is based on two factors:
- A small change of the dictionary between two minibatch updates. This is
controlled by the tol parameter.
- No more improvement on a smoothed estimate of the objective function for a
... | _check_convergence | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def fit(self, X, y=None):
"""Fit the model from data in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
... | Fit the model from data in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
Not used, present for API consiste... | fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def partial_fit(self, X, y=None):
"""Update the model using the data in X as a mini-batch.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features... | Update the model using the data in X as a mini-batch.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
Not used, ... | partial_fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_dict_learning.py | BSD-3-Clause |
def fit(self, X, y=None):
"""Fit the FactorAnalysis model to X using SVD based approach.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data.
y : Ignored
Ignored parameter.
Returns
-------
self : o... | Fit the FactorAnalysis model to X using SVD based approach.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data.
y : Ignored
Ignored parameter.
Returns
-------
self : object
FactorAnalysis clas... | fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_factor_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_factor_analysis.py | BSD-3-Clause |
def transform(self, X):
"""Apply dimensionality reduction to X using the model.
Compute the expected mean of the latent variables.
See Barber, 21.2.33 (or Bishop, 12.66).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data.
... | Apply dimensionality reduction to X using the model.
Compute the expected mean of the latent variables.
See Barber, 21.2.33 (or Bishop, 12.66).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data.
Returns
-------
... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_factor_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_factor_analysis.py | BSD-3-Clause |
def get_covariance(self):
"""Compute data covariance with the FactorAnalysis model.
``cov = components_.T * components_ + diag(noise_variance)``
Returns
-------
cov : ndarray of shape (n_features, n_features)
Estimated covariance of data.
"""
check_i... | Compute data covariance with the FactorAnalysis model.
``cov = components_.T * components_ + diag(noise_variance)``
Returns
-------
cov : ndarray of shape (n_features, n_features)
Estimated covariance of data.
| get_covariance | python | scikit-learn/scikit-learn | sklearn/decomposition/_factor_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_factor_analysis.py | BSD-3-Clause |
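The formula in the `get_covariance` docstring can be checked directly against the fitted attributes: the model covariance equals `components_.T @ components_ + diag(noise_variance_)` (random training data is illustrative only):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.RandomState(0)
X = rng.randn(200, 6)

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)

# Docstring formula: cov = components_.T * components_ + diag(noise_variance)
cov = fa.get_covariance()
manual = fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_)

assert np.allclose(cov, manual)
```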
def get_precision(self):
"""Compute data precision matrix with the FactorAnalysis model.
Returns
-------
precision : ndarray of shape (n_features, n_features)
Estimated precision of data.
"""
check_is_fitted(self)
n_features = self.components_.shape[... | Compute data precision matrix with the FactorAnalysis model.
Returns
-------
precision : ndarray of shape (n_features, n_features)
Estimated precision of data.
| get_precision | python | scikit-learn/scikit-learn | sklearn/decomposition/_factor_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_factor_analysis.py | BSD-3-Clause |
def score_samples(self, X):
"""Compute the log-likelihood of each sample.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
The data.
Returns
-------
ll : ndarray of shape (n_samples,)
Log-likelihood of each sample under ... | Compute the log-likelihood of each sample.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
The data.
Returns
-------
ll : ndarray of shape (n_samples,)
Log-likelihood of each sample under the current model.
| score_samples | python | scikit-learn/scikit-learn | sklearn/decomposition/_factor_analysis.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_factor_analysis.py | BSD-3-Clause |
def _ica_def(X, tol, g, fun_args, max_iter, w_init):
"""Deflationary FastICA using a function approximation to neg-entropy.
Used internally by FastICA.
"""
n_components = w_init.shape[0]
W = np.zeros((n_components, n_components), dtype=X.dtype)
n_iter = []
# j is the index of the extracted comp... | Deflationary FastICA using a function approximation to neg-entropy.
Used internally by FastICA.
| _ica_def | python | scikit-learn/scikit-learn | sklearn/decomposition/_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_fastica.py | BSD-3-Clause |
def _ica_par(X, tol, g, fun_args, max_iter, w_init):
"""Parallel FastICA.
Used internally by FastICA (main loop).
"""
W = _sym_decorrelation(w_init)
del w_init
p_ = float(X.shape[1])
for ii in range(max_iter):
gwtx, g_wtx = g(np.dot(W, X), fun_args)
W1 = _sym_decorrelation(n... | Parallel FastICA.
Used internally by FastICA (main loop).
| _ica_par | python | scikit-learn/scikit-learn | sklearn/decomposition/_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_fastica.py | BSD-3-Clause |
def fastica(
X,
n_components=None,
*,
algorithm="parallel",
whiten="unit-variance",
fun="logcosh",
fun_args=None,
max_iter=200,
tol=1e-04,
w_init=None,
whiten_solver="svd",
random_state=None,
return_X_mean=False,
compute_sources=True,
return_n_iter=False,
):
... | Perform Fast Independent Component Analysis.
The implementation is based on [1]_.
Read more in the :ref:`User Guide <ICA>`.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples and
`n_features` is the num... | fastica | python | scikit-learn/scikit-learn | sklearn/decomposition/_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_fastica.py | BSD-3-Clause |
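A hedged sketch of the `fastica` function on a synthetic mixture (Laplace sources are an arbitrary non-Gaussian choice); with the defaults shown in the signature above it returns the whitening matrix `K`, the unmixing matrix `W`, and the estimated sources:

```python
import numpy as np
from sklearn.decomposition import fastica

rng = np.random.RandomState(0)
S = rng.laplace(size=(1000, 2))            # independent non-Gaussian sources
A = np.array([[1.0, 0.5], [0.5, 1.0]])     # mixing matrix
X = S @ A.T                                # observed mixed signals

# Defaults: return_X_mean=False, compute_sources=True, return_n_iter=False,
# so three values come back.
K, W, S_est = fastica(X, n_components=2, random_state=0)

assert K.shape == (2, 2) and W.shape == (2, 2)
assert S_est.shape == (1000, 2)
```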
def _fit_transform(self, X, compute_sources=False):
"""Fit the model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples
and `n_features` is the number of features.
compute_sour... | Fit the model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples
and `n_features` is the number of features.
compute_sources : bool, default=False
If False, sources are not... | _fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_fastica.py | BSD-3-Clause |
def transform(self, X, copy=True):
"""Recover the sources from X (apply the unmixing matrix).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Data to transform, where `n_samples` is the number of samples
and `n_features` is the number of fea... | Recover the sources from X (apply the unmixing matrix).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Data to transform, where `n_samples` is the number of samples
and `n_features` is the number of features.
copy : bool, default=True
... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_fastica.py | BSD-3-Clause |
def inverse_transform(self, X, copy=True):
"""Transform the sources back to the mixed data (apply mixing matrix).
Parameters
----------
X : array-like of shape (n_samples, n_components)
Sources, where `n_samples` is the number of samples
and `n_components` is the... | Transform the sources back to the mixed data (apply mixing matrix).
Parameters
----------
X : array-like of shape (n_samples, n_components)
Sources, where `n_samples` is the number of samples
and `n_components` is the number of components.
copy : bool, default=Tr... | inverse_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_fastica.py | BSD-3-Clause |
def fit(self, X, y=None):
"""Fit the model with X, using minibatches of size batch_size.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples and
`n_features` is the numbe... | Fit the model with X, using minibatches of size batch_size.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : Ignored
... | fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_incremental_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_incremental_pca.py | BSD-3-Clause |
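`IncrementalPCA.fit` consumes `X` in minibatches of `batch_size`, which keeps the memory footprint bounded regardless of `n_samples`. A minimal sketch (sizes are arbitrary):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.randn(300, 10)

# fit() streams X through partial_fit in chunks of batch_size.
ipca = IncrementalPCA(n_components=3, batch_size=50).fit(X)
X_new = ipca.transform(X)

assert X_new.shape == (300, 3)
assert ipca.components_.shape == (3, 10)
```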
def partial_fit(self, X, y=None, check_input=True):
"""Incremental fit with X. All of X is processed as a single batch.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples and
`n_features... | Incremental fit with X. All of X is processed as a single batch.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples and
`n_features` is the number of features.
y : Ignored
N... | partial_fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_incremental_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_incremental_pca.py | BSD-3-Clause |
def transform(self, X):
"""Apply dimensionality reduction to X.
X is projected on the first principal components previously extracted
from a training set, using minibatches of size batch_size if X is
sparse.
Parameters
----------
X : {array-like, sparse matrix} ... | Apply dimensionality reduction to X.
X is projected on the first principal components previously extracted
from a training set, using minibatches of size batch_size if X is
sparse.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_incremental_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_incremental_pca.py | BSD-3-Clause |
def fit(self, X, y=None):
"""Fit the model from data in X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : ... | Fit the model from data in X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
Not used, present... | fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_kernel_pca.py | BSD-3-Clause |
def fit_transform(self, X, y=None, **params):
"""Fit the model from data in X and transform X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is t... | Fit the model from data in X and transform X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
N... | fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_kernel_pca.py | BSD-3-Clause |
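A sketch of `KernelPCA.fit_transform` with an RBF kernel (parameter values are arbitrary); because the decomposition happens in kernel feature space, `n_components` may exceed `n_features` (it is bounded by `n_samples`):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.RandomState(0)
X = rng.randn(80, 5)

# 7 components from 5 input features is possible in kernel space.
kpca = KernelPCA(n_components=7, kernel="rbf", gamma=0.5)
X_new = kpca.fit_transform(X)

assert X_new.shape == (80, 7)
```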
def transform(self, X):
"""Transform X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
Returns
-------... | Transform X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
Returns
-------
X_new : ndarray of shape (... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_kernel_pca.py | BSD-3-Clause |
def _update_doc_distribution(
X,
exp_topic_word_distr,
doc_topic_prior,
max_doc_update_iter,
mean_change_tol,
cal_sstats,
random_state,
):
"""E-step: update document-topic distribution.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
... | E-step: update document-topic distribution.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
exp_topic_word_distr : ndarray of shape (n_topics, n_features)
Exponential value of expectation of log topic word distribution.
... | _update_doc_distribution | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def _e_step(self, X, cal_sstats, random_init, parallel=None):
"""E-step in EM update.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
cal_sstats : bool
Parameter that indicates whether to calcul... | E-step in EM update.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
cal_sstats : bool
Parameter that indicates whether to calculate sufficient statistics
or not. Set ``cal_sstats`` to True ... | _e_step | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def _em_step(self, X, total_samples, batch_update, parallel=None):
"""EM update for 1 iteration.
Update `components_` by batch VB or online VB.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
total... | EM update for 1 iteration.
Update `components_` by batch VB or online VB.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
total_samples : int
Total number of documents. It is only used when
... | _em_step | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def _check_non_neg_array(self, X, reset_n_features, whom):
"""Check X format and make sure there are no negative values in X.
Parameters
----------
X : array-like or sparse matrix
"""
dtype = [np.float64, np.float32] if reset_n_features else self.component... | Check X format and make sure there are no negative values in X.
Parameters
----------
X : array-like or sparse matrix
| _check_non_neg_array | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def partial_fit(self, X, y=None):
"""Online VB with Mini-Batch update.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y : Ignored
Not used, present here for API consistency by convention.
... | Online VB with Mini-Batch update.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y : Ignored
Not used, present here for API consistency by convention.
Returns
-------
self
... | partial_fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def fit(self, X, y=None):
"""Learn model for the data X with variational Bayes method.
When `learning_method` is 'online', use mini-batch update.
Otherwise, use batch update.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
... | Learn model for the data X with variational Bayes method.
When `learning_method` is 'online', use mini-batch update.
Otherwise, use batch update.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y ... | fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def transform(self, X, *, normalize=True):
"""Transform data X according to the fitted model.
.. versionchanged:: 0.18
`doc_topic_distr` is now normalized.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document wo... | Transform data X according to the fitted model.
.. versionchanged:: 0.18
`doc_topic_distr` is now normalized.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
normalize : bool, default=True
... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
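As the versionchanged note above says, `transform` returns a normalized document-topic distribution by default, so each row sums to one. A sketch on a random count matrix (shapes and counts are illustrative, not real documents):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(20, 30))   # toy document-word count matrix

lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
doc_topic = lda.transform(X)

# Normalized since 0.18: each row is a probability distribution over topics.
assert doc_topic.shape == (20, 4)
assert np.allclose(doc_topic.sum(axis=1), 1.0)
```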
def _approx_bound(self, X, doc_topic_distr, sub_sampling):
"""Estimate the variational bound.
Estimate the variational bound over "all documents" using only the
documents passed in as X. Since log-likelihood of each word cannot
be computed directly, we use this bound to estimate it.
... | Estimate the variational bound.
Estimate the variational bound over "all documents" using only the
documents passed in as X. Since log-likelihood of each word cannot
be computed directly, we use this bound to estimate it.
Parameters
----------
X : {array-like, sparse ma... | _approx_bound | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def score(self, X, y=None):
"""Calculate approximate log-likelihood as score.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y : Ignored
Not used, present here for API consistency by conventio... | Calculate approximate log-likelihood as score.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y : Ignored
Not used, present here for API consistency by convention.
Returns
-------
... | score | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def _perplexity_precomp_distr(self, X, doc_topic_distr=None, sub_sampling=False):
"""Calculate approximate perplexity for data X, optionally using a
precomputed doc_topic_distr.
Perplexity is defined as exp(-1. * log-likelihood per word)
Parameters
----------
X : {arr... | Calculate approximate perplexity for data X, optionally using a
precomputed doc_topic_distr.
Perplexity is defined as exp(-1. * log-likelihood per word)
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
... | _perplexity_precomp_distr | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def perplexity(self, X, sub_sampling=False):
"""Calculate approximate perplexity for data X.
Perplexity is defined as exp(-1. * log-likelihood per word)
.. versionchanged:: 0.19
*doc_topic_distr* argument has been deprecated and is ignored
because user no longer has acces... | Calculate approximate perplexity for data X.
Perplexity is defined as exp(-1. * log-likelihood per word)
.. versionchanged:: 0.19
*doc_topic_distr* argument has been deprecated and is ignored
because user no longer has access to unnormalized distribution
Parameters
... | perplexity | python | scikit-learn/scikit-learn | sklearn/decomposition/_lda.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_lda.py | BSD-3-Clause |
def _beta_divergence(X, W, H, beta, square_root=False):
"""Compute the beta-divergence of X and dot(W, H).
Parameters
----------
X : float or array-like of shape (n_samples, n_features)
W : float or array-like of shape (n_samples, n_components)
H : float or array-like of shape (n_components, ... | Compute the beta-divergence of X and dot(W, H).
Parameters
----------
X : float or array-like of shape (n_samples, n_features)
W : float or array-like of shape (n_samples, n_components)
H : float or array-like of shape (n_components, n_features)
beta : float or {'frobenius', 'kullback-leible... | _beta_divergence | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
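`_beta_divergence` is a private helper, but its `beta='frobenius'` (beta=2) case is just half the squared Frobenius norm of the residual. Below is a hedged re-implementation of only that case for illustration; `frobenius_beta_divergence` is a name invented here, not sklearn API:

```python
import numpy as np

def frobenius_beta_divergence(X, W, H):
    """Beta-divergence for beta=2 ('frobenius'): 0.5 * ||X - W H||_F^2."""
    return 0.5 * np.linalg.norm(X - W @ H, "fro") ** 2

rng = np.random.RandomState(0)
X = rng.rand(6, 5)
W = rng.rand(6, 3)
H = rng.rand(3, 5)

d = frobenius_beta_divergence(X, W, H)

assert d > 0.0                                            # random factors leave a residual
assert np.isclose(frobenius_beta_divergence(W @ H, W, H), 0.0)  # exact factorization
```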
def _special_sparse_dot(W, H, X):
"""Computes np.dot(W, H), only where X is nonzero."""
if sp.issparse(X):
ii, jj = X.nonzero()
n_vals = ii.shape[0]
dot_vals = np.empty(n_vals)
n_components = W.shape[1]
batch_size = max(n_components, n_vals // n_components)
for ... | Computes np.dot(W, H), only where X is non-zero. | _special_sparse_dot | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause
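The idea of the helper above, computing np.dot(W, H) only at the stored entries of a sparse X, can be sketched with a dense boolean mask and `einsum` instead of sklearn's scipy.sparse batching:

```python
import numpy as np

# Evaluate W @ H only at positions where X is non-zero.
rng = np.random.default_rng(0)
W = rng.random((5, 2))
H = rng.random((2, 4))
X = rng.random((5, 4))
X[X < 0.6] = 0.0               # make X mostly zero, standing in for sparse data

ii, jj = X.nonzero()
# one dot product per stored entry: sum_k W[i, k] * H[k, j]
dot_vals = np.einsum("ik,ki->i", W[ii, :], H[:, jj])
```

This avoids materializing the full dense product when only the sparsity pattern of X matters, as in the beta-divergence on sparse data.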
def _initialize_nmf(X, n_components, init=None, eps=1e-6, random_state=None):
"""Algorithms for NMF initialization.
Computes an initial guess for the non-negative
rank k matrix approximation for X: X = WH.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The data m... | Algorithms for NMF initialization.
Computes an initial guess for the non-negative
rank k matrix approximation for X: X = WH.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The data matrix to be decomposed.
n_components : int
The number of components desi... | _initialize_nmf | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
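The 'random' initialization scheme among those listed above can be sketched as follows: non-negative factors scaled so that W @ H has roughly the same mean as X. The NNDSVD variants are more involved and are not reproduced here.

```python
import numpy as np

# Sketch of the 'random' NMF initialization.
rng = np.random.default_rng(42)
X = rng.random((6, 5))
n_components = 2

avg = np.sqrt(X.mean() / n_components)
W = avg * rng.standard_normal((X.shape[0], n_components))
H = avg * rng.standard_normal((n_components, X.shape[1]))
np.abs(W, out=W)   # factors must be non-negative, so clip the sign in place
np.abs(H, out=H)
```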
def _update_coordinate_descent(X, W, Ht, l1_reg, l2_reg, shuffle, random_state):
"""Helper function for _fit_coordinate_descent.
Update W to minimize the objective function, iterating once over all
coordinates. By symmetry, to update H, one can call
_update_coordinate_descent(X.T, Ht, W, ...).
"""... | Helper function for _fit_coordinate_descent.
Update W to minimize the objective function, iterating once over all
coordinates. By symmetry, to update H, one can call
_update_coordinate_descent(X.T, Ht, W, ...).
| _update_coordinate_descent | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def _fit_coordinate_descent(
X,
W,
H,
tol=1e-4,
max_iter=200,
l1_reg_W=0,
l1_reg_H=0,
l2_reg_W=0,
l2_reg_H=0,
update_H=True,
verbose=0,
shuffle=False,
random_state=None,
):
"""Compute Non-negative Matrix Factorization (NMF) with Coordinate Descent
The objecti... | Compute Non-negative Matrix Factorization (NMF) with Coordinate Descent
The objective function is minimized with an alternating minimization of W
and H. Each minimization is done with a cyclic (up to a permutation of the
features) Coordinate Descent.
Parameters
----------
X : array-like of sha... | _fit_coordinate_descent | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def _fit_multiplicative_update(
X,
W,
H,
beta_loss="frobenius",
max_iter=200,
tol=1e-4,
l1_reg_W=0,
l1_reg_H=0,
l2_reg_W=0,
l2_reg_H=0,
update_H=True,
verbose=0,
):
"""Compute Non-negative Matrix Factorization with Multiplicative Update.
The objective function is... | Compute Non-negative Matrix Factorization with Multiplicative Update.
The objective function is _beta_divergence(X, WH) and is minimized with an
alternating minimization of W and H. Each minimization is done with a
Multiplicative Update.
Parameters
----------
X : array-like of shape (n_samples... | _fit_multiplicative_update | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
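For the Frobenius loss, the multiplicative update reduces to the classic Lee-Seung iteration, sketched below; sklearn's implementation generalizes this to arbitrary beta-divergences and regularization, which this sketch omits.

```python
import numpy as np

# Lee-Seung multiplicative updates for min ||X - WH||_F^2 with W, H >= 0.
rng = np.random.default_rng(0)
X = rng.random((8, 6))
W = rng.random((8, 3))
H = rng.random((3, 6))
eps = 1e-10  # guard against division by zero

losses = []
for _ in range(50):
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    losses.append(0.5 * np.sum((X - W @ H) ** 2))
```

The updates keep W and H non-negative by construction and monotonically decrease the Frobenius objective.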
def non_negative_factorization(
X,
W=None,
H=None,
n_components="auto",
*,
init=None,
update_H=True,
solver="cd",
beta_loss="frobenius",
tol=1e-4,
max_iter=200,
alpha_W=0.0,
alpha_H="same",
l1_ratio=0.0,
random_state=None,
verbose=0,
shuffle=False,
):
... | Compute Non-negative Matrix Factorization (NMF).
Find two non-negative matrices (W, H) whose product approximates the non-
negative matrix X. This factorization can be used for example for
dimensionality reduction, source separation or topic extraction.
The objective function is:
.. math::
... | non_negative_factorization | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
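A minimal usage sketch of the function documented above, assuming scikit-learn is installed; the input matrix is a small arbitrary non-negative example.

```python
import numpy as np
from sklearn.decomposition import non_negative_factorization

# Factor a small non-negative matrix into W @ H with 2 components.
X = np.array(
    [[1.0, 1.0], [2.0, 1.0], [3.0, 1.2], [4.0, 1.0], [5.0, 0.8], [6.0, 1.0]]
)
W, H, n_iter = non_negative_factorization(
    X, n_components=2, init="random", random_state=0
)
```

The function returns the two factors plus the actual number of iterations, unlike the `NMF` estimator, which stores H as `components_`.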
def fit(self, X, y=None, **params):
"""Learn a NMF model for the data X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.... | Learn an NMF model for the data X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
Not used, pre... | fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def inverse_transform(self, X):
"""Transform data back to its original space.
.. versionadded:: 0.18
Parameters
----------
X : {ndarray, sparse matrix} of shape (n_samples, n_components)
Transformed data matrix.
Returns
-------
X_original : ... | Transform data back to its original space.
.. versionadded:: 0.18
Parameters
----------
X : {ndarray, sparse matrix} of shape (n_samples, n_components)
Transformed data matrix.
Returns
-------
X_original : ndarray of shape (n_samples, n_features)
... | inverse_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def fit_transform(self, X, y=None, W=None, H=None):
"""Learn a NMF model for the data X and returns the transformed data.
This is more efficient than calling fit followed by transform.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
... | Learn an NMF model for the data X and return the transformed data.
This is more efficient than calling fit followed by transform.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of sampl... | fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def _fit_transform(self, X, y=None, W=None, H=None, update_H=True):
"""Learn a NMF model for the data X and returns the transformed data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed
y : Ignored
... | Learn an NMF model for the data X and return the transformed data.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed
y : Ignored
W : array-like of shape (n_samples, n_components), default=None
... | _fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def transform(self, X):
"""Transform the data X according to the fitted NMF model.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of... | Transform the data X according to the fitted NMF model.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
Returns
... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def _solve_W(self, X, H, max_iter):
"""Minimize the objective function w.r.t W.
Update W with H being fixed, until convergence. This is the heart
of `transform` but it's also used during `fit` when doing fresh restarts.
"""
avg = np.sqrt(X.mean() / self._n_components)
W ... | Minimize the objective function w.r.t W.
Update W with H being fixed, until convergence. This is the heart
of `transform` but it's also used during `fit` when doing fresh restarts.
| _solve_W | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def _minibatch_step(self, X, W, H, update_H):
"""Perform the update of W and H for one minibatch."""
batch_size = X.shape[0]
# get scaled regularization terms. Done for each minibatch to take into account
# variable sizes of minibatches.
l1_reg_W, l1_reg_H, l2_reg_W, l2_reg_H = ... | Perform the update of W and H for one minibatch. | _minibatch_step | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def fit_transform(self, X, y=None, W=None, H=None):
"""Learn a NMF model for the data X and returns the transformed data.
This is more efficient than calling fit followed by transform.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
... | Learn an NMF model for the data X and return the transformed data.
This is more efficient than calling fit followed by transform.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed.
y : Ignored
... | fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def _fit_transform(self, X, W=None, H=None, update_H=True):
"""Learn a NMF model for the data X and returns the transformed data.
Parameters
----------
X : {ndarray, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed.
W : array-like of shap... | Learn an NMF model for the data X and return the transformed data.
Parameters
----------
X : {ndarray, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed.
W : array-like of shape (n_samples, n_components), default=None
If `init='custom'... | _fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def transform(self, X):
"""Transform the data X according to the fitted MiniBatchNMF model.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be transformed by the model.
Returns
-------
W : ndarray... | Transform the data X according to the fitted MiniBatchNMF model.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be transformed by the model.
Returns
-------
W : ndarray of shape (n_samples, n_components)... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def partial_fit(self, X, y=None, W=None, H=None):
"""Update the model using the data in `X` as a mini-batch.
This method is expected to be called several times consecutively
on different chunks of a dataset so as to implement out-of-core
or online learning.
This is especially u... | Update the model using the data in `X` as a mini-batch.
This method is expected to be called several times consecutively
on different chunks of a dataset so as to implement out-of-core
or online learning.
This is especially useful when the whole dataset is too big to fit in
mem... | partial_fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_nmf.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_nmf.py | BSD-3-Clause |
def _assess_dimension(spectrum, rank, n_samples):
"""Compute the log-likelihood of a rank ``rank`` dataset.
The dataset is assumed to be embedded in Gaussian noise of shape (n,
dimf) having spectrum ``spectrum``. This implements the method of
T. P. Minka.
Parameters
----------
spectrum : nd... | Compute the log-likelihood of a rank ``rank`` dataset.
The dataset is assumed to be embedded in Gaussian noise of shape (n,
dimf) having spectrum ``spectrum``. This implements the method of
T. P. Minka.
Parameters
----------
spectrum : ndarray of shape (n_features,)
Data spectrum.
r... | _assess_dimension | python | scikit-learn/scikit-learn | sklearn/decomposition/_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_pca.py | BSD-3-Clause |
def _infer_dimension(spectrum, n_samples):
"""Infers the dimension of a dataset with a given spectrum.
The returned value will be in [1, n_features - 1].
"""
xp, _ = get_namespace(spectrum)
ll = xp.empty_like(spectrum)
ll[0] = -xp.inf # we don't want to return n_components = 0
for rank in... | Infers the dimension of a dataset with a given spectrum.
The returned value will be in [1, n_features - 1].
| _infer_dimension | python | scikit-learn/scikit-learn | sklearn/decomposition/_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_pca.py | BSD-3-Clause |
def fit_transform(self, X, y=None):
"""Fit the model with X and apply the dimensionality reduction on X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples
and `n_featur... | Fit the model with X and apply the dimensionality reduction on X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ign... | fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_pca.py | BSD-3-Clause |
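What `fit_transform` computes for PCA can be sketched in plain NumPy: center the data, take the SVD, and project onto the leading right singular vectors. This mirrors the full-SVD code path only; the randomized and ARPACK solvers differ.

```python
import numpy as np

# PCA by SVD of the centered data matrix.
rng = np.random.default_rng(0)
X = rng.random((20, 5))
n_components = 2

X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_new = X_centered @ Vt[:n_components].T   # equivalently U[:, :k] * S[:k]
explained_variance = S[:n_components] ** 2 / (X.shape[0] - 1)
```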
def _fit(self, X):
"""Dispatch to the right submethod depending on the chosen solver."""
xp, is_array_api_compliant = get_namespace(X)
# Raise an error for sparse input and unsupported svd_solver
if issparse(X) and self.svd_solver not in ["auto", "arpack", "covariance_eigh"]:
... | Dispatch to the right submethod depending on the chosen solver. | _fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_pca.py | BSD-3-Clause |
def _fit_full(self, X, n_components, xp, is_array_api_compliant):
"""Fit the model by computing full SVD on X."""
n_samples, n_features = X.shape
if n_components == "mle":
if n_samples < n_features:
raise ValueError(
"n_components='mle' is only su... | Fit the model by computing full SVD on X. | _fit_full | python | scikit-learn/scikit-learn | sklearn/decomposition/_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_pca.py | BSD-3-Clause |
def _fit_truncated(self, X, n_components, xp):
"""Fit the model by computing truncated SVD (by ARPACK or randomized)
on X.
"""
n_samples, n_features = X.shape
svd_solver = self._fit_svd_solver
if isinstance(n_components, str):
raise ValueError(
... | Fit the model by computing truncated SVD (by ARPACK or randomized)
on X.
| _fit_truncated | python | scikit-learn/scikit-learn | sklearn/decomposition/_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_pca.py | BSD-3-Clause |
def score_samples(self, X):
"""Return the log-likelihood of each sample.
See. "Pattern Recognition and Machine Learning"
by C. Bishop, 12.2.1 p. 574
or http://www.miketipping.com/papers/met-mppca.pdf
Parameters
----------
X : array-like of shape (n_samples, n_fe... | Return the log-likelihood of each sample.
See. "Pattern Recognition and Machine Learning"
by C. Bishop, 12.2.1 p. 574
or http://www.miketipping.com/papers/met-mppca.pdf
Parameters
----------
X : array-like of shape (n_samples, n_features)
The data.
... | score_samples | python | scikit-learn/scikit-learn | sklearn/decomposition/_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_pca.py | BSD-3-Clause |
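The per-sample log-likelihood behind `score_samples` amounts to evaluating a multivariate Gaussian density with the model's mean and covariance. The sketch below uses the empirical covariance plus a small ridge as a stand-in for PCA's `get_covariance()`:

```python
import numpy as np

# Per-sample Gaussian log-likelihood under an assumed model covariance.
rng = np.random.default_rng(0)
X = rng.random((50, 3))

mean = X.mean(axis=0)
cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # ridge for stability
precision = np.linalg.inv(cov)

Xc = X - mean
n_features = X.shape[1]
log_like = -0.5 * (
    n_features * np.log(2 * np.pi)
    + np.linalg.slogdet(cov)[1]
    + np.einsum("ij,jk,ik->i", Xc, precision, Xc)  # per-row Mahalanobis term
)
```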
def fit(self, X, y=None):
"""Fit the model from data in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
... | Fit the model from data in X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
Not used, present here for API con... | fit | python | scikit-learn/scikit-learn | sklearn/decomposition/_sparse_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_sparse_pca.py | BSD-3-Clause |
def transform(self, X):
"""Least Squares projection of the data onto the sparse components.
To avoid instability issues in case the system is under-determined,
regularization can be applied (Ridge regression) via the
`ridge_alpha` parameter.
Note that Sparse PCA components orth... | Least Squares projection of the data onto the sparse components.
To avoid instability issues in case the system is under-determined,
regularization can be applied (Ridge regression) via the
`ridge_alpha` parameter.
Note that Sparse PCA components orthogonality is not enforced as in PCA... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_sparse_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_sparse_pca.py | BSD-3-Clause |
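The ridge-regularized least-squares projection described above can be sketched directly: for components C and samples x, solve (C Cᵀ + αI) w = C x. The components below are random placeholders, not an actual sparse dictionary.

```python
import numpy as np

# Ridge-regularized least-squares projection onto fixed components.
rng = np.random.default_rng(0)
components = rng.random((3, 8))          # hypothetical components C
X = rng.random((10, 8))                  # data assumed already centered
ridge_alpha = 0.01

gram = components @ components.T + ridge_alpha * np.eye(components.shape[0])
X_transformed = np.linalg.solve(gram, components @ X.T).T
```

The regularization term keeps the solve well-posed even when the components are nearly collinear.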
def inverse_transform(self, X):
"""Transform data from the latent space to the original space.
This inversion is an approximation due to the loss of information
induced by the forward decomposition.
.. versionadded:: 1.2
Parameters
----------
X : ndarray of sha... | Transform data from the latent space to the original space.
This inversion is an approximation due to the loss of information
induced by the forward decomposition.
.. versionadded:: 1.2
Parameters
----------
X : ndarray of shape (n_samples, n_components)
Da... | inverse_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_sparse_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_sparse_pca.py | BSD-3-Clause |
def fit_transform(self, X, y=None):
"""Fit model to X and perform dimensionality reduction on X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data.
y : Ignored
Not used, present here for API consistency ... | Fit model to X and perform dimensionality reduction on X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data.
y : Ignored
Not used, present here for API consistency by convention.
Returns
-------... | fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_truncated_svd.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_truncated_svd.py | BSD-3-Clause |
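On a dense array, the effect of TruncatedSVD's `fit_transform` can be sketched as keeping only the top-k singular triplets; note there is no centering, which is what distinguishes it from PCA.

```python
import numpy as np

# Truncated SVD: keep the top-k singular triplets of X (no centering).
rng = np.random.default_rng(0)
X = rng.random((12, 6))
k = 2

U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_reduced = U[:, :k] * S[:k]             # fit_transform: shape (n_samples, k)
X_back = X_reduced @ Vt[:k]              # inverse_transform: back to 6 columns
```

In practice the estimator uses ARPACK or randomized solvers so that sparse X never has to be densified.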
def transform(self, X):
"""Perform dimensionality reduction on X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data.
Returns
-------
X_new : ndarray of shape (n_samples, n_components)
Reduced ... | Perform dimensionality reduction on X.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data.
Returns
-------
X_new : ndarray of shape (n_samples, n_components)
Reduced version of X. This will always be a... | transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_truncated_svd.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_truncated_svd.py | BSD-3-Clause |
def inverse_transform(self, X):
"""Transform X back to its original space.
Returns an array X_original whose transform would be X.
Parameters
----------
X : array-like of shape (n_samples, n_components)
New data.
Returns
-------
X_original :... | Transform X back to its original space.
Returns an array X_original whose transform would be X.
Parameters
----------
X : array-like of shape (n_samples, n_components)
New data.
Returns
-------
X_original : ndarray of shape (n_samples, n_features)
... | inverse_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/_truncated_svd.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/_truncated_svd.py | BSD-3-Clause |
def test_get_feature_names_out(estimator):
"""Check feature names for dict learning estimators."""
estimator.fit(X)
n_components = X.shape[1]
feature_names_out = estimator.get_feature_names_out()
estimator_name = estimator.__class__.__name__.lower()
assert_array_equal(
feature_names_out... | Check feature names for dict learning estimators. | test_get_feature_names_out | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_dict_learning.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_dict_learning.py | BSD-3-Clause |
def center_and_norm(x, axis=-1):
"""Centers and norms x **in place**
Parameters
----------
x: ndarray
Array with an axis of observations (statistical units) measured on
random variables.
axis: int, optional
Axis along which the mean and variance are calculated.
"""
... | Centers and norms x **in place**
Parameters
----------
x: ndarray
Array with an axis of observations (statistical units) measured on
random variables.
axis: int, optional
Axis along which the mean and variance are calculated.
| center_and_norm | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_fastica.py | BSD-3-Clause |
def test_fit_transform(global_random_seed, global_dtype):
"""Test unit variance of transformed data using FastICA algorithm.
Check that `fit_transform` gives the same result as applying
`fit` and then `transform`.
Bug #13056
"""
# multivariate uniform data in [0, 1]
rng = np.random.RandomS... | Test unit variance of transformed data using FastICA algorithm.
Check that `fit_transform` gives the same result as applying
`fit` and then `transform`.
Bug #13056
| test_fit_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_fastica.py | BSD-3-Clause |
def test_fastica_whiten_unit_variance(global_random_seed):
"""Test unit variance of transformed data using FastICA algorithm.
Bug #13056
"""
rng = np.random.RandomState(global_random_seed)
X = rng.random_sample((100, 10))
n_components = X.shape[1]
ica = FastICA(n_components=n_components, wh... | Test unit variance of transformed data using FastICA algorithm.
Bug #13056
| test_fastica_whiten_unit_variance | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_fastica.py | BSD-3-Clause |
def test_fastica_eigh_low_rank_warning(global_random_seed):
"""Test FastICA eigh solver raises warning for low-rank data."""
rng = np.random.RandomState(global_random_seed)
A = rng.randn(10, 2)
X = A @ A.T
ica = FastICA(random_state=0, whiten="unit-variance", whiten_solver="eigh")
msg = "There a... | Test FastICA eigh solver raises warning for low-rank data. | test_fastica_eigh_low_rank_warning | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_fastica.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_fastica.py | BSD-3-Clause |
def test_incremental_pca_feature_names_out():
"""Check feature names out for IncrementalPCA."""
ipca = IncrementalPCA(n_components=2).fit(iris.data)
names = ipca.get_feature_names_out()
assert_array_equal([f"incrementalpca{i}" for i in range(2)], names) | Check feature names out for IncrementalPCA. | test_incremental_pca_feature_names_out | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_incremental_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_incremental_pca.py | BSD-3-Clause |
def test_kernel_pca(global_random_seed):
"""Nominal test for all solvers and all known kernels + a custom one
It tests
- that fit_transform is equivalent to fit+transform
- that the shapes of transforms and inverse transforms are correct
"""
rng = np.random.RandomState(global_random_seed)
... | Nominal test for all solvers and all known kernels + a custom one
It tests
- that fit_transform is equivalent to fit+transform
- that the shapes of transforms and inverse transforms are correct
| test_kernel_pca | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_invalid_parameters():
"""Check that kPCA raises an error if the parameters are invalid
Tests fitting inverse transform with a precomputed kernel raises a
ValueError.
"""
estimator = KernelPCA(
n_components=10, fit_inverse_transform=True, kernel="precomputed"
)
er... | Check that kPCA raises an error if the parameters are invalid
Tests fitting inverse transform with a precomputed kernel raises a
ValueError.
| test_kernel_pca_invalid_parameters | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_consistent_transform(global_random_seed):
"""Check robustness to mutations in the original training array
Test that after fitting a kPCA model, it stays independent of any
mutation of the values of the original data object by relying on an
internal copy.
"""
# X_fit_ needs t... | Check robustness to mutations in the original training array
Test that after fitting a kPCA model, it stays independent of any
mutation of the values of the original data object by relying on an
internal copy.
| test_kernel_pca_consistent_transform | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_deterministic_output(global_random_seed):
"""Test that Kernel PCA produces deterministic output
Tests that the same inputs and random state produce the same output.
"""
rng = np.random.RandomState(global_random_seed)
X = rng.rand(10, 10)
eigen_solver = ("arpack", "dense")
... | Test that Kernel PCA produces deterministic output
Tests that the same inputs and random state produce the same output.
| test_kernel_pca_deterministic_output | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_sparse(csr_container, global_random_seed):
"""Test that kPCA works on a sparse data input.
Same test as ``test_kernel_pca`` except ``inverse_transform`` since it's not
implemented for sparse matrices.
"""
rng = np.random.RandomState(global_random_seed)
X_fit = csr_container(rng.... | Test that kPCA works on a sparse data input.
Same test as ``test_kernel_pca`` except ``inverse_transform`` since it's not
implemented for sparse matrices.
| test_kernel_pca_sparse | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_linear_kernel(solver, n_features, global_random_seed):
"""Test that kPCA with linear kernel is equivalent to PCA for all solvers.
KernelPCA with linear kernel should produce the same output as PCA.
"""
rng = np.random.RandomState(global_random_seed)
X_fit = rng.random_sample((5,... | Test that kPCA with linear kernel is equivalent to PCA for all solvers.
KernelPCA with linear kernel should produce the same output as PCA.
| test_kernel_pca_linear_kernel | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_n_components():
"""Test that `n_components` is correctly taken into account for projections
For all solvers this tests that the output has the correct shape depending
on the selected number of components.
"""
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
... | Test that `n_components` is correctly taken into account for projections
For all solvers this tests that the output has the correct shape depending
on the selected number of components.
| test_kernel_pca_n_components | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_remove_zero_eig():
"""Check that the ``remove_zero_eig`` parameter works correctly.
Tests that the null-space (zero) eigenvalues are removed when
remove_zero_eig=True, whereas they are not by default.
"""
X = np.array([[1 - 1e-30, 1], [1, 1], [1, 1 - 1e-20]])
# n_components=None (defa... | Check that the ``remove_zero_eig`` parameter works correctly.
Tests that the null-space (zero) eigenvalues are removed when
remove_zero_eig=True, whereas they are not by default.
| test_remove_zero_eig | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_leave_zero_eig():
"""Non-regression test for issue #12141 (PR #12143)
This test checks that fit().transform() returns the same result as
fit_transform() in case of non-removed zero eigenvalue.
"""
X_fit = np.array([[1, 1], [0, 0]])
# Assert that even with all np warnings on, there is ... | Non-regression test for issue #12141 (PR #12143)
This test checks that fit().transform() returns the same result as
fit_transform() in case of non-removed zero eigenvalue.
| test_leave_zero_eig | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_precomputed(global_random_seed):
"""Test that kPCA works with a precomputed kernel, for all solvers"""
rng = np.random.RandomState(global_random_seed)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
for eigen_solver in ("dense", "arpack", "randomized"):
... | Test that kPCA works with a precomputed kernel, for all solvers | test_kernel_pca_precomputed | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_kernel_pca_precomputed_non_symmetric(solver):
"""Check that the kernel centerer works.
Tests that a non symmetric precomputed kernel is actually accepted
because the kernel centerer does its job correctly.
"""
# a non symmetric gram matrix
K = [[1, 2], [3, 40]]
kpca = KernelPCA(
... | Check that the kernel centerer works.
Tests that a non symmetric precomputed kernel is actually accepted
because the kernel centerer does its job correctly.
| test_kernel_pca_precomputed_non_symmetric | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_gridsearch_pipeline():
"""Check that kPCA works as expected in a grid search pipeline
Test if we can do a grid-search to find parameters to separate
circles with a perceptron model.
"""
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
kpca = KernelPCA(kernel="... | Check that kPCA works as expected in a grid search pipeline
Test if we can do a grid-search to find parameters to separate
circles with a perceptron model.
| test_gridsearch_pipeline | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_gridsearch_pipeline_precomputed():
"""Check that kPCA works as expected in a grid search pipeline (2)
Test if we can do a grid-search to find parameters to separate
circles with a perceptron model. This test uses a precomputed kernel.
"""
X, y = make_circles(n_samples=400, factor=0.3, nois... | Check that kPCA works as expected in a grid search pipeline (2)
Test if we can do a grid-search to find parameters to separate
circles with a perceptron model. This test uses a precomputed kernel.
| test_gridsearch_pipeline_precomputed | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
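The grid-search step used by the two rows above can be mimicked without scikit-learn: score every candidate hyperparameter and keep the best. The toy below hand-rolls that loop for a one-dimensional ridge fit in closed form; the model, the scoring function, and the grid values are illustrative stand-ins, not what the tests use.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(-1.0, 1.0, 50)
y = 3.0 * x + rng.normal(scale=0.1, size=50)

def score(alpha):
    # closed-form 1-D ridge: w = <x, y> / (<x, x> + alpha),
    # scored by negative training mean squared error
    w = (x @ y) / (x @ x + alpha)
    return -np.mean((y - w * x) ** 2)

grid = [0.0, 0.1, 1.0, 10.0]
best_alpha = max(grid, key=score)   # the essence of a grid search
```

On training error alone the unregularized fit wins, which is exactly why real grid searches (like `GridSearchCV` in the tests) score on held-out folds instead.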
def test_nested_circles():
"""Check that kPCA projects in a space where nested circles are separable
Tests that 2D nested circles become separable with a perceptron when
projected in the first 2 kPCA using an RBF kernel, while raw samples
are not directly separable in the original space.
"""
X,... | Check that kPCA projects in a space where nested circles are separable
Tests that 2D nested circles become separable with a perceptron when
projected in the first 2 kPCA using an RBF kernel, while raw samples
are not directly separable in the original space.
| test_nested_circles | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
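The geometric claim in the row above — concentric circles become linearly separable after an RBF kernel PCA projection — can be reproduced with NumPy alone. The noiseless data, the bandwidth `gamma = 2.0`, and the single-threshold classifier below are illustrative stand-ins for `make_circles` and the perceptron used in the test.

```python
import numpy as np

# two concentric circles sampled at the same 20 angles (noiseless, for exactness)
angles = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
inner = 0.3 * np.c_[np.cos(angles), np.sin(angles)]
outer = 1.0 * np.c_[np.cos(angles), np.sin(angles)]
X = np.vstack([inner, outer])
y = np.r_[np.ones(20, dtype=int), np.zeros(20, dtype=int)]

gamma = 2.0                                  # illustrative bandwidth
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * d2)                      # RBF kernel

n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one   # double-center the kernel
vals, vecs = np.linalg.eigh(Kc)

z = vecs[:, -1]                              # first kernel principal component
# a single threshold on z now separates the circles (the perceptron's job);
# the component's sign is arbitrary, so accept either labeling
pred = (z > z.mean()).astype(int)
accuracy = max((pred == y).mean(), (pred != y).mean())
```

In the raw 2-D space no linear boundary can separate a circle from one that surrounds it; after the projection, one coordinate already encodes which circle each point came from.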
def test_kernel_conditioning():
"""Check that ``_check_psd_eigenvalues`` is correctly called in kPCA
Non-regression test for issue #12140 (PR #12145).
"""
# create a pathological X leading to small non-zero eigenvalue
X = [[5, 1], [5 + 1e-8, 1e-8], [5 + 1e-8, 0]]
kpca = KernelPCA(kernel="linea... | Check that ``_check_psd_eigenvalues`` is correctly called in kPCA
Non-regression test for issue #12140 (PR #12145).
| test_kernel_conditioning | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
def test_precomputed_kernel_not_psd(solver):
"""Check how KernelPCA works with non-PSD kernels depending on n_components
Tests for all methods what happens with a non PSD gram matrix (this
can happen in an isomap scenario, or with custom kernel functions, or
maybe with ill-posed datasets).
When ``... | Check how KernelPCA works with non-PSD kernels depending on n_components
Tests for all methods what happens with a non PSD gram matrix (this
can happen in an isomap scenario, or with custom kernel functions, or
maybe with ill-posed datasets).
When ``n_component`` is large enough to capture a negative ... | test_precomputed_kernel_not_psd | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
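The failure mode targeted by the row above is easy to reproduce: an indefinite "kernel" has negative eigenvalues, and the `sqrt(eigenvalue)` scaling used in kernel PCA breaks as soon as the requested number of components reaches one of them. A tiny NumPy illustration (the matrix is an arbitrary indefinite example, not from the test):

```python
import numpy as np

# a symmetric but indefinite "Gram" matrix -- not a valid PSD kernel
K = np.array([[0.0, 1.0], [1.0, 0.0]])
eigvals = np.linalg.eigvalsh(K)          # ascending: [-1.0, 1.0]
has_negative = eigvals.min() < 0

# scaling an eigenvector by sqrt of a negative eigenvalue yields NaN,
# which is why a solver must error out or skip such components
with np.errstate(invalid="ignore"):
    bad_scale = np.sqrt(eigvals.min())   # NaN
```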
def test_kernel_pca_solvers_equivalence(n_components):
"""Check that 'dense' 'arpack' & 'randomized' solvers give similar results"""
# Generate random data
n_train, n_test = 1_000, 100
X, _ = make_circles(
n_samples=(n_train + n_test), factor=0.3, noise=0.05, random_state=0
)
X_fit, X_p... | Check that 'dense' 'arpack' & 'randomized' solvers give similar results | test_kernel_pca_solvers_equivalence | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
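The equivalence checked by the row above — approximate solvers recovering the same leading components as a dense eigendecomposition — can be illustrated with plain power iteration against `np.linalg.eigh`. This is a sketch of the general idea only, not scikit-learn's `arpack` or `randomized` solver; the constructed spectrum and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.RandomState(0)
# build a symmetric matrix with a known, well-separated spectrum 1..50
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
S = Q @ np.diag(np.arange(1.0, 51.0)) @ Q.T

vals, vecs = np.linalg.eigh(S)       # "dense" reference solver
v_dense = vecs[:, -1]                # eigenvector for the top eigenvalue (50)

# "approximate" solver: power iteration from a random start
v = rng.standard_normal(50)
for _ in range(1000):
    v = S @ v
    v /= np.linalg.norm(v)

agreement = abs(v @ v_dense)         # 1.0 means same direction up to sign
```

The absolute value handles the sign indeterminacy of eigenvectors, the same reason solver-equivalence tests compare components up to sign flips.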
def test_kernel_pca_inverse_transform_reconstruction():
"""Test if the reconstruction is a good approximation.
Note that in general it is not possible to get an arbitrarily good
reconstruction because of kernel centering that does not
preserve all the information of the original data.
"""
X, *_... | Test if the reconstruction is a good approximation.
Note that in general it is not possible to get an arbitrarily good
reconstruction because of kernel centering that does not
preserve all the information of the original data.
| test_kernel_pca_inverse_transform_reconstruction | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
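The caveat in the row above cuts both ways: with an RBF kernel and centering, some information is unrecoverable, so only an approximation is tested; but for a linear kernel with all non-zero components kept, the embedding spans the same space as the centered data and a least-squares map back to the inputs reconstructs them exactly. The sketch below shows that exact linear case; the regression-based inverse is a simplification of what `fit_inverse_transform` learns, and the sizes and threshold are illustrative.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.random_sample((10, 3))
Xc = X - X.mean(axis=0)

K = Xc @ Xc.T                          # linear kernel of centered data
vals, vecs = np.linalg.eigh(K)
keep = vals > 1e-10                    # all non-zero components (rank <= 3)
emb = vecs[:, keep] * np.sqrt(vals[keep])

# learn a linear map embedding -> inputs, then undo the centering
W, *_ = np.linalg.lstsq(emb, Xc, rcond=None)
X_rec = emb @ W + X.mean(axis=0)
```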
def test_32_64_decomposition_shape():
"""Test that the decomposition is similar for 32 and 64 bits data
Non regression test for
https://github.com/scikit-learn/scikit-learn/issues/18146
"""
X, y = make_blobs(
n_samples=30, centers=[[0, 0, 0], [1, 1, 1]], random_state=0, cluster_std=0.1
... | Test that the decomposition is similar for 32 and 64 bits data
Non regression test for
https://github.com/scikit-learn/scikit-learn/issues/18146
| test_32_64_decomposition_shape | python | scikit-learn/scikit-learn | sklearn/decomposition/tests/test_kernel_pca.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/tests/test_kernel_pca.py | BSD-3-Clause |
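The dtype invariance tested by the row above can be sanity-checked with a plain eigendecomposition: the same computation in float32 and float64 should give identically shaped results that agree to roughly single precision, up to per-component sign flips. A minimal NumPy sketch (the helper `top_components` and the tolerances are illustrative, not scikit-learn's code):

```python
import numpy as np

rng = np.random.RandomState(0)
X64 = rng.random_sample((30, 5))
X32 = X64.astype(np.float32)

def top_components(X, n=2):
    # kernel-PCA-style embedding from a linear Gram matrix
    Xc = X - X.mean(axis=0)
    K = Xc @ Xc.T
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, -n:] * np.sqrt(vals[-n:])

emb64 = top_components(X64)
emb32 = top_components(X32)
# shapes match exactly; values match loosely, comparing absolute values
# because each eigenvector's sign is arbitrary
```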