transform(X) [source]
Encode the data as a sparse combination of the dictionary atoms. Coding method is determined by the object parameter transform_algorithm. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data.
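A hedged sketch of the fit-then-transform flow described above; the data and sizes are made up for illustration and are not from the original documentation:

```python
# Illustrative sketch: fit a DictionaryLearning model, then encode new data
# as sparse combinations of the learned atoms via transform().
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(12, 6)  # 12 samples, 6 features (made-up data)

dl = DictionaryLearning(
    n_components=4, transform_algorithm='lasso_lars', random_state=0
).fit(X)

# transform() requires the same number of features as the training data.
X_new = dl.transform(X)
print(X_new.shape)  # (12, 4): one sparse code row per sample
```

The `transform_algorithm` parameter chosen here (`'lasso_lars'`) is one of several coding methods; the object-level setting determines how the codes are computed.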
sklearn.decomposition.dict_learning(X, n_components, *, alpha, max_iter=100, tol=1e-08, method='lars', n_jobs=None, dict_init=None, code_init=None, callback=None, verbose=False, random_state=None, return_n_iter=False, positive_dict=False, positive_code=False, method_max_iter=1000) [source]
Solves a dictionary learning matrix factorization problem. Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:

(U^*, V^*) = argmin_{(U, V)} 0.5 * || X - U V ||_2^2 + alpha * || U ||_1
subject to || V_k ||_2 = 1 for all 0 <= k < n_components
where V is the dictionary and U is the sparse code. Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features)
Data matrix.
n_componentsint
Number of dictionary atoms to extract.
alphaint
Sparsity controlling parameter.
max_iterint, default=100
Maximum number of iterations to perform.
tolfloat, default=1e-8
Tolerance for the stopping condition.
method{‘lars’, ‘cd’}, default=’lars’
The method used:
'lars': uses the least angle regression method to solve the lasso problem (linear_model.lars_path);
'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
dict_initndarray of shape (n_components, n_features), default=None
Initial value for the dictionary for warm restart scenarios.
code_initndarray of shape (n_samples, n_components), default=None
Initial value for the sparse code for warm restart scenarios.
callbackcallable, default=None
Callable that gets invoked every five iterations.
verbosebool, default=False
To control the verbosity of the procedure.
random_stateint, RandomState instance or None, default=None
Used for randomly initializing the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
return_n_iterbool, default=False
Whether or not to return the number of iterations.
positive_dictbool, default=False
Whether to enforce positivity when finding the dictionary. New in version 0.20.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
method_max_iterint, default=1000
Maximum number of iterations to perform. New in version 0.22. Returns
codendarray of shape (n_samples, n_components)
The sparse code factor in the matrix factorization.
dictionaryndarray of shape (n_components, n_features)
The dictionary factor in the matrix factorization.
errorsarray
Vector of errors at each iteration.
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
dict_learning_online
DictionaryLearning
MiniBatchDictionaryLearning
SparsePCA
MiniBatchSparsePCA
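The call pattern and return values documented above can be sketched as follows; the array sizes are made up for demonstration:

```python
# Illustrative sketch of sklearn.decomposition.dict_learning.
import numpy as np
from sklearn.decomposition import dict_learning

rng = np.random.RandomState(0)
X = rng.randn(20, 8)  # 20 samples, 8 features (made-up data)

# Factor X into a sparse code U ("code") and a dictionary V ("dictionary").
code, dictionary, errors = dict_learning(
    X, n_components=5, alpha=1, max_iter=50, random_state=0
)
print(code.shape)        # (20, 5): one code row per sample
print(dictionary.shape)  # (5, 8): one atom per row
```

With `return_n_iter=True` a fourth value, the iteration count, would be returned as documented above.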
sklearn.decomposition.dict_learning_online(X, n_components=2, *, alpha=1, n_iter=100, return_code=True, dict_init=None, callback=None, batch_size=3, verbose=False, shuffle=True, n_jobs=None, method='lars', iter_offset=0, random_state=None, return_inner_stats=False, inner_stats=None, return_n_iter=False, positive_dict=False, positive_code=False, method_max_iter=1000) [source]
Solves a dictionary learning matrix factorization problem online. Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:

(U^*, V^*) = argmin_{(U, V)} 0.5 * || X - U V ||_2^2 + alpha * || U ||_1
subject to || V_k ||_2 = 1 for all 0 <= k < n_components
where V is the dictionary and U is the sparse code. This is accomplished by repeatedly iterating over mini-batches by slicing the input data. Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features)
Data matrix.
n_componentsint, default=2
Number of dictionary atoms to extract.
alphafloat, default=1
Sparsity controlling parameter.
n_iterint, default=100
Number of mini-batch iterations to perform.
return_codebool, default=True
Whether to also return the code U or just the dictionary V.
dict_initndarray of shape (n_components, n_features), default=None
Initial value for the dictionary for warm restart scenarios.
callbackcallable, default=None
Callable that gets invoked every five iterations.
batch_sizeint, default=3
The number of samples to take in each batch.
verbosebool, default=False
To control the verbosity of the procedure.
shufflebool, default=True
Whether to shuffle the data before splitting it in batches.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
method{‘lars’, ‘cd’}, default=’lars’
'lars': uses the least angle regression method to solve the lasso problem (linear_model.lars_path);
'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
iter_offsetint, default=0
Number of previous iterations completed on the dictionary used for initialization.
random_stateint, RandomState instance or None, default=None
Used for initializing the dictionary when dict_init is not specified, randomly shuffling the data when shuffle is set to True, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
return_inner_statsbool, default=False
Return the inner statistics A (dictionary covariance) and B (data approximation). Useful to restart the algorithm in an online setting. If return_inner_stats is True, return_code is ignored.
inner_statstuple of (A, B) ndarrays, default=None
Inner sufficient statistics that are kept by the algorithm. Passing them at initialization is useful in online settings, to avoid losing the history of the evolution. A (n_components, n_components) is the dictionary covariance matrix. B (n_features, n_components) is the data approximation matrix.
return_n_iterbool, default=False
Whether or not to return the number of iterations.
positive_dictbool, default=False
Whether to enforce positivity when finding the dictionary. New in version 0.20.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
method_max_iterint, default=1000
Maximum number of iterations to perform when solving the lasso problem. New in version 0.22. Returns
codendarray of shape (n_samples, n_components)
The sparse code (only returned if return_code=True).
dictionaryndarray of shape (n_components, n_features)
The solutions to the dictionary learning problem.
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
dict_learning
DictionaryLearning
MiniBatchDictionaryLearning
SparsePCA
MiniBatchSparsePCA
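The online variant follows the same pattern; a short sketch with made-up data, keeping the default return_code=True so both factors come back:

```python
# Illustrative sketch of sklearn.decomposition.dict_learning_online,
# which iterates over mini-batches of the input data.
import numpy as np
from sklearn.decomposition import dict_learning_online

rng = np.random.RandomState(0)
X = rng.randn(30, 10)  # 30 samples, 10 features (made-up data)

code, dictionary = dict_learning_online(
    X, n_components=4, alpha=1, batch_size=5, random_state=0
)
print(code.shape)        # (30, 4)
print(dictionary.shape)  # (4, 10)
```

With return_code=False only the dictionary V would be returned, as documented above.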
class sklearn.decomposition.FactorAnalysis(n_components=None, *, tol=0.01, copy=True, max_iter=1000, noise_variance_init=None, svd_method='randomized', iterated_power=3, rotation=None, random_state=0) [source]
Factor Analysis (FA). A simple linear generative model with Gaussian latent variables. The observations are assumed to be caused by a linear transformation of lower-dimensional latent factors plus added Gaussian noise. Without loss of generality the factors are distributed according to a Gaussian with zero mean and unit covariance. The noise is also zero mean and has an arbitrary diagonal covariance matrix. If we restricted the model further, by assuming that the Gaussian noise is isotropic (all diagonal entries are the same), we would obtain PPCA. FactorAnalysis performs a maximum-likelihood estimate of the so-called loading matrix, the transformation from the latent variables to the observed ones, using an SVD-based approach. Read more in the User Guide. New in version 0.13. Parameters
n_componentsint, default=None
Dimensionality of latent space, the number of components of X that are obtained after transform. If None, n_components is set to the number of features.
tolfloat, default=1e-2
Stopping tolerance for log-likelihood increase.
copybool, default=True
Whether to make a copy of X. If False, the input X gets overwritten during fitting.
max_iterint, default=1000
Maximum number of iterations.
noise_variance_initndarray of shape (n_features,), default=None
The initial guess of the noise variance for each feature. If None, it defaults to np.ones(n_features).
svd_method{‘lapack’, ‘randomized’}, default=’randomized’
Which SVD method to use. If ‘lapack’ use standard SVD from scipy.linalg, if ‘randomized’ use fast randomized_svd function. Defaults to ‘randomized’. For most applications ‘randomized’ will be sufficiently precise while providing significant speed gains. Accuracy can also be improved by setting higher values for iterated_power. If this is not sufficient, for maximum precision you should choose ‘lapack’.
iterated_powerint, default=3
Number of iterations for the power method. 3 by default. Only used if svd_method equals ‘randomized’.
rotation{‘varimax’, ‘quartimax’}, default=None
If not None, apply the indicated rotation. Currently, varimax and quartimax are implemented. See “The varimax criterion for analytic rotation in factor analysis” H. F. Kaiser, 1958. New in version 0.24.
random_stateint or RandomState instance, default=0
Only used when svd_method equals ‘randomized’. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
Components with maximum variance.
loglike_list of shape (n_iterations,)
The log likelihood at each iteration.
noise_variance_ndarray of shape (n_features,)
The estimated noise variance for each feature.
n_iter_int
Number of iterations run.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, estimated from the training set. See also
PCA
Principal component analysis is also a latent linear variable model which however assumes equal noise variance for each feature. This extra assumption makes probabilistic PCA faster as it can be computed in closed form.
FastICA
Independent component analysis, a latent variable model with non-Gaussian latent variables. References David Barber, Bayesian Reasoning and Machine Learning, Algorithm 21.1. Christopher M. Bishop: Pattern Recognition and Machine Learning, Chapter 12.2.4. Examples >>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FactorAnalysis
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = FactorAnalysis(n_components=7, random_state=0)
>>> X_transformed = transformer.fit_transform(X)
>>> X_transformed.shape
(1797, 7)
Methods
fit(X[, y]) Fit the FactorAnalysis model to X using SVD based approach
fit_transform(X[, y]) Fit to data, then transform it.
get_covariance() Compute data covariance with the FactorAnalysis model.
get_params([deep]) Get parameters for this estimator.
get_precision() Compute data precision matrix with the FactorAnalysis model.
score(X[, y]) Compute the average log-likelihood of the samples
score_samples(X) Compute the log-likelihood of each sample
set_params(**params) Set the parameters of this estimator.
transform(X) Apply dimensionality reduction to X using the model.
fit(X, y=None) [source]
Fit the FactorAnalysis model to X using an SVD-based approach. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
yIgnored
Returns
self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_covariance() [source]
Compute data covariance with the FactorAnalysis model. cov = components_.T * components_ + diag(noise_variance) Returns
covndarray of shape (n_features, n_features)
Estimated covariance of data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Compute data precision matrix with the FactorAnalysis model. Returns
precisionndarray of shape (n_features, n_features)
Estimated precision of data.
score(X, y=None) [source]
Compute the average log-likelihood of the samples Parameters
Xndarray of shape (n_samples, n_features)
The data
yIgnored
Returns
llfloat
Average log-likelihood of the samples under the current model
score_samples(X) [source]
Compute the log-likelihood of each sample Parameters
Xndarray of shape (n_samples, n_features)
The data Returns
llndarray of shape (n_samples,)
Log-likelihood of each sample under the current model
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Apply dimensionality reduction to X using the model. Compute the expected mean of the latent variables. See Barber, 21.2.33 (or Bishop, 12.66). Parameters
Xarray-like of shape (n_samples, n_features)
Training data. Returns
X_newndarray of shape (n_samples, n_components)
The latent variables of X.
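To make the get_covariance model concrete, here is a short sketch that continues the digits example above and checks the documented identity cov = components_.T * components_ + diag(noise_variance):

```python
# Sketch continuing the digits example: verify that get_covariance()
# matches the documented closed-form model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import FactorAnalysis

X, _ = load_digits(return_X_y=True)
fa = FactorAnalysis(n_components=7, random_state=0).fit(X)

cov = fa.get_covariance()
manual = fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_)
print(cov.shape)                 # (64, 64): digits has 64 features
print(np.allclose(cov, manual))  # True
```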
Examples using sklearn.decomposition.FactorAnalysis
Factor Analysis (with rotation) to visualize patterns
Model selection with Probabilistic PCA and Factor Analysis (FA)
Faces dataset decompositions
class sklearn.decomposition.FastICA(n_components=None, *, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None) [source]
FastICA: a fast algorithm for Independent Component Analysis. Read more in the User Guide. Parameters
n_componentsint, default=None
Number of components to use. If None is passed, all are used.
algorithm{‘parallel’, ‘deflation’}, default=’parallel’
Apply parallel or deflational algorithm for FastICA.
whitenbool, default=True
If whiten is false, the data is already considered to be whitened, and no whitening is performed.
fun{‘logcosh’, ‘exp’, ‘cube’} or callable, default=’logcosh’
The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function and of its derivative at the point. Example:
def my_g(x):
    return x ** 3, (3 * x ** 2).mean(axis=-1)
fun_argsdict, default=None
Arguments to send to the functional form. If empty and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}.
max_iterint, default=200
Maximum number of iterations during fit.
tolfloat, default=1e-4
Tolerance on update at each iteration.
w_initndarray of shape (n_components, n_components), default=None
The mixing matrix to be used to initialize the algorithm.
random_stateint, RandomState instance or None, default=None
Used to initialize w_init when not specified, with a normal distribution. Pass an int, for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
The linear operator to apply to the data to get the independent sources. This is equal to the unmixing matrix when whiten is False, and equal to np.dot(unmixing_matrix, self.whitening_) when whiten is True.
mixing_ndarray of shape (n_features, n_components)
The pseudo-inverse of components_. It is the linear operator that maps independent sources to the data.
mean_ndarray of shape(n_features,)
The mean over features. Only set if self.whiten is True.
n_iter_int
If the algorithm is “deflation”, n_iter_ is the maximum number of iterations run across all components. Otherwise it is the number of iterations taken to converge.
whitening_ndarray of shape (n_components, n_features)
Only set if whiten is ‘True’. This is the pre-whitening matrix that projects data onto the first n_components principal components. Notes Implementation based on A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430 Examples >>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = FastICA(n_components=7,
... random_state=0)
>>> X_transformed = transformer.fit_transform(X)
>>> X_transformed.shape
(1797, 7)
Methods
fit(X[, y]) Fit the model to X.
fit_transform(X[, y]) Fit the model and recover the sources from X.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X[, copy]) Transform the sources back to the mixed data (apply mixing matrix).
set_params(**params) Set the parameters of this estimator.
transform(X[, copy]) Recover the sources from X (apply the unmixing matrix).
fit(X, y=None) [source]
Fit the model to X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
self
fit_transform(X, y=None) [source]
Fit the model and recover the sources from X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
X_newndarray of shape (n_samples, n_components)
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X, copy=True) [source]
Transform the sources back to the mixed data (apply mixing matrix). Parameters
Xarray-like of shape (n_samples, n_components)
Sources, where n_samples is the number of samples and n_components is the number of components.
copybool, default=True
If False, data passed to fit are overwritten. Defaults to True. Returns
X_newndarray of shape (n_samples, n_features)
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, copy=True) [source]
Recover the sources from X (apply the unmixing matrix). Parameters
Xarray-like of shape (n_samples, n_features)
Data to transform, where n_samples is the number of samples and n_features is the number of features.
copybool, default=True
If False, data passed to fit can be overwritten. Defaults to True. Returns
X_newndarray of shape (n_samples, n_components)
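Since transform applies the unmixing matrix and inverse_transform applies its pseudo-inverse (mixing_), a round trip reconstructs data that lies in the retained component subspace. A sketch with made-up sources and mixing matrix:

```python
# Sketch: transform / inverse_transform round trip on synthetic mixed signals.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
# Two independent sources mixed into three observed signals (rank-2 data).
S = np.c_[np.sin(np.linspace(0, 8, 200)), rng.laplace(size=200)]
A = rng.randn(2, 3)  # made-up mixing matrix
X = S @ A

ica = FastICA(n_components=2, random_state=0, max_iter=500)
S_est = ica.fit_transform(X)           # recovered sources, shape (200, 2)
X_back = ica.inverse_transform(S_est)  # apply mixing_ and re-add the mean
print(np.allclose(X, X_back))          # True: X lies in the retained subspace
```

Reconstruction is (numerically) exact here because the data has rank 2 and both components are kept; with fewer components than the data rank, X_back would only approximate X.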
sklearn.decomposition.fastica(X, n_components=None, *, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, return_X_mean=False, compute_sources=True, return_n_iter=False) [source]
Perform Fast Independent Component Analysis. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
n_componentsint, default=None
Number of components to extract. If None no dimension reduction is performed.
algorithm{‘parallel’, ‘deflation’}, default=’parallel’
Apply a parallel or deflational FASTICA algorithm.
whitenbool, default=True
If True perform an initial whitening of the data. If False, the data is assumed to have already been preprocessed: it should be centered, normed and white. Otherwise you will get incorrect results. In this case the parameter n_components will be ignored.
fun{‘logcosh’, ‘exp’, ‘cube’} or callable, default=’logcosh’
The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function and of its derivative at the point. The derivative should be averaged along its last dimension. Example:
def my_g(x):
    return x ** 3, np.mean(3 * x ** 2, axis=-1)
fun_argsdict, default=None
Arguments to send to the functional form. If empty or None and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}
max_iterint, default=200
Maximum number of iterations to perform.
tolfloat, default=1e-04
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
w_initndarray of shape (n_components, n_components), default=None
Initial un-mixing array of shape (n_components, n_components). If None (default), an array drawn from a standard normal distribution is used.
random_stateint, RandomState instance or None, default=None
Used to initialize w_init when not specified, with a normal distribution. Pass an int, for reproducible results across multiple function calls. See Glossary.
return_X_meanbool, default=False
If True, X_mean is returned too.
compute_sourcesbool, default=True
If False, sources are not computed, but only the rotation matrix. This can save memory when working with big data. Defaults to True.
return_n_iterbool, default=False
Whether or not to return the number of iterations. Returns
Kndarray of shape (n_components, n_features) or None
If whiten is True, K is the pre-whitening matrix that projects data onto the first n_components principal components. If whiten is False, K is None.
Wndarray of shape (n_components, n_components)
The square matrix that unmixes the data after whitening. The mixing matrix is the pseudo-inverse of matrix W K if K is not None, else it is the inverse of W.
Sndarray of shape (n_samples, n_components) or None
Estimated source matrix.
X_meanndarray of shape (n_features,)
The mean over features. Returned only if return_X_mean is True.
n_iterint
If the algorithm is “deflation”, n_iter is the maximum number of iterations run across all components. Otherwise it is the number of iterations taken to converge. This is returned only when return_n_iter is set to True. Notes The data matrix X is considered to be a linear combination of non-Gaussian (independent) components i.e. X = AS where columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to 'un-mix' the data by estimating an
un-mixing matrix W where S = W K X. While FastICA was proposed to estimate as many sources as features, it is possible to estimate fewer by setting n_components < n_features. In this case K is not a square matrix and the estimated A is the pseudo-inverse of W K. This implementation was originally made for data of shape [n_features, n_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input. Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430 | sklearn.modules.generated.fastica-function#sklearn.decomposition.fastica
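As a quick illustration of the function-style API above, a minimal sketch; the Laplace sources and the 2x2 mixing matrix are made-up demo data, not taken from the docs:

```python
import numpy as np
from sklearn.decomposition import fastica

# Hypothetical demo data: two independent non-Gaussian sources mixed linearly.
rng = np.random.RandomState(0)
S = rng.laplace(size=(2, 2000))          # sources (non-Gaussian)
A = np.array([[1.0, 0.5], [0.5, 2.0]])   # mixing matrix
X = (A @ S).T                            # observations, shape (n_samples, n_features)

# With the defaults (return_X_mean=False, return_n_iter=False) the function
# returns the pre-whitening matrix K, the un-mixing matrix W, and the sources.
K, W, S_est = fastica(X, n_components=2, random_state=0)
print(K.shape, W.shape, S_est.shape)
```

Note that the tuple you unpack grows if return_X_mean or return_n_iter is set, as described in the Returns section above.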
sklearn.decomposition.FastICA
class sklearn.decomposition.FastICA(n_components=None, *, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None) [source]
FastICA: a fast algorithm for Independent Component Analysis. Read more in the User Guide. Parameters
n_componentsint, default=None
Number of components to use. If None is passed, all are used.
algorithm{‘parallel’, ‘deflation’}, default=’parallel’
Apply parallel or deflational algorithm for FastICA.
whitenbool, default=True
If whiten is False, the data is considered to be already whitened, and no whitening is performed.
fun{‘logcosh’, ‘exp’, ‘cube’} or callable, default=’logcosh’
The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, at the point. Example:
def my_g(x):
    return x ** 3, (3 * x ** 2).mean(axis=-1)
fun_argsdict, default=None
Arguments to send to the functional form. If empty and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}.
max_iterint, default=200
Maximum number of iterations during fit.
tolfloat, default=1e-4
Tolerance on update at each iteration.
w_initndarray of shape (n_components, n_components), default=None
The mixing matrix to be used to initialize the algorithm.
random_stateint, RandomState instance or None, default=None
Used to initialize w_init when not specified, with a normal distribution. Pass an int, for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
The linear operator to apply to the data to get the independent sources. This is equal to the unmixing matrix when whiten is False, and equal to np.dot(unmixing_matrix, self.whitening_) when whiten is True.
mixing_ndarray of shape (n_features, n_components)
The pseudo-inverse of components_. It is the linear operator that maps independent sources to the data.
mean_ndarray of shape(n_features,)
The mean over features. Only set if self.whiten is True.
n_iter_int
If the algorithm is “deflation”, n_iter is the maximum number of iterations run across all components. Otherwise it is the number of iterations taken to converge.
whitening_ndarray of shape (n_components, n_features)
Only set if whiten is True. This is the pre-whitening matrix that projects data onto the first n_components principal components. Notes Implementation based on A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430 Examples >>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = FastICA(n_components=7,
... random_state=0)
>>> X_transformed = transformer.fit_transform(X)
>>> X_transformed.shape
(1797, 7)
Methods
fit(X[, y]) Fit the model to X.
fit_transform(X[, y]) Fit the model and recover the sources from X.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X[, copy]) Transform the sources back to the mixed data (apply mixing matrix).
set_params(**params) Set the parameters of this estimator.
transform(X[, copy]) Recover the sources from X (apply the unmixing matrix).
fit(X, y=None) [source]
Fit the model to X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
self
fit_transform(X, y=None) [source]
Fit the model and recover the sources from X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
X_newndarray of shape (n_samples, n_components)
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X, copy=True) [source]
Transform the sources back to the mixed data (apply mixing matrix). Parameters
Xarray-like of shape (n_samples, n_components)
Sources, where n_samples is the number of samples and n_components is the number of components.
copybool, default=True
If False, data passed to fit are overwritten. Defaults to True. Returns
X_newndarray of shape (n_samples, n_features)
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, copy=True) [source]
Recover the sources from X (apply the unmixing matrix). Parameters
Xarray-like of shape (n_samples, n_features)
Data to transform, where n_samples is the number of samples and n_features is the number of features.
copybool, default=True
If False, data passed to fit can be overwritten. Defaults to True. Returns
X_newndarray of shape (n_samples, n_components)
Examples using sklearn.decomposition.FastICA
Blind source separation using FastICA
FastICA on 2D point clouds
Faces dataset decompositions | sklearn.modules.generated.sklearn.decomposition.fastica |
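A hedged sketch of the fit_transform / inverse_transform pair documented above; the random mixed data is hypothetical, not from the examples listed here:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
# Hypothetical data: three Laplace sources mixed by a random 3x3 matrix.
X = rng.laplace(size=(500, 3)) @ rng.rand(3, 3)

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)           # recovered sources, shape (500, 3)
X_back = ica.inverse_transform(S)  # apply the mixing matrix to go back

# With n_components == n_features the round trip is exact up to float error.
print(np.allclose(X, X_back, atol=1e-6))
```

This mirrors the method table: transform applies the unmixing matrix, inverse_transform applies its pseudo-inverse (mixing_).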
fit(X, y=None) [source]
Fit the model to X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
self | sklearn.modules.generated.sklearn.decomposition.fastica#sklearn.decomposition.FastICA.fit |
fit_transform(X, y=None) [source]
Fit the model and recover the sources from X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
X_newndarray of shape (n_samples, n_components) | sklearn.modules.generated.sklearn.decomposition.fastica#sklearn.decomposition.FastICA.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.decomposition.fastica#sklearn.decomposition.FastICA.get_params |
inverse_transform(X, copy=True) [source]
Transform the sources back to the mixed data (apply mixing matrix). Parameters
Xarray-like of shape (n_samples, n_components)
Sources, where n_samples is the number of samples and n_components is the number of components.
copybool, default=True
If False, data passed to fit are overwritten. Defaults to True. Returns
X_newndarray of shape (n_samples, n_features) | sklearn.modules.generated.sklearn.decomposition.fastica#sklearn.decomposition.FastICA.inverse_transform |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.decomposition.fastica#sklearn.decomposition.FastICA.set_params |
transform(X, copy=True) [source]
Recover the sources from X (apply the unmixing matrix). Parameters
Xarray-like of shape (n_samples, n_features)
Data to transform, where n_samples is the number of samples and n_features is the number of features.
copybool, default=True
If False, data passed to fit can be overwritten. Defaults to True. Returns
X_newndarray of shape (n_samples, n_components) | sklearn.modules.generated.sklearn.decomposition.fastica#sklearn.decomposition.FastICA.transform |
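To see concretely what transform computes (subtract the fitted mean, then project onto components_, per the components_ attribute description), a small numerical check, assuming the default whitening; the data is synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
# Hypothetical mixed observations: Laplace sources through a 2x2 mixing matrix.
X = rng.laplace(size=(200, 2)) @ np.array([[1.0, 0.3], [0.2, 1.0]])

ica = FastICA(random_state=0).fit(X)

# transform subtracts the fitted mean_, then applies the linear operator
# components_ (the unmixing matrix composed with the whitening matrix).
S_manual = (X - ica.mean_) @ ica.components_.T
print(np.allclose(S_manual, ica.transform(X)))
```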
class sklearn.decomposition.IncrementalPCA(n_components=None, *, whiten=False, copy=True, batch_size=None) [source]
Incremental principal components analysis (IPCA). Linear dimensionality reduction using Singular Value Decomposition of the data, keeping only the most significant singular vectors to project the data to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD. Depending on the size of the input data, this algorithm can be much more memory efficient than a PCA, and allows sparse input. This algorithm has constant memory complexity, on the order of batch_size * n_features, enabling use of np.memmap files without loading the entire file into memory. For sparse matrices, the input is converted to dense in batches (in order to be able to subtract the mean) which avoids storing the entire dense matrix at any one time. The computational overhead of each SVD is O(batch_size * n_features ** 2), but only 2 * batch_size samples remain in memory at a time. There will be n_samples / batch_size SVD computations to get the principal components, versus 1 large SVD of complexity O(n_samples * n_features ** 2) for PCA. Read more in the User Guide. New in version 0.16. Parameters
n_componentsint, default=None
Number of components to keep. If n_components is None, then n_components is set to min(n_samples, n_features).
whitenbool, default=False
When True (False by default) the components_ vectors are divided by n_samples times components_ to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making data respect some hard-wired assumptions.
copybool, default=True
If False, X will be overwritten. copy=False can be used to save memory but is unsafe for general use.
batch_sizeint, default=None
The number of samples to use for each batch. Only used when calling fit. If batch_size is None, then batch_size is inferred from the data and set to 5 * n_features, to provide a balance between approximation accuracy and memory consumption. Attributes
components_ndarray of shape (n_components, n_features)
Components with maximum variance.
explained_variance_ndarray of shape (n_components,)
Variance explained by each of the selected components.
explained_variance_ratio_ndarray of shape (n_components,)
Percentage of variance explained by each of the selected components. If all components are stored, the sum of explained variances is equal to 1.0.
singular_values_ndarray of shape (n_components,)
The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, aggregate over calls to partial_fit.
var_ndarray of shape (n_features,)
Per-feature empirical variance, aggregate over calls to partial_fit.
noise_variance_float
The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf.
n_components_int
The estimated number of components. Relevant when n_components=None.
n_samples_seen_int
The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across partial_fit calls.
batch_size_int
Inferred batch size from batch_size. See also
PCA
KernelPCA
SparsePCA
TruncatedSVD
Notes Implements the incremental PCA model from: D. Ross, J. Lim, R. Lin, M. Yang, Incremental Learning for Robust Visual Tracking, International Journal of Computer Vision, Volume 77, Issue 1-3, pp. 125-141, May 2008. See https://www.cs.toronto.edu/~dross/ivt/RossLimLinYang_ijcv.pdf This model is an extension of the Sequential Karhunen-Loeve Transform from: A. Levy and M. Lindenbaum, Sequential Karhunen-Loeve Basis Extraction and its Application to Images, IEEE Transactions on Image Processing, Volume 9, Number 8, pp. 1371-1374, August 2000. See https://www.cs.technion.ac.il/~mic/doc/skl-ip.pdf We have specifically abstained from an optimization used by authors of both papers, a QR decomposition used in specific situations to reduce the algorithmic complexity of the SVD. The source for this technique is Matrix Computations, Third Edition, G. Golub and C. Van Loan, Chapter 5, Section 5.4.4, pp. 252-253. This technique has been omitted because it is advantageous only when decomposing a matrix with n_samples (rows) >= 5/3 * n_features (columns), and hurts the readability of the implemented algorithm. This would be a good opportunity for future optimization, if it is deemed necessary. References D. Ross, J. Lim, R. Lin, M. Yang. Incremental Learning for Robust Visual Tracking, International Journal of Computer Vision, Volume 77, Issue 1-3, pp. 125-141, May 2008. G. Golub and C. Van Loan. Matrix Computations, Third Edition, Chapter 5, Section 5.4.4, pp. 252-253. Examples >>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import IncrementalPCA
>>> from scipy import sparse
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = IncrementalPCA(n_components=7, batch_size=200)
>>> # either partially fit on smaller batches of data
>>> transformer.partial_fit(X[:100, :])
IncrementalPCA(batch_size=200, n_components=7)
>>> # or let the fit function itself divide the data into batches
>>> X_sparse = sparse.csr_matrix(X)
>>> X_transformed = transformer.fit_transform(X_sparse)
>>> X_transformed.shape
(1797, 7)
Methods
fit(X[, y]) Fit the model with X, using minibatches of size batch_size.
fit_transform(X[, y]) Fit to data, then transform it.
get_covariance() Compute data covariance with the generative model.
get_params([deep]) Get parameters for this estimator.
get_precision() Compute data precision matrix with the generative model.
inverse_transform(X) Transform data back to its original space.
partial_fit(X[, y, check_input]) Incremental fit with X.
set_params(**params) Set the parameters of this estimator.
transform(X) Apply dimensionality reduction to X.
fit(X, y=None) [source]
Fit the model with X, using minibatches of size batch_size. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_covariance() [source]
Compute data covariance with the generative model. cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances. Returns
covarray, shape=(n_features, n_features)
Estimated covariance of data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Compute data precision matrix with the generative model. Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency. Returns
precisionarray, shape=(n_features, n_features)
Estimated precision of data.
inverse_transform(X) [source]
Transform data back to its original space. In other words, return an input X_original whose transform would be X. Parameters
Xarray-like, shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of components. Returns
X_original array-like, shape (n_samples, n_features)
Notes If whitening is enabled, inverse_transform will compute the exact inverse operation, which includes reversing whitening.
partial_fit(X, y=None, check_input=True) [source]
Incremental fit with X. All of X is processed as a single batch. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
check_inputbool, default=True
Run check_array on X.
yIgnored
Returns
selfobject
Returns the instance itself.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Apply dimensionality reduction to X. X is projected on the first principal components previously extracted from a training set, using minibatches of size batch_size if X is sparse. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newndarray of shape (n_samples, n_components)
Examples >>> import numpy as np
>>> from sklearn.decomposition import IncrementalPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2],
... [1, 1], [2, 1], [3, 2]])
>>> ipca = IncrementalPCA(n_components=2, batch_size=3)
>>> ipca.fit(X)
IncrementalPCA(batch_size=3, n_components=2)
>>> ipca.transform(X) | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA |
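The constant-memory workflow described in the class notes can be sketched with an explicit partial_fit loop; the data here is synthetic, but in practice X might be an np.memmap so that only one batch is ever loaded:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.rand(1000, 20)  # hypothetical data; could be backed by np.memmap

ipca = IncrementalPCA(n_components=5)
batch_size = 100
for start in range(0, X.shape[0], batch_size):
    # Each call updates mean_, var_ and components_ from one batch only.
    ipca.partial_fit(X[start:start + batch_size])

X_reduced = ipca.transform(X)
print(X_reduced.shape)  # (1000, 5)
```

Calling fit(X) with batch_size=100 would perform the same batching internally; the explicit loop is useful when batches arrive one at a time.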
sklearn.decomposition.IncrementalPCA
class sklearn.decomposition.IncrementalPCA(n_components=None, *, whiten=False, copy=True, batch_size=None) [source]
Incremental principal components analysis (IPCA). Linear dimensionality reduction using Singular Value Decomposition of the data, keeping only the most significant singular vectors to project the data to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD. Depending on the size of the input data, this algorithm can be much more memory efficient than a PCA, and allows sparse input. This algorithm has constant memory complexity, on the order of batch_size * n_features, enabling use of np.memmap files without loading the entire file into memory. For sparse matrices, the input is converted to dense in batches (in order to be able to subtract the mean) which avoids storing the entire dense matrix at any one time. The computational overhead of each SVD is O(batch_size * n_features ** 2), but only 2 * batch_size samples remain in memory at a time. There will be n_samples / batch_size SVD computations to get the principal components, versus 1 large SVD of complexity O(n_samples * n_features ** 2) for PCA. Read more in the User Guide. New in version 0.16. Parameters
n_componentsint, default=None
Number of components to keep. If n_components is None, then n_components is set to min(n_samples, n_features).
whitenbool, default=False
When True (False by default) the components_ vectors are divided by n_samples times components_ to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making data respect some hard-wired assumptions.
copybool, default=True
If False, X will be overwritten. copy=False can be used to save memory but is unsafe for general use.
batch_sizeint, default=None
The number of samples to use for each batch. Only used when calling fit. If batch_size is None, then batch_size is inferred from the data and set to 5 * n_features, to provide a balance between approximation accuracy and memory consumption. Attributes
components_ndarray of shape (n_components, n_features)
Components with maximum variance.
explained_variance_ndarray of shape (n_components,)
Variance explained by each of the selected components.
explained_variance_ratio_ndarray of shape (n_components,)
Percentage of variance explained by each of the selected components. If all components are stored, the sum of explained variances is equal to 1.0.
singular_values_ndarray of shape (n_components,)
The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, aggregate over calls to partial_fit.
var_ndarray of shape (n_features,)
Per-feature empirical variance, aggregate over calls to partial_fit.
noise_variance_float
The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf.
n_components_int
The estimated number of components. Relevant when n_components=None.
n_samples_seen_int
The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across partial_fit calls.
batch_size_int
Inferred batch size from batch_size. See also
PCA
KernelPCA
SparsePCA
TruncatedSVD
Notes Implements the incremental PCA model from: D. Ross, J. Lim, R. Lin, M. Yang, Incremental Learning for Robust Visual Tracking, International Journal of Computer Vision, Volume 77, Issue 1-3, pp. 125-141, May 2008. See https://www.cs.toronto.edu/~dross/ivt/RossLimLinYang_ijcv.pdf This model is an extension of the Sequential Karhunen-Loeve Transform from: A. Levy and M. Lindenbaum, Sequential Karhunen-Loeve Basis Extraction and its Application to Images, IEEE Transactions on Image Processing, Volume 9, Number 8, pp. 1371-1374, August 2000. See https://www.cs.technion.ac.il/~mic/doc/skl-ip.pdf We have specifically abstained from an optimization used by authors of both papers, a QR decomposition used in specific situations to reduce the algorithmic complexity of the SVD. The source for this technique is Matrix Computations, Third Edition, G. Golub and C. Van Loan, Chapter 5, Section 5.4.4, pp. 252-253. This technique has been omitted because it is advantageous only when decomposing a matrix with n_samples (rows) >= 5/3 * n_features (columns), and hurts the readability of the implemented algorithm. This would be a good opportunity for future optimization, if it is deemed necessary. References D. Ross, J. Lim, R. Lin, M. Yang. Incremental Learning for Robust Visual Tracking, International Journal of Computer Vision, Volume 77, Issue 1-3, pp. 125-141, May 2008. G. Golub and C. Van Loan. Matrix Computations, Third Edition, Chapter 5, Section 5.4.4, pp. 252-253. Examples >>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import IncrementalPCA
>>> from scipy import sparse
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = IncrementalPCA(n_components=7, batch_size=200)
>>> # either partially fit on smaller batches of data
>>> transformer.partial_fit(X[:100, :])
IncrementalPCA(batch_size=200, n_components=7)
>>> # or let the fit function itself divide the data into batches
>>> X_sparse = sparse.csr_matrix(X)
>>> X_transformed = transformer.fit_transform(X_sparse)
>>> X_transformed.shape
(1797, 7)
Methods
fit(X[, y]) Fit the model with X, using minibatches of size batch_size.
fit_transform(X[, y]) Fit to data, then transform it.
get_covariance() Compute data covariance with the generative model.
get_params([deep]) Get parameters for this estimator.
get_precision() Compute data precision matrix with the generative model.
inverse_transform(X) Transform data back to its original space.
partial_fit(X[, y, check_input]) Incremental fit with X.
set_params(**params) Set the parameters of this estimator.
transform(X) Apply dimensionality reduction to X.
fit(X, y=None) [source]
Fit the model with X, using minibatches of size batch_size. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_covariance() [source]
Compute data covariance with the generative model. cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances. Returns
covarray, shape=(n_features, n_features)
Estimated covariance of data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Compute data precision matrix with the generative model. Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency. Returns
precisionarray, shape=(n_features, n_features)
Estimated precision of data.
inverse_transform(X) [source]
Transform data back to its original space. In other words, return an input X_original whose transform would be X. Parameters
Xarray-like, shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of components. Returns
X_original array-like, shape (n_samples, n_features)
Notes If whitening is enabled, inverse_transform will compute the exact inverse operation, which includes reversing whitening.
partial_fit(X, y=None, check_input=True) [source]
Incremental fit with X. All of X is processed as a single batch. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
check_inputbool, default=True
Run check_array on X.
yIgnored
Returns
selfobject
Returns the instance itself.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Apply dimensionality reduction to X. X is projected on the first principal components previously extracted from a training set, using minibatches of size batch_size if X is sparse. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newndarray of shape (n_samples, n_components)
Examples >>> import numpy as np
>>> from sklearn.decomposition import IncrementalPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2],
... [1, 1], [2, 1], [3, 2]])
>>> ipca = IncrementalPCA(n_components=2, batch_size=3)
>>> ipca.fit(X)
IncrementalPCA(batch_size=3, n_components=2)
>>> ipca.transform(X)
Examples using sklearn.decomposition.IncrementalPCA
Incremental PCA | sklearn.modules.generated.sklearn.decomposition.incrementalpca |
fit(X, y=None) [source]
Fit the model with X, using minibatches of size batch_size. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.fit_transform |
get_covariance() [source]
Compute data covariance with the generative model. cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances. Returns
covarray, shape=(n_features, n_features)
Estimated covariance of data. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.get_covariance |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.get_params |
get_precision() [source]
Compute data precision matrix with the generative model. Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency. Returns
precisionarray, shape=(n_features, n_features)
Estimated precision of data. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.get_precision |
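A minimal sketch tying get_covariance and get_precision together: the precision matrix is the inverse of the model covariance, so their product should be (numerically) the identity. The data here is arbitrary random input used only for illustration.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.randn(100, 4)

ipca = IncrementalPCA(n_components=2, batch_size=25).fit(X)

cov = ipca.get_covariance()    # (n_features, n_features) model covariance
prec = ipca.get_precision()    # its inverse, via the matrix inversion lemma

assert cov.shape == (4, 4)
assert np.allclose(cov, cov.T)                        # covariance is symmetric
assert np.allclose(prec @ cov, np.eye(4), atol=1e-6)  # precision inverts it
```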
inverse_transform(X) [source]
Transform data back to its original space. In other words, return an input X_original whose transform would be X. Parameters
Xarray-like, shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of components. Returns
X_original array-like, shape (n_samples, n_features)
Notes If whitening is enabled, inverse_transform will compute the exact inverse operation, which includes reversing whitening. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.inverse_transform |
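A quick sketch of the round trip: when n_components equals n_features no information is discarded, so inverse_transform(transform(X)) recovers X up to floating-point error. With fewer components the result would instead be the projection of X onto the retained subspace.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

X = np.array([[-1.0, -1.0], [-2.0, -1.0], [-3.0, -2.0],
              [1.0, 1.0], [2.0, 1.0], [3.0, 2.0]])

# Keep all components so the transform is invertible without loss.
ipca = IncrementalPCA(n_components=2, batch_size=3).fit(X)
X_back = ipca.inverse_transform(ipca.transform(X))

assert np.allclose(X_back, X)
```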
partial_fit(X, y=None, check_input=True) [source]
Incremental fit with X. All of X is processed as a single batch. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
check_inputbool, default=True
Run check_array on X.
yIgnored
Returns
selfobject
Returns the instance itself. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.partial_fit |
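A sketch of the streaming use case partial_fit is designed for: each call processes its argument as one batch, so a large dataset can be fed in chunks. The chunking scheme below is arbitrary, chosen only for illustration.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.randn(90, 5)

ipca = IncrementalPCA(n_components=3)
for chunk in np.array_split(X, 3):  # stream the data in three 30-row batches
    ipca.partial_fit(chunk)

assert ipca.components_.shape == (3, 5)
assert ipca.n_samples_seen_ == 90   # all rows were accounted for
```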
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.set_params |
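A short sketch of the nested-parameter syntax described above: inside a Pipeline, `<component>__<parameter>` routes a setting to the named step. The step names ("scale", "ipca") are arbitrary labels chosen for this example.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import IncrementalPCA

pipe = Pipeline([("scale", StandardScaler()),
                 ("ipca", IncrementalPCA(n_components=2))])

# Reach into the nested estimator with the <component>__<parameter> form.
pipe.set_params(ipca__n_components=3, ipca__batch_size=10)

assert pipe.get_params()["ipca__n_components"] == 3
assert pipe.named_steps["ipca"].batch_size == 10
```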
transform(X) [source]
Apply dimensionality reduction to X. X is projected on the first principal components previously extracted from a training set, using minibatches of size batch_size if X is sparse. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newndarray of shape (n_samples, n_components)
Examples >>> import numpy as np
>>> from sklearn.decomposition import IncrementalPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2],
... [1, 1], [2, 1], [3, 2]])
>>> ipca = IncrementalPCA(n_components=2, batch_size=3)
>>> ipca.fit(X)
IncrementalPCA(batch_size=3, n_components=2)
>>> ipca.transform(X) | sklearn.modules.generated.sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA.transform |
class sklearn.decomposition.KernelPCA(n_components=None, *, kernel='linear', gamma=None, degree=3, coef0=1, kernel_params=None, alpha=1.0, fit_inverse_transform=False, eigen_solver='auto', tol=0, max_iter=None, remove_zero_eig=False, random_state=None, copy_X=True, n_jobs=None) [source]
Kernel Principal component analysis (KPCA). Non-linear dimensionality reduction through the use of kernels (see Pairwise metrics, Affinities and Kernels). Read more in the User Guide. Parameters
n_componentsint, default=None
Number of components. If None, all non-zero components are kept.
kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘cosine’, ‘precomputed’}, default=’linear’
Kernel used for PCA.
gammafloat, default=None
Kernel coefficient for rbf, poly and sigmoid kernels. Ignored by other kernels. If gamma is None, then it is set to 1/n_features.
degreeint, default=3
Degree for poly kernels. Ignored by other kernels.
coef0float, default=1
Independent term in poly and sigmoid kernels. Ignored by other kernels.
kernel_paramsdict, default=None
Parameters (keyword arguments) and values for kernel passed as callable object. Ignored by other kernels.
alphafloat, default=1.0
Hyperparameter of the ridge regression that learns the inverse transform (when fit_inverse_transform=True).
fit_inverse_transformbool, default=False
Learn the inverse transform for non-precomputed kernels (i.e., learn to find the pre-image of a point).
eigen_solver{‘auto’, ‘dense’, ‘arpack’}, default=’auto’
Select eigensolver to use. If n_components is much less than the number of training samples, arpack may be more efficient than the dense eigensolver.
tolfloat, default=0
Convergence tolerance for arpack. If 0, optimal value will be chosen by arpack.
max_iterint, default=None
Maximum number of iterations for arpack. If None, optimal value will be chosen by arpack.
remove_zero_eigbool, default=False
If True, then all components with zero eigenvalues are removed, so that the number of components in the output may be < n_components (and sometimes even zero due to numerical instability). When n_components is None, this parameter is ignored and components with zero eigenvalues are removed regardless.
random_stateint, RandomState instance or None, default=None
Used when eigen_solver == ‘arpack’. Pass an int for reproducible results across multiple function calls. See Glossary. New in version 0.18.
copy_Xbool, default=True
If True, input X is copied and stored by the model in the X_fit_ attribute. If no further changes will be done to X, setting copy_X=False saves memory by storing a reference. New in version 0.18.
n_jobsint, default=None
The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.18. Attributes
lambdas_ndarray of shape (n_components,)
Eigenvalues of the centered kernel matrix in decreasing order. If n_components and remove_zero_eig are not set, then all values are stored.
alphas_ndarray of shape (n_samples, n_components)
Eigenvectors of the centered kernel matrix. If n_components and remove_zero_eig are not set, then all components are stored.
dual_coef_ndarray of shape (n_samples, n_features)
Inverse transform matrix. Only available when fit_inverse_transform is True.
X_transformed_fit_ndarray of shape (n_samples, n_components)
Projection of the fitted data on the kernel principal components. Only available when fit_inverse_transform is True.
X_fit_ndarray of shape (n_samples, n_features)
The data used to fit the model. If copy_X=False, then X_fit_ is a reference. This attribute is used for the calls to transform. References Kernel PCA was introduced in:
Bernhard Schoelkopf, Alexander J. Smola, and Klaus-Robert Mueller. 1999. Kernel principal component analysis. In Advances in kernel methods, MIT Press, Cambridge, MA, USA 327-352. Examples >>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import KernelPCA
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = KernelPCA(n_components=7, kernel='linear')
>>> X_transformed = transformer.fit_transform(X)
>>> X_transformed.shape
(1797, 7)
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit the model from data in X and transform X.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Transform X back to original space.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform X.
fit(X, y=None) [source]
Fit the model from data in X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features. Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **params) [source]
Fit the model from data in X and transform X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features. Returns
X_newndarray of shape (n_samples, n_components)
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Transform X back to original space. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_components)
Returns
X_newndarray of shape (n_samples, n_features)
References “Learning to Find Pre-Images”, G BakIr et al, 2004.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Transform X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Returns
X_newndarray of shape (n_samples, n_components) | sklearn.modules.generated.sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA |
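A hedged sketch of the non-linear case the class reference describes: an RBF kernel with fit_inverse_transform=True, so that approximate pre-images in the original space can be recovered. The gamma and alpha values are illustrative, not recommendations.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Concentric circles: a classic case where linear PCA cannot help.
X, _ = make_circles(n_samples=100, factor=0.3, noise=0.05, random_state=0)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10,
                 fit_inverse_transform=True, alpha=0.1)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)  # approximate pre-images

assert X_kpca.shape == (100, 2)
assert X_back.shape == (100, 2)
```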
fit(X, y=None) [source]
Fit the model from data in X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features. Returns
selfobject
Returns the instance itself. | sklearn.modules.generated.sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.fit |
fit_transform(X, y=None, **params) [source]
Fit the model from data in X and transform X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features. Returns
X_newndarray of shape (n_samples, n_components) | sklearn.modules.generated.sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.get_params |
inverse_transform(X) [source]
Transform X back to original space. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_components)
Returns
X_newndarray of shape (n_samples, n_features)
References “Learning to Find Pre-Images”, G BakIr et al, 2004. | sklearn.modules.generated.sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.inverse_transform |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.set_params |
transform(X) [source]
Transform X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Returns
X_newndarray of shape (n_samples, n_components) | sklearn.modules.generated.sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.transform |
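A sketch of the kernel='precomputed' option: passing the Gram matrix yourself is equivalent, up to per-component sign flips, to letting KernelPCA build the same kernel internally. The comparison uses absolute values because eigenvector signs are not uniquely determined.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import linear_kernel

rng = np.random.RandomState(0)
X = rng.randn(30, 4)

# Let KernelPCA compute the linear kernel itself ...
Z_linear = KernelPCA(n_components=2, kernel="linear").fit_transform(X)

# ... or hand it the same Gram matrix, K = X @ X.T, precomputed.
K = linear_kernel(X)
Z_precomp = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)

assert np.allclose(np.abs(Z_linear), np.abs(Z_precomp), atol=1e-6)
```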
class sklearn.decomposition.LatentDirichletAllocation(n_components=10, *, doc_topic_prior=None, topic_word_prior=None, learning_method='batch', learning_decay=0.7, learning_offset=10.0, max_iter=10, batch_size=128, evaluate_every=- 1, total_samples=1000000.0, perp_tol=0.1, mean_change_tol=0.001, max_doc_update_iter=100, n_jobs=None, verbose=0, random_state=None) [source]
Latent Dirichlet Allocation with an online variational Bayes algorithm. New in version 0.17. Read more in the User Guide. Parameters
n_componentsint, default=10
Number of topics. Changed in version 0.19: n_topics was renamed to n_components
doc_topic_priorfloat, default=None
Prior of document topic distribution theta. If the value is None, defaults to 1 / n_components. In [1], this is called alpha.
topic_word_priorfloat, default=None
Prior of topic word distribution beta. If the value is None, defaults to 1 / n_components. In [1], this is called eta.
learning_method{‘batch’, ‘online’}, default=’batch’
Method used to update components_. Only used in the fit method. In general, if the data size is large, the online update will be much faster than the batch update. Valid options:
'batch': Batch variational Bayes method. Use all training data in each EM update. Old components_ will be overwritten in each iteration.
'online': Online variational Bayes method. In each EM update, use a mini-batch of training data to update the components_ variable incrementally. The learning rate is controlled by the learning_decay and learning_offset parameters.
Changed in version 0.20: The default learning method is now "batch".
learning_decayfloat, default=0.7
A parameter that controls the learning rate in the online learning method. The value should be in the interval (0.5, 1.0] to guarantee asymptotic convergence. When the value is 0.0 and batch_size is n_samples, the update method is the same as batch learning. In the literature, this is called kappa.
learning_offsetfloat, default=10.
A (positive) parameter that downweights early iterations in online learning. It should be greater than 1.0. In the literature, this is called tau_0.
max_iterint, default=10
The maximum number of iterations.
batch_sizeint, default=128
Number of documents to use in each EM iteration. Only used in online learning.
evaluate_everyint, default=-1
How often to evaluate perplexity. Only used in the fit method. Set it to 0 or a negative number to skip perplexity evaluation during training entirely. Evaluating perplexity can help you check convergence during training, but it also increases total training time; evaluating it in every iteration might increase training time up to two-fold.
total_samplesint, default=1e6
Total number of documents. Only used in the partial_fit method.
perp_tolfloat, default=1e-1
Perplexity tolerance in batch learning. Only used when evaluate_every is greater than 0.
mean_change_tolfloat, default=1e-3
Stopping tolerance for updating document topic distribution in E-step.
max_doc_update_iterint, default=100
Max number of iterations for updating document topic distribution in the E-step.
n_jobsint, default=None
The number of jobs to use in the E-step. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verboseint, default=0
Verbosity level.
random_stateint, RandomState instance or None, default=None
Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
Variational parameters for topic word distribution. Since the complete conditional for topic word distribution is a Dirichlet, components_[i, j] can be viewed as pseudocount that represents the number of times word j was assigned to topic i. It can also be viewed as distribution over the words for each topic after normalization: model.components_ / model.components_.sum(axis=1)[:, np.newaxis].
exp_dirichlet_component_ndarray of shape (n_components, n_features)
Exponential value of expectation of log topic word distribution. In the literature, this is exp(E[log(beta)]).
n_batch_iter_int
Number of iterations of the EM step.
n_iter_int
Number of passes over the dataset.
bound_float
Final perplexity score on training set.
doc_topic_prior_float
Prior of document topic distribution theta. If the value is None, it is 1 / n_components.
random_state_RandomState instance
RandomState instance that is generated either from a seed, the random number generator or by np.random.
topic_word_prior_float
Prior of topic word distribution beta. If the value is None, it is 1 / n_components. References
[1] “Online Learning for Latent Dirichlet Allocation”, Matthew D. Hoffman, David M. Blei, Francis Bach, 2010
[2] “Stochastic Variational Inference”, Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley, 2013
[3] Matthew D. Hoffman’s onlineldavb code: https://github.com/blei-lab/onlineldavb Examples >>> from sklearn.decomposition import LatentDirichletAllocation
>>> from sklearn.datasets import make_multilabel_classification
>>> # This produces a feature matrix of token counts, similar to what
>>> # CountVectorizer would produce on text.
>>> X, _ = make_multilabel_classification(random_state=0)
>>> lda = LatentDirichletAllocation(n_components=5,
... random_state=0)
>>> lda.fit(X)
LatentDirichletAllocation(...)
>>> # get topics for some given samples:
>>> lda.transform(X[-2:])
array([[0.00360392, 0.25499205, 0.0036211 , 0.64236448, 0.09541846],
[0.15297572, 0.00362644, 0.44412786, 0.39568399, 0.003586 ]])
Methods
fit(X[, y]) Learn model for the data X with variational Bayes method.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
partial_fit(X[, y]) Online VB with Mini-Batch update.
perplexity(X[, sub_sampling]) Calculate approximate perplexity for data X.
score(X[, y]) Calculate approximate log-likelihood as score.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform data X according to the fitted model.
fit(X, y=None) [source]
Learn model for the data X with variational Bayes method. When learning_method is ‘online’, use mini-batch update. Otherwise, use batch update. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y=None) [source]
Online VB with Mini-Batch update. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
self
perplexity(X, sub_sampling=False) [source]
Calculate approximate perplexity for data X. Perplexity is defined as exp(-1. * log-likelihood per word). Changed in version 0.19: the doc_topic_distr argument has been deprecated and is ignored, because the user no longer has access to the unnormalized distribution. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
sub_samplingbool
Do sub-sampling or not. Returns
scorefloat
Perplexity score.
score(X, y=None) [source]
Calculate approximate log-likelihood as score. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
scorefloat
Use approximate bound as score.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Transform data X according to the fitted model. Changed in version 0.18: doc_topic_distr is now normalized Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix. Returns
doc_topic_distrndarray of shape (n_samples, n_components)
Document topic distribution for X. | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation |
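A sketch connecting the components_ attribute and transform as described above: normalizing components_ row-wise yields per-topic word distributions, and transform returns a normalized document-topic distribution (rows sum to 1 since version 0.18).

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.datasets import make_multilabel_classification

# A token-count-like feature matrix, as in the class docstring example.
X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

# Normalizing components_ per topic gives a distribution over words ...
topic_word = lda.components_ / lda.components_.sum(axis=1)[:, np.newaxis]
assert np.allclose(topic_word.sum(axis=1), 1.0)

# ... and transform returns a normalized document-topic distribution.
doc_topic = lda.transform(X[:3])
assert doc_topic.shape == (3, 5)
assert np.allclose(doc_topic.sum(axis=1), 1.0)
```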
sklearn.decomposition.LatentDirichletAllocation
class sklearn.decomposition.LatentDirichletAllocation(n_components=10, *, doc_topic_prior=None, topic_word_prior=None, learning_method='batch', learning_decay=0.7, learning_offset=10.0, max_iter=10, batch_size=128, evaluate_every=- 1, total_samples=1000000.0, perp_tol=0.1, mean_change_tol=0.001, max_doc_update_iter=100, n_jobs=None, verbose=0, random_state=None) [source]
Latent Dirichlet Allocation with online variational Bayes algorithm New in version 0.17. Read more in the User Guide. Parameters
n_componentsint, default=10
Number of topics. Changed in version 0.19: n_topics was renamed to n_components
doc_topic_priorfloat, default=None
Prior of document topic distribution theta. If the value is None, defaults to 1 / n_components. In [1], this is called alpha.
topic_word_priorfloat, default=None
Prior of topic word distribution beta. If the value is None, defaults to 1 / n_components. In [1], this is called eta.
learning_method{‘batch’, ‘online’}, default=’batch’
Method used to update _component. Only used in fit method. In general, if the data size is large, the online update will be much faster than the batch update. Valid options: 'batch': Batch variational Bayes method. Use all training data in
each EM update.
Old `components_` will be overwritten in each iteration.
'online': Online variational Bayes method. In each EM update, use
mini-batch of training data to update the ``components_``
variable incrementally. The learning rate is controlled by the
``learning_decay`` and the ``learning_offset`` parameters.
Changed in version 0.20: The default learning method is now "batch".
learning_decayfloat, default=0.7
It is a parameter that control learning rate in the online learning method. The value should be set between (0.5, 1.0] to guarantee asymptotic convergence. When the value is 0.0 and batch_size is n_samples, the update method is same as batch learning. In the literature, this is called kappa.
learning_offsetfloat, default=10.
A (positive) parameter that downweights early iterations in online learning. It should be greater than 1.0. In the literature, this is called tau_0.
max_iterint, default=10
The maximum number of iterations.
batch_sizeint, default=128
Number of documents to use in each EM iteration. Only used in online learning.
evaluate_everyint, default=-1
How often to evaluate perplexity. Only used in fit method. set it to 0 or negative number to not evaluate perplexity in training at all. Evaluating perplexity can help you check convergence in training process, but it will also increase total training time. Evaluating perplexity in every iteration might increase training time up to two-fold.
total_samplesint, default=1e6
Total number of documents. Only used in the partial_fit method.
perp_tolfloat, default=1e-1
Perplexity tolerance in batch learning. Only used when evaluate_every is greater than 0.
mean_change_tolfloat, default=1e-3
Stopping tolerance for updating document topic distribution in E-step.
max_doc_update_iterint, default=100
Max number of iterations for updating document topic distribution in the E-step.
n_jobsint, default=None
The number of jobs to use in the E-step. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verboseint, default=0
Verbosity level.
random_stateint, RandomState instance or None, default=None
Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
Variational parameters for topic word distribution. Since the complete conditional for topic word distribution is a Dirichlet, components_[i, j] can be viewed as pseudocount that represents the number of times word j was assigned to topic i. It can also be viewed as distribution over the words for each topic after normalization: model.components_ / model.components_.sum(axis=1)[:, np.newaxis].
exp_dirichlet_component_ndarray of shape (n_components, n_features)
Exponential value of expectation of log topic word distribution. In the literature, this is exp(E[log(beta)]).
n_batch_iter_int
Number of iterations of the EM step.
n_iter_int
Number of passes over the dataset.
bound_float
Final perplexity score on training set.
doc_topic_prior_float
Prior of document topic distribution theta. If the value is None, it is 1 / n_components.
random_state_RandomState instance
RandomState instance that is generated either from a seed, the random number generator or by np.random.
topic_word_prior_float
Prior of topic word distribution beta. If the value is None, it is 1 / n_components. References
[1] “Online Learning for Latent Dirichlet Allocation”, Matthew D. Hoffman, David M. Blei, Francis Bach, 2010
[2] “Stochastic Variational Inference”, Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley, 2013
[3] Matthew D. Hoffman’s onlineldavb code. Link: https://github.com/blei-lab/onlineldavb Examples >>> from sklearn.decomposition import LatentDirichletAllocation
>>> from sklearn.datasets import make_multilabel_classification
>>> # This produces a feature matrix of token counts, similar to what
>>> # CountVectorizer would produce on text.
>>> X, _ = make_multilabel_classification(random_state=0)
>>> lda = LatentDirichletAllocation(n_components=5,
... random_state=0)
>>> lda.fit(X)
LatentDirichletAllocation(...)
>>> # get topics for some given samples:
>>> lda.transform(X[-2:])
array([[0.00360392, 0.25499205, 0.0036211 , 0.64236448, 0.09541846],
[0.15297572, 0.00362644, 0.44412786, 0.39568399, 0.003586 ]])
Methods
fit(X[, y]) Learn model for the data X with variational Bayes method.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
partial_fit(X[, y]) Online VB with Mini-Batch update.
perplexity(X[, sub_sampling]) Calculate approximate perplexity for data X.
score(X[, y]) Calculate approximate log-likelihood as score.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform data X according to the fitted model.
fit(X, y=None) [source]
Learn model for the data X with variational Bayes method. When learning_method is ‘online’, use mini-batch update. Otherwise, use batch update. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y=None) [source]
Online VB with Mini-Batch update. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
self
perplexity(X, sub_sampling=False) [source]
Calculate approximate perplexity for data X. Perplexity is defined as exp(-1. * log-likelihood per word) Changed in version 0.19: doc_topic_distr argument has been deprecated and is ignored because user no longer has access to unnormalized distribution Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
sub_samplingbool
Do sub-sampling or not. Returns
scorefloat
Perplexity score.
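A minimal sketch of calling perplexity, using synthetic count data as in the class example above (the data and parameter values here are illustrative assumptions, not from the original docs):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

# Synthetic matrix of non-negative integer counts, similar to what
# CountVectorizer would produce on text.
X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

# Perplexity is exp(-1. * log-likelihood per word); lower indicates a
# better fit, and is usually evaluated on held-out documents.
print(lda.perplexity(X))
```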
score(X, y=None) [source]
Calculate approximate log-likelihood as score. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
scorefloat
Use approximate bound as score.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Transform data X according to the fitted model. Changed in version 0.18: doc_topic_distr is now normalized Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix. Returns
doc_topic_distrndarray of shape (n_samples, n_components)
Document topic distribution for X.
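Since doc_topic_distr is normalized (as of version 0.18), each returned row is a probability distribution over topics and sums to one. A short sketch with assumed synthetic count data:

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

doc_topic = lda.transform(X[:3])
# Each row is a normalized distribution over the 5 topics.
print(doc_topic.shape)                          # (3, 5)
print(bool(np.allclose(doc_topic.sum(axis=1), 1.0)))  # True
```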
Examples using sklearn.decomposition.LatentDirichletAllocation
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation |
fit(X, y=None) [source]
Learn model for the data X with variational Bayes method. When learning_method is ‘online’, use mini-batch update. Otherwise, use batch update. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
self | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.get_params |
partial_fit(X, y=None) [source]
Online VB with Mini-Batch update. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
self | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.partial_fit |
perplexity(X, sub_sampling=False) [source]
Calculate approximate perplexity for data X. Perplexity is defined as exp(-1. * log-likelihood per word) Changed in version 0.19: doc_topic_distr argument has been deprecated and is ignored because user no longer has access to unnormalized distribution Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
sub_samplingbool
Do sub-sampling or not. Returns
scorefloat
Perplexity score. | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.perplexity |
score(X, y=None) [source]
Calculate approximate log-likelihood as score. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
yIgnored
Returns
scorefloat
Use approximate bound as score. | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.score |
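For illustration, a hedged sketch of calling score on synthetic count data (the data here is an assumption, not from the original docs):

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

# Approximate variational lower bound on the log-likelihood of X;
# higher (less negative) is better.
print(lda.score(X))
```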
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.set_params |
transform(X) [source]
Transform data X according to the fitted model. Changed in version 0.18: doc_topic_distr is now normalized Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix. Returns
doc_topic_distrndarray of shape (n_samples, n_components)
Document topic distribution for X. | sklearn.modules.generated.sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation.transform |
class sklearn.decomposition.MiniBatchDictionaryLearning(n_components=None, *, alpha=1, n_iter=1000, fit_algorithm='lars', n_jobs=None, batch_size=3, shuffle=True, dict_init=None, transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, verbose=False, split_sign=False, random_state=None, positive_code=False, positive_dict=False, transform_max_iter=1000) [source]
Mini-batch dictionary learning Finds a dictionary (a set of atoms) that can best be used to represent data using a sparse code. Solves the optimization problem: (U^*,V^*) = argmin 0.5 || X - U V ||_2^2 + alpha * || U ||_1
(U,V)
with || V_k ||_2 = 1 for all 0 <= k < n_components
Read more in the User Guide. Parameters
n_componentsint, default=None
Number of dictionary elements to extract.
alphafloat, default=1
Sparsity controlling parameter.
n_iterint, default=1000
Total number of iterations to perform.
fit_algorithm{‘lars’, ‘cd’}, default=’lars’
The algorithm used:
'lars': uses the least angle regression method to solve the lasso problem (linear_model.lars_path)
'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
batch_sizeint, default=3
Number of samples in each mini-batch.
shufflebool, default=True
Whether to shuffle the samples before forming batches.
dict_initndarray of shape (n_components, n_features), default=None
Initial value of the dictionary for warm restart scenarios.
transform_algorithm{‘lasso_lars’, ‘lasso_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’omp’
Algorithm used to transform the data:
'lars': uses the least angle regression method (linear_model.lars_path);
'lasso_lars': uses Lars to compute the Lasso solution.
'lasso_cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). 'lasso_lars' will be faster if the estimated components are sparse.
'omp': uses orthogonal matching pursuit to estimate the sparse solution.
'threshold': squashes to zero all coefficients less than alpha from the projection dictionary * X'.
transform_n_nonzero_coefsint, default=None
Number of nonzero coefficients to target in each column of the solution. This is only used by algorithm='lars' and algorithm='omp' and is overridden by alpha in the omp case. If None, then transform_n_nonzero_coefs=int(n_features / 10).
transform_alphafloat, default=None
If algorithm='lasso_lars' or algorithm='lasso_cd', alpha is the penalty applied to the L1 norm. If algorithm='threshold', alpha is the absolute value of the threshold below which coefficients will be squashed to zero. If algorithm='omp', alpha is the tolerance parameter: the value of the reconstruction error targeted. In this case, it overrides n_nonzero_coefs. If None, defaults to 1.
verbosebool, default=False
To control the verbosity of the procedure.
split_signbool, default=False
Whether to split the sparse feature vector into the concatenation of its negative part and its positive part. This can improve the performance of downstream classifiers.
random_stateint, RandomState instance or None, default=None
Used for initializing the dictionary when dict_init is not specified, randomly shuffling the data when shuffle is set to True, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
positive_dictbool, default=False
Whether to enforce positivity when finding the dictionary. New in version 0.20.
transform_max_iterint, default=1000
Maximum number of iterations to perform if algorithm='lasso_cd' or 'lasso_lars'. New in version 0.22. Attributes
components_ndarray of shape (n_components, n_features)
Components extracted from the data.
inner_stats_tuple of (A, B) ndarrays
Internal sufficient statistics that are kept by the algorithm. Keeping them is useful in online settings, to avoid losing the history of the evolution, but they shouldn’t have any use for the end user. A (n_components, n_components) is the dictionary covariance matrix. B (n_features, n_components) is the data approximation matrix.
n_iter_int
Number of iterations run.
iter_offset_int
The number of iterations on data batches that have been performed before.
random_state_RandomState instance
RandomState instance that is generated either from a seed, the random number generator or by np.random. See also
SparseCoder
DictionaryLearning
SparsePCA
MiniBatchSparsePCA
Notes References: J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary learning for sparse coding (https://www.di.ens.fr/sierra/pdfs/icml09.pdf) Examples >>> import numpy as np
>>> from sklearn.datasets import make_sparse_coded_signal
>>> from sklearn.decomposition import MiniBatchDictionaryLearning
>>> X, dictionary, code = make_sparse_coded_signal(
... n_samples=100, n_components=15, n_features=20, n_nonzero_coefs=10,
... random_state=42)
>>> dict_learner = MiniBatchDictionaryLearning(
... n_components=15, transform_algorithm='lasso_lars', random_state=42,
... )
>>> X_transformed = dict_learner.fit_transform(X)
We can check the level of sparsity of X_transformed: >>> np.mean(X_transformed == 0)
0.87...
We can compare the average squared euclidean norm of the reconstruction error of the sparse coded signal relative to the squared euclidean norm of the original signal: >>> X_hat = X_transformed @ dict_learner.components_
>>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
0.10...
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
partial_fit(X[, y, iter_offset]) Updates the model using the data in X as a mini-batch.
set_params(**params) Set the parameters of this estimator.
transform(X) Encode the data as a sparse combination of the dictionary atoms.
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y=None, iter_offset=None) [source]
Updates the model using the data in X as a mini-batch. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
iter_offsetint, default=None
The number of iterations on data batches that have been performed before this call to partial_fit. This is optional: if no number is passed, the memory of the object is used. Returns
selfobject
Returns the instance itself.
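A minimal sketch of online fitting with partial_fit, assuming random dense data rather than a real sparse-coded signal; when no iter_offset is passed, the estimator's internal memory of previous batches is used:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(42)
X = rng.randn(100, 20)  # 100 samples, 20 features

dl = MiniBatchDictionaryLearning(n_components=15, batch_size=10,
                                 random_state=42)
# Stream the data in ten mini-batches; each call updates the dictionary
# incrementally instead of refitting from scratch.
for batch in np.array_split(X, 10):
    dl.partial_fit(batch)

print(dl.components_.shape)  # (15, 20)
```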
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Encode the data as a sparse combination of the dictionary atoms. Coding method is determined by the object parameter transform_algorithm. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data. | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning |
sklearn.decomposition.MiniBatchDictionaryLearning
class sklearn.decomposition.MiniBatchDictionaryLearning(n_components=None, *, alpha=1, n_iter=1000, fit_algorithm='lars', n_jobs=None, batch_size=3, shuffle=True, dict_init=None, transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, verbose=False, split_sign=False, random_state=None, positive_code=False, positive_dict=False, transform_max_iter=1000) [source]
Mini-batch dictionary learning Finds a dictionary (a set of atoms) that can best be used to represent data using a sparse code. Solves the optimization problem: (U^*,V^*) = argmin 0.5 || X - U V ||_2^2 + alpha * || U ||_1
(U,V)
with || V_k ||_2 = 1 for all 0 <= k < n_components
Read more in the User Guide. Parameters
n_componentsint, default=None
Number of dictionary elements to extract.
alphafloat, default=1
Sparsity controlling parameter.
n_iterint, default=1000
Total number of iterations to perform.
fit_algorithm{‘lars’, ‘cd’}, default=’lars’
The algorithm used:
'lars': uses the least angle regression method to solve the lasso problem (linear_model.lars_path)
'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
batch_sizeint, default=3
Number of samples in each mini-batch.
shufflebool, default=True
Whether to shuffle the samples before forming batches.
dict_initndarray of shape (n_components, n_features), default=None
Initial value of the dictionary for warm restart scenarios.
transform_algorithm{‘lasso_lars’, ‘lasso_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’omp’
Algorithm used to transform the data:
'lars': uses the least angle regression method (linear_model.lars_path);
'lasso_lars': uses Lars to compute the Lasso solution.
'lasso_cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). 'lasso_lars' will be faster if the estimated components are sparse.
'omp': uses orthogonal matching pursuit to estimate the sparse solution.
'threshold': squashes to zero all coefficients less than alpha from the projection dictionary * X'.
transform_n_nonzero_coefsint, default=None
Number of nonzero coefficients to target in each column of the solution. This is only used by algorithm='lars' and algorithm='omp' and is overridden by alpha in the omp case. If None, then transform_n_nonzero_coefs=int(n_features / 10).
transform_alphafloat, default=None
If algorithm='lasso_lars' or algorithm='lasso_cd', alpha is the penalty applied to the L1 norm. If algorithm='threshold', alpha is the absolute value of the threshold below which coefficients will be squashed to zero. If algorithm='omp', alpha is the tolerance parameter: the value of the reconstruction error targeted. In this case, it overrides n_nonzero_coefs. If None, defaults to 1.
verbosebool, default=False
To control the verbosity of the procedure.
split_signbool, default=False
Whether to split the sparse feature vector into the concatenation of its negative part and its positive part. This can improve the performance of downstream classifiers.
random_stateint, RandomState instance or None, default=None
Used for initializing the dictionary when dict_init is not specified, randomly shuffling the data when shuffle is set to True, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
positive_dictbool, default=False
Whether to enforce positivity when finding the dictionary. New in version 0.20.
transform_max_iterint, default=1000
Maximum number of iterations to perform if algorithm='lasso_cd' or 'lasso_lars'. New in version 0.22. Attributes
components_ndarray of shape (n_components, n_features)
Components extracted from the data.
inner_stats_tuple of (A, B) ndarrays
Internal sufficient statistics that are kept by the algorithm. Keeping them is useful in online settings, to avoid losing the history of the evolution, but they shouldn’t have any use for the end user. A (n_components, n_components) is the dictionary covariance matrix. B (n_features, n_components) is the data approximation matrix.
n_iter_int
Number of iterations run.
iter_offset_int
The number of iterations on data batches that have been performed before.
random_state_RandomState instance
RandomState instance that is generated either from a seed, the random number generator or by np.random. See also
SparseCoder
DictionaryLearning
SparsePCA
MiniBatchSparsePCA
Notes References: J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary learning for sparse coding (https://www.di.ens.fr/sierra/pdfs/icml09.pdf) Examples >>> import numpy as np
>>> from sklearn.datasets import make_sparse_coded_signal
>>> from sklearn.decomposition import MiniBatchDictionaryLearning
>>> X, dictionary, code = make_sparse_coded_signal(
... n_samples=100, n_components=15, n_features=20, n_nonzero_coefs=10,
... random_state=42)
>>> dict_learner = MiniBatchDictionaryLearning(
... n_components=15, transform_algorithm='lasso_lars', random_state=42,
... )
>>> X_transformed = dict_learner.fit_transform(X)
We can check the level of sparsity of X_transformed: >>> np.mean(X_transformed == 0)
0.87...
We can compare the average squared euclidean norm of the reconstruction error of the sparse coded signal relative to the squared euclidean norm of the original signal: >>> X_hat = X_transformed @ dict_learner.components_
>>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
0.10...
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
partial_fit(X[, y, iter_offset]) Updates the model using the data in X as a mini-batch.
set_params(**params) Set the parameters of this estimator.
transform(X) Encode the data as a sparse combination of the dictionary atoms.
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y=None, iter_offset=None) [source]
Updates the model using the data in X as a mini-batch. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
iter_offsetint, default=None
The number of iterations on data batches that have been performed before this call to partial_fit. This is optional: if no number is passed, the memory of the object is used. Returns
selfobject
Returns the instance itself.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Encode the data as a sparse combination of the dictionary atoms. Coding method is determined by the object parameter transform_algorithm. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data.
Examples using sklearn.decomposition.MiniBatchDictionaryLearning
Image denoising using dictionary learning
Faces dataset decompositions | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning |
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself. | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning.get_params |
partial_fit(X, y=None, iter_offset=None) [source]
Updates the model using the data in X as a mini-batch. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
iter_offsetint, default=None
The number of iterations on data batches that have been performed before this call to partial_fit. This is optional: if no number is passed, the memory of the object is used. Returns
selfobject
Returns the instance itself. | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning.partial_fit |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning.set_params |
transform(X) [source]
Encode the data as a sparse combination of the dictionary atoms. Coding method is determined by the object parameter transform_algorithm. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data. | sklearn.modules.generated.sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning.transform |
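A hedged sketch of sparse coding with transform, assuming random dense training data: with the default transform_algorithm='omp' and transform_n_nonzero_coefs=None, each sample is coded with at most int(n_features / 10) nonzero coefficients, so most entries of the returned code are exactly zero.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(50, 20)  # 50 samples, 20 features

# Default transform_algorithm='omp'; here int(20 / 10) = 2 nonzero
# coefficients are targeted per sample.
dl = MiniBatchDictionaryLearning(n_components=10, random_state=0).fit(X)
code = dl.transform(X)

print(code.shape)                 # (50, 10)
print(float(np.mean(code == 0)))  # large fraction of exact zeros
```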
class sklearn.decomposition.MiniBatchSparsePCA(n_components=None, *, alpha=1, ridge_alpha=0.01, n_iter=100, callback=None, batch_size=3, verbose=False, shuffle=True, n_jobs=None, method='lars', random_state=None) [source]
Mini-batch Sparse Principal Components Analysis Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Read more in the User Guide. Parameters
n_componentsint, default=None
Number of sparse atoms to extract.
alphaint, default=1
Sparsity controlling parameter. Higher values lead to sparser components.
ridge_alphafloat, default=0.01
Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.
n_iterint, default=100
Number of iterations to perform for each mini-batch.
callbackcallable, default=None
Callable that gets invoked every five iterations.
batch_sizeint, default=3
The number of features to take in each mini-batch.
verboseint or bool, default=False
Controls the verbosity; the higher, the more messages. Defaults to 0.
shufflebool, default=True
Whether to shuffle the data before splitting it into batches.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
method{‘lars’, ‘cd’}, default=’lars’
lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path) cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
random_stateint, RandomState instance or None, default=None
Used for random shuffling when shuffle is set to True, during online dictionary learning. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
Sparse components extracted from the data.
n_components_int
Estimated number of components. New in version 0.23.
n_iter_int
Number of iterations run.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0). See also
PCA
SparsePCA
DictionaryLearning
Examples >>> import numpy as np
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.decomposition import MiniBatchSparsePCA
>>> X, _ = make_friedman1(n_samples=200, n_features=30, random_state=0)
>>> transformer = MiniBatchSparsePCA(n_components=5, batch_size=50,
... random_state=0)
>>> transformer.fit(X)
MiniBatchSparsePCA(...)
>>> X_transformed = transformer.transform(X)
>>> X_transformed.shape
(200, 5)
>>> # most values in the components_ are zero (sparsity)
>>> np.mean(transformer.components_ == 0)
0.94
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Least Squares projection of the data onto the sparse components.
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Least Squares projection of the data onto the sparse components. To avoid instability issues in case the system is under-determined, regularization can be applied (Ridge regression) via the ridge_alpha parameter. Note that Sparse PCA components orthogonality is not enforced as in PCA hence one cannot use a simple linear projection. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data. | sklearn.modules.generated.sklearn.decomposition.minibatchsparsepca#sklearn.decomposition.MiniBatchSparsePCA |
sklearn.decomposition.MiniBatchSparsePCA
class sklearn.decomposition.MiniBatchSparsePCA(n_components=None, *, alpha=1, ridge_alpha=0.01, n_iter=100, callback=None, batch_size=3, verbose=False, shuffle=True, n_jobs=None, method='lars', random_state=None) [source]
Mini-batch Sparse Principal Components Analysis. Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Read more in the User Guide. Parameters
n_componentsint, default=None
Number of sparse atoms to extract.
alphaint, default=1
Sparsity controlling parameter. Higher values lead to sparser components.
ridge_alphafloat, default=0.01
Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.
n_iterint, default=100
Number of iterations to perform for each mini-batch.
callbackcallable, default=None
Callable that gets invoked every five iterations.
batch_sizeint, default=3
The number of features to take in each mini-batch.
verboseint or bool, default=False
Controls the verbosity; the higher, the more messages.
shufflebool, default=True
Whether to shuffle the data before splitting it into batches.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
method{‘lars’, ‘cd’}, default=’lars’
lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path). cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
random_stateint, RandomState instance or None, default=None
Used for random shuffling when shuffle is set to True, during online dictionary learning. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
Sparse components extracted from the data.
n_components_int
Estimated number of components. New in version 0.23.
n_iter_int
Number of iterations run.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0). See also
PCA
SparsePCA
DictionaryLearning
Examples >>> import numpy as np
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.decomposition import MiniBatchSparsePCA
>>> X, _ = make_friedman1(n_samples=200, n_features=30, random_state=0)
>>> transformer = MiniBatchSparsePCA(n_components=5, batch_size=50,
... random_state=0)
>>> transformer.fit(X)
MiniBatchSparsePCA(...)
>>> X_transformed = transformer.transform(X)
>>> X_transformed.shape
(200, 5)
>>> # most values in the components_ are zero (sparsity)
>>> np.mean(transformer.components_ == 0)
0.94
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Least Squares projection of the data onto the sparse components.
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Least Squares projection of the data onto the sparse components. To avoid instability issues in case the system is under-determined, regularization can be applied (Ridge regression) via the ridge_alpha parameter. Note that Sparse PCA components orthogonality is not enforced as in PCA hence one cannot use a simple linear projection. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data.
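As a concrete illustration of the ridge shrinkage described above, a minimal sketch (the ridge_alpha value is arbitrary, chosen for illustration) that fits the estimator and projects the training data; a larger ridge_alpha regularizes the least-squares projection performed by transform:

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.decomposition import MiniBatchSparsePCA

X, _ = make_friedman1(n_samples=200, n_features=30, random_state=0)

# ridge_alpha adds Ridge (L2) shrinkage to the least-squares
# projection used by transform(), improving conditioning when the
# sparse components are nearly collinear (under-determined system).
est = MiniBatchSparsePCA(n_components=5, ridge_alpha=0.1,
                         batch_size=50, random_state=0)
X_new = est.fit(X).transform(X)
```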
Examples using sklearn.decomposition.MiniBatchSparsePCA
Faces dataset decompositions | sklearn.modules.generated.sklearn.decomposition.minibatchsparsepca |
class sklearn.decomposition.NMF(n_components=None, *, init='warn', solver='cd', beta_loss='frobenius', tol=0.0001, max_iter=200, random_state=None, alpha=0.0, l1_ratio=0.0, verbose=0, shuffle=False, regularization='both') [source]
Non-Negative Matrix Factorization (NMF). Find two non-negative matrices (W, H) whose product approximates the non-negative matrix X. This factorization can be used, for example, for dimensionality reduction, source separation or topic extraction. The objective function is: \[ \begin{align}\begin{aligned}0.5 * ||X - WH||_{Fro}^2 + alpha * l1_{ratio} * ||vec(W)||_1\\+ alpha * l1_{ratio} * ||vec(H)||_1\\+ 0.5 * alpha * (1 - l1_{ratio}) * ||W||_{Fro}^2\\+ 0.5 * alpha * (1 - l1_{ratio}) * ||H||_{Fro}^2\end{aligned}\end{align} \] Where: \(||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2\) (Frobenius norm) \(||vec(A)||_1 = \sum_{i,j} abs(A_{ij})\) (Elementwise L1 norm) For the multiplicative-update (‘mu’) solver, the Frobenius norm (\(0.5 * ||X - WH||_{Fro}^2\)) can be changed into another beta-divergence loss by changing the beta_loss parameter. The objective function is minimized with an alternating minimization of W and H. Read more in the User Guide. Parameters
n_componentsint, default=None
Number of components. If n_components is not set, all features are kept.
init{‘random’, ‘nndsvd’, ‘nndsvda’, ‘nndsvdar’, ‘custom’}, default=None
Method used to initialize the procedure. Default: None. Valid options:
None: ‘nndsvd’ if n_components <= min(n_samples, n_features), otherwise random.
'random': non-negative random matrices, scaled with: sqrt(X.mean() / n_components)
'nndsvd': Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness)
'nndsvda': NNDSVD with zeros filled with the average of X (better when sparsity is not desired)
'nndsvdar': NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired)
'custom': use custom matrices W and H
solver{‘cd’, ‘mu’}, default=’cd’
Numerical solver to use: ‘cd’ is a Coordinate Descent solver. ‘mu’ is a Multiplicative Update solver. New in version 0.17: Coordinate Descent solver. New in version 0.19: Multiplicative Update solver.
beta_lossfloat or {‘frobenius’, ‘kullback-leibler’, ‘itakura-saito’}, default=’frobenius’
Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from ‘frobenius’ (or 2) and ‘kullback-leibler’ (or 1) lead to significantly slower fits. Note that for beta_loss <= 0 (or ‘itakura-saito’), the input matrix X cannot contain zeros. Used only in ‘mu’ solver. New in version 0.19.
tolfloat, default=1e-4
Tolerance of the stopping condition.
max_iterint, default=200
Maximum number of iterations before timing out.
random_stateint, RandomState instance or None, default=None
Used for initialisation (when init == ‘nndsvdar’ or ‘random’), and in Coordinate Descent. Pass an int for reproducible results across multiple function calls. See Glossary.
alphafloat, default=0.
Constant that multiplies the regularization terms. Set it to zero to have no regularization. New in version 0.17: alpha used in the Coordinate Descent solver.
l1_ratiofloat, default=0.
The regularization mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an elementwise L2 penalty (aka Frobenius Norm). For l1_ratio = 1 it is an elementwise L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. New in version 0.17: Regularization parameter l1_ratio used in the Coordinate Descent solver.
verboseint, default=0
The verbosity level.
shufflebool, default=False
If True, randomize the order of coordinates in the CD solver. New in version 0.17: shuffle parameter used in the Coordinate Descent solver.
regularization{‘both’, ‘components’, ‘transformation’, None}, default=’both’
Select whether the regularization affects the components (H), the transformation (W), both or none of them. New in version 0.24. Attributes
components_ndarray of shape (n_components, n_features)
Factorization matrix, sometimes called ‘dictionary’.
n_components_int
The number of components. It is the same as the n_components parameter if it was given. Otherwise, it will be the same as the number of features.
reconstruction_err_float
Frobenius norm of the matrix difference, or beta-divergence, between the training data X and the reconstructed data WH from the fitted model.
n_iter_int
Actual number of iterations. References Cichocki, Andrzej, and P. H. A. N. Anh-Huy. “Fast local algorithms for large scale nonnegative matrix and tensor factorizations.” IEICE transactions on fundamentals of electronics, communications and computer sciences 92.3: 708-721, 2009. Fevotte, C., & Idier, J. (2011). Algorithms for nonnegative matrix factorization with the beta-divergence. Neural Computation, 23(9). Examples >>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import NMF
>>> model = NMF(n_components=2, init='random', random_state=0)
>>> W = model.fit_transform(X)
>>> H = model.components_
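Building on the example above, a small sketch (the component counts are chosen arbitrarily) showing how reconstruction_err_, the Frobenius norm of the difference X - WH described under Attributes, shrinks as more components are allowed:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.randn(50, 10))

# With the deterministic 'nndsvd' initialization, a larger
# n_components gives the factorization more freedom, so the
# Frobenius reconstruction error ||X - WH||_Fro drops.
err = {k: NMF(n_components=k, init='nndsvd',
              max_iter=500).fit(X).reconstruction_err_
       for k in (1, 2, 4)}
```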
Methods
fit(X[, y]) Learn an NMF model for the data X.
fit_transform(X[, y, W, H]) Learn an NMF model for the data X and return the transformed data.
get_params([deep]) Get parameters for this estimator.
inverse_transform(W) Transform data back to its original space.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform the data X according to the fitted NMF model.
fit(X, y=None, **params) [source]
Learn an NMF model for the data X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed.
yIgnored
Returns
self
fit_transform(X, y=None, W=None, H=None) [source]
Learn an NMF model for the data X and return the transformed data. This is more efficient than calling fit followed by transform. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed.
yIgnored
Warray-like of shape (n_samples, n_components)
If init=’custom’, it is used as initial guess for the solution.
Harray-like of shape (n_components, n_features)
If init=’custom’, it is used as initial guess for the solution. Returns
Wndarray of shape (n_samples, n_components)
Transformed data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(W) [source]
Transform data back to its original space. Parameters
W{ndarray, sparse matrix} of shape (n_samples, n_components)
Transformed data matrix. Returns
X{ndarray, sparse matrix} of shape (n_samples, n_features)
Data matrix of original shape. New in version 0.18.
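A minimal sketch of the round trip described above, reusing the data from the class Examples; inverse_transform(W) returns the model's approximation of X, i.e. the product of W with components_:

```python
import numpy as np
from sklearn.decomposition import NMF

X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
model = NMF(n_components=2, init='random', random_state=0, max_iter=1000)
W = model.fit_transform(X)

# Map the transformed data back: X_hat approximates the original X
# and equals the matrix product W @ components_.
X_hat = model.inverse_transform(W)
```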
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Transform the data X according to the fitted NMF model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be transformed by the model. Returns
Wndarray of shape (n_samples, n_components)
Transformed data. | sklearn.modules.generated.sklearn.decomposition.nmf#sklearn.decomposition.NMF |
Examples using sklearn.decomposition.NMF
Beta-divergence loss functions
Faces dataset decompositions
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
Selecting dimensionality reduction with Pipeline and GridSearchCV | sklearn.modules.generated.sklearn.decomposition.nmf |
sklearn.decomposition.non_negative_factorization(X, W=None, H=None, n_components=None, *, init='warn', update_H=True, solver='cd', beta_loss='frobenius', tol=0.0001, max_iter=200, alpha=0.0, l1_ratio=0.0, regularization=None, random_state=None, verbose=0, shuffle=False) [source]
Compute Non-negative Matrix Factorization (NMF). Find two non-negative matrices (W, H) whose product approximates the non-negative matrix X. This factorization can be used, for example, for dimensionality reduction, source separation or topic extraction. The objective function is: \[ \begin{align}\begin{aligned}0.5 * ||X - WH||_{Fro}^2 + alpha * l1_{ratio} * ||vec(W)||_1\\+ alpha * l1_{ratio} * ||vec(H)||_1\\+ 0.5 * alpha * (1 - l1_{ratio}) * ||W||_{Fro}^2\\+ 0.5 * alpha * (1 - l1_{ratio}) * ||H||_{Fro}^2\end{aligned}\end{align} \] Where: \(||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2\) (Frobenius norm) \(||vec(A)||_1 = \sum_{i,j} abs(A_{ij})\) (Elementwise L1 norm) For the multiplicative-update (‘mu’) solver, the Frobenius norm \((0.5 * ||X - WH||_{Fro}^2)\) can be changed into another beta-divergence loss by changing the beta_loss parameter. The objective function is minimized with an alternating minimization of W and H. If H is given and update_H=False, it solves for W only. Parameters
Xarray-like of shape (n_samples, n_features)
Constant matrix.
Warray-like of shape (n_samples, n_components), default=None
If init=’custom’, it is used as initial guess for the solution.
Harray-like of shape (n_components, n_features), default=None
If init=’custom’, it is used as initial guess for the solution. If update_H=False, it is used as a constant, to solve for W only.
n_componentsint, default=None
Number of components, if n_components is not set all features are kept.
init{‘random’, ‘nndsvd’, ‘nndsvda’, ‘nndsvdar’, ‘custom’}, default=None
Method used to initialize the procedure. Valid options:
None: ‘nndsvd’ if n_components < n_features, otherwise ‘random’.
‘random’: non-negative random matrices, scaled with: sqrt(X.mean() / n_components)
‘nndsvd’: Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness)
‘nndsvda’: NNDSVD with zeros filled with the average of X (better when sparsity is not desired)
‘nndsvdar’: NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired)
‘custom’: use custom matrices W and H if update_H=True. If update_H=False, then only custom matrix H is used. Changed in version 0.23: The default value of init changed from ‘random’ to None in 0.23.
update_Hbool, default=True
Set to True, both W and H will be estimated from initial guesses. Set to False, only W will be estimated.
solver{‘cd’, ‘mu’}, default=’cd’
Numerical solver to use:
‘cd’ is a Coordinate Descent solver that uses Fast Hierarchical Alternating Least Squares (Fast HALS).
‘mu’ is a Multiplicative Update solver. New in version 0.17: Coordinate Descent solver. New in version 0.19: Multiplicative Update solver.
beta_lossfloat or {‘frobenius’, ‘kullback-leibler’, ‘itakura-saito’}, default=’frobenius’
Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from ‘frobenius’ (or 2) and ‘kullback-leibler’ (or 1) lead to significantly slower fits. Note that for beta_loss <= 0 (or ‘itakura-saito’), the input matrix X cannot contain zeros. Used only in ‘mu’ solver. New in version 0.19.
tolfloat, default=1e-4
Tolerance of the stopping condition.
max_iterint, default=200
Maximum number of iterations before timing out.
alphafloat, default=0.
Constant that multiplies the regularization terms.
l1_ratiofloat, default=0.
The regularization mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an elementwise L2 penalty (aka Frobenius Norm). For l1_ratio = 1 it is an elementwise L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
regularization{‘both’, ‘components’, ‘transformation’}, default=None
Select whether the regularization affects the components (H), the transformation (W), both or none of them.
random_stateint, RandomState instance or None, default=None
Used for NMF initialisation (when init == ‘nndsvdar’ or ‘random’), and in Coordinate Descent. Pass an int for reproducible results across multiple function calls. See Glossary.
verboseint, default=0
The verbosity level.
shufflebool, default=False
If True, randomize the order of coordinates in the CD solver. Returns
Wndarray of shape (n_samples, n_components)
Solution to the non-negative least squares problem.
Hndarray of shape (n_components, n_features)
Solution to the non-negative least squares problem.
n_iterint
Actual number of iterations. References Cichocki, Andrzej, and P. H. A. N. Anh-Huy. “Fast local algorithms for large scale nonnegative matrix and tensor factorizations.” IEICE transactions on fundamentals of electronics, communications and computer sciences 92.3: 708-721, 2009. Fevotte, C., & Idier, J. (2011). Algorithms for nonnegative matrix factorization with the beta-divergence. Neural Computation, 23(9). Examples >>> import numpy as np
>>> X = np.array([[1,1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import non_negative_factorization
>>> W, H, n_iter = non_negative_factorization(X, n_components=2,
... init='random', random_state=0) | sklearn.modules.generated.sklearn.decomposition.non_negative_factorization#sklearn.decomposition.non_negative_factorization |
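As a quick sanity check on the example above (a sketch, not part of the documented API), the returned factors are non-negative and their product approximates X:

```python
import numpy as np
from sklearn.decomposition import non_negative_factorization

X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
W, H, n_iter = non_negative_factorization(
    X, n_components=2, init='random', random_state=0)

# Both factors are non-negative by construction.
assert (W >= 0).all() and (H >= 0).all()

# W @ H reconstructs X up to a small Frobenius-norm error.
reconstruction_error = np.linalg.norm(X - W @ H)
print(round(reconstruction_error, 3))
```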
class sklearn.decomposition.PCA(n_components=None, *, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None) [source]
Principal component analysis (PCA). Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD. It uses the LAPACK implementation of the full SVD or a randomized truncated SVD by the method of Halko et al. 2009, depending on the shape of the input data and the number of components to extract. It can also use the scipy.sparse.linalg ARPACK implementation of the truncated SVD. Notice that this class does not support sparse input. See TruncatedSVD for an alternative with sparse data. Read more in the User Guide. Parameters
n_componentsint, float or ‘mle’, default=None
Number of components to keep. If n_components is not set, all components are kept: n_components == min(n_samples, n_features)
If n_components == 'mle' and svd_solver == 'full', Minka’s MLE is used to guess the dimension. Use of n_components == 'mle' will interpret svd_solver == 'auto' as svd_solver == 'full'. If 0 < n_components < 1 and svd_solver == 'full', select the number of components such that the amount of variance that needs to be explained is greater than the percentage specified by n_components. If svd_solver == 'arpack', the number of components must be strictly less than the minimum of n_features and n_samples. Hence, the None case results in: n_components == min(n_samples, n_features) - 1
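The modes described above can be exercised directly; a small sketch on synthetic data (the variable names here are for illustration only):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 5)

# Default (None): keep all min(n_samples, n_features) components.
print(PCA().fit(X).n_components_)  # 5

# Float in (0, 1) with svd_solver='full': keep the smallest number of
# components whose cumulative explained variance exceeds the fraction.
pca = PCA(n_components=0.8, svd_solver='full').fit(X)
print(1 <= pca.n_components_ <= 5)  # True
```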
copybool, default=True
If False, data passed to fit are overwritten and running fit(X).transform(X) will not yield the expected results, use fit_transform(X) instead.
whitenbool, default=False
When True (False by default), the components_ vectors are multiplied by the square root of n_samples and then divided by the singular values to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making their data respect some hard-wired assumptions.
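A quick empirical check of the whitening behaviour described above (a sketch with synthetic correlated data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(200, 3) @ rng.randn(3, 3)  # correlated features

Xw = PCA(whiten=True).fit_transform(X)

# The whitened scores are uncorrelated with (approximately) unit variance,
# so their sample covariance is close to the identity matrix.
print(np.allclose(np.cov(Xw.T), np.eye(3), atol=0.02))  # True
```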
svd_solver{‘auto’, ‘full’, ‘arpack’, ‘randomized’}, default=’auto’
If auto: the solver is selected by a default policy based on X.shape and n_components: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient ‘randomized’ method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
If full: run exact full SVD calling the standard LAPACK solver via scipy.linalg.svd and select the components by postprocessing.
If arpack: run SVD truncated to n_components calling ARPACK solver via scipy.sparse.linalg.svds. It requires strictly 0 < n_components < min(X.shape).
If randomized: run randomized SVD by the method of Halko et al. New in version 0.18.0.
tolfloat, default=0.0
Tolerance for singular values computed by svd_solver == ‘arpack’. Must be of range [0.0, infinity). New in version 0.18.0.
iterated_powerint or ‘auto’, default=’auto’
Number of iterations for the power method computed by svd_solver == ‘randomized’. Must be of range [0, infinity). New in version 0.18.0.
random_stateint, RandomState instance or None, default=None
Used when the ‘arpack’ or ‘randomized’ solvers are used. Pass an int for reproducible results across multiple function calls. See Glossary. New in version 0.18.0. Attributes
components_ndarray of shape (n_components, n_features)
Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by explained_variance_.
explained_variance_ndarray of shape (n_components,)
The amount of variance explained by each of the selected components. Equal to n_components largest eigenvalues of the covariance matrix of X. New in version 0.18.
explained_variance_ratio_ndarray of shape (n_components,)
Percentage of variance explained by each of the selected components. If n_components is not set then all components are stored and the sum of the ratios is equal to 1.0.
singular_values_ndarray of shape (n_components,)
The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space. New in version 0.19.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0).
n_components_int
The estimated number of components. When n_components is set to ‘mle’ or a number between 0 and 1 (with svd_solver == ‘full’) this number is estimated from input data. Otherwise it equals the parameter n_components, or the lesser value of n_features and n_samples if n_components is None.
n_features_int
Number of features in the training data.
n_samples_int
Number of samples in the training data.
noise_variance_float
The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf. It is required to compute the estimated data covariance and score samples. Equal to the average of (min(n_features, n_samples) - n_components) smallest eigenvalues of the covariance matrix of X. See also
KernelPCA
Kernel Principal Component Analysis.
SparsePCA
Sparse Principal Component Analysis.
TruncatedSVD
Dimensionality reduction using truncated SVD.
IncrementalPCA
Incremental Principal Component Analysis. References For n_components == ‘mle’, this class uses the method of Minka, T. P. “Automatic choice of dimensionality for PCA”. In NIPS, pp. 598-604 Implements the probabilistic PCA model from: Tipping, M. E., and Bishop, C. M. (1999). “Probabilistic principal component analysis”. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3), 611-622. via the score and score_samples methods. See http://www.miketipping.com/papers/met-mppca.pdf For svd_solver == ‘arpack’, refer to scipy.sparse.linalg.svds. For svd_solver == ‘randomized’, see: Halko, N., Martinsson, P. G., and Tropp, J. A. (2011). “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions”. SIAM review, 53(2), 217-288. and also Martinsson, P. G., Rokhlin, V., and Tygert, M. (2011). “A randomized algorithm for the decomposition of matrices”. Applied and Computational Harmonic Analysis, 30(1), 47-68. Examples >>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(n_components=2)
>>> print(pca.explained_variance_ratio_)
[0.9924... 0.0075...]
>>> print(pca.singular_values_)
[6.30061... 0.54980...]
>>> pca = PCA(n_components=2, svd_solver='full')
>>> pca.fit(X)
PCA(n_components=2, svd_solver='full')
>>> print(pca.explained_variance_ratio_)
[0.9924... 0.00755...]
>>> print(pca.singular_values_)
[6.30061... 0.54980...]
>>> pca = PCA(n_components=1, svd_solver='arpack')
>>> pca.fit(X)
PCA(n_components=1, svd_solver='arpack')
>>> print(pca.explained_variance_ratio_)
[0.99244...]
>>> print(pca.singular_values_)
[6.30061...]
Methods
fit(X[, y]) Fit the model with X.
fit_transform(X[, y]) Fit the model with X and apply the dimensionality reduction on X.
get_covariance() Compute data covariance with the generative model.
get_params([deep]) Get parameters for this estimator.
get_precision() Compute data precision matrix with the generative model.
inverse_transform(X) Transform data back to its original space.
score(X[, y]) Return the average log-likelihood of all samples.
score_samples(X) Return the log-likelihood of each sample.
set_params(**params) Set the parameters of this estimator.
transform(X) Apply dimensionality reduction to X.
fit(X, y=None) [source]
Fit the model with X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None) [source]
Fit the model with X and apply the dimensionality reduction on X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
X_newndarray of shape (n_samples, n_components)
Transformed values. Notes This method returns a Fortran-ordered array. To convert it to a C-ordered array, use ‘np.ascontiguousarray’.
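The memory-layout note above, in code (a small sketch using the data from the class example):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
X_new = PCA(n_components=2).fit_transform(X)

# fit_transform returns a Fortran-ordered array; np.ascontiguousarray
# produces a C-ordered copy when one is needed downstream.
X_c = np.ascontiguousarray(X_new)
print(X_c.flags['C_CONTIGUOUS'])  # True
print(np.allclose(X_c, X_new))    # True
```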
get_covariance() [source]
Compute data covariance with the generative model. cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances. Returns
covarray, shape=(n_features, n_features)
Estimated covariance of data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Compute data precision matrix with the generative model. Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency. Returns
precisionarray, shape=(n_features, n_features)
Estimated precision of data.
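Since get_precision() is the inverse of get_covariance() (computed via the matrix inversion lemma), their product should be the identity; a cheap consistency check on synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(60, 4)
pca = PCA(n_components=2).fit(X)

prod = pca.get_covariance() @ pca.get_precision()
print(np.allclose(prod, np.eye(4)))  # True
```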
inverse_transform(X) [source]
Transform data back to its original space. In other words, return an input X_original whose transform would be X. Parameters
Xarray-like, shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of components. Returns
X_original array-like, shape (n_samples, n_features)
Notes If whitening is enabled, inverse_transform will compute the exact inverse operation, which includes reversing whitening.
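When all components are kept, transform followed by inverse_transform is (numerically) the identity, even with whitening enabled, matching the note above. A sketch using the data from the class example:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
pca = PCA(n_components=2, whiten=True).fit(X)

X_back = pca.inverse_transform(pca.transform(X))
print(np.allclose(X, X_back))  # True
```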
score(X, y=None) [source]
Return the average log-likelihood of all samples. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf Parameters
Xarray-like of shape (n_samples, n_features)
The data.
yIgnored
Returns
llfloat
Average log-likelihood of the samples under the current model.
score_samples(X) [source]
Return the log-likelihood of each sample. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf Parameters
Xarray-like of shape (n_samples, n_features)
The data. Returns
llndarray of shape (n_samples,)
Log-likelihood of each sample under the current model.
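score and score_samples are related in the obvious way: the former is the mean of the latter. A quick check on synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(50, 4)
pca = PCA(n_components=2).fit(X)

print(np.isclose(pca.score(X), pca.score_samples(X).mean()))  # True
```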
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Apply dimensionality reduction to X. X is projected on the first principal components previously extracted from a training set. Parameters
Xarray-like, shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newarray-like, shape (n_samples, n_components)
Examples >>> import numpy as np
>>> from sklearn.decomposition import IncrementalPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> ipca = IncrementalPCA(n_components=2, batch_size=3)
>>> ipca.fit(X)
IncrementalPCA(batch_size=3, n_components=2)
>>> ipca.transform(X) | sklearn.modules.generated.sklearn.decomposition.pca#sklearn.decomposition.PCA |
Examples using sklearn.decomposition.PCA
A demo of K-Means clustering on the handwritten digits data
Principal Component Regression vs Partial Least Squares Regression
The Iris Dataset
PCA example with Iris Data-set
Incremental PCA
Comparison of LDA and PCA 2D projection of Iris dataset
Factor Analysis (with rotation) to visualize patterns
Blind source separation using FastICA
Principal components analysis (PCA)
FastICA on 2D point clouds
Kernel PCA
Model selection with Probabilistic PCA and Factor Analysis (FA)
Faces dataset decompositions
Faces recognition example using eigenfaces and SVMs
Multi-dimensional scaling
Multilabel classification
Explicit feature map approximation for RBF kernels
Balance model complexity and cross-validated score
Kernel Density Estimation
Dimensionality Reduction with Neighborhood Components Analysis
Concatenating multiple feature extraction methods
Pipelining: chaining a PCA and a logistic regression
Selecting dimensionality reduction with Pipeline and GridSearchCV
Importance of Feature Scaling | sklearn.modules.generated.sklearn.decomposition.pca |
class sklearn.decomposition.SparseCoder(dictionary, *, transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, split_sign=False, n_jobs=None, positive_code=False, transform_max_iter=1000) [source]
Sparse coding. Finds a sparse representation of data against a fixed, precomputed dictionary. Each row of the result is the solution to a sparse coding problem. The goal is to find a sparse array code such that: X ~= code * dictionary
Read more in the User Guide. Parameters
dictionaryndarray of shape (n_components, n_features)
The dictionary atoms used for sparse coding. Rows are assumed to be normalized to unit norm.
transform_algorithm{‘lasso_lars’, ‘lasso_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’omp’
Algorithm used to transform the data:
'lars': uses the least angle regression method (linear_model.lars_path);
'lasso_lars': uses Lars to compute the Lasso solution;
'lasso_cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). 'lasso_lars' will be faster if the estimated components are sparse;
'omp': uses orthogonal matching pursuit to estimate the sparse solution;
'threshold': squashes to zero all coefficients less than alpha from the projection dictionary * X'.
transform_n_nonzero_coefsint, default=None
Number of nonzero coefficients to target in each column of the solution. This is only used by algorithm='lars' and algorithm='omp' and is overridden by alpha in the omp case. If None, then transform_n_nonzero_coefs=int(n_features / 10).
transform_alphafloat, default=None
If algorithm='lasso_lars' or algorithm='lasso_cd', alpha is the penalty applied to the L1 norm. If algorithm='threshold', alpha is the absolute value of the threshold below which coefficients will be squashed to zero. If algorithm='omp', alpha is the tolerance parameter: the value of the reconstruction error targeted. In this case, it overrides n_nonzero_coefs. If None, default to 1.
split_signbool, default=False
Whether to split the sparse feature vector into the concatenation of its negative part and its positive part. This can improve the performance of downstream classifiers.
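For instance (a small sketch with made-up dictionary and data), split_sign doubles the output width from n_components to 2 * n_components by concatenating the positive and negative parts:

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.RandomState(0)
D = rng.randn(4, 6)
D /= np.linalg.norm(D, axis=1, keepdims=True)
X = rng.randn(2, 6)

plain = SparseCoder(dictionary=D, transform_n_nonzero_coefs=2).transform(X)
split = SparseCoder(dictionary=D, transform_n_nonzero_coefs=2,
                    split_sign=True).transform(X)
# The positive and negative parts are concatenated along the feature axis.
print(plain.shape, split.shape)  # (2, 4) (2, 8)
```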
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
transform_max_iterint, default=1000
Maximum number of iterations to perform if algorithm='lasso_cd' or algorithm='lasso_lars'. New in version 0.22. Attributes
components_ndarray of shape (n_components, n_features)
The unchanged dictionary atoms. Deprecated since version 0.24: This attribute is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). Use dictionary instead. See also
DictionaryLearning
MiniBatchDictionaryLearning
SparsePCA
MiniBatchSparsePCA
sparse_encode
Examples >>> import numpy as np
>>> from sklearn.decomposition import SparseCoder
>>> X = np.array([[-1, -1, -1], [0, 0, 3]])
>>> dictionary = np.array(
... [[0, 1, 0],
... [-1, -1, 2],
... [1, 1, 1],
... [0, 1, 1],
... [0, 2, 1]],
... dtype=np.float64
... )
>>> coder = SparseCoder(
... dictionary=dictionary, transform_algorithm='lasso_lars',
... transform_alpha=1e-10,
... )
>>> coder.transform(X)
array([[ 0., 0., -1., 0., 0.],
[ 0., 1., 1., 0., 0.]])
Methods
fit(X[, y]) Do nothing and return the estimator unchanged.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X[, y]) Encode the data as a sparse combination of the dictionary atoms.
fit(X, y=None) [source]
Do nothing and return the estimator unchanged. This method is just there to implement the usual API and hence work in pipelines. Parameters
XIgnored
yIgnored
Returns
selfobject
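Because fit is a no-op, a SparseCoder can be dropped into a pipeline like any other transformer. A minimal sketch (the dictionary and data here are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
D = rng.randn(5, 10)                    # 5 demo atoms over 10 features
D /= np.linalg.norm(D, axis=1, keepdims=True)
X = rng.randn(20, 10)

# fit does nothing for the coder, so fit_transform just standardizes
# the data and then encodes it against the fixed dictionary.
pipe = make_pipeline(StandardScaler(), SparseCoder(dictionary=D))
codes = pipe.fit_transform(X)
print(codes.shape)  # (20, 5)
```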
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, y=None) [source]
Encode the data as a sparse combination of the dictionary atoms. Coding method is determined by the object parameter transform_algorithm. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed; it must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data. | sklearn.modules.generated.sklearn.decomposition.sparsecoder#sklearn.decomposition.SparseCoder |
Examples using sklearn.decomposition.SparseCoder
Sparse coding with a precomputed dictionary | sklearn.modules.generated.sklearn.decomposition.sparsecoder |
class sklearn.decomposition.SparsePCA(n_components=None, *, alpha=1, ridge_alpha=0.01, max_iter=1000, tol=1e-08, method='lars', n_jobs=None, U_init=None, V_init=None, verbose=False, random_state=None) [source]
Sparse Principal Components Analysis (SparsePCA). Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Read more in the User Guide. Parameters
n_componentsint, default=None
Number of sparse atoms to extract.
alphafloat, default=1
Sparsity controlling parameter. Higher values lead to sparser components.
ridge_alphafloat, default=0.01
Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.
max_iterint, default=1000
Maximum number of iterations to perform.
tolfloat, default=1e-8
Tolerance for the stopping condition.
method{‘lars’, ‘cd’}, default=’lars’
'lars': uses the least angle regression method to solve the lasso problem (linear_model.lars_path);
'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
U_initndarray of shape (n_samples, n_components), default=None
Initial values for the loadings for warm restart scenarios.
V_initndarray of shape (n_components, n_features), default=None
Initial values for the components for warm restart scenarios.
verboseint or bool, default=False
Controls the verbosity; the higher, the more messages. Defaults to 0.
random_stateint, RandomState instance or None, default=None
Used during dictionary learning. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
components_ndarray of shape (n_components, n_features)
Sparse components extracted from the data.
error_ndarray
Vector of errors at each iteration.
n_components_int
Estimated number of components. New in version 0.23.
n_iter_int
Number of iterations run.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0). See also
PCA
MiniBatchSparsePCA
DictionaryLearning
Examples >>> import numpy as np
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.decomposition import SparsePCA
>>> X, _ = make_friedman1(n_samples=200, n_features=30, random_state=0)
>>> transformer = SparsePCA(n_components=5, random_state=0)
>>> transformer.fit(X)
SparsePCA(...)
>>> X_transformed = transformer.transform(X)
>>> X_transformed.shape
(200, 5)
>>> # most values in the components_ are zero (sparsity)
>>> np.mean(transformer.components_ == 0)
0.9666...
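Continuing in the spirit of the example above, the learned sparse components can also be used for a rough reconstruction of the data (a sketch; the reconstruction is only approximate since the components are sparse and not orthogonal):

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.decomposition import SparsePCA

X, _ = make_friedman1(n_samples=100, n_features=20, random_state=0)
spca = SparsePCA(n_components=4, random_state=0)
codes = spca.fit_transform(X)

# Approximate reconstruction: project the codes back through the
# sparse components and re-add the per-feature mean.
X_hat = codes @ spca.components_ + spca.mean_
print(X_hat.shape)  # (100, 20)
```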
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Least Squares projection of the data onto the sparse components.
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Least Squares projection of the data onto the sparse components. To avoid instability issues in case the system is under-determined, regularization can be applied (Ridge regression) via the ridge_alpha parameter. Note that Sparse PCA components orthogonality is not enforced as in PCA hence one cannot use a simple linear projection. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed; it must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data. | sklearn.modules.generated.sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA |
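To see ridge_alpha at work in transform, here is a hedged sketch (the exact amount of shrinkage depends on the data): heavier ridge regularization pulls the projected codes toward zero.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.decomposition import SparsePCA

X, _ = make_friedman1(n_samples=50, n_features=15, random_state=0)
spca = SparsePCA(n_components=3, random_state=0).fit(X)

t_default = spca.transform(X)          # default ridge_alpha=0.01
spca.set_params(ridge_alpha=10.0)      # much heavier shrinkage
t_shrunk = spca.transform(X)

# The ridge penalty shrinks the projection toward zero, so the
# regularized codes have a smaller overall norm.
print(np.linalg.norm(t_shrunk) <= np.linalg.norm(t_default))
```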