get_params(deep=True) [source]
    Get parameters for this estimator.

    Parameters
        deep : bool, default=True
            If True, will return the parameters for this estimator and contained subobjects that are estimators.

    Returns
        params : dict
            Parameter names mapped to their values.
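For concreteness, a minimal sketch of get_params on a CountVectorizer: the returned dict holds every constructor parameter, including the defaults you did not set.

```python
from sklearn.feature_extraction.text import CountVectorizer

# get_params returns all constructor parameters as a dict
vec = CountVectorizer(ngram_range=(1, 2), lowercase=False)
params = vec.get_params()
print(params["ngram_range"])  # (1, 2)
print(params["lowercase"])    # False
print(params["analyzer"])     # 'word' (an untouched default)
```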
get_stop_words() [source]
    Build or fetch the effective stop words list.

    Returns
        stop_words : list or None
            A list of stop words.
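A small sketch of the two return cases: the built-in English list when stop_words='english', and None when no stop words are configured.

```python
from sklearn.feature_extraction.text import CountVectorizer

# With the built-in list, the effective collection contains common
# English function words.
vec = CountVectorizer(stop_words="english")
stop_words = vec.get_stop_words()
print("the" in stop_words)  # True

# With the default stop_words=None, nothing is removed.
print(CountVectorizer().get_stop_words())  # None
```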
inverse_transform(X) [source]
    Return terms per document with nonzero entries in X.

    Parameters
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Document-term matrix.

    Returns
        X_inv : list of arrays of shape (n_samples,)
            List of arrays of terms.
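A quick sketch of the round trip: each entry of the result holds the distinct terms with nonzero counts in the corresponding row of X (the counts themselves are not recovered).

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["apple banana", "banana cherry cherry"]
vec = CountVectorizer()
X = vec.fit_transform(corpus)

# One array of terms per document; repeated tokens appear once.
terms = vec.inverse_transform(X)
print(sorted(terms[0]))  # ['apple', 'banana']
print(sorted(terms[1]))  # ['banana', 'cherry']
```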
set_params(**params) [source]
    Set the parameters of this estimator.

    The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

    Parameters
        **params : dict
            Estimator parameters.

    Returns
        self : estimator instance
            Estimator instance.
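A short sketch of both forms: plain keyword names on a simple estimator, and the <component>__<parameter> syntax reaching into a Pipeline step.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline

# On a plain estimator, keywords are the constructor parameter names.
vec = CountVectorizer().set_params(ngram_range=(1, 3))
print(vec.ngram_range)  # (1, 3)

# On a nested object, <component>__<parameter> targets each step.
pipe = Pipeline([("count", CountVectorizer()), ("tfidf", TfidfTransformer())])
pipe.set_params(count__ngram_range=(1, 2), tfidf__use_idf=False)
print(pipe.named_steps["count"].ngram_range)  # (1, 2)
print(pipe.named_steps["tfidf"].use_idf)      # False
```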
transform(raw_documents) [source]
    Transform documents to document-term matrix.

    Extract token counts out of raw text documents using the vocabulary fitted with fit or the one provided to the constructor.

    Parameters
        raw_documents : iterable
            An iterable which yields either str, unicode or file objects.

    Returns
        X : sparse matrix of shape (n_samples, n_features)
            Document-term matrix.
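A minimal sketch of fitting a vocabulary on one corpus and then transforming new documents against it; words outside the fitted vocabulary are simply dropped.

```python
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer()
vec.fit(["the cat sat", "the dog ran"])  # vocabulary: cat, dog, ran, sat, the

# transform reuses the fitted vocabulary; 'home' is unseen and ignored.
X = vec.transform(["the cat ran home"])
print(X.shape)  # (1, 5)
print(X.sum())  # 3 -- only 'the', 'cat' and 'ran' are counted
```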
class sklearn.feature_extraction.text.HashingVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), analyzer='word', n_features=1048576, binary=False, norm='l2', alternate_sign=True, dtype=<class 'numpy.float64'>) [source]

    Convert a collection of text documents to a matrix of token occurrences.

    It turns a collection of text documents into a scipy.sparse matrix holding token occurrence counts (or binary occurrence information), possibly normalized as token frequencies if norm='l1' or projected on the Euclidean unit sphere if norm='l2'.

    This text vectorizer implementation uses the hashing trick to find the token string name to feature integer index mapping.

    This strategy has several advantages:
    - it is very low memory and scalable to large datasets, as there is no need to store a vocabulary dictionary in memory;
    - it is fast to pickle and un-pickle, as it holds no state besides the constructor parameters;
    - it can be used in a streaming (partial fit) or parallel pipeline, as there is no state computed during fit.

    There are also a couple of cons (vs using a CountVectorizer with an in-memory vocabulary):
    - there is no way to compute the inverse transform (from feature indices to string feature names), which can be a problem when trying to introspect which features are most important to a model;
    - there can be collisions: distinct tokens can be mapped to the same feature index. However, in practice this is rarely an issue if n_features is large enough (e.g. 2 ** 18 for text classification problems);
    - no IDF weighting, as this would render the transformer stateful.

    The hash function employed is the signed 32-bit version of MurmurHash3.

    Read more in the User Guide.

    Parameters
        input : {'filename', 'file', 'content'}, default='content'
            If 'filename', the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If 'file', the sequence items must have a 'read' method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of items that can be of type string or byte.
        encoding : str, default='utf-8'
            If bytes or files are given to analyze, this encoding is used to decode.
        decode_error : {'strict', 'ignore', 'replace'}, default='strict'
            Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default, it is 'strict', meaning that a UnicodeDecodeError will be raised. Other values are 'ignore' and 'replace'.
        strip_accents : {'ascii', 'unicode'}, default=None
            Remove accents and perform other character normalization during the preprocessing step. 'ascii' is a fast method that only works on characters that have a direct ASCII mapping. 'unicode' is a slightly slower method that works on any characters. None (default) does nothing. Both 'ascii' and 'unicode' use NFKD normalization from unicodedata.normalize.
        lowercase : bool, default=True
            Convert all characters to lowercase before tokenizing.
        preprocessor : callable, default=None
            Override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps. Only applies if analyzer is not callable.
        tokenizer : callable, default=None
            Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.
        stop_words : {'english'}, list, default=None
            If 'english', a built-in stop word list for English is used. There are several known issues with 'english' and you should consider an alternative (see Using stop words). If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'.
        token_pattern : str, default=r"(?u)\b\w\w+\b"
            Regular expression denoting what constitutes a "token", only used if analyzer == 'word'. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator). If there is a capturing group in token_pattern, then the captured group content, not the entire match, becomes the token. At most one capturing group is permitted.
        ngram_range : tuple (min_n, max_n), default=(1, 1)
            The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used. For example, an ngram_range of (1, 1) means only unigrams, (1, 2) means unigrams and bigrams, and (2, 2) means only bigrams. Only applies if analyzer is not callable.
        analyzer : {'word', 'char', 'char_wb'} or callable, default='word'
            Whether the feature should be made of word or character n-grams. Option 'char_wb' creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. If a callable is passed, it is used to extract the sequence of features out of the raw, unprocessed input. Changed in version 0.21: since v0.21, if input is 'filename' or 'file', the data is first read from the file and then passed to the given callable analyzer.
        n_features : int, default=(2 ** 20)
            The number of features (columns) in the output matrices. Small numbers of features are likely to cause hash collisions, but large numbers will cause larger coefficient dimensions in linear learners.
        binary : bool, default=False
            If True, all non-zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts.
        norm : {'l1', 'l2'}, default='l2'
            Norm used to normalize term vectors. None for no normalization.
        alternate_sign : bool, default=True
            When True, an alternating sign is added to the features so as to approximately conserve the inner product in the hashed space, even for small n_features. This approach is similar to sparse random projection. New in version 0.19.
        dtype : type, default=np.float64
            Type of the matrix returned by fit_transform() or transform().

    See also
        CountVectorizer, TfidfVectorizer

    Examples
    >>> from sklearn.feature_extraction.text import HashingVectorizer
    >>> corpus = [
    ...     'This is the first document.',
    ...     'This document is the second document.',
    ...     'And this is the third one.',
    ...     'Is this the first document?',
    ... ]
    >>> vectorizer = HashingVectorizer(n_features=2**4)
    >>> X = vectorizer.fit_transform(corpus)
    >>> print(X.shape)
    (4, 16)

    Methods
        build_analyzer()        Return a callable that handles preprocessing, tokenization and n-grams generation.
        build_preprocessor()    Return a function to preprocess the text before tokenization.
        build_tokenizer()       Return a function that splits a string into a sequence of tokens.
        decode(doc)             Decode the input into a string of unicode symbols.
        fit(X[, y])             Does nothing: this transformer is stateless.
        fit_transform(X[, y])   Transform a sequence of documents to a document-term matrix.
        get_params([deep])      Get parameters for this estimator.
        get_stop_words()        Build or fetch the effective stop words list.
        partial_fit(X[, y])     Does nothing: this transformer is stateless.
        set_params(**params)    Set the parameters of this estimator.
        transform(X)            Transform a sequence of documents to a document-term matrix.

    build_analyzer() [source]
        Return a callable that handles preprocessing, tokenization and n-grams generation.
        Returns
            analyzer : callable
                A function to handle preprocessing, tokenization and n-grams generation.

    build_preprocessor() [source]
        Return a function to preprocess the text before tokenization.
        Returns
            preprocessor : callable
                A function to preprocess the text before tokenization.

    build_tokenizer() [source]
        Return a function that splits a string into a sequence of tokens.
        Returns
            tokenizer : callable
                A function to split a string into a sequence of tokens.

    decode(doc) [source]
        Decode the input into a string of unicode symbols. The decoding strategy depends on the vectorizer parameters.
        Parameters
            doc : str
                The string to decode.
        Returns
            doc : str
                A string of unicode symbols.

    fit(X, y=None) [source]
        Does nothing: this transformer is stateless.
        Parameters
            X : ndarray of shape [n_samples, n_features]
                Training data.

    fit_transform(X, y=None) [source]
        Transform a sequence of documents to a document-term matrix.
        Parameters
            X : iterable over raw text documents, length = n_samples
                Samples. Each sample must be a text document (either bytes or unicode strings, file name or file object, depending on the constructor argument) which will be tokenized and hashed.
            y : any
                Ignored. This parameter exists only for compatibility with sklearn.pipeline.Pipeline.
        Returns
            X : sparse matrix of shape (n_samples, n_features)
                Document-term matrix.

    get_params(deep=True) [source]
        Get parameters for this estimator.
        Parameters
            deep : bool, default=True
                If True, will return the parameters for this estimator and contained subobjects that are estimators.
        Returns
            params : dict
                Parameter names mapped to their values.

    get_stop_words() [source]
        Build or fetch the effective stop words list.
        Returns
            stop_words : list or None
                A list of stop words.

    partial_fit(X, y=None) [source]
        Does nothing: this transformer is stateless. This method is just there to mark the fact that this transformer can work in a streaming setup.
        Parameters
            X : ndarray of shape [n_samples, n_features]
                Training data.

    set_params(**params) [source]
        Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
        Parameters
            **params : dict
                Estimator parameters.
        Returns
            self : estimator instance
                Estimator instance.

    transform(X) [source]
        Transform a sequence of documents to a document-term matrix.
        Parameters
            X : iterable over raw text documents, length = n_samples
                Samples. Each sample must be a text document (either bytes or unicode strings, file name or file object, depending on the constructor argument) which will be tokenized and hashed.
        Returns
            X : sparse matrix of shape (n_samples, n_features)
                Document-term matrix.
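Because fit and partial_fit compute no state, transform can be called directly on each incoming chunk of documents. A minimal sketch of the streaming use described above:

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer

# No vocabulary is stored: each token's column index comes from hashing
# the token, so separate chunks map tokens to consistent columns.
vec = HashingVectorizer(n_features=2**8)
X1 = vec.transform(["first chunk of documents"])
X2 = vec.transform(["another chunk, same feature space"])
print(X1.shape, X2.shape)  # (1, 256) (1, 256)

# Rows are l2-normalized by default (norm='l2').
print(np.isclose(X1.power(2).sum(), 1.0))  # True
```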
Examples using sklearn.feature_extraction.text.HashingVectorizer
    - Out-of-core classification of text documents
    - Clustering text documents using k-means
    - Classification of text documents using sparse features
class sklearn.feature_extraction.text.TfidfTransformer(*, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False) [source]

    Transform a count matrix to a normalized tf or tf-idf representation.

    Tf means term-frequency while tf-idf means term-frequency times inverse document-frequency. This is a common term weighting scheme in information retrieval that has also found good use in document classification.

    The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus, and that are hence empirically less informative than features that occur in a small fraction of the training corpus.

    The formula used to compute the tf-idf for a term t of a document d in a document set is tf-idf(t, d) = tf(t, d) * idf(t), and the idf is computed as idf(t) = log [ n / df(t) ] + 1 (if smooth_idf=False), where n is the total number of documents in the document set and df(t) is the document frequency of t; the document frequency is the number of documents in the document set that contain the term t. The effect of adding "1" to the idf in the equation above is that terms with zero idf, i.e., terms that occur in all documents in a training set, will not be entirely ignored. (Note that the idf formula above differs from the standard textbook notation that defines the idf as idf(t) = log [ n / (df(t) + 1) ].)

    If smooth_idf=True (the default), the constant "1" is added to the numerator and denominator of the idf, as if an extra document was seen containing every term in the collection exactly once, which prevents zero divisions: idf(t) = log [ (1 + n) / (1 + df(t)) ] + 1.

    Furthermore, the formulas used to compute tf and idf depend on parameter settings that correspond to the SMART notation used in IR as follows:
    - Tf is "n" (natural) by default, "l" (logarithmic) when sublinear_tf=True.
    - Idf is "t" when use_idf is given, "n" (none) otherwise.
    - Normalization is "c" (cosine) when norm='l2', "n" (none) when norm=None.

    Read more in the User Guide.

    Parameters
        norm : {'l1', 'l2'}, default='l2'
            Each output row will have unit norm, either:
            - 'l2': Sum of squares of vector elements is 1. The cosine similarity between two vectors is their dot product when l2 norm has been applied.
            - 'l1': Sum of absolute values of vector elements is 1. See preprocessing.normalize.
        use_idf : bool, default=True
            Enable inverse-document-frequency reweighting.
        smooth_idf : bool, default=True
            Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions.
        sublinear_tf : bool, default=False
            Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).

    Attributes
        idf_ : array of shape (n_features,)
            The inverse document frequency (IDF) vector; only defined if use_idf is True. New in version 0.20.

    References
        [Yates2011] R. Baeza-Yates and B. Ribeiro-Neto (2011). Modern Information Retrieval. Addison Wesley, pp. 68-74.
        [MRS2008] C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 118-120.

    Examples
    >>> from sklearn.feature_extraction.text import TfidfTransformer
    >>> from sklearn.feature_extraction.text import CountVectorizer
    >>> from sklearn.pipeline import Pipeline
    >>> import numpy as np
    >>> corpus = ['this is the first document',
    ...           'this document is the second document',
    ...           'and this is the third one',
    ...           'is this the first document']
    >>> vocabulary = ['this', 'document', 'first', 'is', 'second', 'the',
    ...               'and', 'one']
    >>> pipe = Pipeline([('count', CountVectorizer(vocabulary=vocabulary)),
    ...                  ('tfid', TfidfTransformer())]).fit(corpus)
    >>> pipe['count'].transform(corpus).toarray()
    array([[1, 1, 1, 1, 0, 1, 0, 0],
           [1, 2, 0, 1, 1, 1, 0, 0],
           [1, 0, 0, 1, 0, 1, 1, 1],
           [1, 1, 1, 1, 0, 1, 0, 0]])
    >>> pipe['tfid'].idf_
    array([1.        , 1.22314355, 1.51082562, 1.        , 1.91629073,
           1.        , 1.91629073, 1.91629073])
    >>> pipe.transform(corpus).shape
    (4, 8)

    Methods
        fit(X[, y])             Learn the idf vector (global term weights).
        fit_transform(X[, y])   Fit to data, then transform it.
        get_params([deep])      Get parameters for this estimator.
        set_params(**params)    Set the parameters of this estimator.
        transform(X[, copy])    Transform a count matrix to a tf or tf-idf representation.

    fit(X, y=None) [source]
        Learn the idf vector (global term weights).
        Parameters
            X : sparse matrix of shape (n_samples, n_features)
                A matrix of term/token counts.

    fit_transform(X, y=None, **fit_params) [source]
        Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
        Parameters
            X : array-like of shape (n_samples, n_features)
                Input samples.
            y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
                Target values (None for unsupervised transformations).
            **fit_params : dict
                Additional fit parameters.
        Returns
            X_new : ndarray of shape (n_samples, n_features_new)
                Transformed array.

    get_params(deep=True) [source]
        Get parameters for this estimator.
        Parameters
            deep : bool, default=True
                If True, will return the parameters for this estimator and contained subobjects that are estimators.
        Returns
            params : dict
                Parameter names mapped to their values.

    set_params(**params) [source]
        Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
        Parameters
            **params : dict
                Estimator parameters.
        Returns
            self : estimator instance
                Estimator instance.

    transform(X, copy=True) [source]
        Transform a count matrix to a tf or tf-idf representation.
        Parameters
            X : sparse matrix of shape (n_samples, n_features)
                A matrix of term/token counts.
            copy : bool, default=True
                Whether to copy X and operate on the copy or perform in-place operations.
        Returns
            vectors : sparse matrix of shape (n_samples, n_features)
sklearn.modules.generated.sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer
sklearn.feature_extraction.text.TfidfTransformer class sklearn.feature_extraction.text.TfidfTransformer(*, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False) [source] Transform a count matrix to a normalized tf or tf-idf representation Tf means term-frequency while tf-idf means term-frequency times inverse document-frequency. This is a common term weighting scheme in information retrieval, that has also found good use in document classification. The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus. The formula that is used to compute the tf-idf for a term t of a document d in a document set is tf-idf(t, d) = tf(t, d) * idf(t), and the idf is computed as idf(t) = log [ n / df(t) ] + 1 (if smooth_idf=False), where n is the total number of documents in the document set and df(t) is the document frequency of t; the document frequency is the number of documents in the document set that contain the term t. The effect of adding “1” to the idf in the equation above is that terms with zero idf, i.e., terms that occur in all documents in a training set, will not be entirely ignored. (Note that the idf formula above differs from the standard textbook notation that defines the idf as idf(t) = log [ n / (df(t) + 1) ]). If smooth_idf=True (the default), the constant “1” is added to the numerator and denominator of the idf as if an extra document was seen containing every term in the collection exactly once, which prevents zero divisions: idf(t) = log [ (1 + n) / (1 + df(t)) ] + 1. Furthermore, the formulas used to compute tf and idf depend on parameter settings that correspond to the SMART notation used in IR as follows: Tf is “n” (natural) by default, “l” (logarithmic) when sublinear_tf=True. 
Idf is “t” when use_idf is given, “n” (none) otherwise. Normalization is “c” (cosine) when norm='l2', “n” (none) when norm=None. Read more in the User Guide. Parameters norm{‘l1’, ‘l2’}, default=’l2’ Each output row will have unit norm, either: * ‘l2’: Sum of squares of vector elements is 1. The cosine similarity between two vectors is their dot product when l2 norm has been applied. * ‘l1’: Sum of absolute values of vector elements is 1. See preprocessing.normalize use_idfbool, default=True Enable inverse-document-frequency reweighting. smooth_idfbool, default=True Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions. sublinear_tfbool, default=False Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf). Attributes idf_array of shape (n_features) The inverse document frequency (IDF) vector; only defined if use_idf is True. New in version 0.20. References Yates2011 R. Baeza-Yates and B. Ribeiro-Neto (2011). Modern Information Retrieval. Addison Wesley, pp. 68-74. MRS2008 C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 118-120. Examples >>> from sklearn.feature_extraction.text import TfidfTransformer >>> from sklearn.feature_extraction.text import CountVectorizer >>> from sklearn.pipeline import Pipeline >>> import numpy as np >>> corpus = ['this is the first document', ... 'this document is the second document', ... 'and this is the third one', ... 'is this the first document'] >>> vocabulary = ['this', 'document', 'first', 'is', 'second', 'the', ... 'and', 'one'] >>> pipe = Pipeline([('count', CountVectorizer(vocabulary=vocabulary)), ... ('tfid', TfidfTransformer())]).fit(corpus) >>> pipe['count'].transform(corpus).toarray() array([[1, 1, 1, 1, 0, 1, 0, 0], [1, 2, 0, 1, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1, 1, 1], [1, 1, 1, 1, 0, 1, 0, 0]]) >>> pipe['tfid'].idf_ array([1. 
, 1.22314355, 1.51082562, 1. , 1.91629073, 1. , 1.91629073, 1.91629073]) >>> pipe.transform(corpus).shape (4, 8) Methods fit(X[, y]) Learn the idf vector (global term weights). fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. transform(X[, copy]) Transform a count matrix to a tf or tf-idf representation. fit(X, y=None) [source] Learn the idf vector (global term weights). Parameters Xsparse matrix of shape (n_samples, n_features) A matrix of term/token counts. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. 
transform(X, copy=True) [source] Transform a count matrix to a tf or tf-idf representation. Parameters Xsparse matrix of shape (n_samples, n_features) A matrix of term/token counts. copybool, default=True Whether to copy X and operate on the copy or perform in-place operations. Returns vectorssparse matrix of shape (n_samples, n_features) Tf-idf-weighted document-term matrix. Examples using sklearn.feature_extraction.text.TfidfTransformer Sample pipeline for text feature extraction and evaluation Semi-supervised Classification on a Text Dataset Clustering text documents using k-means
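The smoothed idf formula quoted above, idf(t) = log[(1 + n) / (1 + df(t))] + 1, can be checked directly against the fitted idf_ attribute. A minimal sketch with smooth_idf=True (the default), recomputing the document frequencies by hand:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus = ['this is the first document',
          'this document is the second document',
          'and this is the third one',
          'is this the first document']

counts = CountVectorizer().fit_transform(corpus)          # CSR count matrix
transformer = TfidfTransformer(smooth_idf=True).fit(counts)

# df(t) = number of documents containing term t: count the nonzero
# entries in each column of the count matrix.
n = counts.shape[0]
df = np.bincount(counts.indices, minlength=counts.shape[1])

# Recompute idf(t) = log((1 + n) / (1 + df(t))) + 1 and compare.
idf_manual = np.log((1 + n) / (1 + df)) + 1
assert np.allclose(transformer.idf_, idf_manual)
```

With smooth_idf=False the same check holds for idf(t) = log[n / df(t)] + 1.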
sklearn.feature_extraction.text.TfidfVectorizer class sklearn.feature_extraction.text.TfidfVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, analyzer='word', stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.float64'>, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False) [source] Convert a collection of raw documents to a matrix of TF-IDF features. Equivalent to CountVectorizer followed by TfidfTransformer. Read more in the User Guide. Parameters input{‘filename’, ‘file’, ‘content’}, default=’content’ If ‘filename’, the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If ‘file’, the sequence items must have a ‘read’ method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of items that can be of type string or bytes. encodingstr, default=’utf-8’ If bytes or files are given to analyze, this encoding is used to decode. decode_error{‘strict’, ‘ignore’, ‘replace’}, default=’strict’ Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default, it is ‘strict’, meaning that a UnicodeDecodeError will be raised. Other values are ‘ignore’ and ‘replace’. strip_accents{‘ascii’, ‘unicode’}, default=None Remove accents and perform other character normalization during the preprocessing step. ‘ascii’ is a fast method that only works on characters that have a direct ASCII mapping. ‘unicode’ is a slightly slower method that works on any characters. None (default) does nothing. Both ‘ascii’ and ‘unicode’ use NFKD normalization from unicodedata.normalize. lowercasebool, default=True Convert all characters to lowercase before tokenizing. 
preprocessorcallable, default=None Override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps. Only applies if analyzer is not callable. tokenizercallable, default=None Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'. analyzer{‘word’, ‘char’, ‘char_wb’} or callable, default=’word’ Whether the feature should be made of word or character n-grams. Option ‘char_wb’ creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input. Changed in version 0.21. Since v0.21, if input is filename or file, the data is first read from the file and then passed to the given callable analyzer. stop_words{‘english’}, list, default=None If a string, it is passed to _check_stop_list and the appropriate stop list is returned. ‘english’ is currently the only supported string value. There are several known issues with ‘english’ and you should consider an alternative (see Using stop words). If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'. If None, no stop words will be used. max_df can be set to a value in the range [0.7, 1.0) to automatically detect and filter stop words based on intra corpus document frequency of terms. token_patternstr, default=r”(?u)\b\w\w+\b” Regular expression denoting what constitutes a “token”, only used if analyzer == 'word'. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator). If there is a capturing group in token_pattern then the captured group content, not the entire match, becomes the token. At most one capturing group is permitted. 
ngram_rangetuple (min_n, max_n), default=(1, 1) The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used. For example an ngram_range of (1, 1) means only unigrams, (1, 2) means unigrams and bigrams, and (2, 2) means only bigrams. Only applies if analyzer is not callable. max_dffloat or int, default=1.0 When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float in range [0.0, 1.0], the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None. min_dffloat or int, default=1 When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float in range [0.0, 1.0], the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None. max_featuresint, default=None If not None, build a vocabulary that only considers the top max_features terms ordered by term frequency across the corpus. This parameter is ignored if vocabulary is not None. vocabularyMapping or iterable, default=None Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents. binarybool, default=False If True, all non-zero term counts are set to 1. This does not mean outputs will have only 0/1 values, only that the tf term in tf-idf is binary. (Set idf and normalization to False to get 0/1 outputs). dtypedtype, default=float64 Type of the matrix returned by fit_transform() or transform(). norm{‘l1’, ‘l2’}, default=’l2’ Each output row will have unit norm, either: * ‘l2’: Sum of squares of vector elements is 1. 
The cosine similarity between two vectors is their dot product when l2 norm has been applied. * ‘l1’: Sum of absolute values of vector elements is 1. See preprocessing.normalize. use_idfbool, default=True Enable inverse-document-frequency reweighting. smooth_idfbool, default=True Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions. sublinear_tfbool, default=False Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf). Attributes vocabulary_dict A mapping of terms to feature indices. fixed_vocabulary_: bool True if a fixed vocabulary of term to indices mapping is provided by the user idf_array of shape (n_features,) The inverse document frequency (IDF) vector; only defined if use_idf is True. stop_words_set Terms that were ignored because they either: occurred in too many documents (max_df) occurred in too few documents (min_df) were cut off by feature selection (max_features). This is only available if no vocabulary was given. See also CountVectorizer Transforms text into a sparse matrix of n-gram counts. TfidfTransformer Performs the TF-IDF transformation from a provided matrix of counts. Notes The stop_words_ attribute can get large and increase the model size when pickling. This attribute is provided only for introspection and can be safely removed using delattr or set to None before pickling. Examples >>> from sklearn.feature_extraction.text import TfidfVectorizer >>> corpus = [ ... 'This is the first document.', ... 'This document is the second document.', ... 'And this is the third one.', ... 'Is this the first document?', ... 
] >>> vectorizer = TfidfVectorizer() >>> X = vectorizer.fit_transform(corpus) >>> print(vectorizer.get_feature_names()) ['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this'] >>> print(X.shape) (4, 9) Methods build_analyzer() Return a callable that handles preprocessing, tokenization and n-grams generation. build_preprocessor() Return a function to preprocess the text before tokenization. build_tokenizer() Return a function that splits a string into a sequence of tokens. decode(doc) Decode the input into a string of unicode symbols. fit(raw_documents[, y]) Learn vocabulary and idf from training set. fit_transform(raw_documents[, y]) Learn vocabulary and idf, return document-term matrix. get_feature_names() Array mapping from feature integer indices to feature name. get_params([deep]) Get parameters for this estimator. get_stop_words() Build or fetch the effective stop words list. inverse_transform(X) Return terms per document with nonzero entries in X. set_params(**params) Set the parameters of this estimator. transform(raw_documents) Transform documents to document-term matrix. build_analyzer() [source] Return a callable that handles preprocessing, tokenization and n-grams generation. Returns analyzer: callable A function to handle preprocessing, tokenization and n-grams generation. build_preprocessor() [source] Return a function to preprocess the text before tokenization. Returns preprocessor: callable A function to preprocess the text before tokenization. build_tokenizer() [source] Return a function that splits a string into a sequence of tokens. Returns tokenizer: callable A function to split a string into a sequence of tokens. decode(doc) [source] Decode the input into a string of unicode symbols. The decoding strategy depends on the vectorizer parameters. Parameters docstr The string to decode. Returns doc: str A string of unicode symbols. fit(raw_documents, y=None) [source] Learn vocabulary and idf from training set. 
Parameters raw_documentsiterable An iterable which yields either str, unicode or file objects. yNone This parameter is not needed to compute tfidf. Returns selfobject Fitted vectorizer. fit_transform(raw_documents, y=None) [source] Learn vocabulary and idf, return document-term matrix. This is equivalent to fit followed by transform, but more efficiently implemented. Parameters raw_documentsiterable An iterable which yields either str, unicode or file objects. yNone This parameter is ignored. Returns Xsparse matrix of (n_samples, n_features) Tf-idf-weighted document-term matrix. get_feature_names() [source] Array mapping from feature integer indices to feature name. Returns feature_nameslist A list of feature names. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_stop_words() [source] Build or fetch the effective stop words list. Returns stop_words: list or None A list of stop words. inverse_transform(X) [source] Return terms per document with nonzero entries in X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Document-term matrix. Returns X_invlist of arrays of shape (n_samples,) List of arrays of terms. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(raw_documents) [source] Transform documents to document-term matrix. Uses the vocabulary and document frequencies (df) learned by fit (or fit_transform). 
Parameters raw_documentsiterable An iterable which yields either str, unicode or file objects. Returns Xsparse matrix of (n_samples, n_features) Tf-idf-weighted document-term matrix. Examples using sklearn.feature_extraction.text.TfidfVectorizer Biclustering documents with the Spectral Co-clustering algorithm Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation Column Transformer with Heterogeneous Data Sources Clustering text documents using k-means Classification of text documents using sparse features
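A short end-to-end sketch of the vectorizer API described above: fit on a corpus, transform unseen documents with the learned vocabulary, and map nonzero entries back to terms with inverse_transform. The corpus mirrors the one from the Examples section; the unseen document is an illustrative addition.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ['This is the first document.',
          'This document is the second document.',
          'And this is the third one.',
          'Is this the first document?']

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)    # learns vocabulary and idf
print(X.shape)                          # (4, 9): 4 documents, 9 terms

# transform() reuses the fitted vocabulary; out-of-vocabulary terms in
# new documents ('brand', 'new') are simply ignored, and 'a' is dropped
# by the default token_pattern, which requires 2+ characters.
X_new = vectorizer.transform(['a brand new document'])
print(X_new.shape)                      # (1, 9): same feature space as fit

# inverse_transform() returns, per document, the terms with nonzero weight.
print(vectorizer.inverse_transform(X_new))
```

Because transform() never grows the feature space, matrices produced from new documents stay aligned with whatever model was trained on the fitted features.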
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer
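A minimal sketch of the fit_transform / transform workflow described above, using a small made-up corpus (the corpus and counts are illustrative, not from the reference):

```python
# Toy corpus (assumed for illustration) for the TfidfVectorizer workflow.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "dogs and cats are pets",
]

vectorizer = TfidfVectorizer()
# Learn the vocabulary and idf, and return the tf-idf weighted matrix.
X = vectorizer.fit_transform(corpus)
print(X.shape)  # (3, 12): one row per document, one column per vocabulary term

# Reuse the fitted vocabulary and idf weights on unseen documents.
X_new = vectorizer.transform(["the cat and the dog"])
print(X_new.shape)  # (1, 12)
```

Terms unseen at fit time are simply ignored by transform, since the vocabulary is fixed once fitted.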
build_analyzer() [source] Return a callable that handles preprocessing, tokenization and n-grams generation. Returns analyzer: callable A function to handle preprocessing, tokenization and n-grams generation.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.build_analyzer
build_preprocessor() [source] Return a function to preprocess the text before tokenization. Returns preprocessor: callable A function to preprocess the text before tokenization.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.build_preprocessor
build_tokenizer() [source] Return a function that splits a string into a sequence of tokens. Returns tokenizer: callable A function to split a string into a sequence of tokens.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.build_tokenizer
decode(doc) [source] Decode the input into a string of unicode symbols. The decoding strategy depends on the vectorizer parameters. Parameters docstr The string to decode. Returns doc: str A string of unicode symbols.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.decode
fit(raw_documents, y=None) [source] Learn vocabulary and idf from training set. Parameters raw_documentsiterable An iterable which yields either str, unicode or file objects. yNone This parameter is not needed to compute tfidf. Returns selfobject Fitted vectorizer.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.fit
fit_transform(raw_documents, y=None) [source] Learn vocabulary and idf, return document-term matrix. This is equivalent to fit followed by transform, but more efficiently implemented. Parameters raw_documentsiterable An iterable which yields either str, unicode or file objects. yNone This parameter is ignored. Returns Xsparse matrix of (n_samples, n_features) Tf-idf-weighted document-term matrix.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.fit_transform
get_feature_names() [source] Array mapping from feature integer indices to feature name. Returns feature_nameslist A list of feature names.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.get_feature_names
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.get_params
get_stop_words() [source] Build or fetch the effective stop words list. Returns stop_words: list or None A list of stop words.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.get_stop_words
inverse_transform(X) [source] Return terms per document with nonzero entries in X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Document-term matrix. Returns X_invlist of arrays of shape (n_samples,) List of arrays of terms.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.inverse_transform
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.set_params
transform(raw_documents) [source] Transform documents to document-term matrix. Uses the vocabulary and document frequencies (df) learned by fit (or fit_transform). Parameters raw_documentsiterable An iterable which yields either str, unicode or file objects. Returns Xsparse matrix of (n_samples, n_features) Tf-idf-weighted document-term matrix.
sklearn.modules.generated.sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer.transform
sklearn.feature_selection.chi2(X, y) [source] Compute chi-squared stats between each non-negative feature and class. This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative features such as booleans or frequencies (e.g., term counts in document classification), relative to the classes. Recall that the chi-square test measures dependence between stochastic variables, so using this function “weeds out” the features that are the most likely to be independent of class and therefore irrelevant for classification. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Sample vectors. yarray-like of shape (n_samples,) Target vector (class labels). Returns chi2array, shape = (n_features,) chi2 statistics of each feature. pvalarray, shape = (n_features,) p-values of each feature. See also f_classif ANOVA F-value between label/feature for classification tasks. f_regression F-value between label/feature for regression tasks. Notes Complexity of this algorithm is O(n_classes * n_features).
sklearn.modules.generated.sklearn.feature_selection.chi2#sklearn.feature_selection.chi2
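A hedged sketch of chi2 on a tiny made-up term-count matrix: the features must be non-negative (e.g. counts), and the function returns one (score, p-value) pair per feature:

```python
# Illustrative count data (assumed, not from the reference above).
import numpy as np
from sklearn.feature_selection import chi2

X = np.array([[1, 0, 3],
              [2, 0, 1],
              [0, 4, 0],
              [0, 5, 1]])
y = np.array([0, 0, 1, 1])

scores, pvalues = chi2(X, y)
# One chi-squared statistic and one p-value per feature.
print(scores.shape, pvalues.shape)  # (3,) (3,)
```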
sklearn.feature_selection.f_classif(X, y) [source] Compute the ANOVA F-value for the provided sample. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The set of regressors that will be tested sequentially. yarray-like of shape (n_samples,) The target vector (class labels). Returns Farray, shape = [n_features,] The set of F values. pvalarray, shape = [n_features,] The set of p-values. See also chi2 Chi-squared stats of non-negative features for classification tasks. f_regression F-value between label/feature for regression tasks.
sklearn.modules.generated.sklearn.feature_selection.f_classif#sklearn.feature_selection.f_classif
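A short sketch of f_classif on the iris data (dataset chosen for illustration): one ANOVA F statistic and one p-value come back per feature:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)
F, pval = f_classif(X, y)
# Iris has 4 features, so both arrays have shape (4,).
print(F.shape, pval.shape)  # (4,) (4,)
```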
sklearn.feature_selection.f_regression(X, y, *, center=True) [source] Univariate linear regression tests. Linear model for testing the individual effect of each of many regressors. This is a scoring function to be used in a feature selection procedure, not a free-standing feature selection procedure. This is done in 2 steps: The correlation between each regressor and the target is computed, that is, ((X[:, i] - mean(X[:, i])) * (y - mean_y)) / (std(X[:, i]) * std(y)). It is converted to an F score then to a p-value. For more on usage see the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The set of regressors that will be tested sequentially. yarray-like of shape (n_samples,) The target vector. centerbool, default=True If True, X and y will be centered. Returns Farray, shape=(n_features,) F values of features. pvalarray, shape=(n_features,) p-values of F-scores. See also mutual_info_regression Mutual information for a continuous target. f_classif ANOVA F-value between label/feature for classification tasks. chi2 Chi-squared stats of non-negative features for classification tasks. SelectKBest Select features based on the k highest scores. SelectFpr Select features based on a false positive rate test. SelectFdr Select features based on an estimated false discovery rate. SelectFwe Select features based on family-wise error rate. SelectPercentile Select features based on percentile of the highest scores.
sklearn.modules.generated.sklearn.feature_selection.f_regression#sklearn.feature_selection.f_regression
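A hedged sketch on synthetic data: the target is constructed to depend almost entirely on the first regressor, so f_regression should rank it highest:

```python
# Synthetic data (assumed for illustration): y is driven by feature 0.
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = 2.0 * X[:, 0] + 0.05 * rng.rand(100)

F, pval = f_regression(X, y)
# Feature 0 carries the signal, so its F value dominates.
print(int(F.argmax()))  # 0
```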
class sklearn.feature_selection.GenericUnivariateSelect(score_func=<function f_classif>, *, mode='percentile', param=1e-05) [source] Univariate feature selector with configurable strategy. Read more in the User Guide. Parameters score_funccallable, default=f_classif Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). For modes ‘percentile’ or ‘kbest’ it can return a single array scores. mode{‘percentile’, ‘k_best’, ‘fpr’, ‘fdr’, ‘fwe’}, default=’percentile’ Feature selection mode. paramfloat or int depending on the feature selection mode, default=1e-5 Parameter of the corresponding mode. Attributes scores_array-like of shape (n_features,) Scores of features. pvalues_array-like of shape (n_features,) p-values of feature scores, None if score_func returned scores only. See also f_classif ANOVA F-value between label/feature for classification tasks. mutual_info_classif Mutual information for a discrete target. chi2 Chi-squared stats of non-negative features for classification tasks. f_regression F-value between label/feature for regression tasks. mutual_info_regression Mutual information for a continuous target. SelectPercentile Select features based on percentile of the highest scores. SelectKBest Select features based on the k highest scores. SelectFpr Select features based on a false positive rate test. SelectFdr Select features based on an estimated false discovery rate. SelectFwe Select features based on family-wise error rate. Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.feature_selection import GenericUnivariateSelect, chi2 >>> X, y = load_breast_cancer(return_X_y=True) >>> X.shape (569, 30) >>> transformer = GenericUnivariateSelect(chi2, mode='k_best', param=20) >>> X_new = transformer.fit_transform(X, y) >>> X_new.shape (569, 20) Methods fit(X, y) Run score function on (X, y) and get the appropriate features. fit_transform(X[, y]) Fit to data, then transform it. 
get_params([deep]) Get parameters for this estimator. get_support([indices]) Get a mask, or integer index, of the features selected inverse_transform(X) Reverse the transformation operation set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. 
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect
fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect.fit
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect.get_params
get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect.get_support
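A sketch of get_support after fitting a GenericUnivariateSelect on iris (dataset chosen for illustration): the boolean mask spans all input features, while the integer form indexes only the retained ones:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import GenericUnivariateSelect, f_classif

X, y = load_iris(return_X_y=True)
selector = GenericUnivariateSelect(f_classif, mode='k_best', param=2).fit(X, y)

mask = selector.get_support()                  # boolean mask, shape (4,)
indices = selector.get_support(indices=True)   # integer indices, shape (2,)
print(mask.sum(), indices.shape)  # 2 (2,)
```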
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform.
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect.inverse_transform
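A sketch of the round trip through inverse_transform (iris data assumed for illustration): the reduced matrix is padded back to the original width, with zero columns where features were removed:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import GenericUnivariateSelect, f_classif

X, y = load_iris(return_X_y=True)
selector = GenericUnivariateSelect(f_classif, mode='k_best', param=2).fit(X, y)

X_r = selector.transform(X)               # (150, 2): only the selected features
X_back = selector.inverse_transform(X_r)  # (150, 4): zeros in the removed columns
print(X_back.shape)
print(bool((X_back[:, ~selector.get_support()] == 0).all()))  # True
```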
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect.set_params
transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect.transform
sklearn.feature_selection.mutual_info_classif(X, y, *, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source] Estimate mutual information for a discrete target variable. Mutual information (MI) [1] between two random variables is a non-negative value, which measures the dependency between the variables. It is equal to zero if and only if two random variables are independent, and higher values mean higher dependency. The function relies on nonparametric methods based on entropy estimation from k-nearest neighbors distances as described in [2] and [3]. Both methods are based on the idea originally proposed in [4]. It can be used for univariate features selection, read more in the User Guide. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Feature matrix. yarray-like of shape (n_samples,) Target vector. discrete_features{‘auto’, bool, array-like}, default=’auto’ If bool, then determines whether to consider all features discrete or continuous. If array, then it should be either a boolean mask with shape (n_features,) or array with indices of discrete features. If ‘auto’, it is assigned to False for dense X and to True for sparse X. n_neighborsint, default=3 Number of neighbors to use for MI estimation for continuous variables, see [2] and [3]. Higher values reduce variance of the estimation, but could introduce a bias. copybool, default=True Whether to make a copy of the given data. If set to False, the initial data will be overwritten. random_stateint, RandomState instance or None, default=None Determines random number generation for adding small noise to continuous variables in order to remove repeated values. Pass an int for reproducible results across multiple function calls. See Glossary. Returns mindarray, shape (n_features,) Estimated mutual information between each feature and the target. 
Notes The term “discrete features” is used instead of naming them “categorical”, because it describes the essence more accurately. For example, pixel intensities of an image are discrete features (but hardly categorical) and you will get better results if you mark them as such. Also note that treating a continuous variable as discrete and vice versa will usually give incorrect results, so be attentive about that. True mutual information can’t be negative. If its estimate turns out to be negative, it is replaced by zero. References 1 Mutual Information on Wikipedia. 2(1,2) A. Kraskov, H. Stogbauer and P. Grassberger, “Estimating mutual information”. Phys. Rev. E 69, 2004. 3(1,2) B. C. Ross “Mutual Information between Discrete and Continuous Data Sets”. PLoS ONE 9(2), 2014. 4 L. F. Kozachenko, N. N. Leonenko, “Sample Estimate of the Entropy of a Random Vector”, Probl. Peredachi Inf., 23:2 (1987), 9-16
sklearn.modules.generated.sklearn.feature_selection.mutual_info_classif#sklearn.feature_selection.mutual_info_classif
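A hedged sketch on synthetic data: the label is a deterministic function of feature 0 only, so its estimated mutual information should dominate the other (irrelevant) features:

```python
# Synthetic data (assumed for illustration): y depends only on feature 0.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = (X[:, 0] > 0.5).astype(int)

mi = mutual_info_classif(X, y, random_state=0)
# One MI estimate per feature; the informative feature scores highest.
print(int(mi.argmax()))  # 0
```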
sklearn.feature_selection.mutual_info_regression(X, y, *, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source] Estimate mutual information for a continuous target variable. Mutual information (MI) [1] between two random variables is a non-negative value, which measures the dependency between the variables. It is equal to zero if and only if two random variables are independent, and higher values mean higher dependency. The function relies on nonparametric methods based on entropy estimation from k-nearest neighbors distances as described in [2] and [3]. Both methods are based on the idea originally proposed in [4]. It can be used for univariate features selection, read more in the User Guide. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Feature matrix. yarray-like of shape (n_samples,) Target vector. discrete_features{‘auto’, bool, array-like}, default=’auto’ If bool, then determines whether to consider all features discrete or continuous. If array, then it should be either a boolean mask with shape (n_features,) or array with indices of discrete features. If ‘auto’, it is assigned to False for dense X and to True for sparse X. n_neighborsint, default=3 Number of neighbors to use for MI estimation for continuous variables, see [2] and [3]. Higher values reduce variance of the estimation, but could introduce a bias. copybool, default=True Whether to make a copy of the given data. If set to False, the initial data will be overwritten. random_stateint, RandomState instance or None, default=None Determines random number generation for adding small noise to continuous variables in order to remove repeated values. Pass an int for reproducible results across multiple function calls. See Glossary. Returns mindarray, shape (n_features,) Estimated mutual information between each feature and the target. 
Notes The term “discrete features” is used instead of naming them “categorical”, because it describes the essence more accurately. For example, pixel intensities of an image are discrete features (but hardly categorical) and you will get better results if you mark them as such. Also note that treating a continuous variable as discrete and vice versa will usually give incorrect results, so be attentive about that. True mutual information can’t be negative. If its estimate turns out to be negative, it is replaced by zero. References 1 Mutual Information on Wikipedia. 2(1,2) A. Kraskov, H. Stogbauer and P. Grassberger, “Estimating mutual information”. Phys. Rev. E 69, 2004. 3(1,2) B. C. Ross “Mutual Information between Discrete and Continuous Data Sets”. PLoS ONE 9(2), 2014. 4 L. F. Kozachenko, N. N. Leonenko, “Sample Estimate of the Entropy of a Random Vector”, Probl. Peredachi Inf., 23:2 (1987), 9-16
sklearn.modules.generated.sklearn.feature_selection.mutual_info_regression#sklearn.feature_selection.mutual_info_regression
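The continuous-target variant works the same way; a hedged sketch on synthetic data where the target tracks feature 0:

```python
# Synthetic data (assumed for illustration): y is feature 0 plus small noise.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = X[:, 0] + 0.1 * rng.rand(200)

mi = mutual_info_regression(X, y, random_state=0)
# Negative estimates are clipped to zero, so all values are non-negative.
print(int(mi.argmax()))  # 0
```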
class sklearn.feature_selection.RFE(estimator, *, n_features_to_select=None, step=1, verbose=0, importance_getter='auto') [source] Feature ranking with recursive feature elimination. Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through any specific attribute or callable. Then, the least important features are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached. Read more in the User Guide. Parameters estimatorEstimator instance A supervised learning estimator with a fit method that provides information about feature importance (e.g. coef_, feature_importances_). n_features_to_selectint or float, default=None The number of features to select. If None, half of the features are selected. If integer, the parameter is the absolute number of features to select. If float between 0 and 1, it is the fraction of features to select. Changed in version 0.24: Added float values for fractions. stepint or float, default=1 If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of features to remove at each iteration. verboseint, default=0 Controls verbosity of output. importance_getterstr or callable, default=’auto’ If ‘auto’, uses the feature importance either through a coef_ or feature_importances_ attribute of the estimator. Also accepts a string that specifies an attribute name/path for extracting feature importance (implemented with attrgetter).
For example, give regressor_.coef_ in case of TransformedTargetRegressor or named_steps.clf.feature_importances_ in case of a Pipeline with its last step named clf. If callable, overrides the default feature importance getter. The callable is passed with the fitted estimator and it should return importance for each feature. New in version 0.24. Attributes estimator_Estimator instance The fitted estimator used to select features. n_features_int The number of selected features. ranking_ndarray of shape (n_features,) The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1. support_ndarray of shape (n_features,) The mask of selected features. See also RFECV Recursive feature elimination with built-in cross-validated selection of the best number of features. SelectFromModel Feature selection based on thresholds of importance weights. SequentialFeatureSelector Sequential cross-validation based feature selection. Does not rely on importance weights. Notes Allows NaN/Inf in the input if the underlying estimator does as well. References 1 Guyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002. Examples The following example shows how to retrieve the 5 most informative features in the Friedman #1 dataset. >>> from sklearn.datasets import make_friedman1 >>> from sklearn.feature_selection import RFE >>> from sklearn.svm import SVR >>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0) >>> estimator = SVR(kernel="linear") >>> selector = RFE(estimator, n_features_to_select=5, step=1) >>> selector = selector.fit(X, y) >>> selector.support_ array([ True, True, True, True, True, False, False, False, False, False]) >>> selector.ranking_ array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5]) Methods decision_function(X) Compute the decision function of X.
fit(X, y) Fit the RFE model and then the underlying estimator on the selected features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. get_support([indices]) Get a mask, or integer index, of the features selected. inverse_transform(X) Reverse the transformation operation. predict(X) Reduce X to the selected features and then predict using the underlying estimator. predict_log_proba(X) Predict class log-probabilities for X. predict_proba(X) Predict class probabilities for X. score(X, y) Reduce X to the selected features and then return the score of the underlying estimator. set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. decision_function(X) [source] Compute the decision function of X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns scorearray, shape = [n_samples, n_classes] or [n_samples] The decision function of the input samples. The order of the classes corresponds to that in the attribute classes_. Regression and binary classification produce an array of shape [n_samples]. fit(X, y) [source] Fit the RFE model and then the underlying estimator on the selected features. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. predict(X) [source] Reduce X to the selected features and then predict using the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns yarray of shape [n_samples] The predicted target values. predict_log_proba(X) [source] Predict class log-probabilities for X. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns parray of shape (n_samples, n_classes) The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. predict_proba(X) [source] Predict class probabilities for X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. 
Returns parray of shape (n_samples, n_classes) The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. score(X, y) [source] Reduce X to the selected features and then return the score of the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. yarray of shape [n_samples] The target values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE
sklearn.feature_selection.RFE class sklearn.feature_selection.RFE(estimator, *, n_features_to_select=None, step=1, verbose=0, importance_getter='auto') [source] Feature ranking with recursive feature elimination. Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a specific attribute or a callable. Then, the least important features are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached. Read more in the User Guide. Parameters estimatorEstimator instance A supervised learning estimator with a fit method that provides information about feature importance (e.g. coef_, feature_importances_). n_features_to_selectint or float, default=None The number of features to select. If None, half of the features are selected. If integer, the parameter is the absolute number of features to select. If float between 0 and 1, it is the fraction of features to select. Changed in version 0.24: Added float values for fractions. stepint or float, default=1 If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of features to remove at each iteration. verboseint, default=0 Controls verbosity of output. importance_getterstr or callable, default=’auto’ If ‘auto’, uses the feature importance either through the coef_ or feature_importances_ attribute of the estimator. Also accepts a string that specifies an attribute name/path for extracting feature importance (implemented with attrgetter).
For example, give regressor_.coef_ in case of TransformedTargetRegressor or named_steps.clf.feature_importances_ in case of Pipeline with its last step named clf. If callable, overrides the default feature importance getter. The callable is passed the fitted estimator and should return the importance for each feature. New in version 0.24. Attributes estimator_Estimator instance The fitted estimator used to select features. n_features_int The number of selected features. ranking_ndarray of shape (n_features,) The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1. support_ndarray of shape (n_features,) The mask of selected features. See also RFECV Recursive feature elimination with built-in cross-validated selection of the best number of features. SelectFromModel Feature selection based on thresholds of importance weights. SequentialFeatureSelector Sequential cross-validation based feature selection. Does not rely on importance weights. Notes Allows NaN/Inf in the input if the underlying estimator does as well. References 1 Guyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002. Examples The following example shows how to retrieve the 5 most informative features in the Friedman #1 dataset. >>> from sklearn.datasets import make_friedman1 >>> from sklearn.feature_selection import RFE >>> from sklearn.svm import SVR >>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0) >>> estimator = SVR(kernel="linear") >>> selector = RFE(estimator, n_features_to_select=5, step=1) >>> selector = selector.fit(X, y) >>> selector.support_ array([ True, True, True, True, True, False, False, False, False, False]) >>> selector.ranking_ array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5]) Methods decision_function(X) Compute the decision function of X.
fit(X, y) Fit the RFE model and then the underlying estimator on the selected features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. get_support([indices]) Get a mask, or integer index, of the features selected. inverse_transform(X) Reverse the transformation operation. predict(X) Reduce X to the selected features and then predict using the underlying estimator. predict_log_proba(X) Predict class log-probabilities for X. predict_proba(X) Predict class probabilities for X. score(X, y) Reduce X to the selected features and then return the score of the underlying estimator. set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. decision_function(X) [source] Compute the decision function of X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns scorearray, shape = [n_samples, n_classes] or [n_samples] The decision function of the input samples. The order of the classes corresponds to that in the attribute classes_. Regression and binary classification produce an array of shape [n_samples]. fit(X, y) [source] Fit the RFE model and then the underlying estimator on the selected features. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray of shape (n_samples, n_features_new) Transformed array.
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. predict(X) [source] Reduce X to the selected features and then predict using the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns yarray of shape [n_samples] The predicted target values. predict_log_proba(X) [source] Predict class log-probabilities for X. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns parray of shape (n_samples, n_classes) The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. predict_proba(X) [source] Predict class probabilities for X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. 
Returns parray of shape (n_samples, n_classes) The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. score(X, y) [source] Reduce X to the selected features and then return the score of the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. yarray of shape [n_samples] The target values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features. Examples using sklearn.feature_selection.RFE Recursive feature elimination
sklearn.modules.generated.sklearn.feature_selection.rfe
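The attribute-path form of importance_getter described above can be sketched as follows. This is an illustrative example, not from the original page: the dataset, step names, and the choice of LogisticRegression are assumptions; the string path is resolved with attrgetter on the fitted pipeline, so "named_steps.clf.coef_" reaches the coefficients of the last step named "clf".

```python
# Hedged sketch: RFE over a Pipeline, pointing importance_getter at the
# final step's coefficients via an attribute path (resolved with attrgetter).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=100, n_features=8, random_state=0)
pipe = Pipeline([("scaler", StandardScaler()),
                 ("clf", LogisticRegression())])

selector = RFE(pipe, n_features_to_select=3,
               importance_getter="named_steps.clf.coef_")
selector.fit(X, y)

print(selector.support_.sum())  # 3 features are kept, as requested
```

A callable `importance_getter` works the same way: it receives the fitted pipeline and must return one importance value per feature.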
decision_function(X) [source] Compute the decision function of X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns scorearray, shape = [n_samples, n_classes] or [n_samples] The decision function of the input samples. The order of the classes corresponds to that in the attribute classes_. Regression and binary classification produce an array of shape [n_samples].
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.decision_function
fit(X, y) [source] Fit the RFE model and then the underlying estimator on the selected features. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.fit
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray of shape (n_samples, n_features_new) Transformed array.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.get_params
get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.get_support
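The two return modes of get_support can be sketched by reusing the Friedman #1 example from the class documentation (same data and random_state, so the selected features match the doctest above):

```python
# Hedged sketch: get_support as a boolean mask vs. as integer indices.
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
selector = RFE(SVR(kernel="linear"), n_features_to_select=5, step=1).fit(X, y)

mask = selector.get_support()             # boolean, one entry per input feature
idx = selector.get_support(indices=True)  # integers, one entry per kept feature

assert mask.sum() == 5 and len(mask) == 10
assert list(idx) == [0, 1, 2, 3, 4]       # matches the support_ doctest above
```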
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.inverse_transform
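The zero-insertion behaviour of inverse_transform can be checked directly; this sketch reuses the Friedman #1 setup from the class example:

```python
# Hedged sketch: transform drops the unselected columns, inverse_transform
# puts zero columns back in their place.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
selector = RFE(SVR(kernel="linear"), n_features_to_select=5).fit(X, y)

X_r = selector.transform(X)               # (50, 5): only the selected columns
X_back = selector.inverse_transform(X_r)  # (50, 10): zeros where columns were removed

assert X_r.shape == (50, 5) and X_back.shape == (50, 10)
assert np.allclose(X_back[:, selector.get_support()], X_r)   # kept columns unchanged
assert np.allclose(X_back[:, ~selector.get_support()], 0.0)  # removed columns are zero
```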
predict(X) [source] Reduce X to the selected features and then predict using the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns yarray of shape [n_samples] The predicted target values.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.predict
predict_log_proba(X) [source] Predict class log-probabilities for X. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns parray of shape (n_samples, n_classes) The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.predict_log_proba
predict_proba(X) [source] Predict class probabilities for X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns parray of shape (n_samples, n_classes) The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.predict_proba
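Because RFE delegates predict_proba to the underlying estimator after reducing X, the output has the usual (n_samples, n_classes) shape. A short sketch with an assumed classifier (LogisticRegression and the toy split below are illustrative, not from the page):

```python
# Hedged sketch: predict_proba through a fitted RFE wrapper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=120, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RFE(LogisticRegression(), n_features_to_select=4).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)  # X_te is reduced to 4 features internally
assert proba.shape == (len(X_te), 2)
assert np.allclose(proba.sum(axis=1), 1.0)  # each row is a probability distribution
```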
score(X, y) [source] Reduce X to the selected features and then return the score of the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. yarray of shape [n_samples] The target values.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.set_params
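The nested &lt;component&gt;__&lt;parameter&gt; convention mentioned above also reaches into the wrapped estimator of an RFE instance; a minimal sketch:

```python
# Hedged sketch: set_params on the selector itself and, via the nested
# estimator__<parameter> form, on the wrapped SVR.
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

selector = RFE(SVR(kernel="linear"), n_features_to_select=5)

selector.set_params(step=2, estimator__C=0.5)

assert selector.get_params()["step"] == 2
assert selector.get_params()["estimator__C"] == 0.5
```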
transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.rfe#sklearn.feature_selection.RFE.transform
class sklearn.feature_selection.RFECV(estimator, *, step=1, min_features_to_select=1, cv=None, scoring=None, verbose=0, n_jobs=None, importance_getter='auto') [source] Feature ranking with recursive feature elimination and cross-validated selection of the best number of features. See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters estimatorEstimator instance A supervised learning estimator with a fit method that provides information about feature importance either through a coef_ attribute or through a feature_importances_ attribute. stepint or float, default=1 If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of features to remove at each iteration. Note that the last iteration may remove fewer than step features in order to reach min_features_to_select. min_features_to_selectint, default=1 The minimum number of features to be selected. This number of features will always be scored, even if the difference between the original feature count and min_features_to_select isn’t divisible by step. New in version 0.20. cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, integer, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, if y is binary or multiclass, StratifiedKFold is used. If the estimator is not a classifier or if y is neither binary nor multiclass, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value of None changed from 3-fold to 5-fold.
scoringstring, callable or None, default=None A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y). verboseint, default=0 Controls verbosity of output. n_jobsint or None, default=None Number of cores to run in parallel while fitting across folds. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.18. importance_getterstr or callable, default=’auto’ If ‘auto’, uses the feature importance either through the coef_ or feature_importances_ attribute of the estimator. Also accepts a string that specifies an attribute name/path for extracting feature importance. For example, give regressor_.coef_ in case of TransformedTargetRegressor or named_steps.clf.feature_importances_ in case of Pipeline with its last step named clf. If callable, overrides the default feature importance getter. The callable is passed the fitted estimator and should return the importance for each feature. New in version 0.24. Attributes estimator_Estimator instance The fitted estimator used to select features. grid_scores_ndarray of shape (n_subsets_of_features,) The cross-validation scores such that grid_scores_[i] corresponds to the CV score of the i-th subset of features. n_features_int The number of selected features with cross-validation. ranking_ndarray of shape (n_features,) The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1. support_ndarray of shape (n_features,) The mask of selected features. See also RFE Recursive feature elimination. Notes The size of grid_scores_ is equal to ceil((n_features - min_features_to_select) / step) + 1, where step is the number of features removed at each iteration. Allows NaN/Inf in the input if the underlying estimator does as well.
References 1 Guyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002. Examples The following example shows how to recover the 5 informative features (not known a priori) in the Friedman #1 dataset. >>> from sklearn.datasets import make_friedman1 >>> from sklearn.feature_selection import RFECV >>> from sklearn.svm import SVR >>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0) >>> estimator = SVR(kernel="linear") >>> selector = RFECV(estimator, step=1, cv=5) >>> selector = selector.fit(X, y) >>> selector.support_ array([ True, True, True, True, True, False, False, False, False, False]) >>> selector.ranking_ array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5]) Methods decision_function(X) Compute the decision function of X. fit(X, y[, groups]) Fit the RFE model and automatically tune the number of selected features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. get_support([indices]) Get a mask, or integer index, of the features selected. inverse_transform(X) Reverse the transformation operation. predict(X) Reduce X to the selected features and then predict using the underlying estimator. predict_log_proba(X) Predict class log-probabilities for X. predict_proba(X) Predict class probabilities for X. score(X, y) Reduce X to the selected features and then return the score of the underlying estimator. set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. decision_function(X) [source] Compute the decision function of X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns scorearray, shape = [n_samples, n_classes] or [n_samples] The decision function of the input samples. The order of the classes corresponds to that in the attribute classes_.
Regression and binary classification produce an array of shape [n_samples]. fit(X, y, groups=None) [source] Fit the RFE model and automatically tune the number of selected features. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the total number of features. yarray-like of shape (n_samples,) Target values (integers for classification, real numbers for regression). groupsarray-like of shape (n_samples,) or None, default=None Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold). New in version 0.20. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected. Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention.
If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. predict(X) [source] Reduce X to the selected features and then predict using the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns yarray of shape [n_samples] The predicted target values. predict_log_proba(X) [source] Predict class log-probabilities for X. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns parray of shape (n_samples, n_classes) The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. predict_proba(X) [source] Predict class probabilities for X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns parray of shape (n_samples, n_classes) The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. score(X, y) [source] Reduce X to the selected features and then return the score of the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. yarray of shape [n_samples] The target values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. 
Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV
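The groups argument of RFECV.fit, mentioned above, is only consulted by a group-aware CV splitter. A hedged sketch with assumed data (the classifier, group layout, and sizes are illustrative, not from the page):

```python
# Hedged sketch: RFECV with GroupKFold, forwarding group labels through fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold

X, y = make_classification(n_samples=60, n_features=8, random_state=0)
groups = np.repeat(np.arange(6), 10)  # six groups of ten samples each

selector = RFECV(LogisticRegression(), step=1, min_features_to_select=1,
                 cv=GroupKFold(n_splits=3))
selector.fit(X, y, groups=groups)  # groups are passed to the CV splitter

assert selector.n_features_ >= 1                       # at least min_features_to_select
assert selector.support_.sum() == selector.n_features_  # mask and count agree
```

Without a "Group" splitter the groups argument has no effect, since the default StratifiedKFold/KFold splitters ignore it.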
sklearn.feature_selection.RFECV class sklearn.feature_selection.RFECV(estimator, *, step=1, min_features_to_select=1, cv=None, scoring=None, verbose=0, n_jobs=None, importance_getter='auto') [source] Feature ranking with recursive feature elimination and cross-validated selection of the best number of features. See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters estimatorEstimator instance A supervised learning estimator with a fit method that provides information about feature importance either through a coef_ attribute or through a feature_importances_ attribute. stepint or float, default=1 If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of features to remove at each iteration. Note that the last iteration may remove fewer than step features in order to reach min_features_to_select. min_features_to_selectint, default=1 The minimum number of features to be selected. This number of features will always be scored, even if the difference between the original feature count and min_features_to_select isn’t divisible by step. New in version 0.20. cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, integer, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, if y is binary or multiclass, StratifiedKFold is used. If the estimator is not a classifier or if y is neither binary nor multiclass, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value of None changed from 3-fold to 5-fold.
scoringstring, callable or None, default=None A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y). verboseint, default=0 Controls verbosity of output. n_jobsint or None, default=None Number of cores to run in parallel while fitting across folds. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.18. importance_getterstr or callable, default=’auto’ If ‘auto’, uses the feature importance either through the coef_ or feature_importances_ attribute of the estimator. Also accepts a string that specifies an attribute name/path for extracting feature importance. For example, give regressor_.coef_ in case of TransformedTargetRegressor or named_steps.clf.feature_importances_ in case of Pipeline with its last step named clf. If callable, overrides the default feature importance getter. The callable is passed the fitted estimator and should return the importance for each feature. New in version 0.24. Attributes estimator_Estimator instance The fitted estimator used to select features. grid_scores_ndarray of shape (n_subsets_of_features,) The cross-validation scores such that grid_scores_[i] corresponds to the CV score of the i-th subset of features. n_features_int The number of selected features with cross-validation. ranking_ndarray of shape (n_features,) The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1. support_ndarray of shape (n_features,) The mask of selected features. See also RFE Recursive feature elimination. Notes The size of grid_scores_ is equal to ceil((n_features - min_features_to_select) / step) + 1, where step is the number of features removed at each iteration. Allows NaN/Inf in the input if the underlying estimator does as well.
References 1 Guyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002. Examples The following example shows how to recover the 5 informative features (not known a priori) in the Friedman #1 dataset. >>> from sklearn.datasets import make_friedman1 >>> from sklearn.feature_selection import RFECV >>> from sklearn.svm import SVR >>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0) >>> estimator = SVR(kernel="linear") >>> selector = RFECV(estimator, step=1, cv=5) >>> selector = selector.fit(X, y) >>> selector.support_ array([ True, True, True, True, True, False, False, False, False, False]) >>> selector.ranking_ array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5]) Methods decision_function(X) Compute the decision function of X. fit(X, y[, groups]) Fit the RFE model and automatically tune the number of selected features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. get_support([indices]) Get a mask, or integer index, of the features selected. inverse_transform(X) Reverse the transformation operation. predict(X) Reduce X to the selected features and then predict using the underlying estimator. predict_log_proba(X) Predict class log-probabilities for X. predict_proba(X) Predict class probabilities for X. score(X, y) Reduce X to the selected features and then return the score of the underlying estimator. set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. decision_function(X) [source] Compute the decision function of X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns scorearray, shape = [n_samples, n_classes] or [n_samples] The decision function of the input samples. The order of the classes corresponds to that in the attribute classes_.
Regression and binary classification produce an array of shape [n_samples]. fit(X, y, groups=None) [source] Fit the RFE model and automatically tune the number of selected features. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the total number of features. yarray-like of shape (n_samples,) Target values (integers for classification, real numbers for regression). groupsarray-like of shape (n_samples,) or None, default=None Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold). New in version 0.20. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. 
If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. predict(X) [source] Reduce X to the selected features and then predict using the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns yarray of shape [n_samples] The predicted target values. predict_log_proba(X) [source] Predict class log-probabilities for X. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns parray of shape (n_samples, n_classes) The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. predict_proba(X) [source] Predict class probabilities for X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns parray of shape (n_samples, n_classes) The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. score(X, y) [source] Reduce X to the selected features and then return the score of the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. yarray of shape [n_samples] The target values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. 
Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features. Examples using sklearn.feature_selection.RFECV Recursive feature elimination with cross-validation
sklearn.modules.generated.sklearn.feature_selection.rfecv
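A minimal sketch of the importance_getter parameter described above, assuming scikit-learn >= 0.24. The step names "scale" and "clf" are arbitrary illustrations; the attribute path tells RFECV where to find coefficients, since a Pipeline has no coef_ of its own.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=100, n_features=8, n_informative=4,
                           random_state=0)
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression())])
# importance_getter points at the coefficients of the last pipeline step
selector = RFECV(pipe, importance_getter="named_steps.clf.coef_", cv=3)
selector.fit(X, y)
```

After fitting, the usual RFECV attributes (support_, ranking_, n_features_) are available as with a bare estimator.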
decision_function(X) [source] Compute the decision function of X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns scorearray, shape = [n_samples, n_classes] or [n_samples] The decision function of the input samples. The order of the classes corresponds to that in the attribute classes_. Regression and binary classification produce an array of shape [n_samples].
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.decision_function
fit(X, y, groups=None) [source] Fit the RFE model and automatically tune the number of selected features. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the total number of features. yarray-like of shape (n_samples,) Target values (integers for classification, real numbers for regression). groupsarray-like of shape (n_samples,) or None, default=None Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold). New in version 0.20.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.fit
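A sketch of passing groups to fit, assuming cv is a group-aware splitter such as GroupKFold so that samples from one group never appear in both the train and test folds.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=6, n_informative=3,
                           random_state=0)
groups = np.repeat(np.arange(6), 10)  # six groups of ten samples each
selector = RFECV(SVC(kernel="linear"), cv=GroupKFold(n_splits=3))
selector.fit(X, y, groups=groups)     # groups are forwarded to the splitter
```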
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.get_params
get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.get_support
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.inverse_transform
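A sketch of the round trip through transform and inverse_transform, reusing the Friedman #1 setup from the class example above: kept columns survive unchanged, and dropped columns come back as zeros.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFECV
from sklearn.svm import SVR

X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
selector = RFECV(SVR(kernel="linear"), step=1, cv=5).fit(X, y)

X_r = selector.transform(X)              # only the selected columns
X_back = selector.inverse_transform(X_r) # original width restored
mask = selector.get_support()            # boolean mask over input features
```

get_support(indices=True) returns the same selection as integer column indices instead of a mask.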
predict(X) [source] Reduce X to the selected features and then predict using the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns yarray of shape [n_samples] The predicted target values.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.predict
predict_log_proba(X) [source] Predict class log-probabilities for X. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns parray of shape (n_samples, n_classes) The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.predict_log_proba
predict_proba(X) [source] Predict class probabilities for X. Parameters X{array-like or sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns parray of shape (n_samples, n_classes) The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.predict_proba
score(X, y) [source] Reduce X to the selected features and then return the score of the underlying estimator. Parameters Xarray of shape [n_samples, n_features] The input samples. yarray of shape [n_samples] The target values.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.set_params
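A sketch of the <component>__<parameter> syntax on a nested object: estimator__C reaches into the wrapped SVR without rebuilding the selector.

```python
from sklearn.feature_selection import RFECV
from sklearn.svm import SVR

selector = RFECV(SVR(kernel="linear"), step=1, cv=5)
# update a parameter of the inner estimator and one of RFECV itself
selector.set_params(estimator__C=10, step=2)
```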
transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.rfecv#sklearn.feature_selection.RFECV.transform
class sklearn.feature_selection.SelectFdr(score_func=<function f_classif>, *, alpha=0.05) [source] Filter: Select the p-values for an estimated false discovery rate. This uses the Benjamini-Hochberg procedure. alpha is an upper bound on the expected false discovery rate. Read more in the User Guide. Parameters score_funccallable, default=f_classif Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). Default is f_classif (see below “See Also”). The default function only works with classification tasks. alphafloat, default=5e-2 The highest uncorrected p-value for features to keep. Attributes scores_array-like of shape (n_features,) Scores of features. pvalues_array-like of shape (n_features,) p-values of feature scores. See also f_classif ANOVA F-value between label/feature for classification tasks. mutual_info_classif Mutual information for a discrete target. chi2 Chi-squared stats of non-negative features for classification tasks. f_regression F-value between label/feature for regression tasks. mutual_info_regression Mutual information for a continuous target. SelectPercentile Select features based on percentile of the highest scores. SelectKBest Select features based on the k highest scores. SelectFpr Select features based on a false positive rate test. SelectFwe Select features based on family-wise error rate. GenericUnivariateSelect Univariate feature selector with configurable mode. References https://en.wikipedia.org/wiki/False_discovery_rate Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.feature_selection import SelectFdr, chi2 >>> X, y = load_breast_cancer(return_X_y=True) >>> X.shape (569, 30) >>> X_new = SelectFdr(chi2, alpha=0.01).fit_transform(X, y) >>> X_new.shape (569, 16) Methods fit(X, y) Run score function on (X, y) and get the appropriate features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
get_support([indices]) Get a mask, or integer index, of the features selected inverse_transform(X) Reverse the transformation operation set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. 
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr
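A sketch of the alpha bound in practice: the Benjamini-Hochberg cutoff alpha * i / n_features never exceeds alpha, so every feature SelectFdr keeps also has an uncorrected p-value at or below alpha.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFdr, chi2

X, y = load_breast_cancer(return_X_y=True)
sel = SelectFdr(chi2, alpha=0.01).fit(X, y)
mask = sel.get_support()           # boolean mask of retained features
kept_pvalues = sel.pvalues_[mask]  # all at or below alpha
```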
sklearn.feature_selection.SelectFdr class sklearn.feature_selection.SelectFdr(score_func=<function f_classif>, *, alpha=0.05) [source] Filter: Select the p-values for an estimated false discovery rate. This uses the Benjamini-Hochberg procedure. alpha is an upper bound on the expected false discovery rate. Read more in the User Guide. Parameters score_funccallable, default=f_classif Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). Default is f_classif (see below “See Also”). The default function only works with classification tasks. alphafloat, default=5e-2 The highest uncorrected p-value for features to keep. Attributes scores_array-like of shape (n_features,) Scores of features. pvalues_array-like of shape (n_features,) p-values of feature scores. See also f_classif ANOVA F-value between label/feature for classification tasks. mutual_info_classif Mutual information for a discrete target. chi2 Chi-squared stats of non-negative features for classification tasks. f_regression F-value between label/feature for regression tasks. mutual_info_regression Mutual information for a continuous target. SelectPercentile Select features based on percentile of the highest scores. SelectKBest Select features based on the k highest scores. SelectFpr Select features based on a false positive rate test. SelectFwe Select features based on family-wise error rate. GenericUnivariateSelect Univariate feature selector with configurable mode. References https://en.wikipedia.org/wiki/False_discovery_rate Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.feature_selection import SelectFdr, chi2 >>> X, y = load_breast_cancer(return_X_y=True) >>> X.shape (569, 30) >>> X_new = SelectFdr(chi2, alpha=0.01).fit_transform(X, y) >>> X_new.shape (569, 16) Methods fit(X, y) Run score function on (X, y) and get the appropriate features. fit_transform(X[, y]) Fit to data, then transform it. 
get_params([deep]) Get parameters for this estimator. get_support([indices]) Get a mask, or integer index, of the features selected inverse_transform(X) Reverse the transformation operation set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. 
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.selectfdr
fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr.fit
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr.get_params
get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr.get_support
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform.
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr.inverse_transform
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr.set_params
transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr.transform
class sklearn.feature_selection.SelectFpr(score_func=<function f_classif>, *, alpha=0.05) [source] Filter: Select the p-values below alpha based on an FPR test. FPR test stands for False Positive Rate test. It controls the total amount of false detections. Read more in the User Guide. Parameters score_funccallable, default=f_classif Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). Default is f_classif (see below “See Also”). The default function only works with classification tasks. alphafloat, default=5e-2 The highest p-value for features to be kept. Attributes scores_array-like of shape (n_features,) Scores of features. pvalues_array-like of shape (n_features,) p-values of feature scores. See also f_classif ANOVA F-value between label/feature for classification tasks. chi2 Chi-squared stats of non-negative features for classification tasks. mutual_info_classif Mutual information for a discrete target. f_regression F-value between label/feature for regression tasks. mutual_info_regression Mutual information for a continuous target. SelectPercentile Select features based on percentile of the highest scores. SelectKBest Select features based on the k highest scores. SelectFdr Select features based on an estimated false discovery rate. SelectFwe Select features based on family-wise error rate. GenericUnivariateSelect Univariate feature selector with configurable mode. Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.feature_selection import SelectFpr, chi2 >>> X, y = load_breast_cancer(return_X_y=True) >>> X.shape (569, 30) >>> X_new = SelectFpr(chi2, alpha=0.01).fit_transform(X, y) >>> X_new.shape (569, 16) Methods fit(X, y) Run score function on (X, y) and get the appropriate features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
get_support([indices]) Get a mask, or integer index, of the features selected inverse_transform(X) Reverse the transformation operation set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. 
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr
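A sketch contrasting the two tests at the same alpha: SelectFpr keeps every feature with an uncorrected p-value at or below alpha, while SelectFdr applies the stricter Benjamini-Hochberg cutoff alpha * i / n_features, so FPR selection always keeps at least as many features as FDR selection.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFdr, SelectFpr, chi2

X, y = load_breast_cancer(return_X_y=True)
n_fpr = SelectFpr(chi2, alpha=0.01).fit(X, y).get_support().sum()
n_fdr = SelectFdr(chi2, alpha=0.01).fit(X, y).get_support().sum()
# FDR-selected features are a subset of FPR-selected features
```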
sklearn.feature_selection.SelectFpr class sklearn.feature_selection.SelectFpr(score_func=<function f_classif>, *, alpha=0.05) [source] Filter: Select the p-values below alpha based on an FPR test. FPR test stands for False Positive Rate test. It controls the total amount of false detections. Read more in the User Guide. Parameters score_funccallable, default=f_classif Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). Default is f_classif (see below “See Also”). The default function only works with classification tasks. alphafloat, default=5e-2 The highest p-value for features to be kept. Attributes scores_array-like of shape (n_features,) Scores of features. pvalues_array-like of shape (n_features,) p-values of feature scores. See also f_classif ANOVA F-value between label/feature for classification tasks. chi2 Chi-squared stats of non-negative features for classification tasks. mutual_info_classif Mutual information for a discrete target. f_regression F-value between label/feature for regression tasks. mutual_info_regression Mutual information for a continuous target. SelectPercentile Select features based on percentile of the highest scores. SelectKBest Select features based on the k highest scores. SelectFdr Select features based on an estimated false discovery rate. SelectFwe Select features based on family-wise error rate. GenericUnivariateSelect Univariate feature selector with configurable mode. Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.feature_selection import SelectFpr, chi2 >>> X, y = load_breast_cancer(return_X_y=True) >>> X.shape (569, 30) >>> X_new = SelectFpr(chi2, alpha=0.01).fit_transform(X, y) >>> X_new.shape (569, 16) Methods fit(X, y) Run score function on (X, y) and get the appropriate features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
get_support([indices]) Get a mask, or integer index, of the features selected inverse_transform(X) Reverse the transformation operation set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. 
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.selectfpr
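A minimal end-to-end sketch of the SelectFpr workflow described above, using the iris dataset for illustration (the score function shown, f_classif, is the default):

```python
# SelectFpr keeps only the features whose ANOVA F-test p-value falls
# below alpha, i.e. it controls the false positive rate of selection.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFpr, f_classif

X, y = load_iris(return_X_y=True)
selector = SelectFpr(f_classif, alpha=0.01)  # keep features with p-value < 0.01
X_new = selector.fit_transform(X, y)
print(X.shape, "->", X_new.shape)
```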
fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr.fit
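After fit, the selector stores the score function's outputs as the `scores_` and `pvalues_` attributes, one entry per input feature. A small sketch (iris data assumed for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFpr

X, y = load_iris(return_X_y=True)
selector = SelectFpr().fit(X, y)  # default score function: f_classif
print(selector.pvalues_)          # one p-value per input feature
```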
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr.fit_transform
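For a deterministic score function such as the default f_classif, `fit_transform(X, y)` is a convenience equivalent to calling `fit(X, y)` followed by `transform(X)` — a quick check (iris data assumed):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFpr

X, y = load_iris(return_X_y=True)
a = SelectFpr().fit_transform(X, y)       # one call
b = SelectFpr().fit(X, y).transform(X)    # two calls, same result
assert np.array_equal(a, b)
```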
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr.get_params
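For SelectFpr, the returned dictionary holds the constructor parameters, `alpha` and `score_func`:

```python
from sklearn.feature_selection import SelectFpr

params = SelectFpr(alpha=0.1).get_params()
print(sorted(params))  # ['alpha', 'score_func']
```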
get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr.get_support
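The two return forms describe the same selection: the integer indices are exactly the positions where the boolean mask is True. A sketch on fitted iris data:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFpr

X, y = load_iris(return_X_y=True)
selector = SelectFpr(alpha=0.01).fit(X, y)
mask = selector.get_support()              # boolean mask over input features
idx = selector.get_support(indices=True)   # integer positions of kept features
assert np.array_equal(np.flatnonzero(mask), idx)
```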
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform.
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr.inverse_transform
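A round trip through transform and inverse_transform restores the original width, with zero columns where the removed features used to be (iris data assumed for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFpr

X, y = load_iris(return_X_y=True)
selector = SelectFpr(alpha=0.01).fit(X, y)
X_r = selector.inverse_transform(selector.transform(X))
# Same shape as X; any dropped features come back as all-zero columns.
assert X_r.shape == X.shape
```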
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr.set_params
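The nested `<component>__<parameter>` form is most useful when the selector sits inside a Pipeline — a sketch (step names are arbitrary here):

```python
from sklearn.feature_selection import SelectFpr
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

pipe = Pipeline([("select", SelectFpr()), ("clf", LogisticRegression())])
pipe.set_params(select__alpha=0.01)  # nested <component>__<parameter> form
```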
transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr.transform
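Because the selection is decided entirely at fit time, transform applies the same column choice to any data with matching width — e.g. a held-out test split (iris data assumed):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFpr
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, _ = train_test_split(X, y, random_state=0)
selector = SelectFpr(alpha=0.01).fit(X_tr, y_tr)
X_te_sel = selector.transform(X_te)  # columns chosen on the training set
```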
class sklearn.feature_selection.SelectFromModel(estimator, *, threshold=None, prefit=False, norm_order=1, max_features=None, importance_getter='auto') [source] Meta-transformer for selecting features based on importance weights. New in version 0.17. Read more in the User Guide. Parameters estimatorobject The base estimator from which the transformer is built. This can be either a fitted (if prefit is set to True) or a non-fitted estimator. The estimator must have either a feature_importances_ or coef_ attribute after fitting. thresholdstring or float, default=None The threshold value to use for feature selection. Features whose importance is greater than or equal to the threshold are kept while the others are discarded. If “median” (resp. “mean”), then the threshold value is the median (resp. the mean) of the feature importances. A scaling factor (e.g., “1.25*mean”) may also be used. If None and if the estimator has a parameter penalty set to l1, either explicitly or implicitly (e.g., Lasso), the threshold used is 1e-5. Otherwise, “mean” is used by default. prefitbool, default=False Whether a prefit model is expected to be passed into the constructor directly or not. If True, transform must be called directly and SelectFromModel cannot be used with cross_val_score, GridSearchCV and similar utilities that clone the estimator. Otherwise train the model using fit and then transform to do feature selection. norm_ordernon-zero int, inf, -inf, default=1 Order of the norm used to filter the vectors of coefficients below threshold in the case where the coef_ attribute of the estimator is of dimension 2. max_featuresint, default=None The maximum number of features to select. To only select based on max_features, set threshold=-np.inf. New in version 0.20. importance_getterstr or callable, default=’auto’ If ‘auto’, uses the feature importance either through a coef_ attribute or feature_importances_ attribute of estimator. 
Also accepts a string that specifies an attribute name/path for extracting feature importance (implemented with attrgetter). For example, give regressor_.coef_ in case of TransformedTargetRegressor or named_steps.clf.feature_importances_ in case of Pipeline with its last step named clf. If callable, overrides the default feature importance getter. The callable is passed the fitted estimator and should return the importance for each feature. New in version 0.24. Attributes estimator_an estimator The base estimator from which the transformer is built. This is stored only when a non-fitted estimator is passed to the SelectFromModel, i.e. when prefit is False. threshold_float The threshold value used for feature selection. See also RFE Recursive feature elimination based on importance weights. RFECV Recursive feature elimination with built-in cross-validated selection of the best number of features. SequentialFeatureSelector Sequential cross-validation based feature selection. Does not rely on importance weights. Notes Allows NaN/Inf in the input if the underlying estimator does as well. Examples >>> from sklearn.feature_selection import SelectFromModel >>> from sklearn.linear_model import LogisticRegression >>> X = [[ 0.87, -1.34, 0.31 ], ... [-2.79, -0.02, -0.85 ], ... [-1.34, -0.48, -2.55 ], ... [ 1.92, 1.48, 0.65 ]] >>> y = [0, 1, 0, 1] >>> selector = SelectFromModel(estimator=LogisticRegression()).fit(X, y) >>> selector.estimator_.coef_ array([[-0.3252302 , 0.83462377, 0.49750423]]) >>> selector.threshold_ 0.55245... >>> selector.get_support() array([False, True, False]) >>> selector.transform(X) array([[-1.34], [-0.02], [-0.48], [ 1.48]]) Methods fit(X[, y]) Fit the SelectFromModel meta-transformer. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
get_support([indices]) Get a mask, or integer index, of the features selected inverse_transform(X) Reverse the transformation operation partial_fit(X[, y]) Fit the SelectFromModel meta-transformer only once. set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. fit(X, y=None, **fit_params) [source] Fit the SelectFromModel meta-transformer. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,), default=None The target values (integers that correspond to classes in classification, real numbers in regression). **fit_paramsOther estimator-specific parameters Returns selfobject fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. 
If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. partial_fit(X, y=None, **fit_params) [source] Fit the SelectFromModel meta-transformer only once. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,), default=None The target values (integers that correspond to classes in classification, real numbers in regression). **fit_paramsOther estimator-specific parameters Returns selfobject set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
sklearn.modules.generated.sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel
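A sketch of the max_features behaviour noted above: setting threshold=-np.inf disables the importance cutoff, so selection is driven purely by max_features and exactly the highest-importance features are kept (iris data and a random forest assumed for illustration):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_iris(return_X_y=True)
# threshold=-np.inf: no importance cutoff; keep the 2 most important features.
sfm = SelectFromModel(RandomForestClassifier(random_state=0),
                      threshold=-np.inf, max_features=2).fit(X, y)
X_sel = sfm.transform(X)
```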