get_n_splits(X=None, y=None, groups=None) [source]
Returns the number of splitting iterations in the cross-validator.

Parameters
X : object
Always ignored, exists for compatibility.
y : object
Always ignored, exists for compatibility.
groups : object
Always ignored, exists for compatibility.

Returns
n_splits : int
Returns the number of splitting iterations in the cross-validator.
split(X, y=None, groups=None) [source]
Generate indices to split data into training and test set.

Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,)
The target variable for supervised learning problems.
groups : array-like of shape (n_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set.

Yields
train : ndarray
The training set indices for that split.
test : ndarray
The testing set indices for that split.

Notes
Randomized CV splitters may return different results for each call of split. You can make the results identical by setting random_state to an integer.
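The reproducibility remark in the Notes can be checked directly: fixing random_state to an integer makes ShuffleSplit deterministic, so two identically-seeded splitters yield the same index sequences. A minimal sketch (the toy data below is illustrative, not from the docs):

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(10).reshape(5, 2)  # 5 samples, 2 features

# Two splitters with the same integer random_state produce identical splits.
ss_a = ShuffleSplit(n_splits=3, test_size=0.4, random_state=0)
ss_b = ShuffleSplit(n_splits=3, test_size=0.4, random_state=0)

splits_a = [(tr.tolist(), te.tolist()) for tr, te in ss_a.split(X)]
splits_b = [(tr.tolist(), te.tolist()) for tr, te in ss_b.split(X)]
print(splits_a == splits_b)  # True
```

With random_state=None instead, each splitter would draw fresh random permutations and the two sequences would in general differ.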
class sklearn.model_selection.StratifiedKFold(n_splits=5, *, shuffle=False, random_state=None) [source]
Stratified K-Folds cross-validator. Provides train/test indices to split data in train/test sets. This cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class. Read more in the User Guide.

Parameters
n_splits : int, default=5
Number of folds. Must be at least 2. Changed in version 0.22: n_splits default value changed from 3 to 5.
shuffle : bool, default=False
Whether to shuffle each class's samples before splitting into batches. Note that the samples within each split will not be shuffled.
random_state : int, RandomState instance or None, default=None
When shuffle is True, random_state affects the ordering of the indices, which controls the randomness of each fold for each class. Otherwise, leave random_state as None. Pass an int for reproducible output across multiple function calls. See Glossary.

See also
RepeatedStratifiedKFold
Repeats Stratified K-Fold n times.

Notes
The implementation is designed to:
- Generate test sets such that all contain the same distribution of classes, or as close as possible.
- Be invariant to class label: relabelling y = ["Happy", "Sad"] to y = [1, 0] should not change the indices generated.
- Preserve order dependencies in the dataset ordering, when shuffle=False: all samples from class k in some test set were contiguous in y, or separated in y by samples from classes other than k.
- Generate test sets where the smallest and largest differ by at most one sample.
Changed in version 0.22: The previous implementation did not follow the last constraint.

Examples
>>> import numpy as np
>>> from sklearn.model_selection import StratifiedKFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 1, 1])
>>> skf = StratifiedKFold(n_splits=2)
>>> skf.get_n_splits(X, y)
2
>>> print(skf)
StratifiedKFold(n_splits=2, random_state=None, shuffle=False)
>>> for train_index, test_index in skf.split(X, y):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [1 3] TEST: [0 2]
TRAIN: [0 2] TEST: [1 3]
Methods
get_n_splits([X, y, groups]) Returns the number of splitting iterations in the cross-validator
split(X, y[, groups]) Generate indices to split data into training and test set.
get_n_splits(X=None, y=None, groups=None) [source]
Returns the number of splitting iterations in the cross-validator.

Parameters
X : object
Always ignored, exists for compatibility.
y : object
Always ignored, exists for compatibility.
groups : object
Always ignored, exists for compatibility.

Returns
n_splits : int
Returns the number of splitting iterations in the cross-validator.
split(X, y, groups=None) [source]
Generate indices to split data into training and test set.

Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features. Note that providing y is sufficient to generate the splits and hence np.zeros(n_samples) may be used as a placeholder for X instead of actual training data.
y : array-like of shape (n_samples,)
The target variable for supervised learning problems. Stratification is done based on the y labels.
groups : object
Always ignored, exists for compatibility.

Yields
train : ndarray
The training set indices for that split.
test : ndarray
The testing set indices for that split.

Notes
Randomized CV splitters may return different results for each call of split. You can make the results identical by setting random_state to an integer.
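The stratification guarantee described above, that every test fold preserves the overall class distribution as closely as possible, can be verified with a small sketch; the imbalanced toy labels below are chosen for illustration and use the np.zeros placeholder for X mentioned in the split docs:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Imbalanced labels: 9 samples of class 0, 3 of class 1 (a 3:1 ratio).
y = np.array([0] * 9 + [1] * 3)
X = np.zeros((len(y), 1))  # placeholder features; y alone determines the folds

skf = StratifiedKFold(n_splits=3)
test_counts = [np.bincount(y[test_idx]).tolist()
               for _, test_idx in skf.split(X, y)]
print(test_counts)  # [[3, 1], [3, 1], [3, 1]] -- each fold keeps the 3:1 ratio
```

A plain KFold on the same ordered y would instead put only class-0 samples in the first fold.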
Examples using sklearn.model_selection.StratifiedKFold
Recursive feature elimination with cross-validation
Test with permutations the significance of a classification score
GMM covariances
Receiver Operating Characteristic (ROC) with cross validation
Visualizing cross-validation behavior in scikit-learn
Effect of varying threshold for self-training
class sklearn.model_selection.StratifiedShuffleSplit(n_splits=10, *, test_size=None, train_size=None, random_state=None) [source]
Stratified ShuffleSplit cross-validator. Provides train/test indices to split data in train/test sets. This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, which returns stratified randomized folds. The folds are made by preserving the percentage of samples for each class. Note: like the ShuffleSplit strategy, stratified random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets. Read more in the User Guide.

Parameters
n_splits : int, default=10
Number of re-shuffling & splitting iterations.
test_size : float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size. If train_size is also None, it will be set to 0.1.
train_size : float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.
random_state : int, RandomState instance or None, default=None
Controls the randomness of the training and testing indices produced. Pass an int for reproducible output across multiple function calls. See Glossary.

Examples
>>> import numpy as np
>>> from sklearn.model_selection import StratifiedShuffleSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 0, 1, 1, 1])
>>> sss = StratifiedShuffleSplit(n_splits=5, test_size=0.5, random_state=0)
>>> sss.get_n_splits(X, y)
5
>>> print(sss)
StratifiedShuffleSplit(n_splits=5, random_state=0, ...)
>>> for train_index, test_index in sss.split(X, y):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [5 2 3] TEST: [4 1 0]
TRAIN: [5 1 4] TEST: [0 2 3]
TRAIN: [5 0 2] TEST: [4 3 1]
TRAIN: [4 1 0] TEST: [2 3 5]
TRAIN: [0 5 1] TEST: [3 4 2]
Methods
get_n_splits([X, y, groups]) Returns the number of splitting iterations in the cross-validator
split(X, y[, groups]) Generate indices to split data into training and test set.
get_n_splits(X=None, y=None, groups=None) [source]
Returns the number of splitting iterations in the cross-validator.

Parameters
X : object
Always ignored, exists for compatibility.
y : object
Always ignored, exists for compatibility.
groups : object
Always ignored, exists for compatibility.

Returns
n_splits : int
Returns the number of splitting iterations in the cross-validator.
split(X, y, groups=None) [source]
Generate indices to split data into training and test set.

Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features. Note that providing y is sufficient to generate the splits and hence np.zeros(n_samples) may be used as a placeholder for X instead of actual training data.
y : array-like of shape (n_samples,) or (n_samples, n_labels)
The target variable for supervised learning problems. Stratification is done based on the y labels.
groups : object
Always ignored, exists for compatibility.

Yields
train : ndarray
The training set indices for that split.
test : ndarray
The testing set indices for that split.

Notes
Randomized CV splitters may return different results for each call of split. You can make the results identical by setting random_state to an integer.
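A short sketch combining two points made above: the np.zeros placeholder for X is sufficient, and every randomized split still preserves the class percentages exactly when the stratified counts divide evenly. The toy labels are illustrative:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

y = np.array([0] * 8 + [1] * 4)  # 12 samples, 2:1 class ratio
X = np.zeros(len(y))             # per the docs, a placeholder for X suffices

sss = StratifiedShuffleSplit(n_splits=4, test_size=0.25, random_state=0)
for train_idx, test_idx in sss.split(X, y):
    # 0.25 * 12 = 3 test samples, stratified as 2 of class 0 and 1 of class 1
    assert np.bincount(y[test_idx]).tolist() == [2, 1]
```

Unlike StratifiedKFold, the four test sets here are independent random draws and may overlap, as the Note about non-distinct folds warns.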
Examples using sklearn.model_selection.StratifiedShuffleSplit
Visualizing cross-validation behavior in scikit-learn
RBF SVM parameters
class sklearn.model_selection.TimeSeriesSplit(n_splits=5, *, max_train_size=None, test_size=None, gap=0) [source]
Time Series cross-validator. Provides train/test indices to split time series data samples that are observed at fixed time intervals, in train/test sets. In each split, test indices must be higher than before, and thus shuffling in cross validator is inappropriate. This cross-validation object is a variation of KFold. In the kth split, it returns the first k folds as train set and the (k+1)th fold as test set. Note that unlike standard cross-validation methods, successive training sets are supersets of those that come before them. Read more in the User Guide. New in version 0.18.

Parameters
n_splits : int, default=5
Number of splits. Must be at least 2. Changed in version 0.22: n_splits default value changed from 3 to 5.
max_train_size : int, default=None
Maximum size for a single training set.
test_size : int, default=None
Used to limit the size of the test set. Defaults to n_samples // (n_splits + 1), which is the maximum allowed value with gap=0. New in version 0.24.
gap : int, default=0
Number of samples to exclude from the end of each train set before the test set. New in version 0.24.

Notes
The training set in the i-th split has size i * n_samples // (n_splits + 1) + n_samples % (n_splits + 1), with a test set of size n_samples // (n_splits + 1) by default, where n_samples is the number of samples.

Examples
>>> import numpy as np
>>> from sklearn.model_selection import TimeSeriesSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> tscv = TimeSeriesSplit()
>>> print(tscv)
TimeSeriesSplit(gap=0, max_train_size=None, n_splits=5, test_size=None)
>>> for train_index, test_index in tscv.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [0] TEST: [1]
TRAIN: [0 1] TEST: [2]
TRAIN: [0 1 2] TEST: [3]
TRAIN: [0 1 2 3] TEST: [4]
TRAIN: [0 1 2 3 4] TEST: [5]
>>> # Fix test_size to 2 with 12 samples
>>> X = np.random.randn(12, 2)
>>> y = np.random.randint(0, 2, 12)
>>> tscv = TimeSeriesSplit(n_splits=3, test_size=2)
>>> for train_index, test_index in tscv.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [0 1 2 3 4 5] TEST: [6 7]
TRAIN: [0 1 2 3 4 5 6 7] TEST: [8 9]
TRAIN: [0 1 2 3 4 5 6 7 8 9] TEST: [10 11]
>>> # Add in a 2 period gap
>>> tscv = TimeSeriesSplit(n_splits=3, test_size=2, gap=2)
>>> for train_index, test_index in tscv.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [0 1 2 3] TEST: [6 7]
TRAIN: [0 1 2 3 4 5] TEST: [8 9]
TRAIN: [0 1 2 3 4 5 6 7] TEST: [10 11]
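The train/test sizes in the first example above follow the formula from the Notes; a small sketch verifying it, with parameters chosen to mirror that example (6 samples, 5 splits):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

n_samples, n_splits = 6, 5
X = np.zeros((n_samples, 2))

# Notes formula: train size in split i is
#   i * n_samples // (n_splits + 1) + n_samples % (n_splits + 1)
train_sizes = [len(tr) for tr, _ in TimeSeriesSplit(n_splits=n_splits).split(X)]
expected = [i * n_samples // (n_splits + 1) + n_samples % (n_splits + 1)
            for i in range(1, n_splits + 1)]
print(train_sizes, expected)  # [1, 2, 3, 4, 5] [1, 2, 3, 4, 5]
```

The `n_samples % (n_splits + 1)` remainder term folds any samples that do not divide evenly into the first training set, which is why early training sets can be larger than `i * n_samples // (n_splits + 1)` alone.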
Methods
get_n_splits([X, y, groups]) Returns the number of splitting iterations in the cross-validator
split(X[, y, groups]) Generate indices to split data into training and test set.
get_n_splits(X=None, y=None, groups=None) [source]
Returns the number of splitting iterations in the cross-validator.

Parameters
X : object
Always ignored, exists for compatibility.
y : object
Always ignored, exists for compatibility.
groups : object
Always ignored, exists for compatibility.

Returns
n_splits : int
Returns the number of splitting iterations in the cross-validator.
split(X, y=None, groups=None) [source]
Generate indices to split data into training and test set.

Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,)
Always ignored, exists for compatibility.
groups : array-like of shape (n_samples,)
Always ignored, exists for compatibility.

Yields
train : ndarray
The training set indices for that split.
test : ndarray
The testing set indices for that split.
Examples using sklearn.model_selection.TimeSeriesSplit
Visualizing cross-validation behavior in scikit-learn
sklearn.model_selection.train_test_split(*arrays, test_size=None, train_size=None, random_state=None, shuffle=True, stratify=None) [source]
Split arrays or matrices into random train and test subsets. Quick utility that wraps input validation and next(ShuffleSplit().split(X, y)) and application to input data into a single call for splitting (and optionally subsampling) data in a one-liner. Read more in the User Guide.

Parameters
*arrays : sequence of indexables with same length / shape[0]
Allowed inputs are lists, numpy arrays, scipy-sparse matrices or pandas dataframes.
test_size : float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size. If train_size is also None, it will be set to 0.25.
train_size : float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.
random_state : int, RandomState instance or None, default=None
Controls the shuffling applied to the data before applying the split. Pass an int for reproducible output across multiple function calls. See Glossary.
shuffle : bool, default=True
Whether or not to shuffle the data before splitting. If shuffle=False then stratify must be None.
stratify : array-like, default=None
If not None, data is split in a stratified fashion, using this as the class labels. Read more in the User Guide.

Returns
splitting : list, length=2 * len(arrays)
List containing train-test split of inputs. New in version 0.16: If the input is sparse, the output will be a scipy.sparse.csr_matrix. Else, output type is the same as the input type.

Examples
>>> import numpy as np
>>> from sklearn.model_selection import train_test_split
>>> X, y = np.arange(10).reshape((5, 2)), range(5)
>>> X
array([[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]])
>>> list(y)
[0, 1, 2, 3, 4]
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, test_size=0.33, random_state=42)
...
>>> X_train
array([[4, 5],
[0, 1],
[6, 7]])
>>> y_train
[2, 0, 3]
>>> X_test
array([[2, 3],
[8, 9]])
>>> y_test
[1, 4]
>>> train_test_split(y, shuffle=False)
[[0, 1, 2], [3, 4]]
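A sketch of the stratify parameter in action: with a rare class, stratified splitting keeps the class ratio in both halves, so the rare class cannot land entirely in one split. The toy data is chosen for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)  # rare class: only 2 positives out of 10

# stratify=y guarantees both halves keep the 4:1 class ratio.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)
print(np.bincount(y_tr).tolist(), np.bincount(y_te).tolist())  # [4, 1] [4, 1]
```

Without stratify, an unlucky random_state could put both positives in the same half, which matters when the downstream metric depends on seeing every class in the test set.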
sklearn.model_selection.validation_curve(estimator, X, y, *, param_name, param_range, groups=None, cv=None, scoring=None, n_jobs=None, pre_dispatch='all', verbose=0, error_score=nan, fit_params=None) [source]
Validation curve. Determine training and test scores for varying parameter values. Compute scores for an estimator with different values of a specified parameter. This is similar to grid search with one parameter. However, this will also compute training scores and is merely a utility for plotting the results. Read more in the User Guide. Parameters
estimatorobject type that implements the “fit” and “predict” methods
An object of that type which is cloned for each validation.
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,) or (n_samples, n_outputs) or None
Target relative to X for classification or regression; None for unsupervised learning.
param_namestr
Name of the parameter that will be varied.
param_rangearray-like of shape (n_values,)
The values of the parameter that will be evaluated.
groupsarray-like of shape (n_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold).
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross validation, int, to specify the number of folds in a (Stratified)KFold,
CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
scoringstr or callable, default=None
A str (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).
n_jobsint, default=None
Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the combinations of each parameter value and each cross-validation split. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
pre_dispatchint or str, default=’all’
Number of predispatched jobs for parallel execution (default is all). The option can reduce the allocated memory. The str can be an expression like ‘2*n_jobs’.
verboseint, default=0
Controls the verbosity: the higher, the more messages.
fit_paramsdict, default=None
Parameters to pass to the fit method of the estimator. New in version 0.24.
error_score‘raise’ or numeric, default=np.nan
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. New in version 0.20. Returns
train_scoresarray of shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scoresarray of shape (n_ticks, n_cv_folds)
Scores on test set. Notes See Plotting Validation Curves | sklearn.modules.generated.sklearn.model_selection.validation_curve#sklearn.model_selection.validation_curve |
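A minimal usage sketch of the call documented above; the estimator and the varied parameter (C of a linear SVC) are chosen purely for illustration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Evaluate three values of the regularization strength C.
param_range = np.array([0.1, 1.0, 10.0])
train_scores, test_scores = validation_curve(
    SVC(kernel="linear"), X, y,
    param_name="C", param_range=param_range, cv=5)

# One row per parameter value (n_ticks), one column per CV fold.
print(train_scores.shape)  # (3, 5)
print(test_scores.shape)   # (3, 5)
```

Averaging each array over axis 1 gives the curves that are typically plotted against param_range.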
class sklearn.multiclass.OneVsOneClassifier(estimator, *, n_jobs=None) [source]
One-vs-one multiclass strategy. This strategy consists in fitting one classifier per class pair. At prediction time, the class which received the most votes is selected. Since it requires fitting n_classes * (n_classes - 1) / 2 classifiers, this method is usually slower than one-vs-the-rest, due to its O(n_classes^2) complexity. However, this method may be advantageous for algorithms such as kernel algorithms which don’t scale well with n_samples. This is because each individual learning problem only involves a small subset of the data whereas, with one-vs-the-rest, the complete dataset is used n_classes times. Read more in the User Guide. Parameters
estimatorestimator object
An estimator object implementing fit and one of decision_function or predict_proba.
n_jobsint, default=None
The number of jobs to use for the computation: the n_classes * (n_classes - 1) / 2 OVO problems are computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
estimators_list of n_classes * (n_classes - 1) / 2 estimators
Estimators used for predictions.
classes_numpy array of shape [n_classes]
Array containing labels.
n_classes_int
Number of classes.
pairwise_indices_list, length = len(estimators_), or None
Indices of samples used when training the estimators. None when estimator’s pairwise tag is False. Deprecated since version 0.24: The _pairwise attribute is deprecated in 0.24. From 1.1 (renaming of 0.25) and onward, pairwise_indices_ will use the pairwise estimator tag instead. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.multiclass import OneVsOneClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, test_size=0.33, shuffle=True, random_state=0)
>>> clf = OneVsOneClassifier(
... LinearSVC(random_state=0)).fit(X_train, y_train)
>>> clf.predict(X_test[:10])
array([2, 1, 0, 2, 0, 2, 0, 1, 1, 1])
Methods
decision_function(X) Decision function for the OneVsOneClassifier.
fit(X, y) Fit underlying estimators.
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes]) Partially fit underlying estimators.
predict(X) Estimate the best class label for each sample in X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Decision function for the OneVsOneClassifier. The decision values for the samples are computed by adding the normalized sum of pair-wise classification confidence levels to the votes in order to disambiguate between the decision values when the votes for all the classes are equal leading to a tie. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Yarray-like of shape (n_samples, n_classes) or (n_samples,) for binary classification.
Changed in version 0.19: output shape changed to (n_samples,) to conform to scikit-learn conventions for binary classification.
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
yarray-like of shape (n_samples,)
Multi-class targets. Returns
self
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None) [source]
Partially fit underlying estimators. Should be used when memory does not allow training on all the data at once. Data can then be passed in chunks over several calls, where the first call must receive an array of all target classes. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
yarray-like of shape (n_samples,)
Multi-class targets.
classesarray, shape (n_classes, )
Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is only required in the first call of partial_fit and can be omitted in the subsequent calls. Returns
self
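A sketch of the chunked-fitting pattern described above, assuming a base estimator that itself implements partial_fit (GaussianNB here; the data split into two chunks is arbitrary):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier
from sklearn.naive_bayes import GaussianNB  # supports partial_fit

X, y = load_iris(return_X_y=True)

# Shuffle so each chunk is overwhelmingly likely to contain all classes.
rng = np.random.RandomState(0)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]

ovo = OneVsOneClassifier(GaussianNB())
# The first call must declare every class that will ever appear.
ovo.partial_fit(X[:75], y[:75], classes=np.unique(y))
# Subsequent chunks omit `classes`.
ovo.partial_fit(X[75:], y[75:])

print(len(ovo.estimators_))  # 3 pairwise estimators for 3 classes
```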
predict(X) [source]
Estimate the best class label for each sample in X. This is implemented as argmax(decision_function(X), axis=1) which will return the label of the class with most votes by estimators predicting the outcome of a decision for each possible class pair. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
ynumpy array of shape [n_samples]
Predicted multi-class targets.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier |
decision_function(X) [source]
Decision function for the OneVsOneClassifier. The decision values for the samples are computed by adding the normalized sum of pair-wise classification confidence levels to the votes in order to disambiguate between the decision values when the votes for all the classes are equal leading to a tie. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Yarray-like of shape (n_samples, n_classes) or (n_samples,) for binary classification.
Changed in version 0.19: output shape changed to (n_samples,) to conform to scikit-learn conventions for binary classification. | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier.decision_function |
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
yarray-like of shape (n_samples,)
Multi-class targets. Returns
self | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier.get_params |
partial_fit(X, y, classes=None) [source]
Partially fit underlying estimators. Should be used when memory does not allow training on all the data at once. Data can then be passed in chunks over several calls, where the first call must receive an array of all target classes. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
yarray-like of shape (n_samples,)
Multi-class targets.
classesarray, shape (n_classes, )
Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is only required in the first call of partial_fit and can be omitted in the subsequent calls. Returns
self | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier.partial_fit |
predict(X) [source]
Estimate the best class label for each sample in X. This is implemented as argmax(decision_function(X), axis=1) which will return the label of the class with most votes by estimators predicting the outcome of a decision for each possible class pair. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
ynumpy array of shape [n_samples]
Predicted multi-class targets. | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier.predict |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier.set_params |
class sklearn.multiclass.OneVsRestClassifier(estimator, *, n_jobs=None) [source]
One-vs-the-rest (OvR) multiclass strategy. Also known as one-vs-all, this strategy consists in fitting one classifier per class. For each classifier, the class is fitted against all the other classes. In addition to its computational efficiency (only n_classes classifiers are needed), one advantage of this approach is its interpretability. Since each class is represented by one and only one classifier, it is possible to gain knowledge about the class by inspecting its corresponding classifier. This is the most commonly used strategy for multiclass classification and is a fair default choice. OneVsRestClassifier can also be used for multilabel classification. To use this feature, provide an indicator matrix for the target y when calling .fit. In other words, the target labels should be formatted as a 2D binary (0/1) matrix, where [i, j] == 1 indicates the presence of label j in sample i. This estimator uses the binary relevance method to perform multilabel classification, which involves training one binary classifier independently for each label. Read more in the User Guide. Parameters
estimatorestimator object
An estimator object implementing fit and one of decision_function or predict_proba.
n_jobsint, default=None
The number of jobs to use for the computation: the n_classes one-vs-rest problems are computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Changed in version 0.20: n_jobs default changed from 1 to None. Attributes
estimators_list of n_classes estimators
Estimators used for predictions.
coef_ndarray of shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function. This attribute exists only if the estimators_ defines coef_. Deprecated since version 0.24: This attribute is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). If you use this attribute in RFE or SelectFromModel, you may pass a callable to the importance_getter parameter that extracts the feature importances from estimators_.
intercept_ndarray of shape (1, 1) or (n_classes, 1)
If y is binary, the shape is (1, 1) else (n_classes, 1). This attribute exists only if the estimators_ defines intercept_. Deprecated since version 0.24: This attribute is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). If you use this attribute in RFE or SelectFromModel, you may pass a callable to the importance_getter parameter that extracts the feature importances from estimators_.
classes_array, shape = [n_classes]
Class labels.
n_classes_int
Number of classes.
label_binarizer_LabelBinarizer object
Object used to transform multiclass labels to binary labels and vice-versa.
multilabel_boolean
Whether this is a multilabel classifier. See also
sklearn.multioutput.MultiOutputClassifier
Alternate way of extending an estimator for multilabel classification.
sklearn.preprocessing.MultiLabelBinarizer
Transform iterable of iterables to binary indicator matrix. Examples >>> import numpy as np
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import SVC
>>> X = np.array([
... [10, 10],
... [8, 10],
... [-5, 5.5],
... [-5.4, 5.5],
... [-20, -20],
... [-15, -20]
... ])
>>> y = np.array([0, 0, 1, 1, 2, 2])
>>> clf = OneVsRestClassifier(SVC()).fit(X, y)
>>> clf.predict([[-19, -20], [9, 9], [-5, 5]])
array([2, 0, 1])
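As noted above, OneVsRestClassifier also accepts a multilabel target given as a binary indicator matrix. A minimal sketch with invented data and LogisticRegression as the base estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Multilabel target: Y[i, j] == 1 means label j applies to sample i.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
Y = np.array([[1, 0], [1, 1], [0, 1], [0, 1]])

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

# Predictions come back in the same indicator-matrix format.
print(clf.predict(X).shape)  # (4, 2)
print(clf.multilabel_)       # True
```

Internally one binary classifier is trained per column of Y (binary relevance), so labels are predicted independently of each other.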
Methods
decision_function(X) Returns the distance of each sample from the decision boundary for each class.
fit(X, y) Fit underlying estimators.
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes]) Partially fit underlying estimators.
predict(X) Predict multi-class targets using underlying estimators.
predict_proba(X) Probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Returns the distance of each sample from the decision boundary for each class. This can only be used with estimators which implement the decision_function method. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Tarray-like of shape (n_samples, n_classes) or (n_samples,) for binary classification.
Changed in version 0.19: output shape changed to (n_samples,) to conform to scikit-learn conventions for binary classification.
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification. Returns
self
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property multilabel_
Whether this is a multilabel classifier
partial_fit(X, y, classes=None) [source]
Partially fit underlying estimators. Should be used when memory does not allow training on all the data at once. Data can then be passed in chunks over several calls. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification.
classesarray, shape (n_classes, )
Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is only required in the first call of partial_fit and can be omitted in the subsequent calls. Returns
self
predict(X) [source]
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Predicted multi-class targets.
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by label of classes. Note that in the multilabel case, each sample can have any number of labels. This returns the marginal probability that the given sample has the label in question. For example, it is entirely consistent that two labels both have a 90% probability of applying to a given sample. In the single label multiclass case, the rows of the returned matrix sum to 1. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
T(sparse) array-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
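The single-label multiclass behaviour described above (rows normalized to sum to 1) can be checked directly; LogisticRegression as the base estimator is an illustrative choice:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

proba = clf.predict_proba(X[:5])
# Single-label multiclass case: each row sums to 1.
print(proba.shape)                        # (5, 3)
print(np.allclose(proba.sum(axis=1), 1))  # True
```

In the multilabel case (indicator-matrix y) the rows are marginal per-label probabilities and need not sum to 1.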
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier |
sklearn.multiclass.OneVsRestClassifier
class sklearn.multiclass.OneVsRestClassifier(estimator, *, n_jobs=None) [source]
One-vs-the-rest (OvR) multiclass strategy. Also known as one-vs-all, this strategy consists in fitting one classifier per class. For each classifier, the class is fitted against all the other classes. In addition to its computational efficiency (only n_classes classifiers are needed), one advantage of this approach is its interpretability. Since each class is represented by one and one classifier only, it is possible to gain knowledge about the class by inspecting its corresponding classifier. This is the most commonly used strategy for multiclass classification and is a fair default choice. OneVsRestClassifier can also be used for multilabel classification. To use this feature, provide an indicator matrix for the target y when calling .fit. In other words, the target labels should be formatted as a 2D binary (0/1) matrix, where [i, j] == 1 indicates the presence of label j in sample i. This estimator uses the binary relevance method to perform multilabel classification, which involves training one binary classifier independently for each label. Read more in the User Guide. Parameters
estimatorestimator object
An estimator object implementing fit and one of decision_function or predict_proba.
n_jobsint, default=None
The number of jobs to use for the computation: the n_classes one-vs-rest problems are computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Changed in version v0.20: n_jobs default changed from 1 to None Attributes
estimators_list of n_classes estimators
Estimators used for predictions.
coef_ndarray of shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function. This attribute exists only if the estimators_ defines coef_. Deprecated since version 0.24: This attribute is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). If you use this attribute in RFE or SelectFromModel, you may pass a callable to the importance_getter parameter that extracts feature the importances from estimators_.
intercept_ndarray of shape (1, 1) or (n_classes, 1)
If y is binary, the shape is (1, 1) else (n_classes, 1) This attribute exists only if the estimators_ defines intercept_. Deprecated since version 0.24: This attribute is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). If you use this attribute in RFE or SelectFromModel, you may pass a callable to the importance_getter parameter that extracts feature the importances from estimators_.
classes_array, shape = [n_classes]
Class labels.
n_classes_int
Number of classes.
label_binarizer_LabelBinarizer object
Object used to transform multiclass labels to binary labels and vice-versa.
multilabel_boolean
Whether this is a multilabel classifier See also
sklearn.multioutput.MultiOutputClassifier
Alternate way of extending an estimator for multilabel classification.
sklearn.preprocessing.MultiLabelBinarizer
Transform iterable of iterables to binary indicator matrix. Examples >>> import numpy as np
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import SVC
>>> X = np.array([
... [10, 10],
... [8, 10],
... [-5, 5.5],
... [-5.4, 5.5],
... [-20, -20],
... [-15, -20]
... ])
>>> y = np.array([0, 0, 1, 1, 2, 2])
>>> clf = OneVsRestClassifier(SVC()).fit(X, y)
>>> clf.predict([[-19, -20], [9, 9], [-5, 5]])
array([2, 0, 1])
Methods
decision_function(X) Returns the distance of each sample from the decision boundary for each class.
fit(X, y) Fit underlying estimators.
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes]) Partially fit underlying estimators
predict(X) Predict multi-class targets using underlying estimators.
predict_proba(X) Probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Returns the distance of each sample from the decision boundary for each class. This can only be used with estimators which implement the decision_function method. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Tarray-like of shape (n_samples, n_classes) or (n_samples,) for binary classification.
Changed in version 0.19: output shape changed to (n_samples,) to conform to scikit-learn conventions for binary classification.
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification. Returns
self
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property multilabel_
Whether this is a multilabel classifier
partial_fit(X, y, classes=None) [source]
Partially fit underlying estimators Should be used when memory is inefficient to train all data. Chunks of data can be passed in several iteration. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification.
classesarray, shape (n_classes, )
Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is only required in the first call of partial_fit and can be omitted in the subsequent calls. Returns
self
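A minimal sketch of incremental training (the data is illustrative, and SGDClassifier is just one estimator that supports partial_fit): the first call must pass the full set of classes, later calls may omit it.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.multiclass import OneVsRestClassifier

X, y = make_classification(n_samples=100, n_features=4, n_classes=3,
                           n_informative=3, n_redundant=0, random_state=0)
clf = OneVsRestClassifier(SGDClassifier(random_state=0))
# First chunk: classes is required.
clf.partial_fit(X[:50], y[:50], classes=np.unique(y))
# Later chunks: classes can be omitted.
clf.partial_fit(X[50:], y[50:])
preds = clf.predict(X[:5])
print(preds.shape)  # (5,)
```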
predict(X) [source]
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Predicted multi-class targets.
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by label of classes. Note that in the multilabel case, each sample can have any number of labels. This returns the marginal probability that the given sample has the label in question. For example, it is entirely consistent that two labels both have a 90% probability of applying to a given sample. In the single label multiclass case, the rows of the returned matrix sum to 1. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
T(sparse) array-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
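A minimal sketch (illustrative data, not from the docs above) of the normalization behaviour described here: in the single-label multiclass case, each row of the returned matrix sums to 1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = make_classification(n_samples=80, n_features=4, n_classes=3,
                           n_informative=3, n_redundant=0, random_state=0)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
proba = clf.predict_proba(X[:4])
print(proba.shape)                           # (4, 3)
print(np.allclose(proba.sum(axis=1), 1.0))   # True: single-label rows sum to 1
```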
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
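A minimal sketch of the nested-parameter convention mentioned above: parameters of the wrapped estimator are addressed as estimator__&lt;parameter&gt;.

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

clf = OneVsRestClassifier(SVC(C=1.0))
# Update the C parameter of the wrapped SVC through the wrapper.
clf.set_params(estimator__C=10.0)
print(clf.get_params()["estimator__C"])  # 10.0
```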
Examples using sklearn.multiclass.OneVsRestClassifier
Multilabel classification
Receiver Operating Characteristic (ROC)
Precision-Recall
Classifier Chain | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier |
decision_function(X) [source]
Returns the distance of each sample from the decision boundary for each class. This can only be used with estimators which implement the decision_function method. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Tarray-like of shape (n_samples, n_classes) or (n_samples,) for binary classification.
Changed in version 0.19: output shape changed to (n_samples,) to conform to scikit-learn conventions for binary classification. | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.decision_function |
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification. Returns
self | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.get_params |
property multilabel_
Whether this is a multilabel classifier.
partial_fit(X, y, classes=None) [source]
Partially fit underlying estimators. Should be used when there is not enough memory to train on all the data at once. Chunks of data can be passed over several iterations. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Multi-class targets. An indicator matrix turns on multilabel classification.
classesarray, shape (n_classes, )
Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is only required in the first call of partial_fit and can be omitted in the subsequent calls. Returns
self | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.partial_fit |
predict(X) [source]
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes)
Predicted multi-class targets. | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.predict |
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by label of classes. Note that in the multilabel case, each sample can have any number of labels. This returns the marginal probability that the given sample has the label in question. For example, it is entirely consistent that two labels both have a 90% probability of applying to a given sample. In the single label multiclass case, the rows of the returned matrix sum to 1. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
T(sparse) array-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.predict_proba |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier.set_params |
class sklearn.multiclass.OutputCodeClassifier(estimator, *, code_size=1.5, random_state=None, n_jobs=None) [source]
(Error-Correcting) Output-Code multiclass strategy. Output-code-based strategies consist of representing each class with a binary code (an array of 0s and 1s). At fitting time, one binary classifier per bit in the code book is fitted. At prediction time, the classifiers are used to project new points into the class space, and the class closest to the points is chosen. The main advantage of these strategies is that the number of classifiers used can be controlled by the user, either for compressing the model (0 < code_size < 1) or for making the model more robust to errors (code_size > 1). See the documentation for more details. Read more in the User Guide. Parameters
estimatorestimator object
An estimator object implementing fit and one of decision_function or predict_proba.
code_sizefloat
Percentage of the number of classes to be used to create the code book. A number between 0 and 1 will require fewer classifiers than one-vs-the-rest. A number greater than 1 will require more classifiers than one-vs-the-rest.
random_stateint, RandomState instance, default=None
The generator used to initialize the codebook. Pass an int for reproducible output across multiple function calls. See Glossary.
n_jobsint, default=None
The number of jobs to use for the computation: the multiclass problems are computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
estimators_list of int(n_classes * code_size) estimators
Estimators used for predictions.
classes_numpy array of shape [n_classes]
Array containing labels.
code_book_numpy array of shape [n_classes, code_size]
Binary array containing the code of each class. References
1
“Solving multiclass learning problems via error-correcting output codes”, Dietterich T., Bakiri G., Journal of Artificial Intelligence Research 2, 1995.
2
“The error coding method and PICTs”, James G., Hastie T., Journal of Computational and Graphical statistics 7, 1998.
3
“The Elements of Statistical Learning”, Hastie T., Tibshirani R., Friedman J., page 606 (second-edition) 2008. Examples >>> from sklearn.multiclass import OutputCodeClassifier
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=100, n_features=4,
... n_informative=2, n_redundant=0,
... random_state=0, shuffle=False)
>>> clf = OutputCodeClassifier(
... estimator=RandomForestClassifier(random_state=0),
... random_state=0).fit(X, y)
>>> clf.predict([[0, 0, 0, 0]])
array([1])
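A minimal sketch (illustrative data and estimator choice, not from the docs above) of how code_size controls the number of binary classifiers: int(n_classes * code_size) estimators are fitted, one per code-book column.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

X, y = make_classification(n_samples=120, n_features=6, n_classes=4,
                           n_informative=4, random_state=0)
clf = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                           code_size=2, random_state=0).fit(X, y)
# int(4 classes * code_size 2) = 8 binary estimators.
print(len(clf.estimators_))   # 8
print(clf.code_book_.shape)   # (4, 8)
```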
Methods
fit(X, y) Fit underlying estimators.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict multi-class targets using underlying estimators.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
ynumpy array of shape [n_samples]
Multi-class targets. Returns
self
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
ynumpy array of shape [n_samples]
Predicted multi-class targets.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multiclass.outputcodeclassifier#sklearn.multiclass.OutputCodeClassifier |
sklearn.multiclass.OutputCodeClassifier
class sklearn.multiclass.OutputCodeClassifier(estimator, *, code_size=1.5, random_state=None, n_jobs=None) [source]
(Error-Correcting) Output-Code multiclass strategy. Output-code-based strategies consist of representing each class with a binary code (an array of 0s and 1s). At fitting time, one binary classifier per bit in the code book is fitted. At prediction time, the classifiers are used to project new points into the class space, and the class closest to the points is chosen. The main advantage of these strategies is that the number of classifiers used can be controlled by the user, either for compressing the model (0 < code_size < 1) or for making the model more robust to errors (code_size > 1). See the documentation for more details. Read more in the User Guide. Parameters
estimatorestimator object
An estimator object implementing fit and one of decision_function or predict_proba.
code_sizefloat
Percentage of the number of classes to be used to create the code book. A number between 0 and 1 will require fewer classifiers than one-vs-the-rest. A number greater than 1 will require more classifiers than one-vs-the-rest.
random_stateint, RandomState instance, default=None
The generator used to initialize the codebook. Pass an int for reproducible output across multiple function calls. See Glossary.
n_jobsint, default=None
The number of jobs to use for the computation: the multiclass problems are computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
estimators_list of int(n_classes * code_size) estimators
Estimators used for predictions.
classes_numpy array of shape [n_classes]
Array containing labels.
code_book_numpy array of shape [n_classes, code_size]
Binary array containing the code of each class. References
1
“Solving multiclass learning problems via error-correcting output codes”, Dietterich T., Bakiri G., Journal of Artificial Intelligence Research 2, 1995.
2
“The error coding method and PICTs”, James G., Hastie T., Journal of Computational and Graphical statistics 7, 1998.
3
“The Elements of Statistical Learning”, Hastie T., Tibshirani R., Friedman J., page 606 (second-edition) 2008. Examples >>> from sklearn.multiclass import OutputCodeClassifier
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=100, n_features=4,
... n_informative=2, n_redundant=0,
... random_state=0, shuffle=False)
>>> clf = OutputCodeClassifier(
... estimator=RandomForestClassifier(random_state=0),
... random_state=0).fit(X, y)
>>> clf.predict([[0, 0, 0, 0]])
array([1])
Methods
fit(X, y) Fit underlying estimators.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict multi-class targets using underlying estimators.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
ynumpy array of shape [n_samples]
Multi-class targets. Returns
self
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
ynumpy array of shape [n_samples]
Predicted multi-class targets.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multiclass.outputcodeclassifier |
fit(X, y) [source]
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
ynumpy array of shape [n_samples]
Multi-class targets. Returns
self | sklearn.modules.generated.sklearn.multiclass.outputcodeclassifier#sklearn.multiclass.OutputCodeClassifier.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.multiclass.outputcodeclassifier#sklearn.multiclass.OutputCodeClassifier.get_params |
predict(X) [source]
Predict multi-class targets using underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data. Returns
ynumpy array of shape [n_samples]
Predicted multi-class targets. | sklearn.modules.generated.sklearn.multiclass.outputcodeclassifier#sklearn.multiclass.OutputCodeClassifier.predict |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.multiclass.outputcodeclassifier#sklearn.multiclass.OutputCodeClassifier.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multiclass.outputcodeclassifier#sklearn.multiclass.OutputCodeClassifier.set_params |
class sklearn.multioutput.ClassifierChain(base_estimator, *, order=None, cv=None, random_state=None) [source]
A multi-label model that arranges binary classifiers into a chain. Each model makes a prediction in the order specified by the chain using all of the available features provided to the model plus the predictions of models that are earlier in the chain. Read more in the User Guide. New in version 0.19. Parameters
base_estimatorestimator
The base estimator from which the classifier chain is built.
orderarray-like of shape (n_outputs,) or ‘random’, default=None
If None, the order will be determined by the order of columns in the label matrix Y: order = [0, 1, 2, ..., Y.shape[1] - 1]
The order of the chain can be explicitly set by providing a list of integers. For example, for a chain of length 5: order = [1, 3, 2, 4, 0]
means that the first model in the chain will make predictions for column 1 in the Y matrix, the second model will make predictions for column 3, etc. If order is ‘random’, a random ordering will be used.
cvint, cross-validation generator or an iterable, default=None
Determines whether to use cross-validated predictions or true labels for the results of previous estimators in the chain. Possible inputs for cv are: None, to use true labels when fitting; an integer, to specify the number of folds in a (Stratified)KFold; a CV splitter; or an iterable yielding (train, test) splits as arrays of indices.
random_stateint, RandomState instance or None, optional (default=None)
If order='random', determines random number generation for the chain order. In addition, it controls the random seed given at each base_estimator at each chaining iteration. Thus, it is only used when base_estimator exposes a random_state. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
classes_list
A list of arrays of length len(estimators_) containing the class labels for each estimator in the chain.
estimators_list
A list of clones of base_estimator.
order_list
The order of labels in the classifier chain. See also
RegressorChain
Equivalent for regression.
MultiOutputClassifier
Classifies each output independently rather than chaining. References Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank, “Classifier Chains for Multi-label Classification”, 2009. Examples >>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.multioutput import ClassifierChain
>>> X, Y = make_multilabel_classification(
... n_samples=12, n_classes=3, random_state=0
... )
>>> X_train, X_test, Y_train, Y_test = train_test_split(
... X, Y, random_state=0
... )
>>> base_lr = LogisticRegression(solver='lbfgs', random_state=0)
>>> chain = ClassifierChain(base_lr, order='random', random_state=0)
>>> chain.fit(X_train, Y_train).predict(X_test)
array([[1., 1., 0.],
[1., 0., 0.],
[0., 1., 0.]])
>>> chain.predict_proba(X_test)
array([[0.8387..., 0.9431..., 0.4576...],
[0.8878..., 0.3684..., 0.2640...],
[0.0321..., 0.9935..., 0.0625...]])
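A minimal sketch (illustrative data, not from the docs above) of setting an explicit chain order and reading it back from the fitted order_ attribute:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=100, n_classes=3,
                                      random_state=0)
# The first model predicts label column 2, the second column 0, etc.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order=[2, 0, 1]).fit(X, Y)
print(list(chain.order_))  # [2, 0, 1]
```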
Methods
decision_function(X) Evaluate the decision_function of the models in the chain.
fit(X, Y) Fit the model to data matrix X and targets Y.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict on the data matrix X using the ClassifierChain model.
predict_proba(X) Predict probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Evaluate the decision_function of the models in the chain. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Y_decisionarray-like of shape (n_samples, n_classes)
Returns the decision function of the sample for each model in the chain.
fit(X, Y) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_classes)
The target values. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict on the data matrix X using the ClassifierChain model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_predarray-like of shape (n_samples, n_classes)
The predicted values.
predict_proba(X) [source]
Predict probability estimates. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Returns
Y_probarray-like of shape (n_samples, n_classes)
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain |
sklearn.multioutput.ClassifierChain
class sklearn.multioutput.ClassifierChain(base_estimator, *, order=None, cv=None, random_state=None) [source]
A multi-label model that arranges binary classifiers into a chain. Each model makes a prediction in the order specified by the chain using all of the available features provided to the model plus the predictions of models that are earlier in the chain. Read more in the User Guide. New in version 0.19. Parameters
base_estimatorestimator
The base estimator from which the classifier chain is built.
orderarray-like of shape (n_outputs,) or ‘random’, default=None
If None, the order will be determined by the order of columns in the label matrix Y: order = [0, 1, 2, ..., Y.shape[1] - 1]
The order of the chain can be explicitly set by providing a list of integers. For example, for a chain of length 5: order = [1, 3, 2, 4, 0]
means that the first model in the chain will make predictions for column 1 in the Y matrix, the second model will make predictions for column 3, etc. If order is ‘random’, a random ordering will be used.
cvint, cross-validation generator or an iterable, default=None
Determines whether to use cross-validated predictions or true labels for the results of previous estimators in the chain. Possible inputs for cv are: None, to use true labels when fitting; an integer, to specify the number of folds in a (Stratified)KFold; a CV splitter; or an iterable yielding (train, test) splits as arrays of indices.
random_stateint, RandomState instance or None, optional (default=None)
If order='random', determines random number generation for the chain order. In addition, it controls the random seed given at each base_estimator at each chaining iteration. Thus, it is only used when base_estimator exposes a random_state. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
classes_list
A list of arrays of length len(estimators_) containing the class labels for each estimator in the chain.
estimators_list
A list of clones of base_estimator.
order_list
The order of labels in the classifier chain. See also
RegressorChain
Equivalent for regression.
MultiOutputClassifier
Classifies each output independently rather than chaining. References Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank, “Classifier Chains for Multi-label Classification”, 2009. Examples >>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.multioutput import ClassifierChain
>>> X, Y = make_multilabel_classification(
... n_samples=12, n_classes=3, random_state=0
... )
>>> X_train, X_test, Y_train, Y_test = train_test_split(
... X, Y, random_state=0
... )
>>> base_lr = LogisticRegression(solver='lbfgs', random_state=0)
>>> chain = ClassifierChain(base_lr, order='random', random_state=0)
>>> chain.fit(X_train, Y_train).predict(X_test)
array([[1., 1., 0.],
[1., 0., 0.],
[0., 1., 0.]])
>>> chain.predict_proba(X_test)
array([[0.8387..., 0.9431..., 0.4576...],
[0.8878..., 0.3684..., 0.2640...],
[0.0321..., 0.9935..., 0.0625...]])
Methods
decision_function(X) Evaluate the decision_function of the models in the chain.
fit(X, Y) Fit the model to data matrix X and targets Y.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict on the data matrix X using the ClassifierChain model.
predict_proba(X) Predict probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Evaluate the decision_function of the models in the chain. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Y_decisionarray-like of shape (n_samples, n_classes)
Returns the decision function of the sample for each model in the chain.
fit(X, Y) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_classes)
The target values. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict on the data matrix X using the ClassifierChain model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_predarray-like of shape (n_samples, n_classes)
The predicted values.
predict_proba(X) [source]
Predict probability estimates. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Returns
Y_probarray-like of shape (n_samples, n_classes)
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.multioutput.ClassifierChain
Classifier Chain | sklearn.modules.generated.sklearn.multioutput.classifierchain |
decision_function(X) [source]
Evaluate the decision_function of the models in the chain. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Y_decisionarray-like of shape (n_samples, n_classes)
Returns the decision function of the sample for each model in the chain. | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain.decision_function |
fit(X, Y) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_classes)
The target values. Returns
selfobject | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain.get_params |
predict(X) [source]
Predict on the data matrix X using the ClassifierChain model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_predarray-like of shape (n_samples, n_classes)
The predicted values. | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain.predict |
predict_proba(X) [source]
Predict probability estimates. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_probarray-like of shape (n_samples, n_classes)
The predicted probabilities. | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain.predict_proba |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain.score |
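A tiny worked example of the subset accuracy described above, computed with accuracy_score directly (which is what score reports for multilabel predictions); the label matrices are made up for illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score

Y_true = np.array([[1, 0, 1], [0, 1, 1]])
Y_pred = np.array([[1, 0, 1], [0, 1, 0]])  # second sample: one label wrong

# Subset accuracy: a sample counts only if its entire label set matches,
# so one of the two samples is correct.
print(accuracy_score(Y_true, Y_pred))  # 0.5
```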
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain.set_params |
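The `<component>__<parameter>` convention mentioned above can be sketched with a Pipeline, chosen here only as a familiar nested estimator:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])

# Nested parameters are addressed as <component>__<parameter>.
pipe.set_params(clf__C=0.5)
print(pipe.named_steps['clf'].C)  # 0.5
```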
class sklearn.multioutput.MultiOutputClassifier(estimator, *, n_jobs=None) [source]
Multi-target classification. This strategy consists of fitting one classifier per target. It is a simple way to extend classifiers that do not natively support multi-target classification. Parameters
estimatorestimator object
An estimator object implementing fit, score and predict_proba.
n_jobsint or None, optional (default=None)
The number of jobs to run in parallel. fit, predict and partial_fit (if supported by the passed estimator) will be parallelized for each target. When individual estimators are fast to train or predict, using n_jobs > 1 can result in slower performance due to the parallelism overhead. None means 1 unless in a joblib.parallel_backend context. -1 means using all available processes / threads. See Glossary for more details. Changed in version 0.20: n_jobs default changed from 1 to None Attributes
classes_ndarray of shape (n_classes,)
Class labels.
estimators_list of n_output estimators
Estimators used for predictions. Examples >>> import numpy as np
>>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> X, y = make_multilabel_classification(n_classes=3, random_state=0)
>>> clf = MultiOutputClassifier(KNeighborsClassifier()).fit(X, y)
>>> clf.predict(X[-2:])
array([[1, 1, 0], [1, 1, 1]])
Methods
fit(X, Y[, sample_weight]) Fit the model to data matrix X and targets Y.
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Incrementally fit the model to data.
predict(X) Predict multi-output variable using a model trained for each target variable.
score(X, y) Returns the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, Y, sample_weight=None, **fit_params) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_classes)
The target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying classifier supports sample weights.
**fit_paramsdict of string -> object
Parameters passed to the estimator.fit method of each step. New in version 0.23. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incrementally fit the model to data. Fit a separate model for each output variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data.
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets.
classeslist of ndarray of shape (n_outputs,)
Each array contains the unique classes for one output, as str or int. It can be obtained via [np.unique(y[:, i]) for i in range(y.shape[1])], where y is the target matrix of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying classifier supports sample weights. Returns
selfobject
predict(X) [source]
Predict multi-output variable using a model trained for each target variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data. Returns
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets predicted across multiple predictors. Note: Separate models are generated for each predictor.
property predict_proba
Probability estimates. Returns prediction probabilities for each class of each output. This method will raise a ValueError if any of the estimators do not have predict_proba. Parameters
Xarray-like of shape (n_samples, n_features)
Data. Returns
parray of shape (n_samples, n_classes), or a list of n_outputs such arrays if n_outputs > 1.
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. Changed in version 0.19: This function now returns a list of arrays where the length of the list is n_outputs, and each array is (n_samples, n_classes) for that particular output.
score(X, y) [source]
Returns the mean accuracy on the given test data and labels. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples, n_outputs)
True values for X. Returns
scoresfloat
The accuracy_score of self.predict(X) versus y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier |
sklearn.multioutput.MultiOutputClassifier | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier |
fit(X, Y, sample_weight=None, **fit_params) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_classes)
The target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying classifier supports sample weights.
**fit_paramsdict of string -> object
Parameters passed to the estimator.fit method of each step. New in version 0.23. Returns
selfobject | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier.get_params |
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incrementally fit the model to data. Fit a separate model for each output variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data.
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets.
classeslist of ndarray of shape (n_outputs,)
Each array contains the unique classes for one output, as str or int. It can be obtained via [np.unique(y[:, i]) for i in range(y.shape[1])], where y is the target matrix of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying classifier supports sample weights. Returns
selfobject | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier.partial_fit |
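A minimal sketch of incremental fitting under the rules above; the toy data and the SGDClassifier base estimator are illustrative. Note that classes is only required on the first call:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.multioutput import MultiOutputClassifier

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([[0, 1], [1, 0], [1, 1], [0, 0]])

clf = MultiOutputClassifier(SGDClassifier(random_state=0))
# classes must enumerate the labels of each output on the first call.
classes = [np.unique(y[:, i]) for i in range(y.shape[1])]
clf.partial_fit(X[:2], y[:2], classes=classes)
clf.partial_fit(X[2:], y[2:])  # subsequent calls omit classes

print(clf.predict(X).shape)  # (4, 2)
```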
predict(X) [source]
Predict multi-output variable using a model trained for each target variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data. Returns
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets predicted across multiple predictors. Note: Separate models are generated for each predictor. | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier.predict |
property predict_proba
Probability estimates. Returns prediction probabilities for each class of each output. This method will raise a ValueError if any of the estimators do not have predict_proba. Parameters
Xarray-like of shape (n_samples, n_features)
Data. Returns
parray of shape (n_samples, n_classes), or a list of n_outputs such arrays if n_outputs > 1.
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. Changed in version 0.19: This function now returns a list of arrays where the length of the list is n_outputs, and each array is (n_samples, n_classes) for that particular output. | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier.predict_proba |
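The list-of-arrays return shape can be sketched as follows, reusing the synthetic multilabel setup from the class example (the dataset and KNeighborsClassifier choice are illustrative):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_multilabel_classification(n_classes=3, random_state=0)
clf = MultiOutputClassifier(KNeighborsClassifier()).fit(X, y)

proba = clf.predict_proba(X[:4])
# A list with one (n_samples, n_classes) probability array per output.
print(len(proba), proba[0].shape[0])  # 3 4
```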
score(X, y) [source]
Returns the mean accuracy on the given test data and labels. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples, n_outputs)
True values for X. Returns
scoresfloat
The accuracy_score of self.predict(X) versus y. | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier.set_params |
class sklearn.multioutput.MultiOutputRegressor(estimator, *, n_jobs=None) [source]
Multi-target regression. This strategy consists of fitting one regressor per target. It is a simple way to extend regressors that do not natively support multi-target regression. New in version 0.18. Parameters
estimatorestimator object
An estimator object implementing fit and predict.
n_jobsint or None, optional (default=None)
The number of jobs to run in parallel. fit, predict and partial_fit (if supported by the passed estimator) will be parallelized for each target. When individual estimators are fast to train or predict, using n_jobs > 1 can result in slower performance due to the parallelism overhead. None means 1 unless in a joblib.parallel_backend context. -1 means using all available processes / threads. See Glossary for more details. Changed in version 0.20: n_jobs default changed from 1 to None Attributes
estimators_list of n_output estimators
Estimators used for predictions. Examples >>> import numpy as np
>>> from sklearn.datasets import load_linnerud
>>> from sklearn.multioutput import MultiOutputRegressor
>>> from sklearn.linear_model import Ridge
>>> X, y = load_linnerud(return_X_y=True)
>>> clf = MultiOutputRegressor(Ridge(random_state=123)).fit(X, y)
>>> clf.predict(X[[0]])
array([[176..., 35..., 57...]])
Methods
fit(X, y[, sample_weight]) Fit the model to data.
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, sample_weight]) Incrementally fit the model to data.
predict(X) Predict multi-output variable using a model trained for each target variable.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None, **fit_params) [source]
Fit the model to data. Fit a separate model for each output variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data.
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets. An indicator matrix turns on multilabel estimation.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying regressor supports sample weights.
**fit_paramsdict of string -> object
Parameters passed to the estimator.fit method of each step. New in version 0.23. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, sample_weight=None) [source]
Incrementally fit the model to data. Fit a separate model for each output variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data.
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying regressor supports sample weights. Returns
selfobject
predict(X) [source]
Predict multi-output variable using a model trained for each target variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data. Returns
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets predicted across multiple predictors. Note: Separate models are generated for each predictor.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor |
sklearn.multioutput.MultiOutputRegressor
Examples using sklearn.multioutput.MultiOutputRegressor
Comparing random forests and the multi-output meta estimator | sklearn.modules.generated.sklearn.multioutput.multioutputregressor |
fit(X, y, sample_weight=None, **fit_params) [source]
Fit the model to data. Fit a separate model for each output variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data.
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets. An indicator matrix turns on multilabel estimation.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying regressor supports sample weights.
**fit_paramsdict of string -> object
Parameters passed to the estimator.fit method of each step. New in version 0.23. Returns
selfobject | sklearn.modules.generated.sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor.get_params |
partial_fit(X, y, sample_weight=None) [source]
Incrementally fit the model to data. Fit a separate model for each output variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data.
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Only supported if the underlying regressor supports sample weights. Returns
selfobject | sklearn.modules.generated.sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor.partial_fit |
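Incremental multi-output regression can be sketched as below; the random linear targets and the SGDRegressor base estimator are illustrative, not part of this API:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = X @ rng.randn(3, 2)  # two linear targets

reg = MultiOutputRegressor(SGDRegressor(random_state=0))
for start in range(0, 20, 5):  # stream the data in mini-batches
    reg.partial_fit(X[start:start + 5], y[start:start + 5])

print(reg.predict(X[:2]).shape)  # (2, 2)
```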
predict(X) [source]
Predict multi-output variable using a model trained for each target variable. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Data. Returns
y{array-like, sparse matrix} of shape (n_samples, n_outputs)
Multi-output targets predicted across multiple predictors. Note: Separate models are generated for each predictor. | sklearn.modules.generated.sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor.score |
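The \(u/v\) definition above, worked through on a small hand-made target vector (the values are arbitrary):

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares: 1.5
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares: 29.1875
r2 = 1 - u / v
print(round(r2, 4))  # 0.9486
```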
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor.set_params |
class sklearn.multioutput.RegressorChain(base_estimator, *, order=None, cv=None, random_state=None) [source]
A multi-label model that arranges regressions into a chain. Each model makes a prediction in the order specified by the chain using all of the available features provided to the model plus the predictions of models that are earlier in the chain. Read more in the User Guide. New in version 0.20. Parameters
base_estimatorestimator
The base estimator from which the classifier chain is built.
orderarray-like of shape (n_outputs,) or ‘random’, default=None
If None, the order will be determined by the order of columns in the target matrix Y: order = [0, 1, 2, ..., Y.shape[1] - 1]
The order of the chain can be explicitly set by providing a list of integers. For example, for a chain of length 5: order = [1, 3, 2, 4, 0]
means that the first model in the chain will make predictions for column 1 in the Y matrix, the second model will make predictions for column 3, etc. If order is ‘random’, a random ordering will be used.
cvint, cross-validation generator or an iterable, default=None
Determines whether to use cross-validated predictions or true labels for the results of previous estimators in the chain. Possible inputs for cv are: None, to use true labels when fitting; an integer, to specify the number of folds in a (Stratified)KFold;
a CV splitter; an iterable yielding (train, test) splits as arrays of indices.
random_stateint, RandomState instance or None, optional (default=None)
If order='random', determines random number generation for the chain order. In addition, it controls the random seed given at each base_estimator at each chaining iteration. Thus, it is only used when base_estimator exposes a random_state. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
estimators_list
A list of clones of base_estimator.
order_list
The order of targets in the regressor chain. See also
ClassifierChain
Equivalent for classification.
MultiOutputRegressor
Learns each output independently rather than chaining. Examples >>> from sklearn.multioutput import RegressorChain
>>> from sklearn.linear_model import LinearRegression
>>> X, Y = [[1, 0], [0, 1], [1, 1]], [[0, 2], [1, 1], [2, 0]]
>>> chain = RegressorChain(base_estimator=LinearRegression(), order=[0, 1]).fit(X, Y)
>>> chain.predict(X)
array([[0., 2.],
[1., 1.],
[2., 0.]])
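To illustrate the order and random_state parameters described above, a minimal sketch reusing the example's data:

```python
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import RegressorChain

X = [[1, 0], [0, 1], [1, 1]]
Y = [[0, 2], [1, 1], [2, 0]]

# order='random' draws a permutation of the target columns; fixing
# random_state makes the chosen order (and the fitted chain) reproducible.
chain = RegressorChain(base_estimator=LinearRegression(),
                       order='random', random_state=0).fit(X, Y)

print(sorted(chain.order_))    # [0, 1] — a permutation of the target columns
print(len(chain.estimators_))  # 2 — one clone of base_estimator per target
```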
Methods
fit(X, Y, **fit_params) Fit the model to data matrix X and targets Y.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict on the data matrix X using the RegressorChain model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, Y, **fit_params) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_targets)
The target values.
**fit_paramsdict of string -> object
Parameters passed to the fit method at each step of the regressor chain. New in version 0.23. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict on the data matrix X using the RegressorChain model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_predarray-like of shape (n_samples, n_targets)
The predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred)
** 2).sum() and \(v\) is the total sum of squares ((y_true -
y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.regressorchain#sklearn.multioutput.RegressorChain |
sklearn.multioutput.RegressorChain
class sklearn.multioutput.RegressorChain(base_estimator, *, order=None, cv=None, random_state=None) [source]
A multi-label model that arranges regressions into a chain. Each model makes a prediction in the order specified by the chain using all of the available features provided to the model plus the predictions of models that are earlier in the chain. Read more in the User Guide. New in version 0.20. Parameters
base_estimatorestimator
The base estimator from which the regressor chain is built.
orderarray-like of shape (n_outputs,) or ‘random’, default=None
If None, the order will be determined by the order of columns in the target matrix Y: order = [0, 1, 2, ..., Y.shape[1] - 1]
The order of the chain can be explicitly set by providing a list of integers. For example, for a chain of length 5: order = [1, 3, 2, 4, 0]
means that the first model in the chain will make predictions for column 1 in the Y matrix, the second model will make predictions for column 3, etc. If order is ‘random’, a random ordering will be used.
cvint, cross-validation generator or an iterable, default=None
Determines whether to use cross-validated predictions or true labels for the results of previous estimators in the chain. Possible inputs for cv are: None, to use true labels when fitting; an integer, to specify the number of folds in a (Stratified)KFold;
a CV splitter; an iterable yielding (train, test) splits as arrays of indices.
random_stateint, RandomState instance or None, optional (default=None)
If order='random', determines random number generation for the chain order. In addition, it controls the random seed given at each base_estimator at each chaining iteration. Thus, it is only used when base_estimator exposes a random_state. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
estimators_list
A list of clones of base_estimator.
order_list
The order of targets in the regressor chain. See also
ClassifierChain
Equivalent for classification.
MultiOutputRegressor
Learns each output independently rather than chaining. Examples >>> from sklearn.multioutput import RegressorChain
>>> from sklearn.linear_model import LinearRegression
>>> X, Y = [[1, 0], [0, 1], [1, 1]], [[0, 2], [1, 1], [2, 0]]
>>> chain = RegressorChain(base_estimator=LinearRegression(), order=[0, 1]).fit(X, Y)
>>> chain.predict(X)
array([[0., 2.],
[1., 1.],
[2., 0.]])
Methods
fit(X, Y, **fit_params) Fit the model to data matrix X and targets Y.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict on the data matrix X using the RegressorChain model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, Y, **fit_params) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_targets)
The target values.
**fit_paramsdict of string -> object
Parameters passed to the fit method at each step of the regressor chain. New in version 0.23. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict on the data matrix X using the RegressorChain model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_predarray-like of shape (n_samples, n_targets)
The predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred)
** 2).sum() and \(v\) is the total sum of squares ((y_true -
y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.regressorchain |
fit(X, Y, **fit_params) [source]
Fit the model to data matrix X and targets Y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data.
Yarray-like of shape (n_samples, n_targets)
The target values.
**fit_paramsdict of string -> object
Parameters passed to the fit method at each step of the regressor chain. New in version 0.23. Returns
selfobject | sklearn.modules.generated.sklearn.multioutput.regressorchain#sklearn.multioutput.RegressorChain.fit |
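A hedged sketch of `**fit_params` forwarding: keyword arguments given to `fit` are passed on to each estimator in the chain, e.g. a `sample_weight` understood by `LinearRegression.fit` (the data and weights below are arbitrary):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import RegressorChain

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
Y = np.array([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0], [4.0, -2.0]])
w = np.array([1.0, 1.0, 1.0, 5.0])

# sample_weight is forwarded to the fit method of every chained estimator.
chain = RegressorChain(base_estimator=LinearRegression()).fit(X, Y, sample_weight=w)
print(len(chain.estimators_))  # 2 — one estimator per target column
```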
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.multioutput.regressorchain#sklearn.multioutput.RegressorChain.get_params |
predict(X) [source]
Predict on the data matrix X using the RegressorChain model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_predarray-like of shape (n_samples, n_targets)
The predicted values. | sklearn.modules.generated.sklearn.multioutput.regressorchain#sklearn.multioutput.RegressorChain.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred)
** 2).sum() and \(v\) is the total sum of squares ((y_true -
y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.multioutput.regressorchain#sklearn.multioutput.RegressorChain.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.multioutput.regressorchain#sklearn.multioutput.RegressorChain.set_params |
class sklearn.naive_bayes.BernoulliNB(*, alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None) [source]
Naive Bayes classifier for multivariate Bernoulli models. Like MultinomialNB, this classifier is suitable for discrete data. The difference is that while MultinomialNB works with occurrence counts, BernoulliNB is designed for binary/boolean features. Read more in the User Guide. Parameters
alphafloat, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
binarizefloat or None, default=0.0
Threshold for binarizing (mapping to booleans) of sample features. If None, input is presumed to already consist of binary vectors.
fit_priorbool, default=True
Whether to learn class prior probabilities or not. If false, a uniform prior will be used.
class_priorarray-like of shape (n_classes,), default=None
Prior probabilities of the classes. If specified the priors are not adjusted according to the data. Attributes
class_count_ndarray of shape (n_classes)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
class_log_prior_ndarray of shape (n_classes)
Log probability of each class (smoothed).
classes_ndarray of shape (n_classes,)
Class labels known to the classifier
coef_ndarray of shape (n_classes, n_features)
Mirrors feature_log_prob_ for interpreting BernoulliNB as a linear model.
feature_count_ndarray of shape (n_classes, n_features)
Number of samples encountered for each (class, feature) during fitting. This value is weighted by the sample weight when provided.
feature_log_prob_ndarray of shape (n_classes, n_features)
Empirical log probability of features given a class, P(x_i|y).
intercept_ndarray of shape (n_classes,)
Mirrors class_log_prior_ for interpreting BernoulliNB as a linear model.
n_features_int
Number of features of each sample. References C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 234-265. https://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html A. McCallum and K. Nigam (1998). A comparison of event models for naive Bayes text classification. Proc. AAAI/ICML-98 Workshop on Learning for Text Categorization, pp. 41-48. V. Metsis, I. Androutsopoulos and G. Paliouras (2006). Spam filtering with naive Bayes – Which naive Bayes? 3rd Conf. on Email and Anti-Spam (CEAS). Examples >>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> Y = np.array([1, 2, 3, 4, 4, 5])
>>> from sklearn.naive_bayes import BernoulliNB
>>> clf = BernoulliNB()
>>> clf.fit(X, Y)
BernoulliNB()
>>> print(clf.predict(X[2:3]))
[3]
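The binarize threshold described above can be checked directly: fitting on raw counts with binarize=0.0 is equivalent to pre-binarizing the data yourself and passing binarize=None (a small sketch reusing the example's random data):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.RandomState(1)
X = rng.randint(5, size=(6, 100))
y = np.array([1, 2, 3, 4, 4, 5])

# binarize=0.0 maps every value > 0 to 1 before fitting and predicting,
# so only the presence/absence of a feature matters, not its count.
clf = BernoulliNB(binarize=0.0).fit(X, y)
clf_pre = BernoulliNB(binarize=None).fit((X > 0).astype(float), y)

print(np.array_equal(clf.predict(X), clf_pre.predict((X > 0).astype(float))))  # True
```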
Methods
fit(X, y[, sample_weight]) Fit Naive Bayes classifier according to X, y
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Incremental fit on a batch of samples.
predict(X) Perform classification on an array of test vectors X.
predict_log_proba(X) Return log-probability estimates for the test vector X.
predict_proba(X) Return probability estimates for the test vector X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
classesarray-like of shape (n_classes), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit, can be omitted in subsequent calls.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
predict(X) [source]
Perform classification on an array of test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,)
Predicted target values for X
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
predict_proba(X) [source]
Return probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB |
sklearn.naive_bayes.BernoulliNB
class sklearn.naive_bayes.BernoulliNB(*, alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None) [source]
Naive Bayes classifier for multivariate Bernoulli models. Like MultinomialNB, this classifier is suitable for discrete data. The difference is that while MultinomialNB works with occurrence counts, BernoulliNB is designed for binary/boolean features. Read more in the User Guide. Parameters
alphafloat, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
binarizefloat or None, default=0.0
Threshold for binarizing (mapping to booleans) of sample features. If None, input is presumed to already consist of binary vectors.
fit_priorbool, default=True
Whether to learn class prior probabilities or not. If false, a uniform prior will be used.
class_priorarray-like of shape (n_classes,), default=None
Prior probabilities of the classes. If specified the priors are not adjusted according to the data. Attributes
class_count_ndarray of shape (n_classes)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
class_log_prior_ndarray of shape (n_classes)
Log probability of each class (smoothed).
classes_ndarray of shape (n_classes,)
Class labels known to the classifier
coef_ndarray of shape (n_classes, n_features)
Mirrors feature_log_prob_ for interpreting BernoulliNB as a linear model.
feature_count_ndarray of shape (n_classes, n_features)
Number of samples encountered for each (class, feature) during fitting. This value is weighted by the sample weight when provided.
feature_log_prob_ndarray of shape (n_classes, n_features)
Empirical log probability of features given a class, P(x_i|y).
intercept_ndarray of shape (n_classes,)
Mirrors class_log_prior_ for interpreting BernoulliNB as a linear model.
n_features_int
Number of features of each sample. References C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 234-265. https://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html A. McCallum and K. Nigam (1998). A comparison of event models for naive Bayes text classification. Proc. AAAI/ICML-98 Workshop on Learning for Text Categorization, pp. 41-48. V. Metsis, I. Androutsopoulos and G. Paliouras (2006). Spam filtering with naive Bayes – Which naive Bayes? 3rd Conf. on Email and Anti-Spam (CEAS). Examples >>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> Y = np.array([1, 2, 3, 4, 4, 5])
>>> from sklearn.naive_bayes import BernoulliNB
>>> clf = BernoulliNB()
>>> clf.fit(X, Y)
BernoulliNB()
>>> print(clf.predict(X[2:3]))
[3]
Methods
fit(X, y[, sample_weight]) Fit Naive Bayes classifier according to X, y
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Incremental fit on a batch of samples.
predict(X) Perform classification on an array of test vectors X.
predict_log_proba(X) Return log-probability estimates for the test vector X.
predict_proba(X) Return probability estimates for the test vector X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
classesarray-like of shape (n_classes), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit, can be omitted in subsequent calls.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
predict(X) [source]
Perform classification on an array of test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,)
Predicted target values for X
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
predict_proba(X) [source]
Return probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.naive_bayes.BernoulliNB
Hashing feature transformation using Totally Random Trees
Classification of text documents using sparse features | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb |
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.get_params |
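A short sketch of get_params on a BernoulliNB instance (the alpha value is arbitrary):

```python
from sklearn.naive_bayes import BernoulliNB

clf = BernoulliNB(alpha=0.5)
params = clf.get_params(deep=True)

# Constructor arguments come back as a plain dict, keyed by name.
print(params['alpha'])        # 0.5
print('fit_prior' in params)  # True
```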
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
classesarray-like of shape (n_classes), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit, can be omitted in subsequent calls.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.partial_fit |
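A minimal out-of-core sketch of the workflow described above; the chunk size and random data are arbitrary, and classes is supplied up front because any single chunk may not contain every label:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.RandomState(0)
X = rng.randint(2, size=(300, 20))
y = rng.randint(3, size=300)

clf = BernoulliNB()
classes = np.array([0, 1, 2])  # required on the first call to partial_fit
for start in range(0, len(X), 100):
    clf.partial_fit(X[start:start + 100], y[start:start + 100], classes=classes)

# All 300 samples have been counted across the three calls.
print(int(clf.class_count_.sum()))  # 300
```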
predict(X) [source]
Perform classification on an array of test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,)
Predicted target values for X | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.predict |
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.predict_log_proba |
predict_proba(X) [source]
Return probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.predict_proba |
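A small sketch of the column ordering described above (toy data; class labels chosen for illustration):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

X = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
y = np.array(['spam', 'ham', 'spam', 'ham'])

clf = BernoulliNB().fit(X, y)
proba = clf.predict_proba(X)

# One column per class, ordered as in clf.classes_ (sorted),
# with each row summing to 1.
print(clf.classes_.tolist())                      # ['ham', 'spam']
print(bool(np.allclose(proba.sum(axis=1), 1.0)))  # True
```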
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.score |
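As a sketch, score is exactly the mean accuracy of the predictions, i.e. accuracy_score applied to self.predict(X) (random data for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import BernoulliNB

rng = np.random.RandomState(0)
X = rng.randint(2, size=(100, 10))
y = rng.randint(2, size=100)

clf = BernoulliNB().fit(X, y)

# score is the fraction of exactly correct predictions.
print(clf.score(X, y) == accuracy_score(y, clf.predict(X)))  # True
```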
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB.set_params |
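The `<component>__<parameter>` convention for nested objects can be sketched with a small Pipeline; the step names "bin" and "nb" below are arbitrary choices for illustration:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Binarizer
from sklearn.naive_bayes import BernoulliNB

# Two-step pipeline; each step is addressable by its name.
pipe = Pipeline([("bin", Binarizer()), ("nb", BernoulliNB())])

# Update the alpha of the nested BernoulliNB via <component>__<parameter>.
pipe.set_params(nb__alpha=0.5)
print(pipe.get_params()["nb__alpha"])  # 0.5
```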
class sklearn.naive_bayes.CategoricalNB(*, alpha=1.0, fit_prior=True, class_prior=None, min_categories=None) [source]
Naive Bayes classifier for categorical features. The categorical Naive Bayes classifier is suitable for classification with discrete features that are categorically distributed. The categories of each feature are drawn from a categorical distribution. Read more in the User Guide. Parameters
alphafloat, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
fit_priorbool, default=True
Whether to learn class prior probabilities or not. If false, a uniform prior will be used.
class_priorarray-like of shape (n_classes,), default=None
Prior probabilities of the classes. If specified the priors are not adjusted according to the data.
min_categoriesint or array-like of shape (n_features,), default=None
Minimum number of categories per feature. integer: Sets the minimum number of categories per feature to n_categories for each feature. array-like: shape (n_features,) where n_categories[i] holds the minimum number of categories for the ith column of the input. None (default): Determines the number of categories automatically from the training data. New in version 0.24. Attributes
category_count_list of arrays of shape (n_features,)
Holds arrays of shape (n_classes, n_categories of respective feature) for each feature. Each array provides the number of samples encountered for each class and category of the specific feature.
class_count_ndarray of shape (n_classes,)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
class_log_prior_ndarray of shape (n_classes,)
Smoothed empirical log probability for each class.
classes_ndarray of shape (n_classes,)
Class labels known to the classifier
feature_log_prob_list of arrays of shape (n_features,)
Holds arrays of shape (n_classes, n_categories of respective feature) for each feature. Each array provides the empirical log probability of categories given the respective feature and class, P(x_i|y).
n_features_int
Number of features of each sample.
n_categories_ndarray of shape (n_features,), dtype=np.int64
Number of categories for each feature. This value is inferred from the data or set by the minimum number of categories. New in version 0.24. Examples >>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import CategoricalNB
>>> clf = CategoricalNB()
>>> clf.fit(X, y)
CategoricalNB()
>>> print(clf.predict(X[2:3]))
[3]
Methods
fit(X, y[, sample_weight]) Fit Naive Bayes classifier according to X, y
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Incremental fit on a batch of samples.
predict(X) Perform classification on an array of test vectors X.
predict_log_proba(X) Return log-probability estimates for the test vector X.
predict_proba(X) Return probability estimates for the test vector X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Here, each feature of X is assumed to be from a different categorical distribution. It is further assumed that all categories of each feature are represented by the numbers 0, …, n - 1, where n refers to the total number of categories for the given feature. This can, for instance, be achieved with the help of OrdinalEncoder.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Here, each feature of X is assumed to be from a different categorical distribution. It is further assumed that all categories of each feature are represented by the numbers 0, …, n - 1, where n refers to the total number of categories for the given feature. This can, for instance, be achieved with the help of OrdinalEncoder.
yarray-like of shape (n_samples,)
Target values.
classesarray-like of shape (n_classes,), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit, can be omitted in subsequent calls.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
predict(X) [source]
Perform classification on an array of test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,)
Predicted target values for X
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
predict_proba(X) [source]
Return probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB |
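A minimal sketch of how the min_categories parameter described above interacts with the inferred n_categories_ (toy data, values illustrative):

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Each feature only exhibits categories {0, 1} in the training data,
# but min_categories=3 reserves room for a third, unseen category.
X = np.array([[0, 1], [1, 0], [0, 1]])
y = np.array([0, 1, 0])

clf = CategoricalNB(min_categories=3).fit(X, y)
print(clf.n_categories_)  # [3 3]: max(inferred count, min_categories) per feature
```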
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Here, each feature of X is assumed to be from a different categorical distribution. It is further assumed that all categories of each feature are represented by the numbers 0, …, n - 1, where n refers to the total number of categories for the given feature. This can, for instance, be achieved with the help of OrdinalEncoder.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.fit |
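Since fit assumes each feature's categories are encoded as integers 0, …, n - 1, a common pattern (sketched here with made-up string data) is to pass raw categories through OrdinalEncoder first, as the parameter description suggests:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import CategoricalNB

X_raw = np.array([["red", "small"], ["blue", "large"], ["red", "large"]])
y = np.array([0, 1, 1])

enc = OrdinalEncoder()        # maps each feature's categories to 0..n-1
X = enc.fit_transform(X_raw)
clf = CategoricalNB().fit(X, y)

# New samples must go through the same encoder before predict.
pred = clf.predict(enc.transform(np.array([["red", "small"]])))
print(pred)  # a single class label, 0 or 1
```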
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.get_params |
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. Here, each feature of X is assumed to be from a different categorical distribution. It is further assumed that all categories of each feature are represented by the numbers 0, …, n - 1, where n refers to the total number of categories for the given feature. This can, for instance, be achieved with the help of OrdinalEncoder.
yarray-like of shape (n_samples,)
Target values.
classesarray-like of shape (n_classes,), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit, can be omitted in subsequent calls.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.partial_fit |
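An out-of-core sketch on synthetic data: classes is passed only on the first call, and the fitted counts accumulate across chunks. (min_categories here is a defensive choice in case an individual chunk misses some category.)

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.RandomState(0)
X = rng.randint(4, size=(100, 5))   # categories 0..3 per feature
y = rng.randint(2, size=100)

clf = CategoricalNB(min_categories=4)
for start in range(0, 100, 25):
    stop = start + 25
    clf.partial_fit(X[start:stop], y[start:stop],
                    classes=np.array([0, 1]) if start == 0 else None)
print(clf.class_count_.sum())  # 100.0: every chunk was counted
```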
predict(X) [source]
Perform classification on an array of test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,)
Predicted target values for X | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.predict |
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.predict_log_proba |
predict_proba(X) [source]
Return probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.predict_proba |
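A small sketch (toy data) showing that the probability columns follow classes_, i.e. the sorted class labels, even when y is given unsorted:

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

X = np.array([[0], [1], [2], [0], [1], [2]])
y = np.array([2, 0, 1, 2, 0, 1])  # labels deliberately unsorted

clf = CategoricalNB().fit(X, y)
proba = clf.predict_proba(X[:1])

print(clf.classes_)  # [0 1 2]: sorted, regardless of order in y
print(proba.shape)   # (1, 3): column j is P(y == classes_[j] | x)
print(np.allclose(proba.sum(axis=1), 1.0))  # rows sum to one
```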
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.score |
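For a plain single-output classifier like this one, score reduces to computing accuracy_score on the predictions; a toy sketch:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import CategoricalNB

X = np.array([[0], [1], [2], [0]])
y = np.array([0, 1, 1, 0])
clf = CategoricalNB().fit(X, y)

# score(X, y) matches computing the mean accuracy by hand.
print(clf.score(X, y) == accuracy_score(y, clf.predict(X)))  # True
```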
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB.set_params |
class sklearn.naive_bayes.ComplementNB(*, alpha=1.0, fit_prior=True, class_prior=None, norm=False) [source]
The Complement Naive Bayes classifier described in Rennie et al. (2003). The Complement Naive Bayes classifier was designed to correct the “severe assumptions” made by the standard Multinomial Naive Bayes classifier. It is particularly suited for imbalanced data sets. Read more in the User Guide. New in version 0.20. Parameters
alphafloat, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
fit_priorbool, default=True
Only used in edge case with a single class in the training set.
class_priorarray-like of shape (n_classes,), default=None
Prior probabilities of the classes. Not used.
normbool, default=False
Whether or not a second normalization of the weights is performed. The default behavior mirrors the implementations found in Mahout and Weka, which do not follow the full algorithm described in Table 9 of the paper. Attributes
class_count_ndarray of shape (n_classes,)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
class_log_prior_ndarray of shape (n_classes,)
Smoothed empirical log probability for each class. Only used in edge case with a single class in the training set.
classes_ndarray of shape (n_classes,)
Class labels known to the classifier
coef_ndarray of shape (n_classes, n_features)
Mirrors feature_log_prob_ for interpreting ComplementNB as a linear model. Deprecated since version 0.24: coef_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).
feature_all_ndarray of shape (n_features,)
Number of samples encountered for each feature during fitting. This value is weighted by the sample weight when provided.
feature_count_ndarray of shape (n_classes, n_features)
Number of samples encountered for each (class, feature) during fitting. This value is weighted by the sample weight when provided.
feature_log_prob_ndarray of shape (n_classes, n_features)
Empirical weights for class complements.
intercept_ndarray of shape (n_classes,)
Mirrors class_log_prior_ for interpreting ComplementNB as a linear model. Deprecated since version 0.24: intercept_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).
n_features_int
Number of features of each sample. References Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003). Tackling the poor assumptions of naive bayes text classifiers. In ICML (Vol. 3, pp. 616-623). https://people.csail.mit.edu/jrennie/papers/icml03-nb.pdf Examples >>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import ComplementNB
>>> clf = ComplementNB()
>>> clf.fit(X, y)
ComplementNB()
>>> print(clf.predict(X[2:3]))
[3]
Methods
fit(X, y[, sample_weight]) Fit Naive Bayes classifier according to X, y
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Incremental fit on a batch of samples.
predict(X) Perform classification on an array of test vectors X.
predict_log_proba(X) Return log-probability estimates for the test vector X.
predict_proba(X) Return probability estimates for the test vector X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
classesarray-like of shape (n_classes,), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit, can be omitted in subsequent calls.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
predict(X) [source]
Perform classification on an array of test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,)
Predicted target values for X
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
predict_proba(X) [source]
Return probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.naive_bayes.complementnb#sklearn.naive_bayes.ComplementNB |
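ComplementNB is a drop-in replacement for MultinomialNB on count data; a minimal sketch on synthetic counts (norm=True opts into the paper's full weight normalization, which the default behavior skips):

```python
import numpy as np
from sklearn.naive_bayes import ComplementNB

rng = np.random.RandomState(1)
X = rng.randint(5, size=(6, 100))   # nonnegative count features
y = np.array([1, 2, 3, 4, 5, 6])

clf = ComplementNB(norm=True).fit(X, y)
print(clf.feature_log_prob_.shape)  # (6, 100): one weight vector per class
print(clf.predict(X[2:3]))          # one of the labels 1..6
```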
sklearn.naive_bayes.ComplementNB
class sklearn.naive_bayes.ComplementNB(*, alpha=1.0, fit_prior=True, class_prior=None, norm=False) [source]
The Complement Naive Bayes classifier described in Rennie et al. (2003). The Complement Naive Bayes classifier was designed to correct the “severe assumptions” made by the standard Multinomial Naive Bayes classifier. It is particularly suited for imbalanced data sets. Read more in the User Guide. New in version 0.20. Parameters
alphafloat, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
fit_priorbool, default=True
Only used in edge case with a single class in the training set.
class_priorarray-like of shape (n_classes,), default=None
Prior probabilities of the classes. Not used.
normbool, default=False
Whether or not a second normalization of the weights is performed. The default behavior mirrors the implementations found in Mahout and Weka, which do not follow the full algorithm described in Table 9 of the paper. Attributes
class_count_ndarray of shape (n_classes,)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
class_log_prior_ndarray of shape (n_classes,)
Smoothed empirical log probability for each class. Only used in edge case with a single class in the training set.
classes_ndarray of shape (n_classes,)
Class labels known to the classifier
coef_ndarray of shape (n_classes, n_features)
Mirrors feature_log_prob_ for interpreting ComplementNB as a linear model. Deprecated since version 0.24: coef_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).
feature_all_ndarray of shape (n_features,)
Number of samples encountered for each feature during fitting. This value is weighted by the sample weight when provided.
feature_count_ndarray of shape (n_classes, n_features)
Number of samples encountered for each (class, feature) during fitting. This value is weighted by the sample weight when provided.
feature_log_prob_ndarray of shape (n_classes, n_features)
Empirical weights for class complements.
intercept_ndarray of shape (n_classes,)
Mirrors class_log_prior_ for interpreting ComplementNB as a linear model. Deprecated since version 0.24: coef_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).
n_features_int
Number of features of each sample. References Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003). Tackling the poor assumptions of naive bayes text classifiers. In ICML (Vol. 3, pp. 616-623). https://people.csail.mit.edu/jrennie/papers/icml03-nb.pdf Examples >>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import ComplementNB
>>> clf = ComplementNB()
>>> clf.fit(X, y)
ComplementNB()
>>> print(clf.predict(X[2:3]))
[3]
Methods
fit(X, y[, sample_weight]) Fit Naive Bayes classifier according to X, y
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Incremental fit on a batch of samples.
predict(X) Perform classification on an array of test vectors X.
predict_log_proba(X) Return log-probability estimates for the test vector X.
predict_proba(X) Return probability estimates for the test vector X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead, hence it is better to call partial_fit on chunks of data that are as large as possible (as long as they fit in the memory budget) to hide the overhead.
Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,)
Target values.
classes : array-like of shape (n_classes,), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit; can be omitted in subsequent calls.
sample_weight : array-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted).
Returns
self : object
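The chunked-fitting pattern described above can be sketched on synthetic count data (the chunk size of 20 is arbitrary; note that classes is passed only on the first call):

```python
import numpy as np
from sklearn.naive_bayes import ComplementNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(60, 20))   # non-negative count features
y = rng.randint(0, 3, size=60)      # labels drawn from {0, 1, 2}

clf = ComplementNB()
# The full set of possible classes must be declared on the first call,
# before every label has necessarily been seen.
clf.partial_fit(X[:20], y[:20], classes=np.unique(y))
# Subsequent chunks may omit classes.
for start in range(20, 60, 20):
    clf.partial_fit(X[start:start + 20], y[start:start + 20])
```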
predict(X) [source]
Perform classification on an array of test vectors X.
Parameters
X : array-like of shape (n_samples, n_features)
Returns
C : ndarray of shape (n_samples,)
Predicted target values for X.
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X.
Parameters
X : array-like of shape (n_samples, n_features)
Returns
C : array-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
predict_proba(X) [source]
Return probability estimates for the test vector X.
Parameters
X : array-like of shape (n_samples, n_features)
Returns
C : array-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
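The column ordering and the relationship to predict_log_proba can be sketched on synthetic data (the labels 1, 2, 3 are arbitrary):

```python
import numpy as np
from sklearn.naive_bayes import ComplementNB

rng = np.random.RandomState(1)
X = rng.randint(5, size=(6, 100))
y = np.array([1, 2, 3, 1, 2, 3])
clf = ComplementNB().fit(X, y)

proba = clf.predict_proba(X[:2])
# One column per class, ordered as in clf.classes_; each row sums to 1.
print(clf.classes_)                 # the sorted class labels
print(proba.shape)                  # (2, 3)
# predict_log_proba is the elementwise log of predict_proba.
log_proba = clf.predict_log_proba(X[:2])
```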
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters
X : array-like of shape (n_samples, n_features)
Test samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
score : float
Mean accuracy of self.predict(X) w.r.t. y.
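A small sketch of the "mean accuracy" definition on synthetic data: the returned score matches the fraction of exact prediction/label matches computed by hand.

```python
import numpy as np
from sklearn.naive_bayes import ComplementNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(30, 10))
y = rng.randint(0, 2, size=30)
clf = ComplementNB().fit(X, y)

acc = clf.score(X, y)
# Equivalent to computing the mean accuracy manually:
manual = float(np.mean(clf.predict(X) == y))
```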
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Parameters
**params : dict
Estimator parameters.
Returns
self : estimator instance
Estimator instance.
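The <component>__<parameter> form can be sketched with a small Pipeline (the step names "vec" and "nb" are arbitrary choices for this illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import Pipeline

pipe = Pipeline([("vec", CountVectorizer()), ("nb", ComplementNB())])
# Nested components are addressed as <step_name>__<parameter>.
pipe.set_params(nb__alpha=0.5, vec__lowercase=False)
```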
Examples using sklearn.naive_bayes.ComplementNB
Classification of text documents using sparse features | sklearn.modules.generated.sklearn.naive_bayes.complementnb |
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y.
Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted).
Returns
self : object | sklearn.modules.generated.sklearn.naive_bayes.complementnb#sklearn.naive_bayes.ComplementNB.fit
get_params(deep=True) [source]
Get parameters for this estimator.
Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns
params : dict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.naive_bayes.complementnb#sklearn.naive_bayes.ComplementNB.get_params