| content (large_string, length 3–20.5k) | url (large_string, length 54–193) | branch (large_string, 4 classes) | source (large_string, 42 classes) | embeddings (list, length 384) | score (float64, -0.21 to 0.65) |
|---|---|---|---|---|---|
| labeled as ``-1`` for "noise"). When chosen too large, it causes close clusters to be merged into one cluster, and eventually the entire data set to be returned as a single cluster. Some heuristics for choosing this parameter have been discussed in the literature, for example based on a knee in the nearest neighbor dis... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [0.0073374733328819275, -0.08768898993730545, -0.04345749691128731, 0.05449296534061432, 0.12610311806201935, -0.07569859176874161, 0.0337102971971035, 0.0006357814418151975, -0.04820825159549713, -0.05140082910656929, 0.029243670403957367, -0.026121873408555984, 0.09111247956752777, -0.046...] | 0.18001 |
| H. P., & Xu, X. (2017). In ACM Transactions on Database Systems (TODS), 42(3), 19. .. \_hdbscan: HDBSCAN ======= The :class:`HDBSCAN` algorithm can be seen as an extension of :class:`DBSCAN` and :class:`OPTICS`. Specifically, :class:`DBSCAN` assumes that the clustering criterion (i.e. density requirement) is \*globally... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.03299977257847786, -0.08796947449445724, -0.0932191014289856, 0.036380402743816376, 0.05608561262488365, -0.04952046647667885, 0.04758646339178085, -0.06982593983411789, -0.11980748921632767, 0.029405036941170692, -0.020750155672430992, -0.027826888486742973, 0.15107949078083038, -0.035...] | 0.114206 |
| \|hdbscan\_results\| image:: ../auto\_examples/cluster/images/sphx\_glr\_plot\_hdbscan\_007.png :target: ../auto\_examples/cluster/plot\_hdbscan.html :scale: 75 .. centered:: \|hdbscan\_ground\_truth\| .. centered:: \|hdbscan\_results\| HDBSCAN can be smoothed with an additional hyperparameter `min\_cluster\_size` which spec... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.027314824983477592, -0.09137628972530365, -0.0013912973226979375, 0.020951461046934128, 0.06000825762748718, -0.02904292568564415, 0.028744176030158997, -0.007446420844644308, -0.0559118390083313, -0.04150580242276192, -0.037666983902454376, -0.06584085524082184, 0.1450410783290863, -0....] | 0.086984 |
| because the first samples of each dense area processed by OPTICS have a large reachability value while being close to other points in their area, and will thus sometimes be marked as noise rather than periphery. This affects adjacent points when they are considered as candidates for being marked as either periphery or ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [0.06954105198383331, -0.10930328816175461, -0.014286110177636147, -0.033826690167188644, 0.03263434022665024, -0.11522197723388672, 0.02300151437520981, -0.04418510943651199, 0.0337928868830204, 0.04012030363082886, -0.08955419808626175, 0.028872298076748848, 0.0755155012011528, -0.0331590...] | 0.08935 |
| CF Node. It is then merged with the subcluster of the root, that has the smallest radius after merging, constrained by the threshold and branching factor conditions. If the subcluster has any child node, then this is done repeatedly till it reaches a leaf. After finding the nearest subcluster in the leaf, the propertie... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.02564743347465992, -0.04477578029036522, -0.02265745773911476, -0.004719073418527842, 0.13780583441257477, -0.043771177530288696, -0.0659303143620491, 0.017693281173706055, 0.08010371029376984, 0.0036789108999073505, 0.0875362902879715, -0.03440728783607483, -0.025305839255452156, -0.04...] | 0.078729 |
| with all clustering metrics, one can permute 0 and 1 in the predicted labels, rename 2 to 3, and get the same score:: >>> labels\_pred = [1, 1, 0, 0, 3, 3] >>> metrics.rand\_score(labels\_true, labels\_pred) 0.66 >>> metrics.adjusted\_rand\_score(labels\_true, labels\_pred) 0.24 Furthermore, both :func:`rand\_score` an... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.0015557892620563507, -0.06345803290605545, -0.05717850103974342, 0.040353696793317795, 0.020185116678476334, 0.019644824787974358, 0.07159868627786636, -0.023891199380159378, -0.025971442461013794, -0.00851498544216156, 0.017100410535931587, -0.11015055328607559, 0.05914696678519249, -0...] | 0.0414 |
| of elements that are in different sets in C and in different sets in K The unadjusted Rand index is then given by: .. math:: \text{RI} = \frac{a + b}{C\_2^{n\_{samples}}} where :math:`C\_2^{n\_{samples}}` is the total number of possible pairs in the dataset. It does not matter if the calculation is performed on ordered... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.006376650650054216, 0.011349333450198174, -0.056736983358860016, 0.024353016167879105, 0.07387000322341919, 0.019319135695695877, 0.0846315547823906, -0.062052641063928604, 0.06072458252310753, -0.026536958292126656, -0.009544527158141136, -0.05764339864253998, 0.06413345783948898, -0.1...] | 0.023858 |
| by human annotators (as in the supervised learning setting). However MI-based measures can also be useful in purely unsupervised setting as a building block for a Consensus Index that can be used for clustering model selection. - NMI and MI are not adjusted against chance. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.034555330872535706, -0.06573785096406937, 0.03395891562104225, 0.048495225608348846, 0.09648220241069794, -0.0172555074095726, 0.09637517482042313, 0.03182101994752884, 0.037661101669073105, -0.012663443572819233, 0.006872417405247688, -0.017272911965847015, 0.06756572425365448, 0.03270...] | 0.123055 |
| and Correction for Chance". JMLR .. [YAT2016] Yang, Algesheimer, and Tessone, (2016). "A comparative analysis of community detection algorithms on artificial networks". Scientific Reports 6: 30750. `doi:10.1038/srep30750 `\_. .. \_homogeneity\_completeness: Homogeneity, completeness and V-measure ----------------------... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.016433028504252434, -0.028818044811487198, 0.04173731058835983, -0.0042654480785131454, 0.07676374167203903, 0.020323684439063072, 0.039927661418914795, -0.029364487156271935, 0.05780122056603432, -0.03364839032292366, 0.029459934681653976, -0.11074281483888626, 0.05577782914042473, 0.0...] | 0.037484 |
| :scale: 100 - These metrics \*\*require the knowledge of the ground truth classes\*\* while almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting). .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_cluster\_plot\_adjusted\_for\_chance\_measure... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.01800347864627838, -0.025852929800748825, -0.007814773358404636, -0.009018353186547756, 0.010211119428277016, 0.03825239837169647, 0.03679914399981499, 0.010227030143141747, -0.038900211453437805, -0.015598529018461704, 0.013884235173463821, -0.09149694442749023, 0.04912332817912102, 0....] | 0.115803 |
| for instance). - \*\*Upper-bounded at 1\*\*: Values close to zero indicate two label assignments that are largely independent, while values close to one indicate significant agreement. Further, values of exactly 0 indicate \*\*purely\*\* independent label assignments and a FMI of exactly 1 indicates that the two label ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.05588896945118904, -0.05048646777868271, -0.03880975767970085, 0.016707295551896095, -0.017023319378495216, -0.017233800143003464, 0.007167026400566101, 0.012197576463222504, 0.008225979283452034, -0.06576037406921387, 0.038819409906864166, -0.06578768044710159, 0.04901565611362457, 0.0...] | 0.218253 |
| sklearn import metrics >>> from sklearn.metrics import pairwise\_distances >>> from sklearn import datasets >>> X, y = datasets.load\_iris(return\_X\_y=True) In normal usage, the Calinski-Harabasz index is applied to the results of a cluster analysis: >>> import numpy as np >>> from sklearn.cluster import KMeans >>> km... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.052296821027994156, -0.03232130780816078, -0.09433137625455856, 0.06019265949726105, 0.05745864287018776, 0.029416052624583244, 0.02858586795628071, 0.009058348834514618, 0.030087409541010857, -0.01145833358168602, 0.012992918491363525, -0.06412555277347565, 0.03844449296593666, -0.0125...] | 0.155275 |
| A simple choice to construct :math:`R\_{ij}` so that it is nonnegative and symmetric is: .. math:: R\_{ij} = \frac{s\_i + s\_j}{d\_{ij}} Then the Davies-Bouldin index is defined as: .. math:: DB = \frac{1}{k} \sum\_{i=1}^k \max\_{i \neq j} R\_{ij} .. dropdown:: References \* Davies, David L.; Bouldin, Donald W. (1979).... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [-0.06964216381311417, -0.029346955940127373, -0.026412051171064377, -0.009794635698199272, 0.07165132462978363, -0.03575284034013748, -0.01103636622428894, -0.008039766922593117, -0.03143102303147316, -0.07042942941188812, -0.022856835275888443, -0.014135438948869705, 0.04597550258040428, ...] | 0.102271 |
| diagonal regardless of actual label values:: >>> from sklearn.metrics.cluster import pair\_confusion\_matrix >>> pair\_confusion\_matrix([0, 0, 1, 1], [0, 0, 1, 1]) array([[8, 0], [0, 4]]) :: >>> pair\_confusion\_matrix([0, 0, 1, 1], [1, 1, 0, 0]) array([[8, 0], [0, 4]]) Labelings that assign all classes members to the... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [0.03610554710030556, -0.036353789269924164, -0.10029473155736923, -0.040950026363134384, -0.00015617898316122591, -0.003051961772143841, 0.03884252533316612, -0.07059621810913086, -0.02263941429555416, -0.09775861352682114, 0.01409119926393032, -0.02301383949816227, -0.00366304162889719, 0...] | 0.028089 |
| .. \_biclustering: ============ Biclustering ============ Biclustering algorithms simultaneously cluster rows and columns of a data matrix. These clusters of rows and columns are known as biclusters. Each determines a submatrix of the original data matrix with some desired properties. For instance, given a matrix of sh... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/biclustering.rst | main | scikit-learn | [0.02191389724612236, -0.048732683062553406, -0.05805512145161629, -0.05481002479791641, 0.031232740730047226, -0.10492484271526337, -0.08782176673412323, -0.07881806790828705, -0.039900247007608414, -0.00926363468170166, -0.05647340416908264, -0.0015629188856109977, 0.033241260796785355, -...] | 0.128179 |
| the original data matrix :math:`A` has shape :math:`m \times n`, the Laplacian matrix for the corresponding bipartite graph has shape :math:`(m + n) \times (m + n)`. However, in this case it is possible to work directly with :math:`A`, which is smaller and more efficient. The input matrix :math:`A` is preprocessed as f... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/biclustering.rst | main | scikit-learn | [-0.03818059340119362, -0.014236542396247387, -0.08835099637508392, -0.06611424684524536, -0.0192902572453022, -0.01658022776246071, -0.05697528272867203, -0.024829154834151268, -0.029822735115885735, 0.05186838284134865, 0.037968993186950684, 0.020160948857665062, -0.002322350163012743, 0....] | -0.001073 |
| v\_{p+1}` except in the case of log normalization. Given these singular vectors, they are ranked according to which can be best approximated by a piecewise-constant vector. The approximations for each vector are found using one-dimensional k-means and scored using the Euclidean distance. Some subset of the best left an... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/biclustering.rst | main | scikit-learn | [0.015313551761209965, -0.07345552742481232, -0.02685989812016487, -0.06176371872425079, 0.08165642619132996, -0.05151468887925148, -0.025542763993144035, 0.04714834317564964, 0.03302625194191933, 0.030672380700707436, -0.021265456452965736, 0.02485961839556694, -0.01417310070246458, 0.0165...] | 0.057547 |
| .. \_naive\_bayes: =========== Naive Bayes =========== .. currentmodule:: sklearn.naive\_bayes Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of conditional independence between every pair of features given the value of the class variable. Ba... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/naive_bayes.rst | main | scikit-learn | [-0.08005866408348083, -0.06694275885820389, 0.020295236259698868, -0.0513407327234745, 0.10579261928796768, -0.03265887871384621, 0.10240624099969864, -0.07018984109163284, -0.0519341416656971, 0.048194997012615204, 0.07095149904489517, -0.010839825496077538, 0.06357251107692719, 0.0092872...] | -0.024656 |
| features (in text classification, the size of the vocabulary) and :math:`\theta\_{yi}` is the probability :math:`P(x\_i \mid y)` of feature :math:`i` appearing in a sample belonging to class :math:`y`. The parameters :math:`\theta\_y` are estimated by a smoothed version of maximum likelihood, i.e. relative frequency co... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/naive_bayes.rst | main | scikit-learn | [-0.014012843370437622, -0.08331848680973053, 0.049685653299093246, 0.02128024399280548, 0.03364138305187225, 0.021942611783742905, 0.0915663093328476, 0.022851984947919846, 0.036280740052461624, -0.04312531650066376, 0.08074105530977249, 0.025966567918658257, 0.02667219005525112, -0.012778...] | 0.094299 |
| models, if time permits. .. dropdown:: References \* C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 234-265. \* A. McCallum and K. Nigam (1998). `A comparison of event models for Naive Bayes text classification. `\_ Proc. AAAI/ICML-98 Workshop on ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/naive_bayes.rst | main | scikit-learn | [-0.035035740584135056, -0.050318971276283264, 0.026687141507864, 0.006307452451437712, 0.060245875269174576, 0.005570763256400824, 0.06919397413730621, -0.026192177087068558, 0.029111670330166817, -0.03638942539691925, 0.050223708152770996, -0.018195392563939095, 0.04465080425143242, -0.03...] | 0.077213 |
| .. \_cross\_decomposition: =================== Cross decomposition =================== .. currentmodule:: sklearn.cross\_decomposition The cross decomposition module contains \*\*supervised\*\* estimators for dimensionality reduction and regression, belonging to the "Partial Least Squares" family. .. figure:: ../auto\_... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_decomposition.rst | main | scikit-learn | [-0.06501148641109467, -0.08967917412519455, -0.034643445163965225, -0.0733325406908989, 0.1241132915019989, -0.02159169875085354, -0.042439863085746765, -0.03502042964100838, -0.02075875736773014, -0.015048995614051819, 0.035399455577135086, -0.021509820595383644, -0.034986015409231186, 0....] | 0.120233 |
| :math:`\Xi` and :math:`\Omega` correspond to the projections of the training data :math:`X` and :math:`Y`, respectively. Step \*a)\* may be performed in two ways: either by computing the whole SVD of :math:`C` and only retaining the singular vectors with the biggest singular values, or by directly computing the singula... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_decomposition.rst | main | scikit-learn | [-0.09243126213550568, 0.01998312957584858, -0.010398953221738338, -0.06161902844905853, 0.014588713645935059, -0.020647166296839714, 0.050419967621564865, -0.036515962332487106, -0.010534918867051601, -0.023790206760168076, 0.07560516893863678, -0.006922050379216671, -0.0007371331448666751, ...] | -0.021239 |
| :math:`u\_k` and :math:`v\_k` are computed in the power method of step a). Details can be found in section 10 of [1]\_. Since :class:`CCA` involves the inversion of :math:`X\_k^TX\_k` and :math:`Y\_k^TY\_k`, this estimator can be unstable if the number of features or targets is greater than the number of samples. .. ru... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_decomposition.rst | main | scikit-learn | [-0.08692425489425659, -0.021741867065429688, 0.030321363359689713, 0.009287774562835693, 0.09631801396608353, -0.0023740094620734453, -0.031037144362926483, 0.06000078096985817, -0.021219853311777115, 0.00716340122744441, -0.028012743219733238, 0.05825984850525856, 0.04836810752749443, -0....] | 0.013788 |
| .. \_tree: ============== Decision Trees ============== .. currentmodule:: sklearn.tree \*\*Decision Trees (DTs)\*\* are a non-parametric supervised learning method used for :ref:`classification ` and :ref:`regression `. The goal is to create a model that predicts the value of a target variable by learning simple decis... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [-0.08889603614807129, -0.022319884970784187, 0.007689184509217739, 0.026448234915733337, 0.1248602345585823, -0.0971420481801033, -0.02374790981411934, 0.0296317171305418, 0.004214509390294552, 0.08947049826383591, -0.04875597357749939, -0.033454541116952896, 0.023160668089985847, 0.006342...] | 0.148898 |
| to fitting with the decision tree. .. \_tree\_classification: Classification ============== :class:`DecisionTreeClassifier` is a class capable of performing multi-class classification on a dataset. As with other classifiers, :class:`DecisionTreeClassifier` takes as input two arrays: an array X, sparse or dense, of shap... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [-0.009759120643138885, -0.06195050850510597, 0.0018259672215208411, -0.034941237419843674, 0.10284904390573502, -0.07540695369243622, 0.056866686791181564, -0.06098358705639839, -0.06990982592105865, -0.03038744628429413, -0.08899031579494476, -0.05920647084712982, 0.03141574189066887, -0....] | 0.04941 |
| (cm) > 0.80 \| \|--- petal width (cm) <= 1.75 \| \| \|--- class: 1 \| \|--- petal width (cm) > 1.75 \| \| \|--- class: 2 .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_tree\_plot\_iris\_dtc.py` \* :ref:`sphx\_glr\_auto\_examples\_tree\_plot\_unveil\_tree\_structure.py` .. \_tree\_regression: Regression ========== .. fi... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [-0.056500375270843506, -0.051154132932424545, -0.016204075887799263, 0.028887059539556503, 0.09673555195331573, -0.0004273015365470201, -0.030262582004070282, 0.05333024635910988, -0.022422166541218758, 0.06928561627864838, -0.010110671631991863, -0.030632810667157173, -0.007916788570582867, ...] | 0.108308 |
| .. math:: \mathcal{O}(n\_{features}n\_{samples}\log (n\_{samples})) + \mathcal{O}(n\_{features}n\_{samples}) The first term is the cost of sorting :math:`n\_{samples}` repeated for :math:`n\_{features}`. The second term is the linear scan over candidate split points to find the feature that offers the largest reduction... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [-0.05546732246875763, 0.025211619213223457, 0.09399297833442688, 0.008518864400684834, 0.14102090895175934, -0.1401994824409485, -0.012907193042337894, 0.05051498860120773, 0.01992550864815712, 0.06485249102115631, -0.0457165353000164, -0.011626076884567738, 0.005580691620707512, -0.017381...] | 0.050222 |
| the classes that are dominant. Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (``sample\_weight``) for each class to the same value. Also note that weight-based pre-pruning criteria, such as ``min\_weight\_fraction\_leaf``, ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [-0.0029452829621732235, 0.0037312000058591366, -0.004772797226905823, -0.030965695157647133, 0.08369652926921844, -0.10436394810676575, -0.029448335990309715, 0.0007938510971143842, -0.050870589911937714, 0.06655920296907425, -0.09255202114582062, -0.028869885951280594, 0.0512738861143589, ...] | 0.057184 |
| .. math:: Q\_m^{left}(\theta) = \{(x, y) \| x\_j \leq t\_m\} Q\_m^{right}(\theta) = Q\_m \setminus Q\_m^{left}(\theta) The quality of a candidate split of node :math:`m` is then computed using an impurity function or loss function :math:`H()`, the choice of which depends on the task being solved (classification or regre... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [-0.09481488913297653, 0.035903654992580414, 0.05610654130578041, 0.0034886787179857492, 0.018441835418343544, -0.003138304688036442, 0.042687319219112396, 0.025380220264196396, 0.029632551595568657, 0.023722760379314423, -0.04116297885775566, 0.02019249089062214, 0.06590169668197632, -0.01...] | 0.068104 |
| L1 error). MSE and Poisson deviance both set the predicted value of terminal nodes to the learned mean value :math:`\bar{y}\_m` of the node whereas the MAE sets the predicted value of terminal nodes to the median :math:`median(y)\_m`. Mean Squared Error: .. math:: \bar{y}\_m = \frac{1}{n\_m} \sum\_{y \in Q\_m} y H(Q\_m... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [0.030426964163780212, -0.07105883210897446, 0.005443023517727852, 0.040460653603076935, 0.05823228880763054, -0.035937923938035965, 0.013924364931881428, 0.07798044383525848, 0.06107128784060478, -0.00842240173369646, 0.05479924753308296, -0.0002635400742292404, 0.09924646466970444, -0.008...] | -0.073246 |
| will also be randomly sent to the left or right child. This is repeated for every feature considered at each split. The best split among these is chosen. During prediction, the treatment of missing-values is the same as that of the decision tree: - By default when predicting, the samples with missing values are classif... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/tree.rst | main | scikit-learn | [-0.029115742072463036, 0.037913739681243896, -0.027163229882717133, 0.05946851149201393, 0.09615499526262283, -0.02375740557909012, 0.002712578745558858, 0.01460593193769455, -0.013457280583679676, 0.02683207020163536, 0.005848215892910957, -0.021339597180485725, 0.023520193994045258, -0.0...] | 0.077307 |
.. currentmodule:: sklearn .. \_model\_evaluation: =========================================================== Metrics and scoring: quantifying the quality of predictions =========================================================== .. \_which\_scoring\_function: Which scoring function should I use? =====================... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.025610003620386124,
-0.04816023260354996,
-0.07144654542207718,
-0.010433397255837917,
0.015584866516292095,
0.043850161135196686,
0.04665854200720787,
0.05691798776388168,
0.023156223818659782,
-0.0035359577741473913,
-0.045957162976264954,
-0.09755822271108627,
0.05735715851187706,
0.... | 0.103576 |
` non-negative ``predict``, strictly positive mean :ref:`Gamma deviance ` strictly positive ``predict``, strictly positive mean :ref:`Tweedie deviance ` depends on ``power`` ``predict``, depends on ``power`` median :ref:`absolute error ` all reals ``predict``, all reals quantile :ref:`pinball loss ` all reals ``predict... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.0383504256606102,
-0.036594122648239136,
-0.049429379403591156,
0.026122430339455605,
-0.03525083512067795,
-0.002379216253757477,
0.004497116897255182,
0.10516505688428879,
-0.012438596226274967,
-0.015461444854736328,
-0.012396911159157753,
-0.08361726254224777,
0.07285728305578232,
0... | -0.015291 |
metrics for random predictions. .. seealso:: For "pairwise" metrics, between \*samples\* and not estimators or predictions, see the :ref:`metrics` section. .. \_scoring\_parameter: The ``scoring`` parameter: defining model evaluation rules ========================================================== Model selection and e... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.06511414051055908,
-0.08524397015571594,
-0.10991574823856354,
0.04310058802366257,
0.06735284626483917,
-0.023447219282388687,
-0.0025202122051268816,
0.022051284089684486,
0.011507782153785229,
-0.007083606440573931,
-0.04760152846574783,
-0.143066868185997,
-0.0050961109809577465,
-0... | 0.047157 |
parameter for the :func:`fbeta\_score` function:: >>> from sklearn.metrics import fbeta\_score, make\_scorer >>> ftwo\_scorer = make\_scorer(fbeta\_score, beta=2) >>> from sklearn.model\_selection import GridSearchCV >>> from sklearn.svm import LinearSVC >>> grid = GridSearchCV(LinearSVC(), param\_grid={'C': [1, 10]}, ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.02501794323325157,
-0.042719244956970215,
-0.1263946294784546,
0.06223404034972191,
0.06836721301078796,
-0.022260602563619614,
0.047398194670677185,
0.07622460275888443,
0.03189101815223694,
0.026484405621886253,
-0.0370088666677475,
-0.030695460736751556,
-0.039389468729496,
0.0098292... | 0.069323 |
scorer name to the scoring function:: >>> from sklearn.metrics import accuracy\_score >>> from sklearn.metrics import make\_scorer >>> scoring = {'accuracy': make\_scorer(accuracy\_score), ... 'prec': 'precision'} Note that the dict values can either be scorer functions or one of the predefined metric strings. - As a c... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.01019304059445858,
-0.05765314772725105,
-0.0819144994020462,
-0.017586473375558853,
0.027460459619760513,
-0.03757733851671219,
0.07838365435600281,
0.04167517274618149,
-0.037046268582344055,
0.013104856014251709,
0.01740274392068386,
-0.08785900473594666,
0.024980735033750534,
0.0299... | 0.10932 |
where a majority class is to be ignored. \* ``"samples"`` applies only to multilabel problems. It does not calculate a per-class measure, instead calculating the metric over the true and predicted classes for each sample in the evaluation data, and returning their (``sample\_weight``-weighted) average. \* Selecting ``a... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.029651742428541183,
-0.026883551850914955,
-0.06568140536546707,
-0.05262859910726547,
0.01810235157608986,
-0.025103595107793808,
0.05044922977685928,
0.03144409507513046,
-0.010147886350750923,
-0.026666641235351562,
-0.04260088503360748,
-0.11878515779972076,
0.049404941499233246,
-0... | 0.052047 |
math:: \texttt{balanced-accuracy} = \frac{1}{2}\left( \frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right ) If the classifier performs equally well on either class, this term reduces to the conventional accuracy (i.e., the number of correct predictions divided by the total number of predictions). In contrast, if the conventi... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.09759632498025894,
-0.0011620705481618643,
0.01697179675102234,
0.008166042156517506,
0.00454268092289567,
-0.08772793412208557,
-0.007821626961231232,
0.07436773180961609,
0.032408908009529114,
0.011411258019506931,
-0.08515956252813339,
-0.087753526866436,
0.028681235387921333,
-0.039... | 0.033147 |
the true class (Wikipedia and other references may use different convention for axes). By definition, entry :math:`i, j` in a confusion matrix is the number of observations actually in group :math:`i`, but predicted to be in group :math:`j`. Here is an example:: >>> from sklearn.metrics import confusion\_matrix >>> y\_... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.04989682510495186,
-0.10071568936109543,
-0.10355681926012039,
-0.038386061787605286,
0.004981097765266895,
-0.0688590332865715,
0.060352642089128494,
-0.042102664709091187,
0.048419125378131866,
0.008851341903209686,
0.05037097632884979,
0.009496510960161686,
0.027321726083755493,
0.02... | 0.079797 |
number of samples and :math:`n\_\text{labels}` is the number of labels, then the Hamming loss :math:`L\_{Hamming}` is defined as: .. math:: L\_{Hamming}(y, \hat{y}) = \frac{1}{n\_\text{samples} \* n\_\text{labels}} \sum\_{i=0}^{n\_\text{samples}-1} \sum\_{j=0}^{n\_\text{labels} - 1} 1(\hat{y}\_{i,j} \not= y\_{i,j}) whe... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
0.01596893183887005,
-0.04298774525523186,
0.022244378924369812,
-0.06305834650993347,
0.06758858263492584,
0.07682955265045166,
0.05072658136487007,
-0.022098202258348465,
0.017127839848399162,
-0.015980446711182594,
-0.08471540361642838,
-0.05822020396590233,
0.04283594712615013,
-0.0715... | 0.040628 |
.. rubric:: References .. [Manning2008] C.D. Manning, P. Raghavan, H. Schütze, `Introduction to Information Retrieval `\_, 2008. .. [Everingham2010] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn, A. Zisserman, `The Pascal Visual Object Classes (VOC) Challenge `\_, IJCV 2010. .. [Davis2006] J. Davis, M. Goadrich,... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.09695926308631897,
-0.05946200340986252,
-0.11013133823871613,
0.03137492015957832,
0.030602330341935158,
-0.037003595381975174,
0.022928541526198387,
0.09723859280347824,
0.007875830866396427,
0.0025086018722504377,
-0.03713863715529442,
-0.05667470023036003,
0.025042612105607986,
-0.0... | 0.125325 |
If all labels are included, "micro"-averaging in a multiclass setting will produce precision, recall and :math:`F` that are all identical to accuracy. \* "weighted" averaging may produce an F-score that is not between precision and recall. \* "macro" averaging for F-measures is calculated as the arithmetic mean over pe... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
-0.009129034355282784,
-0.036455970257520676,
-0.015048235654830933,
0.005511503200978041,
-0.008496181108057499,
0.04860945791006088,
-0.028777316212654114,
0.1262209713459015,
0.010134506039321423,
-0.009878524579107761,
0.003083339426666498,
-0.060788653790950775,
0.056933145970106125,
... | 0.041012 |
By computing it set-wise it can be extended to apply to multilabel and multiclass through the use of `average` (see :ref:`above `). In the binary case:: >>> import numpy as np >>> from sklearn.metrics import jaccard\_score >>> y\_true = np.array([[0, 1, 1], ... [1, 1, 0]]) >>> y\_pred = np.array([[1, 1, 1], ... [1, 0, ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.119146 |
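The excerpt above quotes the binary usage of `jaccard_score`. A minimal self-contained sketch of the same idea (assuming scikit-learn is installed; the toy labels are illustrative):

```python
import numpy as np
from sklearn.metrics import jaccard_score

# Jaccard similarity = |intersection| / |union| of the true and
# predicted positive-label sets (positive label is 1 by default).
y_true = np.array([0, 1, 1, 0])
y_pred = np.array([1, 1, 1, 0])
score = jaccard_score(y_true, y_pred)  # 2 shared positives / 3 in the union
```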
sample is the negative log-likelihood of the classifier given the true label: .. math:: L\_{\log}(y, \hat{p}) = -\log \operatorname{Pr}(y|\hat{p}) = -(y \log (\hat{p}) + (1 - y) \log (1 - \hat{p})) This extends to the multiclass case as follows. Let the true labels for a set of samples be encoded as a 1-of-K binary ind... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.015372 |
function: >>> from sklearn.metrics import matthews\_corrcoef >>> y\_true = [+1, +1, +1, -1] >>> y\_pred = [+1, -1, +1, +1] >>> matthews\_corrcoef(y\_true, y\_pred) -0.33 .. rubric:: References .. [WikipediaMCC2021] Wikipedia contributors. Phi coefficient. Wikipedia, The Free Encyclopedia. April 21, 2021, 12:21 CEST. Av... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.087131 |
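The `matthews_corrcoef` snippet quoted above can be reproduced directly; with 2 true positives, 1 false positive, 1 false negative and no true negatives the coefficient comes out to -1/3:

```python
from sklearn.metrics import matthews_corrcoef

# Same toy labels as the excerpt: three positives, one negative.
y_true = [+1, +1, +1, -1]
y_pred = [+1, -1, +1, +1]
mcc = matthews_corrcoef(y_true, y_pred)  # (tp*tn - fp*fn) / sqrt(...) = -1/3
```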
sensitivity, and FPR is one minus the specificity or true negative rate." This function requires the true binary value and the target scores, which can either be probability estimates of the positive class, confidence values, or binary decisions. Here is a small example of how to use the :func:`roc\_curve` function:: >... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.113827 |
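As the `roc_curve` excerpt above notes, the function takes true binary labels plus continuous scores (e.g. probability estimates of the positive class). A minimal sketch with illustrative scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])  # e.g. predicted P(class 1)
# One (fpr, tpr) point per decision threshold, from strictest to loosest.
fpr, tpr, thresholds = roc_curve(y_true, scores)
```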
| k) + \text{AUC}(k | j)) where :math:`c` is the number of classes. This algorithm is used by setting the keyword argument ``multiclass`` to ``'ovo'`` and ``average`` to ``'weighted'``. The ``'weighted'`` option returns a prevalence-weighted average as described in [FC2009]\_. .. dropdown:: One-vs-rest Algorithm Comput... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.124479 |
of receiver operating characteristic (ROC) curves where False Negative Rate is plotted on the y-axis instead of True Positive Rate. DET curves are commonly plotted in normal deviate scale by transformation with :math:`\phi^{-1}` (with :math:`\phi` being the cumulative distribution function). The resulting performance c... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.091773 |
[2, 2, 3, 4] >>> zero\_one\_loss(y\_true, y\_pred) 0.25 >>> zero\_one\_loss(y\_true, y\_pred, normalize=False) 1.0 In the multilabel case with binary label indicators, where the first label set [0,1] has an error:: >>> zero\_one\_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2))) 0.5 >>> zero\_one\_loss(np.array([[0, 1]... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.065427 |
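The `zero_one_loss` excerpt above shows both the normalized and raw-count forms; the same multiclass example runs as:

```python
from sklearn.metrics import zero_one_loss

y_true = [1, 2, 3, 4]
y_pred = [2, 2, 3, 4]
frac = zero_one_loss(y_true, y_pred)                     # fraction misclassified
count = zero_one_loss(y_true, y_pred, normalize=False)   # raw misclassification count
```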
always mean better calibration" [Bella2012]\_, [Flach2008]\_. .. rubric:: Examples \* See :ref:`sphx\_glr\_auto\_examples\_calibration\_plot\_calibration.py` for an example of Brier score loss usage to perform probability calibration of classifiers. .. rubric:: References .. [Brier1950] G. Brier, `Verification of forec... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.110661 |
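The Brier score mentioned in the excerpt above is the mean squared difference between predicted probabilities and binary outcomes; a small sketch with illustrative probabilities:

```python
from sklearn.metrics import brier_score_loss

y_true = [0, 1, 1, 0]
y_prob = [0.1, 0.9, 0.8, 0.3]  # predicted P(positive class)
# mean((0.1)^2 + (0.1)^2 + (0.2)^2 + (0.3)^2) = 0.0375
loss = brier_score_loss(y_true, y_prob)
```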
or, if there are also no true positive predictions (:math:`tp=0`), that the classifier does not predict the positive class at all. In the first case, `LR+` can be interpreted as `np.inf`, in the second case (for instance, with highly imbalanced data) it can be interpreted as `np.nan`. The negative likelihood ratio (`LR... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | -0.060789 |
are some usage examples of the :func:`d2\_brier\_score` function:: >>> from sklearn.metrics import d2\_brier\_score >>> y\_true = [1, 1, 2, 3] >>> y\_pred = [ ... [0.5, 0.25, 0.25], ... [0.5, 0.25, 0.25], ... [0.5, 0.25, 0.25], ... [0.5, 0.25, 0.25], ... ] >>> d2\_brier\_score(y\_true, y\_pred) 0.0 >>> y\_true = [1, 2,... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.120562 |
\right\}\right|`, :math:`|\cdot|` computes the cardinality of the set (i.e., the number of elements in the set), and :math:`||\cdot||\_0` is the :math:`\ell\_0` "norm" (which computes the number of nonzero elements in a vector). Here is a small example of usage of this function:: >>> import numpy as np >>> from sklearn... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.067018 |
preferred; if the ground-truth consists of actual usefulness scores (e.g. 0 for irrelevant, 1 for relevant, 2 for very relevant), NDCG can be used. For one sample, given the vector of continuous ground-truth values for each target :math:`y \in \mathbb{R}^{M}`, where :math:`M` is the number of outputs, and the predictio... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.049193 |
(y\_i - \hat{y}\_i)^2 = \sum\_{i=1}^{n} \epsilon\_i^2`. Note that :func:`r2\_score` calculates unadjusted :math:`R^2` without correcting for bias in sample variance of y. In the particular case where the true target is constant, the :math:`R^2` score is not finite: it is either ``NaN`` (perfect predictions) or ``-Inf``... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.028742 |
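The :math:`R^2` formula quoted above (`1 - SS_res / SS_tot`) can be checked on a toy regression target:

```python
from sklearn.metrics import r2_score

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
# SS_res = 0.25 + 0.25 + 0 + 1 = 1.5; SS_tot = sum((y - mean(y))^2) = 29.1875
r2 = r2_score(y_true, y_pred)
```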
root mean squared error (RMSE), is another common metric that provides a measure in the same units as the target variable. RMSE is available through the :func:`root\_mean\_squared\_error` function. .. \_mean\_squared\_log\_error: Mean squared logarithmic error ------------------------------ The :func:`mean\_squared\_lo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.1103 |
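The excerpt above mentions `root_mean_squared_error`; since that helper only exists in recent scikit-learn versions, a version-portable sketch takes the square root of the MSE directly:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0]
y_pred = [2.0, 7.0]
# MSE = (1 + 4) / 2 = 2.5; RMSE is in the same units as the target.
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
```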
the target and the prediction. If :math:`\hat{y}\_i` is the predicted value of the :math:`i`-th sample and :math:`y\_i` is the corresponding true value, then the median absolute error (MedAE) estimated over :math:`n\_{\text{samples}}` is defined as .. math:: \text{MedAE}(y, \hat{y}) = \text{median}(\mid y\_1 - \hat{y}\... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.125235 |
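The MedAE definition quoted above (median of the absolute residuals) is easy to verify on a toy target:

```python
from sklearn.metrics import median_absolute_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
# absolute errors: 0.5, 0.5, 0, 1 -> median = 0.5
medae = median_absolute_error(y_true, y_pred)
```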
:func:`mean\_tweedie\_deviance` function computes the `mean Tweedie deviance error `\_ with a ``power`` parameter (:math:`p`). This is a metric that elicits predicted expectation values of regression targets. Following special cases exist, - when ``power=0`` it is equivalent to :func:`mean\_squared\_error`. - when ``po... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | -0.039103 |
data with non-symmetric noise and outliers. .. \_d2\_score: D² score -------- The D² score computes the fraction of deviance explained. It is a generalization of R², where the squared error is generalized and replaced by a deviance of choice :math:`\text{dev}(y, \hat{y})` (e.g., Tweedie, pinball or mean absolute error)... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.049692 |
predicted values is the expected value of `y` given `X`. This is typically the case for regression models that minimize the mean squared error objective function or more generally the :ref:`mean Tweedie deviance ` for any value of its "power" parameter. When plotting the predictions of an estimator that predicts a quan... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | -0.027408 |
SVC >>> clf = SVC(kernel='linear', C=1).fit(X\_train, y\_train) >>> clf.score(X\_test, y\_test) 0.63 >>> clf = DummyClassifier(strategy='most\_frequent', random\_state=0) >>> clf.fit(X\_train, y\_train) DummyClassifier(random\_state=0, strategy='most\_frequent') >>> clf.score(X\_test, y\_test) 0.579 We see that ``SVC``... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.065024 |
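The `DummyClassifier` comparison quoted above rests on a simple fact: a `most_frequent` dummy scores exactly the majority-class prevalence. A minimal sketch with fabricated toy data:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# Features are ignored by the dummy; only the label distribution matters.
X = np.zeros((6, 1))
y = np.array([0, 0, 0, 0, 1, 1])  # majority class 0 with prevalence 4/6
baseline = DummyClassifier(strategy="most_frequent").fit(X, y).score(X, y)
```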
.. \_metrics: Pairwise metrics, Affinities and Kernels ======================================== The :mod:`sklearn.metrics.pairwise` submodule implements utilities to evaluate pairwise distances or affinity of sets of samples. This module contains both distance metrics and kernels. A brief summary is given on the two he... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/metrics.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.117459 |
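The pairwise-metrics excerpt above can be illustrated with the simplest distance utility in that submodule (the 3-4-5 triangle is an illustrative choice):

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

X = np.array([[0.0, 0.0], [3.0, 4.0]])
D = euclidean_distances(X)  # symmetric (2, 2) matrix; off-diagonal = 5.0
```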
polynomial kernel represents the similarity between two vectors. Conceptually, the polynomial kernel considers not only the similarity between vectors under the same dimension, but also across dimensions. When used in machine learning algorithms, this allows to account for feature interaction. The polynomial kernel is ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/metrics.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.179199 |
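The polynomial kernel described in the excerpt above computes `K(x, y) = (gamma * <x, y> + coef0) ** degree`; a one-vector sketch with explicit hyperparameters:

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

X = np.array([[1.0, 1.0]])
# <x, x> = 2, so K = (1.0 * 2 + 1.0) ** 2 = 9
K = polynomial_kernel(X, X, degree=2, gamma=1.0, coef0=1.0)
```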
.. \_covariance: =================================================== Covariance estimation =================================================== .. currentmodule:: sklearn.covariance Many statistical problems require the estimation of a population's covariance matrix, which can be seen as an estimation of data set scatte... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/covariance.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.101351 |
or it can be otherwise obtained by fitting a :class:`LedoitWolf` object to the same sample. .. note:: \*\*Case when population covariance matrix is isotropic\*\* It is important to note that when the number of samples is much larger than the number of features, one would expect that no shrinkage would be necessary. The... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/covariance.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.034543 |
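The Ledoit-Wolf shrinkage estimator mentioned in the excerpt above can be fitted on a fabricated Gaussian sample; the learned shrinkage intensity always lies in [0, 1]:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
X = rng.randn(100, 5)        # toy sample, 100 observations of 5 features
lw = LedoitWolf().fit(X)     # shrunk covariance estimate
```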
data, they can be numerically unstable. In addition, unlike shrinkage estimators, sparse estimators are able to recover off-diagonal structure. The :class:`GraphicalLasso` estimator uses an l1 penalty to enforce sparsity on the precision matrix: the higher its ``alpha`` parameter, the more sparse the precision matrix. ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/covariance.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.033285 |
of the covariance matrix of the data set ("reweighting step"). Rousseeuw and Van Driessen [4]\_ developed the FastMCD algorithm in order to compute the Minimum Covariance Determinant. This algorithm is used in scikit-learn when fitting an MCD object to data. The FastMCD algorithm also computes a robust estimate of the ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/covariance.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.109838 |
.. \_multiclass: ===================================== Multiclass and multioutput algorithms ===================================== This section of the user guide covers functionality related to multi-learning problems, including :term:`multiclass`, :term:`multilabel`, and :term:`multioutput` classification and regressi... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/multiclass.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.160042 |
(either in terms of generalization error or required computational resources). Target format ------------- Valid :term:`multiclass` representations for :func:`~sklearn.utils.multiclass.type\_of\_target` (`y`) are: - 1d or column vector containing more than two discrete values. An example of a vector ``y`` for 4 samples... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/multiclass.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.038298 |
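The target-format rule quoted above (a 1d vector with more than two discrete values is multiclass) is exactly what `type_of_target` reports:

```python
from sklearn.utils.multiclass import type_of_target

kind = type_of_target([1, 0, 2])  # 1d vector with three distinct values
```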
levels computed by the underlying binary classifiers. Since it requires to fit ``n\_classes \* (n\_classes - 1) / 2`` classifiers, this method is usually slower than one-vs-the-rest, due to its O(n\_classes^2) complexity. However, this method may be advantageous for algorithms such as kernel algorithms which don't scal... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/multiclass.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.0333 |
by other classifiers, hence the name "error-correcting". In practice, however, this may not happen as classifier mistakes will typically be correlated. The error-correcting output codes have a similar effect to bagging. Below is an example of multiclass learning using Output-Codes:: >>> from sklearn import datasets >>>... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/multiclass.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | -0.010861 |
1 .. \_multioutputclassfier: MultiOutputClassifier --------------------- Multilabel classification support can be added to any classifier with :class:`~sklearn.multioutput.MultiOutputClassifier`. This strategy consists of fitting one classifier per target. This allows multiple target variable classifications. The purpo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/multiclass.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.103361 |
np.vstack((y1, y2, y3)).T >>> n\_samples, n\_features = X.shape # 10,100 >>> n\_outputs = Y.shape[1] # 3 >>> n\_classes = 3 >>> forest = RandomForestClassifier(random\_state=1) >>> multi\_target\_forest = MultiOutputClassifier(forest, n\_jobs=2) >>> multi\_target\_forest.fit(X, Y).predict(X) array([[2, 2, 0], [1, 2, 1]... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/multiclass.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.067256 |
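The one-classifier-per-target strategy shown in the `MultiOutputClassifier` excerpt above can be sketched on fabricated data (the second target is simply the complement of the first, for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

X, y1 = make_classification(n_samples=20, n_features=5, random_state=1)
y2 = 1 - y1                       # a second, derived binary target
Y = np.vstack([y1, y2]).T         # shape (20, 2): one column per output
clf = MultiOutputClassifier(RandomForestClassifier(random_state=1)).fit(X, Y)
pred = clf.predict(X)             # one prediction column per target
```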
.. \_neighbors: ================= Nearest Neighbors ================= .. sectionauthor:: Jake Vanderplas .. currentmodule:: sklearn.neighbors :mod:`sklearn.neighbors` provides functionality for unsupervised and supervised neighbors-based learning methods. Unsupervised nearest neighbors is the foundation of many other l... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.061635 |
0., 1., 1., 0.], [0., 0., 0., 1., 1., 0.], [0., 0., 0., 0., 1., 1.]]) The dataset is structured such that points nearby in index order are nearby in parameter space, leading to an approximately block-diagonal matrix of K-nearest neighbors. Such a sparse graph is useful in a variety of circumstances which make use of sp... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.035786 |
point. Alternatively, a user-defined function of the distance can be supplied to compute the weights. .. |classification\_1| image:: ../auto\_examples/neighbors/images/sphx\_glr\_plot\_classification\_001.png :target: ../auto\_examples/neighbors/plot\_classification.html :scale: 75 .. centered:: |classification\_1| .. ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.110848 |
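The neighbors-based classification described in the excerpt above reduces to a majority vote among the k closest training points; a one-feature sketch:

```python
from sklearn.neighbors import KNeighborsClassifier

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# Nearest 3 neighbors of 1.1 are 1.0, 2.0, 0.0 -> labels 0, 1, 0 -> majority 0
label = knn.predict([[1.1]])[0]
```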
was the \*KD tree\* data structure (short for \*K-dimensional tree\*), which generalizes two-dimensional \*Quad-trees\* and 3-dimensional \*Oct-trees\* to an arbitrary number of dimensions. The KD tree is a binary tree structure which recursively partitions the parameter space along the data axes, dividing it into nest... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.013026 |
algorithms can be more efficient than a tree-based approach. Both :class:`KDTree` and :class:`BallTree` address this through providing a \*leaf size\* parameter: this controls the number of samples at which a query switches to brute-force. This allows both algorithms to approach the efficiency of a brute-force computat... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.074629 |
of this switch can be specified with the parameter ``leaf\_size``. This parameter choice has many effects: \*\*construction time\*\* A larger ``leaf\_size`` leads to a faster tree construction time, because fewer nodes need to be created \*\*query time\*\* Both a large or small ``leaf\_size`` can lead to suboptimal que... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.080876 |
:class:`~sklearn.manifold.Isomap`. All these estimators can compute internally the nearest neighbors, but most of them also accept precomputed nearest neighbors :term:`sparse graph`, as given by :func:`~sklearn.neighbors.kneighbors\_graph` and :func:`~sklearn.neighbors.radius\_neighbors\_graph`. With mode `mode='connec... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.085872 |
:class:`KNeighborsClassifier` to enable caching of the neighbors graph during a hyper-parameter grid-search. .. \_nca: Neighborhood Components Analysis ================================ .. sectionauthor:: William de Vazelhes Neighborhood Components Analysis (NCA, :class:`NeighborhoodComponentsAnalysis`) is a distance me... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.110349 |
64`. The data set is split into a training and a test set of equal size, then standardized. For evaluation the 3-nearest neighbor classification accuracy is computed on the 2-dimensional projected points found by each method. Each data sample belongs to one of 10 classes. .. |nca\_dim\_reduction\_1| image:: ../auto\_ex... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.052123 |
.. \_preprocessing: ================== Preprocessing data ================== .. currentmodule:: sklearn.preprocessing The ``sklearn.preprocessing`` package provides several common utility functions and transformer classes to change raw feature vectors into a representation that is more suitable for the downstream estim... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | 0.103289 |
absolute value of each feature is scaled to unit size. This can be achieved using :class:`MinMaxScaler` or :class:`MaxAbsScaler`, respectively. The motivation to use this scaling includes robustness to very small standard deviations of features and preserving zero entries in sparse data. Here is an example to scale a t... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | -0.016626 |
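The `MinMaxScaler` mentioned in the excerpt above maps each feature linearly so its minimum lands on 0 and its maximum on 1; a single-feature sketch:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0], [2.0], [3.0]])
X_scaled = MinMaxScaler().fit_transform(X)  # min -> 0, midpoint -> 0.5, max -> 1
```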
with outliers -------------------------- If your data contains many outliers, scaling using the mean and variance of the data is likely to not work very well. In these cases, you can use :class:`RobustScaler` as a drop-in replacement instead. It uses more robust estimates for the center and range of your data. .. dropd... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [
… 384-dim embedding elided … ] | -0.051415 |
distribution :math:`G`. This formula uses the two following facts: (i) if :math:`X` is a random variable with a continuous cumulative distribution function :math:`F` then :math:`F(X)` is uniformly distributed on :math:`[0,1]`; (ii) if :math:`U` is a random variable with uniform distribution on :math:`[0,1]` then :m... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.050547 |
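The two facts in the excerpt are exactly what :class:`QuantileTransformer` exploits: it estimates F empirically and composes it with G^{-1}. A sketch with a normal output distribution (the exponential input is an arbitrary skewed example):

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(42)
X = rng.exponential(size=(1000, 1))  # heavily right-skewed input

# output_distribution='normal' maps x -> G^{-1}(F(x)), where F is the
# empirical CDF of the data and G the standard normal CDF; the default
# 'uniform' stops at F(x)
qt = QuantileTransformer(output_distribution="normal",
                         n_quantiles=100, random_state=0)
X_norm = qt.fit_transform(X)
```

The transformed data is approximately standard normal regardless of the input's shape, which is the point of the probability-integral argument above.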
by default. Below are examples of Box-Cox and Yeo-Johnson applied to various probability distributions. Note that when applied to certain distributions, the power transforms achieve very Gaussian-like results, but with others, they are ineffective. This highlights the importance of visualizing the data before and after... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.076079 |
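A minimal example of the power transforms the excerpt compares; the lognormal input is chosen here because Box-Cox requires strictly positive data (Yeo-Johnson, the default method, also handles zeros and negative values):

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
X = rng.lognormal(size=(500, 1))  # strictly positive, right-skewed

# method='box-cox' needs positive inputs; standardize=True (the
# default) additionally rescales the output to zero mean, unit variance
pt = PowerTransformer(method="box-cox")
X_gauss = pt.fit_transform(X)
```

Since a lognormal sample is exactly a Box-Cox-transformable distribution (log brings it back to normal), this is one of the "very Gaussian-like results" cases the excerpt mentions.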
transforms each categorical feature to one new feature of integers (0 to n_categories - 1):: >>> enc = preprocessing.OrdinalEncoder() >>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']] >>> enc.fit(X) OrdinalEncoder() >>> enc.transform([['female', 'from US', 'uses Safari']]) array([... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.003684 |
'uses Safari'], ['female', 'from Europe', 'uses Firefox']] >>> enc.fit(X) OneHotEncoder(handle_unknown='infrequent_if_exist') >>> enc.transform([['female', 'from Asia', 'uses Chrome']]).toarray() array([[1., 0., 0., 0., 0., 0.]]) It is also possible to encode each column into ``n_categories - 1`` columns instead of... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | -0.017591 |
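The last sentence of the excerpt (encoding each column into ``n_categories - 1`` columns) is what :class:`OneHotEncoder`'s ``drop`` parameter does; a sketch on the same toy categories:

```python
from sklearn.preprocessing import OneHotEncoder

X = [['male', 'from US', 'uses Safari'],
     ['female', 'from Europe', 'uses Firefox']]

# drop='first' drops the first category of each feature, producing
# n_categories - 1 columns per feature and avoiding the perfect
# collinearity of a full one-hot encoding
enc = OneHotEncoder(drop='first').fit(X)
out = enc.transform([['male', 'from US', 'uses Safari']]).toarray()
```

Each of the three features here has two categories, so the output has three columns, one per feature, instead of six.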
If `min_frequency` is an integer, categories with a cardinality smaller than `min_frequency` will be considered infrequent. If `min_frequency` is a float, categories with a cardinality smaller than this fraction of the total number of samples will be considered infrequent. The default value is 1, which means every c... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.022854 |
>>> enc = preprocessing.OneHotEncoder(min_frequency=4, max_categories=3, sparse_output=False) >>> enc = enc.fit(X) >>> enc.transform([['dog'], ['cat'], ['rabbit'], ['snake']]) array([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) If there are infrequent categories with the same cardinality at the cutoff of... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.00872 |
diagram shows the :term:`cross fitting` scheme in :meth:`~TargetEncoder.fit_transform` with the default `cv=5`: .. image:: ../images/target_encoder_cross_validation.svg :width: 600 :align: center The :meth:`~TargetEncoder.fit` method does **not** use any :term:`cross fitting` schemes and learns one encoding on ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.004327 |
:func:`pandas.cut`:: >>> import pandas as pd >>> import numpy as np >>> from sklearn import preprocessing >>> >>> bins = [0, 1, 13, 20, 60, np.inf] >>> labels = ['infant', 'kid', 'teen', 'adult', 'senior citizen'] >>> transformer = preprocessing.FunctionTransformer( ... pd.cut, kw_args={'bins': bins, 'labels': labels,... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.044804 |
2], [3, 4, 5], [6, 7, 8]]) >>> poly = PolynomialFeatures(degree=3, interaction_only=True) >>> poly.fit_transform(X) array([[ 1., 0., 1., 2., 0., 0., 2., 0.], [ 1., 3., 4., 5., 12., 15., 20., 60.], [ 1., 6., 7., 8., 42., 48., 56., 336.]]) The features of X have been transformed from :math:`(X_1, X_2, X_3)` to :math... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.038304 |
a log transformation in a pipeline, do:: >>> import numpy as np >>> from sklearn.preprocessing import FunctionTransformer >>> transformer = FunctionTransformer(np.log1p, validate=True) >>> X = np.array([[0, 1], [2, 3]]) >>> # Since FunctionTransformer is no-op during fit, we can call transform directly >>> transformer.... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.091384 |
.. _mixture: .. _gmm: ======================= Gaussian mixture models ======================= .. currentmodule:: sklearn.mixture ``sklearn.mixture`` is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from d... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/mixture.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.067344 |
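A minimal sketch of the capabilities the excerpt lists: fitting a Gaussian mixture (here with full covariance matrices; 'tied', 'diag' and 'spherical' are the other options), predicting component assignments, and sampling new points (two synthetic blobs as data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two well-separated blobs in 2-D
X = np.vstack([rng.normal(-5, 1, size=(100, 2)),
               rng.normal(5, 1, size=(100, 2))])

gmm = GaussianMixture(n_components=2, covariance_type='full',
                      random_state=0).fit(X)
labels = gmm.predict(X)               # hard component assignments
samples, sample_labels = gmm.sample(10)  # draw new points from the fit
```

With blobs this well separated, the two fitted means land near the true cluster centers and both components are used.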
difficulty in learning Gaussian mixture models from unlabeled data is that one usually doesn't know which points came from which latent component (if one has access to this information it gets very easy to fit a separate Gaussian distribution to each set of points). Expectation-maximization is a well-founded stati... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/mixture.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.001922 |
active in the mixture. The implementation of the :class:`BayesianGaussianMixture` class proposes two types of prior for the weights distribution: a finite mixture model with Dirichlet distribution and an infinite mixture model with the Dirichlet Process. In practice the Dirichlet Process inference algorithm is a... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/mixture.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.061131 |
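A sketch of the behaviour the excerpt attributes to the Dirichlet-process prior: when `n_components` is deliberately over-specified, the variational inference drives the weights of unneeded components toward zero rather than spreading mass across all of them (synthetic two-cluster data; the 0.05 weight threshold is an arbitrary choice for counting "active" components):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-4, 0.5, size=(150, 2)),
               rng.normal(4, 0.5, size=(150, 2))])

# 8 components requested, but only ~2 are needed for this data
bgm = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior_type='dirichlet_process',
    random_state=0).fit(X)

# count components whose mixing weight is non-negligible
active = int((bgm.weights_ > 0.05).sum())
```

This is the "less tuning" point in the excerpt: the result changes little as `n_components` grows, because the extra components simply stay inactive.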
hence will produce wildly different solutions for different numbers of components, the variational inference with a Dirichlet process prior (``weight_concentration_prior_type='dirichlet_process'``) won't change much with changes to the parameters, leading to more stability and less tuning. :Regularization: Due to t... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/mixture.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.036438 |
.. _partial_dependence: =============================================================== Partial Dependence and Individual Conditional Expectation plots =============================================================== .. currentmodule:: sklearn.inspection Partial dependence plots (PDP) and individual conditional expect... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/partial_dependence.rst | main | scikit-learn | [384-dim embedding, truncated] | -0.025652 |