content (string, 3–20.5k chars) | url (string, 54–193 chars) | branch (4 classes) | source (42 classes) | embeddings (list of 384 floats) | score (float64, -0.21 to 0.65) |
|---|---|---|---|---|---|
.. \_gaussian\_process: ================== Gaussian Processes ================== .. currentmodule:: sklearn.gaussian\_process \*\*Gaussian Processes (GP)\*\* are a nonparametric supervised learning method used to solve \*regression\* and \*probabilistic classification\* problems. The advantages of Gaussian processes ar... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/gaussian_process.rst | main | scikit-learn | [
-0.14184944331645966, …] | 0.2214 |
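The chunk above introduces Gaussian processes for regression. A minimal sketch of the corresponding API is shown below; the 1-D data values are purely illustrative, not from the source:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D regression data (illustrative only).
X = np.array([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
y = np.sin(X).ravel()

# Kernel hyperparameters are optimized during fit by maximizing the
# log-marginal likelihood.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), random_state=0)
gpr.fit(X, y)

# GPs are probabilistic: predictions come with a standard deviation.
y_mean, y_std = gpr.predict(np.array([[4.0]]), return_std=True)
```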
class probabilities. GaussianProcessClassifier places a GP prior on a latent function :math:`f`, which is then squashed through a link function :math:`\pi` to obtain the probabilistic classification. The latent function :math:`f` is a so-called nuisance function, whose values are not observed and are not relevant by th... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/gaussian_process.rst | main | scikit-learn | [
-0.08151236921548843, …] | 0.101755 |
figure shows that this is because they exhibit a steep change of the class probabilities at the class boundaries (which is good) but have predicted probabilities close to 0.5 far away from the class boundaries (which is bad). This undesirable effect is caused by the Laplace approximation used internally by GPC. The sec... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/gaussian_process.rst | main | scikit-learn | [
-0.06172211840748787, …] | -0.001765 |
kernels support computing analytic gradients of the kernel's auto-covariance with respect to :math:`\log(\theta)` via setting ``eval\_gradient=True`` in the ``\_\_call\_\_`` method. That is, a ``(len(X), len(X), len(theta))`` array is returned where the entry ``[i, j, l]`` contains :math:`\frac{\partial k\_\theta(x\_i, ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/gaussian_process.rst | main | scikit-learn | [
-0.09342975914478302, …] | 0.178281 |
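The gradient behaviour described in this chunk can be exercised directly on a kernel object. A small sketch checking the returned shapes, with illustrative data:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

X = np.array([[0.0], [1.0], [2.0]])
kernel = RBF(length_scale=1.0)

# With eval_gradient=True (and Y=None), __call__ also returns the gradient
# of the kernel w.r.t. log(theta), shaped (len(X), len(X), len(theta)).
K, K_gradient = kernel(X, eval_gradient=True)
```

Since `RBF` has a single hyperparameter (the length scale), the last gradient dimension is 1 here.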
process. It depends on a parameter :math:`constant\\_value`. It is defined as: .. math:: k(x\_i, x\_j) = constant\\_value \;\forall\; x\_i, x\_j The main use-case of the :class:`WhiteKernel` kernel is as part of a sum-kernel where it explains the noise-component of the signal. Tuning its parameter :math:`noise\\_level`... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/gaussian_process.rst | main | scikit-learn | [
-0.05592400208115578, …] | 0.170124 |
3/2`) or twice differentiable (:math:`\nu = 5/2`). The flexibility of controlling the smoothness of the learned function via :math:`\nu` allows adapting to the properties of the true underlying functional relation. The prior and posterior of a GP resulting from a Matérn kernel are shown in the following figure: .. figu... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/gaussian_process.rst | main | scikit-learn | [
-0.10083723813295364, …] | 0.13509 |
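The Matérn chunk above explains how :math:`\nu` controls smoothness. A hedged sketch of using the `Matern` kernel in a regressor; the data and `nu` value are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy data (illustrative only).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.8, 0.9, 0.1])

# nu=1.5 yields once-differentiable sample functions, nu=2.5 twice-differentiable.
gpr = GaussianProcessRegressor(kernel=Matern(length_scale=1.0, nu=1.5),
                               random_state=0)
gpr.fit(X, y)
y_pred = gpr.predict(np.array([[1.5]]))
```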
.. \_impute: ============================ Imputation of missing values ============================ .. currentmodule:: sklearn.impute For various reasons, many real world datasets contain missing values, often encoded as blanks, NaNs or other placeholders. Such datasets however are incompatible with scikit-learn estima... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/impute.rst | main | scikit-learn | [
-0.04640425369143486, …] | 0.046934 |
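The imputation chunk above motivates replacing missing values rather than discarding rows. A minimal sketch with `SimpleImputer` on an illustrative array:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# NaN encodes the missing entries (illustrative data).
X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])

imp = SimpleImputer(missing_values=np.nan, strategy="mean")
X_imputed = imp.fit_transform(X)
# The NaN in column 0 is replaced by that column's mean: (1 + 7) / 2 = 4.
```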
\*\*experimental\*\* for now: default parameters or details of behaviour might change without any deprecation cycle. Resolving the following issues would help stabilize :class:`IterativeImputer`: convergence criteria (:issue:`14338`) and default estimators (:issue:`13286`). To use it, you need to explicitly import ``en... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/impute.rst | main | scikit-learn | [
-0.06971768289804459, …] | 0.061554 |
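As the chunk above notes, :class:`IterativeImputer` is experimental and must be enabled explicitly before import. A sketch with illustrative data:

```python
import numpy as np
# IterativeImputer is experimental; this enabling import must come first.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0], [3.0, 6.0], [4.0, 8.0],
              [np.nan, 3.0], [7.0, np.nan]])

# Each feature with missing values is modelled as a function of the other
# features, in a round-robin fashion, for at most max_iter rounds.
imp = IterativeImputer(max_iter=10, random_state=0)
X_imputed = imp.fit_transform(X)
```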
missing, then the neighbors for that sample can be different depending on the particular feature being imputed. When the number of available neighbors is less than `n\_neighbors` and there are no defined distances to the training set, the training set average for that feature is used during imputation. If there is at l... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/impute.rst | main | scikit-learn | [
-0.06553405523300171, …] | 0.006936 |
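The neighbor-based imputation described above is implemented by `KNNImputer`; a minimal sketch on illustrative data:

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, 4.0, 3.0],
              [np.nan, 6.0, 5.0],
              [8.0, 8.0, 7.0]])

# Each missing entry is filled with the mean of that feature over the
# n_neighbors nearest samples that have a value for it.
imputer = KNNImputer(n_neighbors=2)
X_imputed = imputer.fit_transform(X)
```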
the features containing missing values at ``fit`` time:: >>> indicator.features\_ array([0, 1, 3]) The ``features`` parameter can be set to ``'all'`` to return all features whether or not they contain missing values:: >>> indicator = MissingIndicator(missing\_values=-1, features="all") >>> mask\_all = indicator.fit\_tr... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/impute.rst | main | scikit-learn | [
0.029995068907737732, …] | -0.012821 |
.. \_lda\_qda: ========================================== Linear and Quadratic Discriminant Analysis ========================================== .. currentmodule:: sklearn Linear Discriminant Analysis (:class:`~discriminant\_analysis.LinearDiscriminantAnalysis`) and Quadratic Discriminant Analysis (:class:`~discriminant... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/lda_qda.rst | main | scikit-learn | [
-0.09641604870557785, …] | 0.028585 |
:math:`(x-\mu\_k)^T \Sigma^{-1} (x-\mu\_k)` corresponds to the `Mahalanobis Distance `\_ between the sample :math:`x` and the mean :math:`\mu\_k`. The Mahalanobis distance tells how close :math:`x` is to :math:`\mu\_k`, while also accounting for the variance of each feature. We can thus interpret LDA as assigning :ma... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/lda_qda.rst | main | scikit-learn | [
-0.027434388175606728, …] | 0.037553 |
covariance matrix will be used) and a value of 1 corresponds to complete shrinkage (which means that the diagonal matrix of variances will be used as an estimate for the covariance matrix). Setting this parameter to a value between these two extrema will estimate a shrunk version of the covariance matrix. The shrunk Le... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/lda_qda.rst | main | scikit-learn | [
-0.023542648181319237, …] | 0.040072 |
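The shrinkage behaviour described above can be sketched as follows; the data is illustrative, and note that shrinkage is only available with the `lsqr` and `eigen` solvers:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Two well-separated toy classes (illustrative only).
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# shrinkage='auto' picks the shrinkage intensity analytically via the
# Ledoit-Wolf lemma, between 0 (empirical covariance) and 1 (diagonal).
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X, y)
pred = clf.predict([[-0.8, -1.0]])
```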
Honey, I Shrunk the Sample Covariance Matrix. The Journal of Portfolio Management 30(4), 110-119, 2004. .. [3] R. O. Duda, P. E. Hart, D. G. Stork. Pattern Classification (Second Edition), section 2.6.2. | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/lda_qda.rst | main | scikit-learn | [
-0.00008913130295695737, …] | 0.082004 |
.. \_cross\_validation: =================================================== Cross-validation: evaluating estimator performance =================================================== .. currentmodule:: sklearn.model\_selection Learning the parameters of a prediction function and testing it on the same data is a methodologi... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.08386753499507904, …] | 0.069526 |
is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary validation set), which is a major advantage in problems such as inverse inference where the number of samples is very small. .. image:: ../ima... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.05506205931305885, …] | 0.003241 |
The :func:`cross\_validate` function differs from :func:`cross\_val\_score` in two ways: - It allows specifying multiple metrics for evaluation. - It returns a dict containing fit-times, score-times (and optionally training scores, fitted estimators, train-test split indices) in addition to the test score. For single m... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.0579567551612854, …] | 0.043106 |
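The two differences listed in the chunk above (multiple metrics, a dict of times and scores) can be seen in a short sketch; the estimator and metric choices here are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Several metrics evaluated in one call; the result is a dict that also
# carries per-fold fit and score times.
scores = cross_validate(SVC(kernel="linear", C=1), X, y, cv=5,
                        scoring=["accuracy", "f1_macro"])
```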
data is a common assumption in machine learning theory, it rarely holds in practice. If one knows that the samples have been generated using a time-dependent process, it is safer to use a :ref:`time-series aware cross-validation scheme `. Similarly, if we know that the generative process has a group structure (samples ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.15754932165145874, …] | 0.106693 |
the test error. Intuitively, since :math:`n - 1` of the :math:`n` samples are used to build each model, models constructed from folds are virtually identical to each other and to the model built from the entire training set. However, if the learning curve is steep for the training size in question, then 5 or 10-fold cr... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.10754674673080444, …] | 0.064732 |
This typically leads to undefined classification metrics (e.g. ROC AUC), exceptions raised when attempting to call :term:`fit` or missing columns in the output of the `predict\_proba` or `decision\_function` methods of multiclass classifiers trained on different folds. To mitigate such problems, splitters such as :clas... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.05037201568484306, …] | 0.000974 |
data is likely to be dependent on the individual group. In our example, the patient id for each sample will be its group identifier. In this case we would like to know if a model trained on a particular set of groups generalizes well to the unseen groups. To measure this, we need to ensure that all the samples in the v... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.0610843226313591, …] | 0.018234 |
10 15 16 17] [ 1 2 3 8 9 10 12 13 14 15 16 17] [ 0 4 5 6 7 11] .. dropdown:: Implementation notes - With the current implementation full shuffle is not possible in most scenarios. When shuffle=True, the following happens: 1. All groups are shuffled. 2. Groups are sorted by standard deviation of classes using stable sor... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.09111734479665756, …] | 0.014941 |
and generates a sequence of randomized partitions in which a subset of groups are held out for each split. Each train/test split is performed independently meaning there is no guaranteed relationship between successive test sets. Here is a usage example:: >>> from sklearn.model\_selection import GroupShuffleSplit >>> X... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.059548184275627136, …] | 0.057564 |
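The randomized group-holdout scheme described above can be sketched as follows; the arrays are illustrative:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.arange(8).reshape(8, 1)
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])
groups = np.array([1, 1, 2, 2, 3, 3, 4, 4])

# Whole groups are held out: a group never appears in both train and test,
# and successive test sets have no guaranteed relationship to each other.
gss = GroupShuffleSplit(n_splits=3, test_size=0.25, random_state=0)
splits = list(gss.split(X, y, groups=groups))
train_idx, test_idx = splits[0]
```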
same duration, in order to have comparable metrics across folds. Example of 3-split time series cross-validation on a dataset with 6 samples:: >>> from sklearn.model\_selection import TimeSeriesSplit >>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]]) >>> y = np.array([1, 2, 3, 4, 5, 6]) >>> tscv = TimeS... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.05181961879134178, …] | 0.074322 |
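The 3-split time-series example quoted in the chunk above can be run end to end; this sketch uses the same 6-sample arrays:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([1, 2, 3, 4, 5, 6])

# Successive training sets are supersets of earlier ones; each test fold
# comes strictly after its training fold in time.
tscv = TimeSeriesSplit(n_splits=3)
splits = list(tscv.split(X))
```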
one of these: - a lack of dependency between features and targets (i.e., there is no systematic relationship and any observed patterns are likely due to random chance) - \*\*or\*\* because the estimator was not able to use the dependency in the data (for instance because it underfit). In the latter case, using a more a... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/cross_validation.rst | main | scikit-learn | [
-0.03298240527510643, …] | 0.005806 |
.. currentmodule:: sklearn.preprocessing .. \_preprocessing\_targets: ========================================== Transforming the prediction target (``y``) ========================================== These are transformers that are not intended to be used on features, only on supervised learning targets. See also :ref:`... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/preprocessing_targets.rst | main | scikit-learn | [
-0.055145926773548126, …] | 0.078658 |
.. \_sgd: =========================== Stochastic Gradient Descent =========================== .. currentmodule:: sklearn.linear\_model \*\*Stochastic Gradient Descent (SGD)\*\* is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) `Support Vect... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/sgd.rst | main | scikit-learn | [
-0.13399629294872284, …] | 0.05021 |
and all regression losses below. In this case the target is encoded as :math:`-1` or :math:`1`, and the problem is treated as a regression problem. The predicted class then corresponds to the sign of the predicted target. Please refer to the :ref:`mathematical section below ` for formulas. The first two loss functions ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/sgd.rst | main | scikit-learn | [
-0.031836219131946564, …] | 0.030867 |
Regression ========== The class :class:`SGDRegressor` implements a plain stochastic gradient descent learning routine which supports different loss functions and penalties to fit linear regression models. :class:`SGDRegressor` is well suited for regression problems with a large number of training samples (> 10,000), fo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/sgd.rst | main | scikit-learn | [
-0.04112710431218147, …] | 0.070196 |
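A minimal `SGDRegressor` sketch on synthetic linear data (the data-generating coefficients are illustrative); scaling is included because SGD is sensitive to feature scales:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, nearly-noiseless linear data (illustrative only).
rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.randn(200)

# Standardize features before SGD, as recommended for this estimator.
reg = make_pipeline(StandardScaler(),
                    SGDRegressor(max_iter=1000, tol=1e-3, random_state=0))
reg.fit(X, y)
r2 = reg.score(X, y)
```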
The sparse implementation produces slightly different results from the dense implementation, due to a shrunk learning rate for the intercept. See :ref:`implementation\_details`. There is built-in support for sparse data given in any matrix in a format supported by `scipy.sparse `\_. For maximum efficiency, however, use... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/sgd.rst | main | scikit-learn | [
-0.08586014062166214, …] | -0.036931 |
SGD works best with a larger number of features and a higher `eta0`. .. rubric:: References \* `"Efficient BackProp" `\_ Y. LeCun, L. Bottou, G. Orr, K. Müller - In Neural Networks: Tricks of the Trade 1998. .. \_sgd\_mathematical\_formulation: Mathematical formulation ======================== We describe here the math... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/sgd.rst | main | scikit-learn | [
-0.08828761428594589, …] | -0.006585 |
step-size in the parameter space. The intercept :math:`b` is updated similarly but without regularization (and with additional decay for sparse matrices, as detailed in :ref:`implementation\_details`). The learning rate :math:`\eta` can be either constant or gradually decaying. For classification, the default learning ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/sgd.rst | main | scikit-learn | [
-0.06905706226825714, …] | 0.106972 |
.. \_semi\_supervised: =================================================== Semi-supervised learning =================================================== .. currentmodule:: sklearn.semi\_supervised `Semi-supervised learning `\_ is a situation in which in your training data some of the samples are not labeled. The semi-su... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/semi_supervised.rst | main | scikit-learn | [
-0.04380118101835251, …] | 0.117396 |
clamping factor can be relaxed, to say :math:`\alpha=0.2`, which means that we will always retain 80 percent of our original label distribution, but the algorithm gets to change its confidence of the distribution within 20 percent. :class:`LabelPropagation` uses the raw similarity matrix constructed from the data with ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/semi_supervised.rst | main | scikit-learn | [
-0.01830308884382248, …] | 0.029 |
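The soft-clamping idea in the chunk above can be sketched with :class:`LabelSpreading`, which exposes the :math:`\alpha` clamping parameter; the 1-D two-cluster data here is illustrative:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Two tight clusters; -1 marks unlabeled samples (illustrative data).
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y = np.array([0, -1, -1, 1, -1, -1])

# alpha=0.2 is the soft clamping described above: 80% of the original label
# distribution is retained, 20% may be revised by propagation.
model = LabelSpreading(kernel="knn", n_neighbors=2, alpha=0.2)
model.fit(X, y)
labels = model.transduction_
```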
.. \_decompositions: ================================================================= Decomposing signals in components (matrix factorization problems) ================================================================= .. currentmodule:: sklearn.decomposition .. \_PCA: Principal component analysis (PCA) ===============... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.03684031218290329, …] | -0.046234 |
instance). The PCA algorithm can be used to linearly transform the data while both reducing the dimensionality and preserving most of the explained variance at the same time. The class :class:`PCA` used with the optional parameter ``svd\_solver='randomized'`` is very useful in that case: since we are going to drop most... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.0811968743801117, …] | 0.029036 |
and different kinds of structure; see [Jen09]\_ for a review of such methods. For more details on how to use Sparse PCA, see the Examples section, below. .. |spca\_img| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_005.png :target: ../auto\_examples/decomposition/plot\_faces\_dec... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.07270952314138412, …] | -0.023787 |
the number of samples. It relies on randomized decomposition methods to find an approximate solution in a shorter time. The time complexity of the randomized :class:`KernelPCA` is :math:`O(n\_{\mathrm{samples}}^2 \cdot n\_{\mathrm{components}})` instead of :math:`O(n\_{\mathrm{samples}}^3)` for the exact method impleme... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.057981353253126144, …] | 0.01531 |
a Gaussian distribution, compensating for LSA's erroneous assumptions about textual data. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_text\_plot\_document\_clustering.py` .. rubric:: References \* Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze (2008), \*Introduction to Information Retrieval... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.03786846622824669, …] | 0.103677 |
sparse coding step that shares the same implementation with all dictionary learning objects (see :ref:`SparseCoder`). It is also possible to constrain the dictionary and/or code to be positive to match constraints that may be present in the data. Below are the faces with different positivity constraints applied. Red in... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.04766840115189552, …] | -0.01726 |
without any further assumptions the idea of having a latent variable :math:`h` would be superfluous -- :math:`x` can be completely modelled with a mean and a covariance. We need to impose some more specific structure on one of these two parameters. A simple additional assumption regards the structure of the error covar... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.025157185271382332, …] | -0.021355 |
:math:`WH`. The most widely used distance function is the squared Frobenius norm, which is an obvious extension of the Euclidean norm to matrices: .. math:: d\_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||\_{\mathrm{Fro}}^2 = \frac{1}{2} \sum\_{i,j} (X\_{ij} - {Y}\_{ij})^2 Unlike :class:`PCA`, the representation of a ve... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.053731732070446014, …] | 0.086874 |
as, for example, the (generalized) Kullback-Leibler (KL) divergence, also referred as I-divergence: .. math:: d\_{KL}(X, Y) = \sum\_{i,j} (X\_{ij} \log(\frac{X\_{ij}}{Y\_{ij}}) - X\_{ij} + Y\_{ij}) Or, the Itakura-Saito (IS) divergence: .. math:: d\_{IS}(X, Y) = \sum\_{i,j} (\frac{X\_{ij}}{Y\_{ij}} - \log(\frac{X\_{ij}... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.07100366055965424, … (384-dim embedding, truncated) | 0.124435 |
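The generalized KL (I-divergence) and Itakura-Saito divergences quoted in this chunk can be sketched directly from their formulas; this is an illustrative numpy translation, not the library's beta-divergence code. Both vanish when `X == Y` and are non-negative for strictly positive entries:

```python
import numpy as np

def kl_divergence(X, Y):
    # Generalized KL (I-divergence): sum_ij (X log(X/Y) - X + Y)
    return np.sum(X * np.log(X / Y) - X + Y)

def is_divergence(X, Y):
    # Itakura-Saito: sum_ij (X/Y - log(X/Y) - 1)
    return np.sum(X / Y - np.log(X / Y) - 1)

rng = np.random.default_rng(0)
X = rng.random((3, 3)) + 0.1   # strictly positive, as both divergences require
Y = rng.random((3, 3)) + 0.1

assert np.isclose(kl_divergence(X, X), 0.0)
assert np.isclose(is_divergence(X, X), 0.0)
assert kl_divergence(X, Y) >= 0 and is_divergence(X, Y) >= 0
```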
factorization with the beta-divergence" <1010.1763>` C. Fevotte, J. Idier, 2011 .. [7] :arxiv:`"Online algorithms for nonnegative matrix factorization with the Itakura-Saito divergence" <1106.4198>` A. Lefevre, F. Bach, C. Fevotte, 2011 .. \_LatentDirichletAllocation: Latent Dirichlet Allocation (LDA) =================... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.033825214952230453, … (384-dim embedding, truncated) | 0.041953 |
is used when data can be fetched sequentially. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_topics\_extraction\_with\_nmf\_lda.py` .. rubric:: References \* `"Latent Dirichlet Allocation" `\_ D. Blei, A. Ng, M. Jordan, 2003 \* `"Online Learning for Latent Dirichlet Allocation” `\_ M. Hof... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst | main | scikit-learn | [
-0.04700421914458275, … (384-dim embedding, truncated) | 0.104564 |
.. \_ensemble: =========================================================================== Ensembles: Gradient boosting, random forests, bagging, voting, stacking =========================================================================== .. currentmodule:: sklearn.ensemble \*\*Ensemble methods\*\* combine the predicti... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.09956887364387512, … (384-dim embedding, truncated) | 0.100535 |
loss version is selected based on :term:`y` passed to :term:`fit`. The size of the trees can be controlled through the ``max\_leaf\_nodes``, ``max\_depth``, and ``min\_samples\_leaf`` parameters. The number of bins used to bin the data is controlled with the ``max\_bins`` parameter. Using less bins acts as a form of re... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
0.05690394341945648, … (384-dim embedding, truncated) | 0.041466 |
HistGradientBoostingClassifier(min\_samples\_leaf=1).fit(X, y) >>> gbdt.predict(X) array([0, 0, 1, 1]) When the missingness pattern is predictive, the splits can be performed on whether the feature value is missing or not:: >>> X = np.array([0, np.nan, 1, 2, np.nan]).reshape(-1, 1) >>> y = [0, 1, 0, 0, 1] >>> gbdt = Hi... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.019705316051840782, … (384-dim embedding, truncated) | -0.012688 |
the :math:`2^{K - 1} - 1` partitions, where :math:`K` is the number of categories. This can quickly become prohibitive when :math:`K` is large. Fortunately, since gradient boosting trees are always regression trees (even for classification problems), there exists a faster strategy that can yield equivalent splits. Firs... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.028816252946853638, … (384-dim embedding, truncated) | -0.049093 |
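The :math:`2^{K - 1} - 1` count quoted above (the number of ways to split K categories into two non-empty groups) can be verified by brute-force enumeration; this is an illustrative check, not library code:

```python
from itertools import combinations

def n_binary_partitions(K):
    # Each non-empty proper subset and its complement define one candidate
    # split; halving to avoid double counting gives 2**(K-1) - 1.
    return 2 ** (K - 1) - 1

def enumerate_partitions(categories):
    # Brute force: every non-empty proper subset, counted once per
    # {subset, complement} pair.
    cats = list(categories)
    seen = set()
    for r in range(1, len(cats)):
        for left in combinations(cats, r):
            right = tuple(c for c in cats if c not in left)
            seen.add(frozenset([frozenset(left), frozenset(right)]))
    return len(seen)

assert n_binary_partitions(4) == 7 == enumerate_partitions("abcd")
```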
may interact with each other, as well as features 1 and 2. But note that features 0 and 2 are forbidden to interact. The following depicts a tree and the possible splits of the tree: .. code-block:: none 1 <- Both constraint groups could be applied from now on / \ 1 2 <- Left split still fulfills both constraint groups... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.050427086651325226, … (384-dim embedding, truncated) | 0.051068 |
example shows how to fit a gradient boosting classifier with 100 decision stumps as weak learners:: >>> from sklearn.datasets import make\_hastie\_10\_2 >>> from sklearn.ensemble import GradientBoostingClassifier >>> X, y = make\_hastie\_10\_2(random\_state=0) >>> X\_train, X\_test = X[:2000], X[2000:] >>> y\_train, y\... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.05681997165083885, … (384-dim embedding, truncated) | -0.018958 |
depth ``h`` will be grown. Such trees will have (at most) ``2\*\*h`` leaf nodes and ``2\*\*h - 1`` split nodes. Alternatively, you can control the tree size by specifying the number of leaf nodes via the parameter ``max\_leaf\_nodes``. In this case, trees will be grown using best-first search where nodes with the highe... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.059939634054899216, … (384-dim embedding, truncated) | 0.033604 |
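The node counts quoted above for a depth-``h`` binary tree are easy to state as a tiny helper (illustrative only):

```python
def tree_size(h):
    # A fully grown binary tree of depth h has at most 2**h leaf nodes
    # and 2**h - 1 split (internal) nodes.
    return {"leaves": 2 ** h, "splits": 2 ** h - 1}

assert tree_size(3) == {"leaves": 8, "splits": 7}
```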
a result, the leaves values of the tree :math:`h\_m` are modified once the tree is fitted, such that the leaves values minimize the loss :math:`L\_m`. The update is loss-dependent: for the absolute error loss, the value of a leaf is updated to the median of the samples in that leaf. .. dropdown:: Classification Gradien... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.07098234444856644, … (384-dim embedding, truncated) | 0.032894 |
[HTF]\_ recommend to set the learning rate to a small constant (e.g. ``learning\_rate <= 0.1``) and choose ``n\_estimators`` large enough that early stopping applies, see :ref:`sphx\_glr\_auto\_examples\_ensemble\_plot\_gradient\_boosting\_early\_stopping.py` for a more detailed discussion of the interaction between ``... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.14177800714969635, … (384-dim embedding, truncated) | 0.037071 |
\* :ref:`sphx\_glr\_auto\_examples\_ensemble\_plot\_gradient\_boosting\_regression.py` .. rubric:: References .. [Friedman2001] Friedman, J.H. (2001). :doi:`Greedy function approximation: A gradient boosting machine <10.1214/aos/1013203451>`. Annals of Statistics, 29, 1189-1232. .. [Friedman2002] Friedman, J.H. (2002).... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.15312789380550385, … (384-dim embedding, truncated) | 0.023684 |
In contrast, random forests use a majority vote to predict the outcome, which can require a larger number of trees to achieve the same level of accuracy. - Efficient binning: HGBT uses an efficient binning algorithm that can handle large datasets with a high number of features. The binning algorithm can pre-process the... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
0.035859812051057816, … (384-dim embedding, truncated) | -0.05702 |
(``bootstrap=True``) while the default strategy for extra-trees is to use the whole dataset (``bootstrap=False``). When using bootstrap sampling the generalization error can be estimated on the left out or out-of-bag samples. This can be enabled by setting ``oob\_score=True``. .. note:: The size of the model with the d... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.026761559769511223, … (384-dim embedding, truncated) | 0.011117 |
to the prediction function. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_ensemble\_plot\_forest\_importances.py` .. rubric:: References .. [L2014] G. Louppe, :arxiv:`"Understanding Random Forests: From Theory to Practice" <1407.7502>`, PhD Thesis, U. of Liege, 2014. .. \_random\_trees\_embedding: Totally Ra... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.0552658773958683, … (384-dim embedding, truncated) | 0.094243 |
best with weak models (e.g., shallow decision trees). Bagging methods come in many flavours but mostly differ from each other by the way they draw random subsets of the training set: \* When random subsets of the dataset are drawn as random subsets of the samples, then this algorithm is known as Pasting [B1999]\_. \* W... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.02132330648601055, … (384-dim embedding, truncated) | 0.096173 |
= datasets.load\_iris() >>> X, y = iris.data[:, 1:3], iris.target >>> clf1 = LogisticRegression(random\_state=1) >>> clf2 = RandomForestClassifier(n\_estimators=50, random\_state=1) >>> clf3 = GaussianNB() >>> eclf = VotingClassifier( ... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], ... voting='hard') >>> fo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
0.03912947326898575, … (384-dim embedding, truncated) | -0.021403 |
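The `voting='hard'` setting in the chunk above means each classifier casts one label vote per sample and the majority label wins; a minimal sketch of that rule (simplified: scikit-learn breaks ties by ascending class label, which this mode-based version does not guarantee):

```python
from collections import Counter

def hard_vote(predictions):
    # predictions: one predicted label per classifier for a single sample.
    # The most common (mode) label wins, as in VotingClassifier(voting='hard').
    return Counter(predictions).most_common(1)[0][0]

assert hard_vote([1, 1, 2]) == 1   # 'lr' and 'rf' outvote 'gnb'
assert hard_vote([0, 1, 1]) == 1
```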
from sklearn.linear\_model import LinearRegression >>> from sklearn.ensemble import VotingRegressor >>> # Loading some example data >>> X, y = load\_diabetes(return\_X\_y=True) >>> # Training classifiers >>> reg1 = GradientBoostingRegressor(random\_state=1) >>> reg2 = RandomForestRegressor(random\_state=1) >>> reg3 = L... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.06436654925346375, … (384-dim embedding, truncated) | 0.020773 |
= StackingRegressor( ... estimators=[('rf', final\_layer\_rfr), ... ('gbrt', final\_layer\_gbr)], ... final\_estimator=RidgeCV() ... ) >>> multi\_layer\_regressor = StackingRegressor( ... estimators=[('ridge', RidgeCV()), ... ('lasso', LassoCV(random\_state=42)), ... ('knr', KNeighborsRegressor(n\_neighbors=20, ... met... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst | main | scikit-learn | [
-0.05038244649767876, … (384-dim embedding, truncated) | 0.015842 |
.. \_array\_api: ================================ Array API support (experimental) ================================ .. currentmodule:: sklearn The `Array API `\_ specification defines a standard API for all array manipulation libraries with a NumPy-like API. Scikit-learn vendors pinned copies of `array-api-compat `\_\_... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/array_api.rst | main | scikit-learn | [
-0.002509576268494129, … (384-dim embedding, truncated) | 0.154123 |
compatible inputs. Estimators ---------- - :class:`decomposition.PCA` (with `svd\_solver="full"`, `svd\_solver="covariance\_eigh"`, or `svd\_solver="randomized"` (`svd\_solver="randomized"` only if `power\_iteration\_normalizer="QR"`)) - :class:`kernel\_approximation.Nystroem` - :class:`linear\_model.Ridge` (with `solv... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/array_api.rst | main | scikit-learn | [
-0.059585511684417725, … (384-dim embedding, truncated) | -0.044437 |
whose performance can be improved when passed arrays on a GPU, as they can handle large matrix operations very efficiently. `X` initially contains categorical string data (thus needs to be on CPU), which is target encoded to numerical values in :class:`~sklearn.preprocessing.TargetEncoder`. `X` is then explicitly moved... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/array_api.rst | main | scikit-learn | [
-0.01805937848985195, … (384-dim embedding, truncated) | 0.020851 |
on MPS device support -------------------------- On macOS, PyTorch can use the Metal Performance Shaders (MPS) to access hardware accelerators (e.g. the internal GPU component of the M1 or M2 chips). However, the MPS device support for PyTorch is incomplete at the time of writing. See the following github issue for mor... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/array_api.rst | main | scikit-learn | [
-0.0675823837518692, … (384-dim embedding, truncated) | -0.002116 |
.. currentmodule:: sklearn.model\_selection .. \_TunedThresholdClassifierCV: ================================================== Tuning the decision threshold for class prediction ================================================== Classification is best divided into two parts: \* the statistical problem of learning a mo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/classification_threshold.rst | main | scikit-learn | [
-0.0915195420384407, … (384-dim embedding, truncated) | 0.094834 |
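The decision-threshold tuning described in this chunk boils down to replacing the default 0.5 cut-off on predicted probabilities; a minimal illustration of the mechanics (not `TunedThresholdClassifierCV` itself, which also searches for the threshold by cross-validation):

```python
import numpy as np

def predict_with_threshold(proba_pos, threshold):
    # Turn P(y=1) scores into hard labels. The conventional cut-off is 0.5,
    # but a tuned threshold can trade precision against recall.
    return (np.asarray(proba_pos) >= threshold).astype(int)

proba = [0.1, 0.4, 0.6, 0.9]
assert list(predict_with_threshold(proba, 0.5)) == [0, 0, 1, 1]
# Lowering the threshold flags more samples as the positive class.
assert list(predict_with_threshold(proba, 0.3)) == [0, 1, 1, 1]
```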
class of interest for a very low probability (around 0.02). This decision threshold optimizes a utility metric defined by the business (in this case an insurance company). .. figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_cost\_sensitive\_learning\_002.png :target: ../auto\_examples/model\_selectio... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/classification_threshold.rst | main | scikit-learn | [
-0.060940779745578766, … (384-dim embedding, truncated) | 0.100384 |
.. \_kernel\_ridge: =========================== Kernel ridge regression =========================== .. currentmodule:: sklearn.kernel\_ridge Kernel ridge regression (KRR) [M2012]\_ combines :ref:`ridge\_regression` (linear least squares with :math:`L\_2`-norm regularization) with the `kernel trick `\_. It thus learns a... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/kernel_ridge.rst | main | scikit-learn | [
-0.1338331401348114, … (384-dim embedding, truncated) | 0.083738 |
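Kernel ridge regression, as quoted above, has a closed-form solution in the dual: solve `(K + alpha * I) c = y` and predict with the kernel between new and training points. A self-contained numpy sketch under an assumed RBF kernel (illustrative, not `sklearn.kernel_ridge`):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_ridge_fit_predict(X, y, X_new, alpha=1e-6, gamma=1.0):
    # Dual coefficients c solve (K + alpha * I) c = y;
    # predictions are K(X_new, X) @ c.
    K = rbf_kernel(X, X, gamma)
    c = np.linalg.solve(K + alpha * np.eye(len(X)), y)
    return rbf_kernel(X_new, X, gamma) @ c

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])
# With a tiny penalty the model (nearly) interpolates the training targets.
pred = kernel_ridge_fit_predict(X, y, X, alpha=1e-8)
assert np.allclose(pred, y, atol=1e-4)
```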
.. \_neural\_networks\_supervised: ================================== Neural network models (supervised) ================================== .. currentmodule:: sklearn.neural\_network .. warning:: This implementation is not intended for large-scale applications. In particular, scikit-learn offers no GPU support. For muc... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neural_networks_supervised.rst | main | scikit-learn | [
-0.07056731730699539, … (384-dim embedding, truncated) | 0.16632 |
supports only the Cross-Entropy loss function, which allows probability estimates by running the ``predict\_proba`` method. MLP trains using Backpropagation. More precisely, it trains using some form of gradient descent and the gradients are calculated using Backpropagation. For classification, it minimizes the Cross-E... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neural_networks_supervised.rst | main | scikit-learn | [
-0.06323490291833878, … (384-dim embedding, truncated) | 0.06946 |
\cdot h \cdot h + h \cdot o))`, where :math:`i` is the number of iterations. Since backpropagation has a high time complexity, it is advisable to start with smaller number of hidden neurons and few hidden layers for training. .. dropdown:: Mathematical formulation Given a set of training examples :math:`\{(x\_1, y\_1),... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neural_networks_supervised.rst | main | scikit-learn | [
-0.08864082396030426, … (384-dim embedding, truncated) | 0.041174 |
standardize it to have mean 0 and variance 1. Note that you must apply the \*same\* scaling to the test set for meaningful results. You can use :class:`~sklearn.preprocessing.StandardScaler` for standardization. >>> from sklearn.preprocessing import StandardScaler # doctest: +SKIP >>> scaler = StandardScaler() # doctes... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neural_networks_supervised.rst | main | scikit-learn | [
0.000490740523673594, … (384-dim embedding, truncated) | 0.020061 |
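The key point in the chunk above is that the scaling statistics must come from the training set alone and then be reused on the test set; a plain-numpy sketch of what `StandardScaler` does under the hood (illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=3.0, size=(200, 2))
X_test = rng.normal(loc=5.0, scale=3.0, size=(50, 2))

# Fit the scaling on the training set only...
mean, std = X_train.mean(axis=0), X_train.std(axis=0)
# ...then apply the *same* transform to both splits.
X_train_s = (X_train - mean) / std
X_test_s = (X_test - mean) / std

# Training data ends up with mean 0 and variance 1 per feature;
# the test split is close but not exact, by design.
assert np.allclose(X_train_s.mean(axis=0), 0.0, atol=1e-12)
assert np.allclose(X_train_s.std(axis=0), 1.0, atol=1e-12)
```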
.. \_linear\_model: ============= Linear Models ============= .. currentmodule:: sklearn.linear\_model The following are a set of methods intended for regression in which the target value is expected to be a linear combination of the features. In mathematical notation, the predicted value :math:`\hat{y}` can be written... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.05766735225915909, … (384-dim embedding, truncated) | 0.094369 |
specified, :class:`Ridge` will choose between the `"lbfgs"`, `"cholesky"`, and `"sparse\_cg"` solvers. :class:`Ridge` will begin checking the conditions shown in the following table from top to bottom. If the condition is true, the corresponding solver is chosen. +-------------+-----------------------------------------... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.10367961972951889, … (384-dim embedding, truncated) | -0.036094 |
consists of a linear model with an added regularization term. The objective function to minimize is: .. math:: \min\_{w} P(w) = {\frac{1}{2n\_{\text{samples}}} ||X w - y||\_2 ^ 2 + \alpha ||w||\_1} The lasso estimate thus solves the least-squares with added penalty :math:`\alpha ||w||\_1`, where :math:`\alpha` is a con... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.01743023656308651, … (384-dim embedding, truncated) | 0.090285 |
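The lasso objective quoted in this chunk translates directly to numpy; an illustrative evaluation (not the coordinate-descent solver itself):

```python
import numpy as np

def lasso_objective(w, X, y, alpha):
    # P(w) = 1/(2 n_samples) * ||X w - y||_2^2 + alpha * ||w||_1
    n = X.shape[0]
    return np.sum((X @ w - y) ** 2) / (2 * n) + alpha * np.sum(np.abs(w))

rng = np.random.default_rng(0)
X = rng.random((10, 3))
y = rng.random(10)
# At w = 0 the L1 penalty vanishes and only the data-fit term remains.
assert np.isclose(lasso_objective(np.zeros(3), X, y, alpha=0.5),
                  np.sum(y ** 2) / 20)
```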
can safely exclude, i.e., set to zero with certainty. .. dropdown:: References The first reference explains the coordinate descent solver used in scikit-learn, the others treat gap safe screening rules. \* :doi:`Friedman, Hastie & Tibshirani. (2010). Regularization Path For Generalized linear Models by Coordinate Desce... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.07396185398101807, … (384-dim embedding, truncated) | 0.029384 |
discarded since it is a constant when :math:`\sigma^2` is provided. In addition, it is sometimes stated that the AIC is equivalent to the :math:`C\_p` statistic [12]\_. In a strict sense, however, it is equivalent only up to some constant and a multiplicative factor. At last, we mentioned above that :math:`\sigma^2` is... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.009275809861719608, … (384-dim embedding, truncated) | 0.147501 |
+ \alpha \rho ||w||\_1 + \frac{\alpha(1-\rho)}{2} ||w||\_2 ^ 2} .. figure:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_lasso\_lasso\_lars\_elasticnet\_path\_002.png :target: ../auto\_examples/linear\_model/plot\_lasso\_lasso\_lars\_elasticnet\_path.html :align: center :scale: 50% The class :class:`ElasticN... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.028187235817313194, … (384-dim embedding, truncated) | 0.095998 |
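The elastic-net penalty in the chunk above mixes L1 and L2 terms via `rho` (`l1_ratio` in scikit-learn); a small illustrative check that the two extremes recover the lasso and ridge penalties:

```python
import numpy as np

def elastic_net_penalty(w, alpha, rho):
    # alpha * rho * ||w||_1 + alpha * (1 - rho) / 2 * ||w||_2^2
    return (alpha * rho * np.sum(np.abs(w))
            + alpha * (1 - rho) / 2 * np.sum(w ** 2))

w = np.array([1.0, -2.0, 0.5])
# rho = 1 recovers the pure L1 (lasso) penalty; rho = 0 the pure L2 (ridge) one.
assert np.isclose(elastic_net_penalty(w, 0.3, 1.0), 0.3 * np.sum(np.abs(w)))
assert np.isclose(elastic_net_penalty(w, 0.3, 0.0), 0.15 * np.sum(w ** 2))
```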
>>> from sklearn import linear\_model >>> reg = linear\_model.LassoLars(alpha=.1) >>> reg.fit([[0, 0], [1, 1]], [0, 1]) LassoLars(alpha=0.1) >>> reg.coef\_ array([0.6, 0. ]) .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_lasso\_lasso\_lars\_elasticnet\_path.py` The LARS algorithm provides... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.04260465130209923, … (384-dim embedding, truncated) | 0.032212 |
`\_\_. \* Original Algorithm is detailed in the book `Bayesian learning for neural networks `\_\_ by Radford M. Neal. .. \_bayesian\_ridge\_regression: Bayesian Ridge Regression ------------------------- :class:`BayesianRidge` estimates a probabilistic model of the regression problem as described above. The prior for t... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.13355448842048645, … (384-dim embedding, truncated) | 0.026523 |
View of Automatic Relevance Determination `\_ .. [3] Michael E. Tipping: `Sparse Bayesian Learning and the Relevance Vector Machine `\_ .. [4] Tristan Fletcher: `Relevance Vector Machines Explained `\_ .. \_Logistic\_regression: Logistic regression =================== The logistic regression is implemented in :class:`L... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.03101547434926033, … (384-dim embedding, truncated) | 0.125927 |
since then the solution may not be unique, as shown in [16]\_. .. dropdown:: Mathematical details Let :math:`y\_i \in \{1, \ldots, K\}` be the label (ordinal) encoded target variable for observation :math:`i`. Instead of a single coefficient vector, we now have a matrix of coefficients :math:`W` where each row vector :... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.07371994853019714, … (384-dim embedding, truncated) | -0.037022 |
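With a coefficient matrix :math:`W` holding one row per class, as in the chunk above, multinomial class probabilities come from a softmax over the per-class scores; a minimal numpy sketch (illustrative, not scikit-learn's implementation):

```python
import numpy as np

def softmax_probabilities(X, W, b):
    # One coefficient row per class: scores z = X W^T + b, then
    # p_k = exp(z_k) / sum_j exp(z_j) for each sample.
    z = X @ W.T + b
    z -= z.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.random((5, 4))    # 5 samples, 4 features
W = rng.random((3, 4))    # K = 3 classes, one coefficient row each
b = rng.random(3)
P = softmax_probabilities(X, W, b)
assert P.shape == (5, 3)
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a distribution
```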
sparse multinomial logistic regression. It is also the only solver that supports Elastic-Net (`0 < l1\_ratio < 1`). \* The "lbfgs" is an optimization algorithm that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm [8]\_, which belongs to quasi-Newton methods. As such, it can deal with a wide range of differe... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.05313386768102646, … (384-dim embedding, truncated) | 0.143488 |
Probability Density Functions (PDF) of these distributions are illustrated in the following figure, .. figure:: ./glm\_data/poisson\_gamma\_tweedie\_distributions.png :align: center :scale: 100% PDF of a random variable Y following Poisson, Tweedie (power=1.5) and Gamma distributions with different mean values (:math:`... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
0.041585035622119904, … (384-dim embedding, truncated) | -0.108953 |
to `TweedieRegressor(power=2, link='log')`. - ``power = 3``: Inverse Gaussian distribution. The link function is determined by the `link` parameter. Usage example:: >>> from sklearn.linear\_model import TweedieRegressor >>> reg = TweedieRegressor(power=1, alpha=0.5, link='log') >>> reg.fit([[0, 0], [0, 1], [2, 2]], [0,... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [
-0.011811355128884315, … (384-dim embedding, truncated) | -0.055028 |
different things to keep in mind when dealing with data corrupted by outliers: .. |y_outliers| image:: ../auto_examples/linear_model/images/sphx_glr_plot_robust_fit_003.png :target: ../auto_examples/linear_model/plot_robust_fit.html :scale: 60% .. |X_outliers| image:: ../auto_examples/linear_model/images... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.018398 |
These steps are performed either a maximum number of times (``max_trials``) or until one of the special stop criteria are met (see ``stop_n_inliers`` and ``stop_score``). The final model is estimated using all inlier samples (consensus set) of the previously determined best model. The ``is_data_valid`` and ``is_... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.119492 |
by .. math:: H_{\epsilon}(z) = \begin{cases} z^2, & \text{if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases} It is advised to set the parameter ``epsilon`` to 1.35 to achieve 95% statistical efficiency. .. rubric:: References * Peter J. Huber, Elvezio M. Ronchetti: Robust Statistics, C... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.05064 |
.. _polynomial_regression: Polynomial regression: extending linear models with basis functions =================================================================== .. currentmodule:: sklearn.preprocessing One common pattern within machine learning is to use linear models trained on nonlinear functions of the data. Thi... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.150694 |
can be gotten from :class:`PolynomialFeatures` with the setting ``interaction_only=True``. For example, when dealing with boolean features, :math:`x_i^n = x_i` for all :math:`n` and is therefore useless; but :math:`x_i x_j` represents the conjunction of two booleans. This way, we can solve the XOR problem with a l... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/linear_model.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.019247 |
.. _clustering: ========== Clustering ========== `Clustering `__ of unlabeled data can be performed with the module :mod:`sklearn.cluster`. Each clustering algorithm comes in two variants: a class, that implements the ``fit`` method to learn the clusters on train data, and a function, that, given train data, returns... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.178668 |
is useful when the clusters have a specific shape, i.e. a non-flat manifold, and the standard euclidean distance is not the right metric. This case arises in the two top rows of the figure above. Gaussian mixture models, useful for clustering, are described in :ref:`another chapter of the documentation ` dedicated to m... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.185289 |
Each segment in the Voronoi diagram becomes a separate cluster. Secondly, the centroids are updated to the mean of each segment. The algorithm then repeats this until a stopping criterion is fulfilled. Usually, the algorithm stops when the relative decrease in the objective function between iterations is less than the ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.085261 |
These steps are performed until convergence or a predetermined number of iterations is reached. :class:`MiniBatchKMeans` converges faster than :class:`KMeans`, but the quality of the results is reduced. In practice this difference in quality can be quite small, as shown in the example and cited reference. .. figure:: ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.143419 |
k) = \lambda\cdot r_{t}(i, k) + (1-\lambda)\cdot r_{t+1}(i, k) .. math:: a_{t+1}(i, k) = \lambda\cdot a_{t}(i, k) + (1-\lambda)\cdot a_{t+1}(i, k) where :math:`t` indicates the iteration times. .. rubric:: Examples * :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`: Affinity Propagation o... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.181971 |
of clusters, but is not advised for many clusters. For two clusters, SpectralClustering solves a convex relaxation of the `normalized cuts `_ problem on the similarity graph: cutting the graph in two so that the weight of the edges cut is small compared to the weights of the edges inside each cluster. This criteria is... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.052952 |
is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. See the `Wikipedia page `_ for more details. The :class:`AgglomerativeClustering` object performs a hierarchical clustering using a bottom up approach: each observation starts in its own cluster, and clusters are su... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.178835 |
only at the intersection of a row and a column with indices of the dataset that should be connected. This matrix can be constructed from a-priori information: for instance, you may wish to cluster web pages by only merging pages with a link pointing from one to another. It can also be learned from the data, for instanc... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.018405 |
result as accurate as picking by inertia and is faster (especially for larger amount of data points, where calculating error may be costly). Picking by largest amount of data points will also likely produce clusters of similar sizes while `KMeans` is known to produce clusters of different sizes. Difference between Bise... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst | main | scikit-learn | [384-dim embedding, truncated] | 0.158328 |