content large_stringlengths 3 20.5k | url large_stringlengths 54 193 | branch large_stringclasses 4
values | source large_stringclasses 42
values | embeddings listlengths 384 384 | score float64 -0.21 0.65 |
|---|---|---|---|---|---|
classification, you need to set the class label for which the PDPs should be created via the ``target`` argument:: >>> from sklearn.datasets import load\_iris >>> iris = load\_iris() >>> mc\_clf = GradientBoostingClassifier(n\_estimators=10, ... max\_depth=1).fit(iris.data, iris.target) >>> features = [3, 2, (3, 2)] >>... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/partial_dependence.rst | main | scikit-learn | [
-0.0411725789308548,
-0.12364161759614944,
0.012953424826264381,
-0.03234224393963814,
0.055444758385419846,
0.0038551168981939554,
-0.002197003923356533,
-0.0027757338248193264,
-0.09349940717220306,
-0.029003245756030083,
-0.02889353595674038,
-0.08505167067050934,
-0.016210978850722313,
... | -0.054307 |
heterogeneous relationships. cICE plots can be plotted by setting `centered=True`: >>> PartialDependenceDisplay.from\_estimator(clf, X, features, ... kind='both', centered=True) <...> Mathematical Definition ======================= Let :math:`X\_S` be the set of input features of interest (i.e. the `features` parameter... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/partial_dependence.rst | main | scikit-learn | [
-0.035888515412807465,
-0.06490553915500641,
0.064105324447155,
-0.04130511358380318,
-0.011124993674457073,
0.008711644448339939,
0.000571867567487061,
0.08610309660434723,
0.024765633046627045,
0.032264478504657745,
-0.009198921732604504,
-0.10415481775999069,
0.03616556525230408,
-0.065... | 0.065238 |
class (the positive class for binary classification), or the decision function. .. rubric:: References .. [H2009] T. Hastie, R. Tibshirani and J. Friedman, `The Elements of Statistical Learning `\_, Second Edition, Section 10.13.2, Springer, 2009. .. [M2019] C. Molnar, `Interpretable Machine Learning `\_, Section 5.1, ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/partial_dependence.rst | main | scikit-learn | [
-0.04110012575984001,
-0.042652491480112076,
0.0070032598450779915,
-0.00425674906000495,
0.08817509561777115,
-0.04782814905047417,
0.04160831496119499,
-0.014296302571892738,
0.021980546414852142,
0.01934078522026539,
-0.08445198833942413,
0.02202252484858036,
0.08537953346967697,
-0.022... | 0.08969 |
.. \_data\_reduction: ===================================== Unsupervised dimensionality reduction ===================================== If your number of features is high, it may be useful to reduce it with an unsupervised step prior to supervised steps. Many of the :ref:`unsupervised-learning` methods implement a ``tr... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/unsupervised_reduction.rst | main | scikit-learn | [
-0.08162905275821686,
0.018173277378082275,
0.06886368244886398,
-0.014104539528489113,
0.024361737072467804,
-0.027981039136648178,
-0.056408222764730453,
-0.029535779729485512,
-0.06282778829336166,
-0.006781009025871754,
-0.016953714191913605,
-0.0041685584001243114,
-0.03887851908802986,... | -0.006759 |
.. \_random\_projection: ================== Random Projection ================== .. currentmodule:: sklearn.random\_projection The :mod:`sklearn.random\_projection` module implements a simple and computationally efficient way to reduce the dimensionality of the data by trading a controlled amount of accuracy (as additi... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/random_projection.rst | main | scikit-learn | [
-0.08328422158956528,
0.0041399444453418255,
-0.07653825730085373,
-0.04639608785510063,
0.034683436155319214,
-0.04414323717355728,
0.029589014127850533,
-0.07616254687309265,
0.030731791630387306,
-0.018079368397593498,
0.01495350245386362,
0.0449124276638031,
0.11242390424013138,
-0.016... | 0.087202 |
1 / s \\ +\sqrt{\frac{s}{n\_{\text{components}}}} & & 1 / 2s\\ \end{array} \right. where :math:`n\_{\text{components}}` is the size of the projected subspace. By default the density of non zero elements is set to the minimum density as recommended by Ping Li et al.: :math:`1 / \sqrt{n\_{\text{features}}}`. Here is a sm... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/random_projection.rst | main | scikit-learn | [
-0.01807197742164135,
-0.05782594531774521,
-0.03251122683286667,
-0.03577451407909393,
0.03917283937335014,
0.02281426452100277,
0.037896931171417236,
-0.05351010337471962,
-0.04459358751773834,
-0.007634984329342842,
-0.03960577771067619,
0.001311191008426249,
0.10734330862760544,
-0.019... | -0.052538 |
.. \_calibration: ======================= Probability calibration ======================= .. currentmodule:: sklearn.calibration When performing classification you often want not only to predict the class label, but also obtain a probability of the respective label. This probability gives you some kind of confidence on... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/calibration.rst | main | scikit-learn | [
-0.07595499604940414,
-0.05994608253240585,
-0.10255628824234009,
-0.004192402586340904,
0.1034107431769371,
-0.025550970807671547,
0.08360994607210159,
0.0024199718609452248,
-0.0027199885807931423,
-0.02551424503326416,
-0.004871731158345938,
-0.06710757315158844,
0.08497254550457001,
-0... | 0.089319 |
consistent with the :class:`LogisticRegression` model (the model is 'well specified'), and the value of the regularization parameter `C` is tuned to be appropriate (neither too strong nor too low). As a consequence, this model returns accurate predictions from its `predict\_proba` method. In contrast to that, the other... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/calibration.rst | main | scikit-learn | [
-0.018649781122803688,
-0.09921790659427643,
-0.06219160556793213,
-0.007002928294241428,
0.07695785164833069,
-0.038056742399930954,
0.04107692837715149,
0.02301357127726078,
-0.03963834047317505,
0.02999526634812355,
-0.010118878446519375,
-0.032772429287433624,
0.07738631218671799,
-0.0... | 0.026577 |
(default), the following procedure is repeated independently for each cross-validation split: 1. a clone of `base\_estimator` is trained on the train subset 2. the trained `base\_estimator` makes predictions on the test subset 3. the predictions are used to fit a calibrator (either a sigmoid or isotonic regressor) (whe... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/calibration.rst | main | scikit-learn | [
-0.0970216691493988,
-0.07893390953540802,
-0.02104109525680542,
0.01718132011592388,
0.10170913487672806,
-0.006891990080475807,
0.006646479479968548,
-0.02253366820514202,
-0.0006256431806832552,
-0.03793012723326683,
-0.021685287356376648,
-0.14806869626045227,
0.08691729605197906,
-0.0... | 0.11391 |
works best if the calibration error is symmetrical, meaning the classifier output for each binary class is normally distributed with the same variance [7]\_. This can be a problem for highly imbalanced classification problems, where outputs do not have equal variance. In general this method is most effective for small ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/calibration.rst | main | scikit-learn | [
-0.07401641458272934,
-0.07006096839904785,
0.031765539199113846,
-0.039886269718408585,
0.04025329649448395,
-0.04639916121959686,
-0.03145681694149971,
0.011777810752391815,
-0.012558786198496819,
-0.020146651193499565,
0.0013461436610668898,
0.009126446209847927,
0.04168093949556351,
-0... | 0.133359 |
parameters for each single class. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_calibration\_plot\_calibration\_curve.py` \* :ref:`sphx\_glr\_auto\_examples\_calibration\_plot\_calibration\_multiclass.py` \* :ref:`sphx\_glr\_auto\_examples\_calibration\_plot\_calibration.py` \* :ref:`sphx\_glr\_auto\_example... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/calibration.rst | main | scikit-learn | [
-0.07684541493654251,
-0.08323515951633453,
-0.07019481807947159,
-0.037751276046037674,
0.04721794277429581,
-0.04074975475668907,
0.006031069904565811,
0.08719738572835922,
-0.03313259407877922,
-0.03843112289905548,
0.016356289386749268,
-0.06780513375997543,
0.05239179730415344,
-0.000... | 0.068244 |
.. \_permutation\_importance: Permutation feature importance ============================== .. currentmodule:: sklearn.inspection Permutation feature importance is a model inspection technique that measures the contribution of each feature to a :term:`fitted` model's statistical performance on a given tabular dataset. ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/permutation_importance.rst | main | scikit-learn | [
-0.00590089475736022,
-0.0011031687026843429,
0.04780792444944382,
-0.044643547385931015,
0.12548235058784485,
-0.032665055245161057,
0.07745220512151718,
-0.04252288490533829,
-0.030585909262299538,
0.022097930312156677,
-0.013027939014136791,
0.04704574868083,
0.036139413714408875,
0.027... | 0.075524 |
set. Using a held-out set makes it possible to highlight which features contribute the most to the generalization power of the inspected model. Features that are important on the training set but not on the held-out set might cause the model to overfit. The permutation feature importance depends on the score function t... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/permutation_importance.rst | main | scikit-learn | [
0.03136860206723213,
-0.006434097420424223,
0.00020685904019046575,
-0.033832862973213196,
0.06784062087535858,
0.04442180320620537,
0.031236795708537102,
0.04958605021238327,
-0.03150545805692673,
0.00014815728354733437,
-0.040639474987983704,
0.024441130459308624,
-0.010315664112567902,
... | 0.052543 |
to permutation-based feature importance: :ref:`sphx\_glr\_auto\_examples\_inspection\_plot\_permutation\_importance.py`. Misleading values on strongly correlated features ------------------------------------------------- When two features are correlated and one of the features is permuted, the model still has access to... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/permutation_importance.rst | main | scikit-learn | [
-0.006624940782785416,
-0.05742249637842178,
0.04083574190735817,
-0.04684671387076378,
0.10892682522535324,
-0.03553390130400658,
0.07458264380693436,
-0.05121629312634468,
-0.06609758734703064,
-0.01974238082766533,
0.004167995415627956,
0.0389757864177227,
0.0074005890637636185,
0.06217... | -0.045357 |
.. currentmodule:: sklearn.feature\_selection .. \_feature\_selection: ================= Feature selection ================= The classes in the :mod:`sklearn.feature\_selection` module can be used for feature selection/dimensionality reduction on sample sets, either to improve estimators' accuracy scores or to boost th... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_selection.rst | main | scikit-learn | [
-0.021147368475794792,
0.018595661967992783,
0.09542923420667648,
0.04770808294415474,
0.10463594645261765,
-0.04074154421687126,
0.06335478276014328,
-0.06274248659610748,
-0.06385040283203125,
-0.04060953110456467,
0.01771959662437439,
-0.0025618094950914383,
-0.019021887332201004,
-0.02... | 0.030866 |
scores. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_feature\_selection\_plot\_feature\_selection.py` \* :ref:`sphx\_glr\_auto\_examples\_feature\_selection\_plot\_f\_test\_vs\_mi.py` .. \_rfe: Recursive feature elimination ============================= Given an external estimator that assigns weights to fe... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_selection.rst | main | scikit-learn | [
-0.08567259460687637,
-0.08709418028593063,
0.056301265954971313,
0.06482435762882233,
0.07153359800577164,
-0.033587466925382614,
0.0383523590862751,
0.08674406260251999,
-0.0563359297811985,
0.031612664461135864,
0.00992318894714117,
0.03564348816871643,
0.02003171108663082,
0.0420379750... | 0.093181 |
certain specific conditions are met. In particular, the number of samples should be "sufficiently large", or L1 models will perform at random, where "sufficiently large" depends on the number of non-zero coefficients, the logarithm of the number of features, the amount of noise, the smallest absolute value of non-zero ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_selection.rst | main | scikit-learn | [
-0.0403686985373497,
-0.07903177291154861,
-0.08045230060815811,
0.014616523869335651,
0.08933787792921066,
0.019028177484869957,
0.004969260189682245,
0.008923998102545738,
-0.004549302160739899,
0.02957153134047985,
-0.01200819294899702,
0.07523953169584274,
-0.013545762747526169,
0.0120... | -0.026362 |
to the other approaches. For example in backward selection, the iteration going from `m` features to `m - 1` features using k-fold cross-validation requires fitting `m \* k` models, while :class:`~sklearn.feature\_selection.RFE` would require only a single fit, and :class:`~sklearn.feature\_selection.SelectFromModel` a... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_selection.rst | main | scikit-learn | [
-0.0821206197142601,
-0.10959811508655548,
0.041787587106227875,
0.0825544223189354,
0.05674197897315025,
-0.03896350413560867,
-0.0008257038425654173,
0.06269662082195282,
-0.10260394215583801,
-0.011293965391814709,
-0.039131831377744675,
-0.013852804899215698,
-0.034723877906799316,
-0.... | 0.025501 |
.. \_outlier\_detection: =================================================== Novelty and Outlier Detection =================================================== .. currentmodule:: sklearn Many applications require being able to decide whether a new observation belongs to the same distribution as existing observations (it... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/outlier_detection.rst | main | scikit-learn | [
-0.09471627324819565,
-0.057562265545129776,
0.024700822308659554,
0.037492793053388596,
0.13460074365139008,
-0.04872830957174301,
0.09582271426916122,
-0.038401391357183456,
-0.003992673475295305,
-0.039687637239694595,
0.03076893836259842,
-0.0038533282931894064,
0.0609789714217186,
-0.... | 0.133021 |
data ``score\_samples`` Use ``negative\_outlier\_factor\_`` Use only on new data ``negative\_outlier\_factor\_`` OK OK ============================ ================================ ===================== Overview of outlier detection methods ===================================== A comparison of the outlier detection alg... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/outlier_detection.rst | main | scikit-learn | [
-0.00043016590643674135,
-0.08498452603816986,
0.014273924753069878,
0.00908956490457058,
0.10624625533819199,
-0.09627977758646011,
0.020878300070762634,
0.06242266297340393,
-0.046757034957408905,
-0.03435122221708298,
0.10321702808141708,
-0.02117428183555603,
0.03138815239071846,
-0.00... | 0.032751 |
SVM ---------------------------- An online linear version of the One-Class SVM is implemented in :class:`linear\_model.SGDOneClassSVM`. This implementation scales linearly with the number of samples and can be used with a kernel approximation to approximate the solution of a kernelized :class:`svm.OneClassSVM` whose co... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/outlier_detection.rst | main | scikit-learn | [
-0.07726308703422546,
-0.12460944056510925,
-0.04648391902446747,
0.02046378329396248,
0.10703077167272568,
-0.08949948102235794,
-0.02817988023161888,
0.08472806215286255,
-0.025720465928316116,
-0.015679804608225822,
0.032732486724853516,
0.04681238904595375,
0.008204313926398754,
-0.057... | 0.052142 |
allows you to add more trees to an already fitted model:: >>> from sklearn.ensemble import IsolationForest >>> import numpy as np >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [0, 0], [-20, 50], [3, 5]]) >>> clf = IsolationForest(n\_estimators=10, warm\_start=True) >>> clf.fit(X) # fit 10 trees # doctest: +SKIP >>> c... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/outlier_detection.rst | main | scikit-learn | [
-0.05714321881532669,
-0.10391885787248611,
-0.011241656728088856,
0.0570686012506485,
0.1677199900150299,
-0.049119047820568085,
0.014627208933234215,
-0.017876988276839256,
-0.06957074999809265,
0.009322214871644974,
0.03917103260755539,
-0.056013889610767365,
0.02425016649067402,
-0.025... | 0.05105 |
Breunig, Kriegel, Ng, and Sander (2000) `LOF: identifying density-based local outliers. `\_ Proc. ACM SIGMOD .. \_novelty\_with\_lof: Novelty detection with Local Outlier Factor =========================================== To use :class:`neighbors.LocalOutlierFactor` for novelty detection, i.e. predict labels or compute... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/outlier_detection.rst | main | scikit-learn | [
-0.02546350099146366,
-0.1098918616771698,
0.018378306180238724,
0.04648808389902115,
0.13348940014839172,
0.01933382637798786,
0.09181703627109528,
0.032886654138565063,
-0.05837174132466316,
-0.0779736191034317,
0.054990243166685104,
-0.12988972663879395,
0.04040642827749252,
-0.05294619... | 0.051015 |
.. currentmodule:: sklearn.manifold .. \_manifold: ================= Manifold learning ================= | Look for the bare necessities | The simple bare necessities | Forget about your worries and your strife | I mean the bare necessities | Old Mother Nature's recipes | That bring the bare necessities of life | | -- ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.056898199021816254,
0.016152961179614067,
0.010743762366473675,
-0.02903534099459648,
0.08932673931121826,
-0.01751212775707245,
-0.05594624578952789,
0.017787912860512733,
-0.06709179282188416,
-0.012977725826203823,
0.05638093873858452,
-0.11120223253965378,
-0.01925223506987095,
-0.0... | 0.1002 |
comprises three stages: 1. \*\*Nearest neighbor search.\*\* Isomap uses :class:`~sklearn.neighbors.BallTree` for efficient neighbor search. The cost is approximately :math:`O[D \log(k) N \log(N)]`, for :math:`k` nearest neighbors of :math:`N` points in :math:`D` dimensions. 2. \*\*Shortest-path graph search.\*\* The mo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.012363094836473465,
0.006413325201719999,
-0.0581427738070488,
-0.08340435475111008,
0.024404842406511307,
-0.07569907605648041,
-0.08956929296255112,
-0.0683518499135971,
-0.06087174639105797,
-0.021583247929811478,
0.022941954433918,
-0.018833015114068985,
0.010321329347789288,
-0.041... | 0.064199 |
performed with function :func:`locally\_linear\_embedding` or its object-oriented counterpart :class:`LocallyLinearEmbedding`, with the keyword ``method = 'modified'``. It requires ``n\_neighbors > n\_components``. .. figure:: ../auto\_examples/manifold/images/sphx\_glr\_plot\_lle\_digits\_007.png :target: ../auto\_exa... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.06433772295713425,
-0.0867224633693695,
-0.013397200033068657,
-0.09564090520143509,
0.030261268839240074,
-0.07982266694307327,
-0.0584101565182209,
-0.0066662197932600975,
0.003602210897952318,
-0.04238279163837433,
-0.016732238233089447,
-0.018911870196461678,
0.04273774102330208,
-0... | 0.010732 |
representation. 2. \*\*Graph Laplacian Construction\*\*. unnormalized Graph Laplacian is constructed as :math:`L = D - A` for and normalized one as :math:`L = D^{-\frac{1}{2}} (D - A) D^{-\frac{1}{2}}`. 3. \*\*Partial Eigenvalue Decomposition\*\*. Eigenvalue decomposition is done on graph Laplacian. The overall complex... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.022336125373840332,
-0.06049170717597008,
-0.041304849088191986,
-0.06850896030664444,
0.013106582686305046,
-0.00861474871635437,
-0.08730746805667877,
-0.0698617473244667,
-0.008658001199364662,
-0.018382666632533073,
0.030513448640704155,
0.0031837462447583675,
0.0055257524363696575,
... | -0.066521 |
f(\delta\_{ij})` are some transformation of the dissimilarities. The MDS objective, called the raw stress, is then defined by :math:`\sum\_{i < j} (\hat{d}\_{ij} - d\_{ij}(Z))^2`, where :math:`d\_{ij}(Z)` are the pairwise distances between the coordinates :math:`Z` of the embedded points. .. dropdown:: Metric MDS In th... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.05782698839902878,
-0.019412856549024582,
0.008166030049324036,
-0.07036557793617249,
0.014918691478669643,
-0.00911154318600893,
-0.008065426722168922,
0.03659696504473686,
0.04861756041646004,
-0.016186699271202087,
-0.03814282268285751,
-0.04574984312057495,
0.055910397320985794,
0.0... | 0.12449 |
Revealing data that lie in multiple, different, manifolds or clusters \* Reducing the tendency to crowd points together at the center While Isomap, LLE and variants are best suited to unfold a single continuous low dimensional manifold, t-SNE will focus on the local structure of the data and will tend to extract cluste... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.08875299245119095,
-0.045191116631031036,
0.044359128922224045,
-0.04661426320672035,
0.0009200404747389257,
0.06855175644159317,
-0.07677669078111649,
-0.017505871132016182,
0.10723011940717697,
-0.05255132541060448,
-0.044341374188661575,
0.010699748992919922,
0.06514590233564377,
-0.... | 0.079736 |
is too low gradient descent will get stuck in a bad local minimum. If it is too high the KL divergence will increase during optimization. A heuristic suggested in Belkina et al. (2019) is to set the learning rate to the sample size divided by the early exaggeration factor. We implement this heuristic as `learning\_rate... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.035158757120370865,
0.012597724795341492,
0.010048866271972656,
-0.03798810765147209,
-0.017839474603533745,
0.02485192008316517,
-0.10025198012590408,
0.047632746398448944,
-0.015133748762309551,
0.003108255797997117,
-0.04101023077964783,
0.03616427630186081,
0.0688251182436943,
0.014... | 0.011506 |
5415 (2019). Tips on practical use ===================== \* Make sure the same scale is used over all features. Because manifold learning methods are based on a nearest-neighbor search, the algorithm may perform poorly otherwise. See :ref:`StandardScaler ` for convenient ways of scaling heterogeneous data. \* The recon... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/manifold.rst | main | scikit-learn | [
-0.05929054319858551,
-0.023975903168320656,
0.08815094083547592,
0.007366550154983997,
-0.03225845843553543,
-0.06946378946304321,
-0.06548944115638733,
-0.048095881938934326,
-0.05461617559194565,
-0.05141676962375641,
-0.053676996380090714,
0.04912281781435013,
0.06465685367584229,
-0.0... | 0.031911 |
.. currentmodule:: sklearn.model\_selection .. \_grid\_search: =========================================== Tuning the hyper-parameters of an estimator =========================================== Hyper-parameters are parameters that are not directly learnt within estimators. In scikit-learn they are passed as arguments ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.06406011432409286,
-0.07071683555841446,
-0.07477688789367676,
-0.006395278498530388,
0.06707572937011719,
-0.033827487379312515,
0.0445929653942585,
-0.014453843235969543,
-0.05080299824476242,
0.02605762891471386,
-0.015975525602698326,
-0.03830624744296074,
-0.004177418537437916,
-0.... | 0.090211 |
in :class:`GridSearchCV`. The example shows how this interface adds a certain amount of flexibility in identifying the "best" estimator. This interface can also be used in multiple metrics evaluation. - See :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_grid\_search\_stats.py` for an example of how to do a st... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.07757914811372757,
0.0031278072856366634,
-0.08608733862638474,
0.03703669086098671,
0.04202057421207428,
-0.017991209402680397,
0.018437568098306656,
0.042879801243543625,
0.0031462127808481455,
0.01253002230077982,
-0.07295016199350357,
-0.02122119627892971,
0.059190016239881516,
-0.0... | 0.062246 |
of training samples, but it can also be an arbitrary numeric parameter such as `n\_estimators` in a random forest. .. note:: The resource increase chosen should be large enough so that a large improvement in scores is obtained when taking into account statistical significance. As illustrated in the figure below, only a... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.00973639264702797,
-0.07203605026006699,
-0.046138957142829895,
0.06911777704954147,
0.15947507321834564,
-0.022168638184666634,
0.0016812172252684832,
0.01826675795018673,
0.031663764268159866,
-0.006197236478328705,
-0.0566151924431324,
0.014970988035202026,
0.009066876024007797,
-0.0... | 0.117634 |
ideal: it means that many candidates will run with the full resources, basically reducing the procedure to standard search. In the case of :class:`HalvingRandomSearchCV`, the number of candidates is set by default such that the last iteration uses as much of the available resources as possible. For :class:`HalvingGridS... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.06724045425653458,
0.06949923932552338,
-0.07496333122253418,
0.010273911990225315,
0.06677618622779846,
-0.03706047311425209,
-0.003955950029194355,
0.04770490899682045,
0.040651626884937286,
-0.008892749436199665,
-0.03303663432598114,
0.0011341448407620192,
0.016403349116444588,
-0.0... | 0.051897 |
candidates: the best candidate is the best out of these 2 candidates. It is not necessary to run an additional iteration, since it would only evaluate one candidate (namely the best one, which we have already identified). For this reason, in general, we want the last iteration to run at most ``factor`` candidates. If t... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.06903348863124847,
-0.008715028874576092,
-0.054822687059640884,
0.054655082523822784,
0.10285239666700363,
-0.007887573912739754,
0.017277687788009644,
-0.012087239883840084,
-0.010582136921584606,
-0.029113484546542168,
-0.04935944452881813,
-0.025346023961901665,
0.04474802687764168,
... | -0.027824 |
candidates ------------------------------------ Using the ``aggressive\_elimination`` parameter, you can force the search process to end up with less than ``factor`` candidates at the last iteration. .. dropdown:: Code example of aggressive elimination Ideally, we want the last iteration to evaluate ``factor`` candidat... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.014796246774494648,
-0.010239101946353912,
-0.05346173048019409,
0.0088317496702075,
0.09673740714788437,
-0.04959378391504288,
-0.03871581703424454,
-0.028143571689724922,
-0.04550565406680107,
0.010605032555758953,
0.004265068098902702,
-0.010267390869557858,
0.016236815601587296,
-0.... | -0.068155 |
search ========================= .. \_gridsearch\_scoring: Specifying an objective metric ------------------------------ By default, parameter search uses the ``score`` function of the estimator to evaluate a parameter setting. These are the :func:`sklearn.metrics.accuracy\_score` for classification and :func:`sklearn.... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.10574311017990112,
-0.018242396414279938,
-0.11666902154684067,
0.012041694484651089,
0.020502032712101936,
-0.019621651619672775,
-0.00009855652751866728,
0.015175874345004559,
-0.009677063673734665,
-0.025634299963712692,
-0.06459332257509232,
-0.06350000947713852,
0.023347947746515274,... | 0.101447 |
will be `np.nan`. This can be controlled by setting `error\_score="raise"` to raise an exception if one fit fails, or for example `error\_score=0` to set another value for the score of failing parameter combinations. .. \_alternative\_cv: Alternatives to brute force parameter search ====================================... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/grid_search.rst | main | scikit-learn | [
-0.06525734812021255,
-0.03036469779908657,
-0.042552754282951355,
0.014079288579523563,
0.08197272568941116,
0.046237438917160034,
0.021604398265480995,
-0.007127811666578054,
-0.10262418538331985,
-0.02974672242999077,
0.021275173872709274,
-0.09134874492883682,
0.01956724375486374,
-0.0... | 0.148786 |
.. \_feature\_extraction: ================== Feature extraction ================== .. currentmodule:: sklearn.feature\_extraction The :mod:`sklearn.feature\_extraction` module can be used to extract features in a format supported by machine learning algorithms from datasets consisting of formats such as text and image.... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
-0.09747269749641418,
0.015128346160054207,
0.020787997171282768,
0.008294324390590191,
0.13219602406024933,
-0.08677376061677933,
0.0337626114487648,
-0.04818582534790039,
-0.09016168117523193,
-0.048221755772829056,
-0.02435699664056301,
0.0291560310870409,
-0.042248643934726715,
0.02714... | 0.104141 |
of the time. So as to make the resulting data structure able to fit in memory the ``DictVectorizer`` class uses a ``scipy.sparse`` matrix by default instead of a ``numpy.ndarray``. .. \_feature\_hashing: Feature hashing =============== .. currentmodule:: sklearn.feature\_extraction The class :class:`FeatureHasher` is a... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
-0.03342873230576515,
-0.014754089526832104,
0.0027429847978055477,
-0.02042795531451702,
0.06432291120290756,
-0.07572956383228302,
-0.026424486190080643,
-0.056626658886671066,
-0.02065187506377697,
-0.04581763222813606,
0.0029443849343806505,
0.1133168414235115,
-0.0001250156929017976,
... | 0.027976 |
a simple modulo is used to transform the hash function to a column index, it is advisable to use a power of two as the ``n\_features`` parameter; otherwise the features will not be mapped evenly to the columns. .. rubric:: References \* `MurmurHash3 `\_. .. rubric:: References \* Kilian Weinberger, Anirban Dasgupta, Jo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
-0.0482562780380249,
-0.0005242021288722754,
-0.04678313806653023,
-0.04175005480647087,
0.04353618621826172,
-0.022805549204349518,
0.015977848321199417,
-0.012205061502754688,
0.005525624379515648,
-0.029558034613728523,
0.02603609673678875,
0.043916940689086914,
0.05394416302442551,
-0.... | 0.084771 |
2 letters. The specific function that does this step can be requested explicitly:: >>> analyze = vectorizer.build\_analyzer() >>> analyze("This is a text document to analyze.") == ( ... ['this', 'is', 'text', 'document', 'to', 'analyze']) True Each term found by the analyzer during the fit is assigned a unique integer ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.004956 |
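The row above excerpts the ``build_analyzer`` example from the text feature-extraction guide. Reconstructed as a runnable snippet (the default ``token_pattern`` keeps only tokens of two or more word characters, which is why the single-letter "a" disappears):

```python
from sklearn.feature_extraction.text import CountVectorizer

# build_analyzer returns the full preprocessing + tokenization callable
analyze = CountVectorizer().build_analyzer()
print(analyze("This is a text document to analyze."))
# ['this', 'is', 'text', 'document', 'to', 'analyze']
```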
by CountVectorizer's default tokenizer, so if \*we've\* is in ``stop\_words``, but \*ve\* is not, \*ve\* will be retained from \*we've\* in transformed text. Our vectorizers will try to identify and warn about some kinds of inconsistencies. .. rubric:: References .. [NQY18] J. Nothman, H. Qin and R. Yurchak (2018). `"S... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.026441 |
= \log \frac{n}{\text{df}(t)} + 1 = \log(1)+1 = 1` :math:`\text{tf-idf}\_{\text{term1}} = \text{tf} \times \text{idf} = 3 \times 1 = 3` Now, if we repeat this computation for the remaining 2 terms in the document, we get :math:`\text{tf-idf}\_{\text{term2}} = 0 \times (\log(6/1)+1) = 0` :math:`\text{tf-idf}\_{\text{ter... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.029291 |
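The row above excerpts the worked tf-idf arithmetic (n = 6 documents, term1 in all of them, 3 occurrences in the document at hand). The non-smoothed formula from that example can be checked directly:

```python
from math import log

# Non-smoothed idf as in the worked example: idf(t) = ln(n / df(t)) + 1
n = 6          # documents in the corpus
df_term1 = 6   # term1 appears in every document
tf_term1 = 3   # term1 occurs 3 times in the document at hand

idf = log(n / df_term1) + 1  # ln(1) + 1 = 1
print(tf_term1 * idf)        # 3.0
```

Note that ``TfidfTransformer``'s default ``smooth_idf=True`` adds one to numerator and denominator, so sklearn's default output differs slightly from this hand computation.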
The vectorizers can be told to be silent about decoding errors by setting the ``decode\_error`` parameter to either ``"ignore"`` or ``"replace"``. See the documentation for the Python function ``bytes.decode`` for more details (type ``help(bytes.decode)`` at the Python prompt). .. dropdown:: Troubleshooting decoding te... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | -0.10192 |
A collection of unigrams (what bag of words is) cannot capture phrases and multi-word expressions, effectively disregarding any word order dependence. Additionally, the bag of words model doesn't account for potential misspellings or word derivations. N-grams to the rescue! Instead of building a simple collection of un... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.061192 |
the same size), - it is not easily possible to split the vectorization work into concurrent sub tasks as the ``vocabulary\_`` attribute would have to be a shared state with a fine grained synchronization barrier: the mapping from token string to feature index is dependent on the ordering of the first occurrence of each... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | -0.003947 |
:ref:`sphx\_glr\_auto\_examples\_applications\_plot\_out\_of\_core\_classification.py`. Customizing the vectorizer classes ---------------------------------- It is possible to customize the behavior by passing a callable to the vectorizer constructor:: >>> def my\_tokenizer(s): ... return s.split() ... >>> vectorizer =... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.040431 |
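The row above excerpts the "Customizing the vectorizer classes" section, where a callable is passed as the tokenizer. Reconstructed as a runnable sketch (preprocessing, i.e. lowercasing, still runs before the custom tokenizer):

```python
from sklearn.feature_extraction.text import CountVectorizer

def my_tokenizer(s):
    # naive whitespace tokenizer; punctuation stays attached to tokens
    return s.split()

vectorizer = CountVectorizer(tokenizer=my_tokenizer)
print(vectorizer.build_analyzer()("Some... punctuation!"))
# ['some...', 'punctuation!']
```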
6, 9], [12, 15, 18, 21], [24, 27, 30, 33], [36, 39, 42, 45]]) >>> patches = image.extract\_patches\_2d(one\_image, (2, 2), max\_patches=2, ... random\_state=0) >>> patches.shape (2, 2, 2, 3) >>> patches[:, :, :, 0] array([[[ 0, 3], [12, 15]], [[15, 18], [27, 30]]]) >>> patches = image.extract\_patches\_2d(one\_image, (... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/feature_extraction.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.033248 |
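The row above excerpts the image patch-extraction example. The shape logic it shows can be reconstructed as:

```python
import numpy as np
from sklearn.feature_extraction import image

# a 4x4 RGB "image" with values 0..47, as in the doc example
one_image = np.arange(4 * 4 * 3).reshape((4, 4, 3))

# sample 2 random 2x2 patches with a fixed seed
patches = image.extract_patches_2d(
    one_image, (2, 2), max_patches=2, random_state=0
)
print(patches.shape)  # (2, 2, 2, 3): 2 patches, 2x2 pixels, 3 channels
```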
.. \_contributing: .. raw:: html ### ⚠️ NOTE FOR AI TOOLS AND THEIR USERS: Opening a PR on scikit-learn requires \*\*human judgment, contextual understanding\*\*, and \*\*familiarity with scikit-learn’s structure and goals\*\*. It is \*\*not suitable for automatic processing\*\* by AI tools or casual code assistant usa... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.203883 |
work on, including smaller projects such as: \* `scikit-learn-contrib `\_\_ \* `joblib `\_\_ \* `sphinx-gallery `\_\_ \* `numpydoc `\_\_ \* `liac-arff `\_\_ and larger projects: \* `numpy `\_\_ \* `scipy `\_\_ \* `matplotlib `\_\_ \* and so on. Look for issues marked "help wanted" or similar. Helping these projects may... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.173419 |
implemented. In case you experience issues using this package, do not hesitate to submit a ticket to the `Bug Tracker `\_. You are also welcome to post feature requests or pull requests. It is recommended to check that your issue complies with the following rules before submitting: - Verify that your issue is not being... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.098765 |
their work, otherwise consider it stalled and take it over. To maintain the quality of the codebase and ease the review process, any contribution must conform to the project's :ref:`coding guidelines `, in particular: - Don't modify unrelated lines to keep the PR focused on the scope stated in its description or issue.... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | -0.00775 |
ones are especially important: 1. \*\*Give your pull request a helpful title\*\* that summarizes what your contribution does. This title will often become the commit message once merged so it should summarize your contribution for posterity. In some cases "Fix " is enough. "Fix #" is never a good title. 2. \*\*Make sur... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.01185 |
overhead. We expect PR authors to take part in the maintenance for the code they submit, at least initially. New features need to be illustrated with narrative documentation in the user guide, with small code snippets. If relevant, please also add references in the literature, with PDF links when possible. 11. The user... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.012708 |
latest upstream/main git pull upstream main --no-rebase # resolve conflicts - keeping the upstream/main version for specific files git checkout --theirs build\_tools/\*/\*.lock build\_tools/\*/\*environment.yml \ build\_tools/\*/\*lock.txt build\_tools/\*/\*requirements.txt git add build\_tools/\*/\*.lock build\_tools/... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | -0.007687 |
the procedure described in the :ref:`stalled\_pull\_request` section rather than working directly on the issue. .. \_issues\_tagged\_needs\_triage: Issues tagged "Needs Triage" ---------------------------- The `"Needs Triage" `\_ label means that the issue is not yet confirmed or fully understood. It signals to scikit-... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.144439 |
The default value is `np.ones(shape=(n\_samples,))`. list\_param : list of int typed\_ndarray : ndarray of shape (n\_samples,), dtype=np.int32 sample\_weight : array-like of shape (n\_samples,), default=None multioutput\_array : ndarray of shape (n\_samples, n\_classes) or list of such arrays In general have the follow... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.156247 |
primarily interested in understanding the feature's practical implications rather than its underlying mechanics. \* When editing reStructuredText (``.rst``) files, try to keep line length under 88 characters when possible (exceptions include links and tables). \* In scikit-learn reStructuredText files both single and d... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.119002 |
if your modifications have introduced new sphinx warnings by building the documentation locally and try to fix them.\*\* First, make sure you have :ref:`properly installed ` the development version. On top of that, building the documentation requires installing some additional packages: .. packaging is not needed once ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | -0.017296 |
as an argument:: def test\_requiring\_mpl\_fixture(pyplot): # you can now safely use matplotlib .. dropdown:: Workflow to improve test coverage To test code coverage, you need to install the `coverage `\_ package in addition to `pytest`. 1. Run `pytest --cov sklearn /path/to/tests`. The output lists for each file the l... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.015835 |
benchmark suite supports additional configurable options which can be set in the `benchmarks/config.json` configuration file. For example, the benchmarks can run for a provided list of values for the `n\_jobs` parameter. More information on how to write a benchmark and how to use asv can be found in the `asv documentat... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.12498 |
n\_clusters=8, k='deprecated'): self.n\_clusters = n\_clusters self.k = k def fit(self, X, y): if self.k != "deprecated": warnings.warn( "`k` was renamed to `n\_clusters` in 0.13 and will be removed in 0.15.", FutureWarning, ) self.\_n\_clusters = self.k else: self.\_n\_clusters = self.n\_clusters As in these examples,... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.104838 |
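The row above excerpts the contributing guide's parameter-renaming deprecation pattern (``k`` renamed to ``n_clusters``). A self-contained sketch of that pattern — the ``Kmeans`` class here is a minimal stand-in, not scikit-learn's estimator:

```python
import warnings

class Kmeans:
    # Sketch: the old name `k` defaults to the sentinel "deprecated";
    # when the user sets it, it wins over `n_clusters` with a warning.
    def __init__(self, n_clusters=8, k="deprecated"):
        self.n_clusters = n_clusters
        self.k = k

    def fit(self, X, y=None):
        if self.k != "deprecated":
            warnings.warn(
                "`k` was renamed to `n_clusters` in 0.13 and will be "
                "removed in 0.15.",
                FutureWarning,
            )
            self._n_clusters = self.k
        else:
            self._n_clusters = self.n_clusters
        return self

est = Kmeans(k=5).fit(X=None)
print(est._n_clusters)  # 5, with a FutureWarning emitted during fit
```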
highly educational for everybody involved. This is particularly appropriate if it is a feature you would like to use, and so can respond critically about whether the PR meets your needs. While each pull request needs to be signed off by two core developers, you can speed up this process by providing your feedback. .. n... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.12872 |
is an act of generosity. Opening with a positive comment will help the author feel rewarded, and your subsequent remarks may be heard more clearly. You may feel good also. - Begin if possible with the large issues, so the author knows they've been understood. Resist the temptation to immediately go line by line, or to ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.088171 |
located in ``.pyx`` and ``.pxd`` files. Cython code has a more C-like flavor: we use pointers, perform manual memory allocation, etc. Having some minimal experience in C / C++ is pretty much mandatory here. For more information see :ref:`cython`. - Master your tools. - With such a big project, being efficient with your... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/contributing.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.080267 |
.. \_cython: Cython Best Practices, Conventions and Knowledge ================================================ This document contains tips to develop Cython code in scikit-learn. Tips for developing with Cython in scikit-learn ----------------------------------------------- Tips to ease development ^^^^^^^^^^^^^^^^^^^^... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/cython.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.11591 |
of Cython code, the better" is a good rule of thumb. \* ``nogil`` declarations are just hints: when declaring the ``cdef`` functions as nogil, it means that they can be called without holding the GIL, but it does not release the GIL when entering them. You have to do that yourself either by passing ``nogil=True`` to ``... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/cython.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.04045 |
.. \_develop: ================================== Developing scikit-learn estimators ================================== Whether you are proposing an estimator for inclusion in scikit-learn, developing a separate package compatible with scikit-learn, or implementing custom components for your own projects, this chapter d... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.152183 |
few places, only in some meta-estimators, where the sub-estimator(s) argument is a required argument. Most arguments correspond to hyperparameters describing the model or the optimisation problem the estimator tries to solve. Other parameters might define how the estimator behaves, e.g. defining the location of a cache... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.173515 |
need to accept a ``y`` argument in the second place if they are implemented. The method should return the object (``self``). This pattern is useful to be able to implement quick one liners in an IPython session such as:: y\_predicted = SGDClassifier(alpha=10).fit(X\_train, y\_train).predict(X\_test) Depending on the na... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.082639 |
ways to achieve the correct interface more easily. .. topic:: Project template: We provide a `project template `\_ which helps in the creation of Python packages containing scikit-learn compatible estimators. It provides: \* an initial git repository with Python package directory structure \* a template of a scikit-lea... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.125399 |
0.0 subestimator\_\_max\_iter -> 100 subestimator\_\_n\_jobs -> None subestimator\_\_penalty -> deprecated subestimator\_\_random\_state -> None subestimator\_\_solver -> lbfgs subestimator\_\_tol -> 0.0001 subestimator\_\_verbose -> 0 subestimator\_\_warm\_start -> False subestimator -> LogisticRegression() If the met... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.089277 |
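The row above excerpts the ``get_params`` output for a meta-estimator wrapping a ``LogisticRegression``, with nested parameters exposed under the ``subestimator__`` prefix. A sketch of that double-underscore convention using a ``Pipeline`` (the step name ``subestimator`` is chosen here to mirror the excerpt):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

pipe = Pipeline([("subestimator", LogisticRegression())])

# nested parameters are flattened as <step>__<param>
params = pipe.get_params()
print(params["subestimator__solver"])    # 'lbfgs'
print(params["subestimator__max_iter"])  # 100

# the same syntax works for setting nested parameters
pipe.set_params(subestimator__max_iter=500)
```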
accept a ``y`` parameter in their ``fit`` method, but it should be ignored. Clustering algorithms should set a ``labels\_`` attribute, storing the labels assigned to each sample. If applicable, they can also implement a ``predict`` method, returning the labels assigned to newly given samples. If one needs to check the ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.14559 |
arrays in `transform`, auto wrapping will only wrap the first array and not alter the other arrays. See :ref:`sphx\_glr\_auto\_examples\_miscellaneous\_plot\_set\_output.py` for an example on how to use the API. .. \_developer\_api\_check\_is\_fitted: Developer API for `check\_is\_fitted` ==============================... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.065519 |
be found `here `\_. Input validation ---------------- .. currentmodule:: sklearn.utils The module :mod:`sklearn.utils` contains various functions for doing input validation and conversion. Sometimes, ``np.asarray`` suffices for validation; do \*not\* use ``np.asanyarray`` or ``np.atleast\_2d``, since those let NumPy's ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/develop.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.118298 |
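The row above excerpts the developer guide's input-validation section (``sklearn.utils``, preferring explicit validation over ``np.asanyarray``). A minimal sketch of the helper that section introduces:

```python
from sklearn.utils import check_array

# check_array validates and converts input: by default it requires a
# 2D, finite, numeric array, copying only when conversion is needed
X = check_array([[1, 2], [3, 4]])
print(X.shape)  # (2, 2)
```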
.. \_developers-utils: ======================== Utilities for Developers ======================== Scikit-learn contains a number of utilities to help with development. These are located in :mod:`sklearn.utils`, and include tools in a number of categories. All the following functions and classes are in the module :mod:`... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/utilities.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.135986 |
the minimum of the positive values within an array. - :func:`extmath.fast\_logdet`: efficiently compute the log of the determinant of a matrix. - :func:`extmath.density`: efficiently compute the density of a sparse vector - :func:`extmath.safe\_sparse\_dot`: dot product which will correctly handle ``scipy.sparse`` inpu... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/utilities.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.112339 |
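The row above excerpts the developer-utilities list, including ``extmath.safe_sparse_dot``, a dot product that handles ``scipy.sparse`` inputs. A small sketch of the sparse-times-dense case:

```python
import numpy as np
from scipy import sparse
from sklearn.utils.extmath import safe_sparse_dot

# identity matrix in CSR format times a dense vector
A = sparse.csr_matrix(np.eye(3))
b = np.arange(3.0)
out = safe_sparse_dot(A, b)
print(out)  # [0. 1. 2.]
```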
module can also be "cimported" from other cython modules so as to benefit from the high performance of MurmurHash while skipping the overhead of the Python interpreter. Warnings and Exceptions ======================= - :class:`deprecated`: Decorator to mark a function or class as deprecated. - :class:`~sklearn.exceptio... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/utilities.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.103941 |
.. \_developers-tips: =========================== Developers' Tips and Tricks =========================== Productivity and sanity-preserving tips ======================================= In this section we gather some useful advice and tools that may increase your quality-of-life when reviewing pull requests, running un... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/tips.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.065036 |
wrapped on screen. Issue: Usage questions :: You are asking a usage question. The issue tracker is for bugs and new features. For usage questions, it is recommended to try [Stack Overflow](https://stackoverflow.com/questions/tagged/scikit-learn) or [the Mailing List](https://mail.python.org/mailman/listinfo/scikit-lear... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/tips.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.166202 |
developers to agree that your pull request is desirable and ready. [Please be patient](https://scikit-learn.org/dev/faq.html#why-is-my-pull-request-not-getting-any-attention), as we mostly rely on volunteered time from busy core developers. (You are also welcome to help us out with [reviewing other PRs](https://scikit-... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/tips.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.097403 |
errors. Uninitialized variables can lead to unexpected behavior that is difficult to track down. A very useful tool when debugging these sorts of errors is valgrind\_. Valgrind is a command-line tool that can trace memory errors in a variety of code. Follow these steps: 1. Install `valgrind`\_ on your system. 2. Downlo... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/tips.rst | main | scikit-learn | [
…(384-dim embedding elided)… | -0.079519 |
written good guides about what it is and how it works. - `pandas setup doc `\_: pandas has a similar setup as ours (no spin or dev.py) - `scipy Meson doc `\_ gives more background about how Meson works behind the scenes | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/tips.rst | main | scikit-learn | [
…(384-dim embedding elided)… | 0.214877 |
.. _plotting_api: ================================ Developing with the Plotting API ================================ Scikit-learn defines a simple API for creating visualizations for machine learning. The key features of this API are to run calculations once and to have the flexibility to adjust the visualizations af... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/plotting.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.175065 |
are a 1d ndarray corresponding to the list of axes passed in. | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/plotting.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.087393 |
.. _bug_triaging: Bug triaging and issue curation =============================== The `issue tracker `_ is important to the communication in the project: it helps developers identify major projects to work on, as well as to discuss priorities. For this reason, it is important to curate it, adding labels to issues an... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/bug_triaging.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.108389 |
a good way to approach issue triaging: #. Thank the reporter for opening an issue The issue tracker is many people's first interaction with the scikit-learn project itself, beyond just using the library. As such, we want it to be a welcoming, pleasant experience. #. Is this a usage question? If so close it with a polit... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/bug_triaging.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.157724 |
.. _setup_development_environment: Set up your development environment ----------------------------------- .. _git_repo: Fork the scikit-learn repository ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ First, you need to `create an account `_ on GitHub (if you do not already have one) and fork the `project repository `__ by c... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/development_setup.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.032466 |
Download the `Build Tools for Visual Studio installer `_ and run the downloaded `vs_buildtools.exe` file. During the installation you will need to make sure you select "Desktop development with C++", similarly to this screenshot: .. image:: ../images/visual-studio-build-tools-selection.png Next, install the 64-bit ve... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/development_setup.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | -0.037196 |
C++, as well as the Python header files: .. prompt:: sudo yum -y install gcc gcc-c++ python3-devel * On Arch Linux, the Python header files are already included in the Python installation, and ``gcc`` includes the required compilers for C and C++: .. prompt:: sudo pacman -S gcc Now create a virtual environment (venv_... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/development_setup.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.025142 |
.. _minimal_reproducer: ============================================== Crafting a minimal reproducer for scikit-learn ============================================== Whether submitting a bug report, designing a suite of tests, or simply posting a question in the discussions, being able to craft minimal, reproducible e... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/minimal_reproducer.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.098708 |
GradientBoostingRegressor(random_state=0, n_iter_no_change=5) gbdt.fit(X_train, y_train) # raises warning other_score = gbdt.score(X_test, y_test) other_score = gbdt.score(X_test, y_test) Boil down your script to something as small as possible ------------------------------------------------------- You have... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/minimal_reproducer.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | -0.032571 |
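The row above quotes the minimal-reproducer guide's gradient-boosting script. A self-contained variant that runs anywhere is sketched below; the synthetic data via ``make_regression`` is an assumption standing in for the reporter's real dataset:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# synthetic stand-in for the data referenced in the original report
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# early stopping via n_iter_no_change, as in the quoted snippet
gbdt = GradientBoostingRegressor(random_state=0, n_iter_no_change=5)
gbdt.fit(X_train, y_train)
score = gbdt.score(X_test, y_test)
print(score)
```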
snippet as follows .. code-block:: python from sklearn.datasets import make_blobs n_samples = 100 n_components = 3 X, y = make_blobs(n_samples=n_samples, centers=n_components) It is not necessary to create several blocks of code when submitting a bug report. Remember other reviewers are going to copy-paste your ... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/minimal_reproducer.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | -0.022459 |
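The ``make_blobs`` snippet in the row above runs as-is; a sketch with a fixed ``random_state`` (an addition for reproducibility, not in the quoted text) confirms the shapes a reviewer would get when copy-pasting:

```python
from sklearn.datasets import make_blobs

n_samples = 100
n_components = 3
# default n_features=2, so X is (100, 2) and y has labels 0..2
X, y = make_blobs(n_samples=n_samples, centers=n_components, random_state=0)
print(X.shape, sorted(set(y)))
```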
per class. Noise can be introduced by means of correlated, redundant or uninformative features. .. code-block:: python from sklearn.datasets import make_classification X, y = make_classification( n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1 ) `make_blobs` ------------ Similarly to `make... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/minimal_reproducer.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.171723 |
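The ``make_classification`` call quoted in the row above can likewise be run directly; a runnable sketch with a seeded ``random_state`` (added here for determinism) showing the default two-class output:

```python
from sklearn.datasets import make_classification

# n_redundant=0 and n_informative=2 make both features informative;
# default n_samples=100 and n_classes=2 apply
X, y = make_classification(
    n_features=2, n_redundant=0, n_informative=2,
    n_clusters_per_class=1, random_state=0,
)
print(X.shape, sorted(set(y)))
```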
.. _misc-info: ================================================== Miscellaneous information / Troubleshooting ================================================== Here, you find some more advanced notes and troubleshooting tips related to :ref:`setup_development_environment`. .. _openMP_notes: Notes on OpenMP ======... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/misc_info.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.110693 |
.. _performance-howto: ========================= How to optimize for speed ========================= The following gives some practical guidelines to help you write efficient code for the scikit-learn project. .. note:: While it is always useful to profile your code so as to **check performance assumptions**, it i... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/performance.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.104824 |
Before starting the profiling session and engaging in tentative optimization iterations, it is important to measure the total execution time of the function we want to optimize without any kind of profiler overhead and save it somewhere for later reference:: In [4]: %timeit NMF(n_components=16, tol=1e-2).fit(X) 1 loop... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/performance.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | 0.039078 |
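The performance row above records a baseline with IPython's ``%timeit`` before profiling. Outside IPython, the same baseline can be captured with the stdlib ``timeit`` module; a small NumPy workload stands in here for the ``NMF.fit`` call quoted in the row:

```python
import timeit

# measure total execution time with no profiler attached, and keep
# the number for later comparison against profiled/optimized runs
elapsed = timeit.timeit(
    "np.linalg.norm(a @ a)",
    setup="import numpy as np; a = np.random.rand(100, 100)",
    number=50,
)
print(f"{elapsed:.4f} s for 50 runs")
```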
bash $ ipython profile create Then register the line_profiler extension in ``~/.ipython/profile_default/ipython_config.py``:: c.TerminalIPythonApp.extensions.append('line_profiler') c.InteractiveShellApp.extensions.append('line_profiler') This will register the ``%lprun`` magic command in the IPython terminal appl... | https://github.com/scikit-learn/scikit-learn/blob/main//doc/developers/performance.rst | main | scikit-learn | [ …384-dim embedding omitted… ] | -0.005378 |