| markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
FN (False negative) A false negative test result is one that does not detect the condition when the condition is present (incorrectly rejected) [[3]](ref3). | cm.FN | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
P (Condition positive) Number of positive samples.Also known as support (the number of occurrences of each class in y_true) [[3]](ref3). $$P=TP+FN$$ | cm.P | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
N (Condition negative) Number of negative samples [[3]](ref3). $$N=TN+FP$$ | cm.N | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
TOP (Test outcome positive) Number of positive outcomes [[3]](ref3). $$TOP=TP+FP$$ | cm.TOP | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
TON (Test outcome negative) Number of negative outcomes [[3]](ref3). $$TON=TN+FN$$ | cm.TON | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
POP (Population) Total sample size [[3]](ref3). $$POP=TP+TN+FN+FP$$ | cm.POP | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
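The rows above all derive from four per-class counts. A minimal plain-Python sketch (illustrative helper name and sample counts, not the pycm API):

```python
# Illustrative sketch (not pycm itself): derive P, N, TOP, TON and POP
# for one class from its TP, TN, FP and FN counts.
def basic_counts(tp, tn, fp, fn):
    return {
        "P": tp + fn,              # condition positive
        "N": tn + fp,              # condition negative
        "TOP": tp + fp,            # test outcome positive
        "TON": tn + fn,            # test outcome negative
        "POP": tp + tn + fp + fn,  # total population
    }

print(basic_counts(tp=45, tn=40, fp=10, fn=5))
# {'P': 50, 'N': 50, 'TOP': 55, 'TON': 45, 'POP': 100}
```
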
Wikipedia page Class statistics TPR (True positive rate) Sensitivity (also called the true positive rate, the recall, or probability of detection in some fields) measures the proportion of positives that are correctly identified as such (e.g. the percentage of sick people who are correctly identified as having the ... | cm.TPR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
TNR (True negative rate) Specificity (also called the true negative rate) measures the proportion of negatives that are correctly identified as such (e.g. the percentage of healthy people who are correctly identified as not having the condition) [[3]](ref3).Wikipedia page $$TNR=\frac{TN}{N}=\frac{TN}{TN+FP}$$ | cm.TNR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
PPV (Positive predictive value) Positive predictive value (PPV) is the proportion of positives that correspond to the presence of the condition [[3]](ref3). Wikipedia page $$PPV=\frac{TP}{TP+FP}$$ | cm.PPV | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
NPV (Negative predictive value) Negative predictive value (NPV) is the proportion of negatives that correspond to the absence of the condition [[3]](ref3). Wikipedia page $$NPV=\frac{TN}{TN+FN}$$ | cm.NPV | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
FNR (False negative rate) The false negative rate is the proportion of positives which yield negative test outcomes with the test, i.e., the conditional probability of a negative test result given that the condition being looked for is present [[3]](ref3).Wikipedia page $$FNR=\frac{FN}{P}=\frac{FN}{FN+TP}=1-TPR$$ | cm.FNR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
FPR (False positive rate) The false positive rate is the proportion of all negatives that still yield positive test outcomes, i.e., the conditional probability of a positive test result given an event that was not present [[3]](ref3).The false positive rate is equal to the significance level. The specificity of the te... | cm.FPR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
FDR (False discovery rate) The false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the expected proportion of "discoveries" (rejected null hypotheses) that are false (inco... | cm.FDR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
FOR (False omission rate) False omission rate (FOR) is a statistical method used in multiple hypothesis testing to correct for multiple comparisons and it is the complement of the negative predictive value. It measures the proportion of false negatives which are incorrectly rejected [[3]](ref3).Wikipedia page $$FOR=\f... | cm.FOR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
ACC (Accuracy) The accuracy is the number of correct predictions from all predictions made [[3]](ref3).Wikipedia page $$ACC=\frac{TP+TN}{P+N}=\frac{TP+TN}{TP+TN+FP+FN}$$ | cm.ACC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
ERR (Error rate) The error rate is the number of incorrect predictions from all predictions made [[3]](ref3). $$ERR=\frac{FP+FN}{P+N}=\frac{FP+FN}{TP+TN+FP+FN}=1-ACC$$ | cm.ERR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
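The per-class rate metrics above can be sketched in plain Python (illustrative helper, not the pycm API; sample counts are hypothetical):

```python
# Illustrative sketch of the per-class rates defined above.
def rates(tp, tn, fp, fn):
    p, n = tp + fn, tn + fp
    return {
        "TPR": tp / p,               # sensitivity / recall
        "TNR": tn / n,               # specificity
        "PPV": tp / (tp + fp),       # precision
        "NPV": tn / (tn + fn),
        "FNR": fn / p,               # = 1 - TPR
        "FPR": fp / n,               # = 1 - TNR
        "ACC": (tp + tn) / (p + n),
        "ERR": (fp + fn) / (p + n),  # = 1 - ACC
    }

r = rates(tp=45, tn=40, fp=10, fn=5)
print(r["TPR"], r["TNR"], r["ACC"])  # 0.9 0.8 0.85
```

Division by zero is possible for degenerate classes (e.g. no positive samples); pycm reports such cases as `"None"`, while this sketch would simply raise.
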
Notice : new in version 0.4 FBeta-Score In statistical analysis of classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy. It considers both the precision $ p $ and the recall $ r $ of the test to compute the score.The F1 score is the harmonic average of the precision and rec... | cm.F1
cm.F05
cm.F2
cm.F_beta(beta=4) | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Parameters 1. `beta` : beta parameter (type : `float`) Output `{class1: FBeta-Score1, class2: FBeta-Score2, ...}` Notice : new in version 0.4 MCC (Matthews correlation coefficient) The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary (two-class) classificatio... | cm.MCC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
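The FBeta-Score above reduces to a one-line formula, $F_\beta=\frac{(1+\beta^2)\times PPV\times TPR}{\beta^2\times PPV+TPR}$. A minimal sketch (plain Python, not `cm.F_beta` itself; helper name and counts are illustrative):

```python
# Illustrative sketch of the FBeta-Score for one class.
def f_beta(tp, fp, fn, beta=1.0):
    ppv = tp / (tp + fp)   # precision
    tpr = tp / (tp + fn)   # recall
    b2 = beta ** 2
    return (1 + b2) * ppv * tpr / (b2 * ppv + tpr)

print(round(f_beta(tp=45, fp=10, fn=5), 4))          # 0.8571 (F1)
print(round(f_beta(tp=45, fp=10, fn=5, beta=2), 4))  # 0.8824 (F2, recall-weighted)
```
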
BM (Bookmaker informedness) The informedness of a prediction method as captured by a contingency matrix is defined as the probability that the prediction method will make a correct decision as opposed to guessing, and is calculated using the bookmaker algorithm [[2]](ref2). Equal to the Youden index $$BM=TPR+TNR-1$$ | cm.BM | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
MK (Markedness) In statistics and psychology, the social science concept of markedness is quantified as a measure of how much one variable is marked as a predictor or possible cause of another and is also known as $ \triangle P $ in simple two-choice cases [[2]](ref2). $$MK=PPV+NPV-1$$ | cm.MK | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
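Informedness and markedness are symmetric partners (BM works along rows of the confusion matrix, MK along columns). A plain-Python sketch under the same illustrative sample counts as above:

```python
# Illustrative sketch: bookmaker informedness (BM) and markedness (MK).
def informedness_markedness(tp, tn, fp, fn):
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return tpr + tnr - 1, ppv + npv - 1  # (BM, MK)

bm, mk = informedness_markedness(45, 40, 10, 5)
print(round(bm, 4), round(mk, 4))  # 0.7 0.7071
```
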
PLR (Positive likelihood ratio) Likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists. The first description of the us... | cm.PLR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : `LR+` renamed to `PLR` in version 1.5 NLR (Negative likelihood ratio) Likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a ... | cm.NLR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : `LR-` renamed to `NLR` in version 1.5 DOR (Diagnostic odds ratio) The diagnostic odds ratio is a measure of the effectiveness of a diagnostic test. It is defined as the ratio of the odds of the test being positive if the subject has a disease relative to the odds of the test being positive if the subje... | cm.DOR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
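PLR, NLR, and DOR are linked: $PLR=\frac{TPR}{FPR}$, $NLR=\frac{FNR}{TNR}$, and $DOR=\frac{PLR}{NLR}=\frac{TP\times TN}{FP\times FN}$. A sketch (illustrative helper, not the pycm API):

```python
# Illustrative sketch of the three diagnostic ratios.
def likelihood_ratios(tp, tn, fp, fn):
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    plr = tpr / (1 - tnr)   # TPR / FPR
    nlr = (1 - tpr) / tnr   # FNR / TNR
    dor = plr / nlr         # equals (TP*TN)/(FP*FN)
    return plr, nlr, dor

plr, nlr, dor = likelihood_ratios(45, 40, 10, 5)
print(round(plr, 4), round(nlr, 4), round(dor, 4))  # 4.5 0.125 36.0
```
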
PRE (Prevalence) Prevalence is a statistical concept referring to the number of cases of a disease that are present in a particular population at a given time (Reference Likelihood) [[14]](ref14).Wikipedia page $$Prevalence=\frac{P}{POP}$$ | cm.PRE | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
G (G-measure) The geometric mean of precision and sensitivity, also known as Fowlkes–Mallows index [[3]](ref3).Wikipedia page $$G=\sqrt{PPV\times TPR}$$ | cm.G | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
RACC (Random accuracy) The expected accuracy from a strategy of randomly guessing categories according to reference and response distributions [[24]](ref24). $$RACC=\frac{TOP \times P}{POP^2}$$ | cm.RACC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.3 RACCU (Random accuracy unbiased) The expected accuracy from a strategy of randomly guessing categories according to the average of the reference and response distributions [[25]](ref25). $$RACCU=(\frac{TOP+P}{2 \times POP})^2$$ | cm.RACCU | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
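RACC and RACCU differ only in whether the reference and response marginals are multiplied or averaged. A sketch using the count names defined earlier (illustrative helper, not the pycm API):

```python
# Illustrative sketch: random accuracy (RACC) and its unbiased variant (RACCU)
# for one class, from the TOP, P and POP counts defined above.
def random_accuracy(top, p, pop):
    racc = (top * p) / pop ** 2
    raccu = ((top + p) / (2 * pop)) ** 2
    return racc, raccu

racc, raccu = random_accuracy(top=55, p=50, pop=100)
print(racc)  # 0.275
```
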
Notice : new in version 0.8.1 J (Jaccard index) The Jaccard index, also known as Intersection over Union and the Jaccard similarity coefficient (originally coined coefficient de communauté by Paul Jaccard), is a statistic used for comparing the similarity and diversity of sample sets [[29]](ref29).Wikipedia pag... | cm.J | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.9 IS (Information score) The amount of information needed to correctly classify an example into class C, whose prior probability is $ p(C) $, is defined as $ -\log_2(p(C)) $ [[18]](ref18) [[39]](ref39). $$IS=-\log_2(\frac{TP+FN}{POP})+\log_2(\frac{TP}{TP+FP})$$ | cm.IS | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.3 CEN (Confusion entropy) CEN is based upon the concept of entropy for evaluating classifier performances. By exploiting the misclassification information of confusion matrices, the measure evaluates the confusion level of the class distribution of misclassified samples. Both theoretical a... | cm.CEN | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : $ |C| $ is the number of classes Notice : new in version 1.3 MCEN (Modified confusion entropy) Modified version of CEN [[19]](ref19). $$P_{i,j}^{j}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}\Big(Matrix(j,k)+Matrix(k,j)\Big)-Matrix(j,j)}$$ $$P_{i,j}^{i}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}\Big(Matrix(i,k... | cm.MCEN | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.3 AUC (Area under the ROC curve) The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').Thus... | cm.AUC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.4 Notice : this is an approximate calculation of AUC dInd (Distance index) Euclidean distance of a ROC point from the top left corner of the ROC space, which can take values between 0 (perfect classification) and $ \sqrt{2} $ [[23]](ref23). $$dInd=\sqrt{(1-TNR)^2+(1-TPR)^2}$$ | cm.dInd | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.4 sInd (Similarity index) sInd is comprised between $ 0 $ (no correct classifications) and $ 1 $ (perfect classification) [[23]](ref23). $$sInd = 1 - \sqrt{\frac{(1-TNR)^2+(1-TPR)^2}{2}}$$ | cm.sInd | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
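dInd and sInd are two views of the same ROC-space distance: sInd is dInd rescaled into $[0,1]$, since $sInd=1-\frac{dInd}{\sqrt{2}}$. A sketch (illustrative helper, not the pycm API):

```python
import math

# Illustrative sketch: distance index (dInd) and similarity index (sInd)
# of a ROC point (TPR, TNR) relative to the perfect-classifier corner.
def roc_indices(tpr, tnr):
    d_ind = math.hypot(1 - tnr, 1 - tpr)  # distance from the top-left corner
    s_ind = 1 - d_ind / math.sqrt(2)      # rescaled into [0, 1]
    return d_ind, s_ind

d_ind, s_ind = roc_indices(tpr=0.9, tnr=0.8)
print(round(d_ind, 4), round(s_ind, 4))  # 0.2236 0.8419
```
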
Notice : new in version 1.4 DP (Discriminant power) Discriminant power (DP) is a measure that summarizes sensitivity and specificity.The DP has been used mainly in feature selection over imbalanced data [[33]](ref33).Interpretation $$X=\frac{TPR}{1-TPR}$$ $$Y=\frac{TNR}{1-TNR}$$ $$DP=\frac{\sqrt{3}}{\pi}(log_{... | cm.DP | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.5 Y (Youden index) Youden’s index evaluates the algorithm’s ability to avoid failure; it is derived from sensitivity and specificity and denotes a linear correspondence with balanced accuracy. As Youden’s index is a linear transformation of the mean sensitivity and specificity, its values are ... | cm.Y | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.5 PLRI (Positive likelihood ratio interpretation) For more information visit [[33]](ref33). PLR Model contribution 1 > Negligible 1 - 5 Poor 5 - 10 Fair ... | cm.PLRI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.5 NLRI (Negative likelihood ratio interpretation) For more information visit [[48]](ref48). NLR Model contribution 0.5 - 1 Negligible 0.2 - 0.5 Poor 0.1 - 0.2 ... | cm.NLRI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.2 DPI (Discriminant power interpretation) For more information visit [[33]](ref33). DP Model contribution 1 > Poor 1 - 2 Limited 2 - 3 Fair ... | cm.DPI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.5 AUCI (AUC value interpretation) For more information visit [[33]](ref33). AUC Model performance 0.5 - 0.6 Poor 0.6 - 0.7 Fair 0.7 - 0.8 Good ... | cm.AUCI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.6 MCCI (Matthews correlation coefficient interpretation) MCC is a confusion matrix method of calculating the Pearson product-moment correlation coefficient (not to be confused with Pearson's C). Therefore, it has the same interpretation [[2]](ref2).For more information visit [[49]](ref... | cm.MCCI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.2 Notice : only positive values are considered QI (Yule's Q interpretation) For more information visit [[67]](ref67). Q Interpretation 0.25 > Negligible 0.25 - 0.5 Weak ... | cm.QI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.6 GI (Gini index) A chance-standardized variant of the AUC is given by the Gini coefficient, taking values between $ 0 $ (no difference between the score distributions of the two classes) and $ 1 $ (complete separation between the two distributions). The Gini coefficient is a widely used metric... | cm.GI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.7 LS (Lift score) In the context of classification, lift compares model predictions to randomly generated predictions. Lift is often used in marketing research combined with gain and lift charts as a visual aid [[35]](ref35) [[36]](ref36). $$LS=\frac{PPV}{PRE}$$ | cm.LS | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.8 AM (Automatic/Manual) Difference between automatic and manual classification, i.e., the difference between the number of positive outcomes and the number of positive samples. $$AM=TOP-P=(TP+FP)-(TP+FN)$$ | cm.AM | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.9 BCD (Bray-Curtis dissimilarity) In ecology and biology, the Bray–Curtis dissimilarity, named after J. Roger Bray and John T. Curtis, is a statistic used to quantify the compositional dissimilarity between two different sites, based on counts at each site [[37]](ref37).Wikipedia page ... | cm.BCD | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 1.9 OP (Optimized precision) Optimized precision is a type of hybrid threshold metric and has been proposed as a discriminator for building an optimized heuristic classifier. This metric is a combination of accuracy, sensitivity and specificity metrics. The sensitivity and specificity metr... | cm.OP | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.0 IBA (Index of balanced accuracy) The method combines an unbiased index of its overall accuracy and a measure of how dominant the class with the highest individual accuracy rate is [[41]](ref41) [[42]](ref42). $$IBA_{\alpha}=(1+\alpha \times(TPR-TNR))\times TNR \times TPR$$ | cm.IBA
cm.IBA_alpha(0.5)
cm.IBA_alpha(0.1) | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Parameters 1. `alpha` : alpha parameter (type : `float`) Output `{class1: IBA1, class2: IBA2, ...}` Notice : new in version 2.0 GM (G-mean) Geometric mean of specificity and sensitivity [[3]](ref3) [[41]](ref41) [[42]](ref42). $$GM=\sqrt{TPR \times TNR}$$ | cm.GM | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.0 Q (Yule's Q) In statistics, Yule's Q, also known as the coefficient of colligation, is a measure of association between two binary variables [[45]](ref45).InterpretationWikipedia page $$OR = \frac{TP\times TN}{FP\times FN}$$ $$Q = \frac{OR-1}{OR+1}$$ | cm.Q | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
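The G-mean and Yule's Q rows above can both be sketched directly from the four counts (illustrative helper, not the pycm API):

```python
import math

# Illustrative sketch: geometric mean of TPR and TNR (GM), and Yule's Q
# via the odds ratio OR = (TP*TN)/(FP*FN).
def gmean_and_yule_q(tp, tn, fp, fn):
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    gm = math.sqrt(tpr * tnr)
    odds_ratio = (tp * tn) / (fp * fn)
    q = (odds_ratio - 1) / (odds_ratio + 1)
    return gm, q

gm, q = gmean_and_yule_q(45, 40, 10, 5)
print(round(gm, 4), round(q, 4))  # 0.8485 0.9459
```
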
Notice : new in version 2.1 AGM (Adjusted G-mean) An adjusted version of the geometric mean of specificity and sensitivity [[46]](ref46). $$N_n=\frac{N}{POP}$$ $$AGM=\frac{GM+TNR\times N_n}{1+N_n};TPR>0$$ $$AGM=0;TPR=0$$ | cm.AGM | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.1 AGF (Adjusted F-score) The F-measures use only three of the four elements of the confusion matrix, and hence two classifiers with different TNR values may have the same F-score. Therefore, the AGF metric is introduced to use all elements of the confusion matrix and provide more weigh... | cm.AGF | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.3 OC (Overlap coefficient) The overlap coefficient, or Szymkiewicz–Simpson coefficient, is a similarity measure that measures the overlap between two finite sets. It is defined as the size of the intersection divided by the smaller of the size of the two sets [[52]](ref52).Wikipedia pa... | cm.OC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.3 OOC (Otsuka-Ochiai coefficient) In biology, there is a similarity index, known as the Otsuka-Ochiai coefficient named after Yanosuke Otsuka and Akira Ochiai, also known as the Ochiai-Barkman or Ochiai coefficient. If sets are represented as bit vectors, the Otsuka-Ochiai coefficient ... | cm.OOC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.3 TI (Tversky index) The Tversky index, named after Amos Tversky, is an asymmetric similarity measure on sets that compares a variant to a prototype. The Tversky index can be seen as a generalization of Dice's coefficient and Tanimoto coefficient [[54]](ref54).Wikipedia page $$TI(\alph... | cm.TI(2,3) | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Parameters 1. `alpha` : alpha coefficient (type : `float`)2. `beta` : beta coefficient (type : `float`) Output `{class1: TI1, class2: TI2, ...}` Notice : new in version 2.4 AUPR (Area under the PR curve) A PR curve is plotting precision against recall. The precision recall area under curve (AUPR) is just the... | cm.AUPR | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.4 Notice : this is an approximate calculation of AUPR ICSI (Individual classification success index) The Individual Classification Success Index (ICSI) is a class-specific symmetric measure defined for classification assessment purposes. ICSI is hence $ 1 $ minus the sum of type I ... | cm.ICSI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.5 CI (Confidence interval) In statistics, a confidence interval (CI) is a type of interval estimate (of a population parameter) that is computed from the observed data. The confidence level is the frequency (i.e., the proportion) of possible confidence intervals that contain the true v... | cm.CI("TPR")
cm.CI("FNR",alpha=0.001,one_sided=True)
cm.CI("PRE",alpha=0.05,binom_method="wilson")
cm.CI("Overall ACC",alpha=0.02,binom_method="agresti-coull")
cm.CI("Overall ACC",alpha=0.05) | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Parameters 1. `param` : input parameter (type : `str`)2. `alpha` : type I error (type : `float`, default : `0.05`)3. `one_sided` : one-sided mode (type : `bool`, default : `False`)4. `binom_method` : binomial confidence intervals method (type : `str`, default : `normal-approx`) Output 1. Two-sided : `{class1: [SE1, ... | cm.NB(w=0.059) | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Parameters 1. `w` : weight Output `{class1: NB1, class2: NB2, ...}` Notice : new in version 2.6 Overall statistics Kappa Kappa is a statistic that measures inter-rater agreement for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation... | cm.Kappa | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.3 Kappa unbiased The unbiased kappa value is defined in terms of total accuracy and a slightly different computation of expected likelihood that averages the reference and response probabilities [[25]](ref25). Equal to [Scott's Pi](Scott's-Pi) $$Kappa_{Unbiased}=\frac{ACC_{Overall}-RACCU_{Overall}}{1-RACCU_{Overall}}$$ | cm.KappaUnbiased | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Kappa no prevalence The kappa statistic adjusted for prevalence [[14]](ref14). $$Kappa_{NoPrevalence}=2 \times ACC_{Overall}-1$$ | cm.KappaNoPrevalence | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Kappa standard error The standard error of the Kappa coefficient was obtained by Fleiss (1969) [[24]](ref24) [[38]](ref38). $$SE_{Kappa}=\sqrt{\frac{ACC_{Overall}\times (1-ACC_{Overall})}{POP\times (1-RACC_{Overall})^2}}$$ | cm.Kappa_SE | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.7 Kappa 95% CI Kappa 95% Confidence Interval [[24]](ref24) [[38]](ref38). $$CI_{Kappa}=Kappa \pm 1.96\times SE_{Kappa}$$ | cm.Kappa_CI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
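The three Kappa rows above chain together: Kappa from overall and random accuracy, then Fleiss's standard error, then the 95% interval. A sketch (illustrative helper and sample values, not the pycm API; the SE expression assumes the Fleiss (1969) form):

```python
import math

# Illustrative sketch: Kappa, its Fleiss standard error, and the 95% CI.
def kappa_with_ci(acc_overall, racc_overall, pop):
    kappa = (acc_overall - racc_overall) / (1 - racc_overall)
    se = math.sqrt(acc_overall * (1 - acc_overall)
                   / (pop * (1 - racc_overall) ** 2))
    return kappa, se, (kappa - 1.96 * se, kappa + 1.96 * se)

kappa, se, ci = kappa_with_ci(acc_overall=0.85, racc_overall=0.5, pop=100)
print(round(kappa, 4))  # 0.7
```
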
Notice : new in version 0.7 Chi-squared Pearson's chi-squared test is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance. It is suitable for unpaired data from large samples [[10]](ref10).Wikipedia page $$\chi^2=\sum_... | cm.Chi_Squared | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.7 Chi-squared DF Number of degrees of freedom of this confusion matrix for the chi-squared statistic [[10]](ref10). $$DF=(|C|-1)^2$$ | cm.DF | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.7 Phi-squared In statistics, the phi coefficient (or mean square contingency coefficient) is a measure of association for two binary variables. Introduced by Karl Pearson, this measure is similar to the Pearson correlation coefficient in its interpretation. In fact, a Pearson correlati... | cm.Phi_Squared | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.7 Cramer's V In statistics, Cramér's V (sometimes referred to as Cramér's phi) is a measure of association between two nominal variables, giving a value between $ 0 $ and $ +1 $ (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946 [[26]... | cm.V | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
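Chi-squared, phi-squared, and Cramér's V form a chain: $\phi^2=\frac{\chi^2}{POP}$ and $V=\sqrt{\frac{\phi^2}{|C|-1}}$. A sketch over a whole (square) confusion matrix, rows as reference and columns as response (illustrative helper, not the pycm API):

```python
import math

# Illustrative sketch: chi-squared, phi-squared and Cramer's V
# for a square confusion matrix (rows = reference, columns = response).
def association_stats(matrix):
    pop = sum(sum(row) for row in matrix)
    p = [sum(row) for row in matrix]          # reference (row) totals
    top = [sum(col) for col in zip(*matrix)]  # response (column) totals
    chi2 = 0.0
    for i, row in enumerate(matrix):
        for j, observed in enumerate(row):
            expected = p[i] * top[j] / pop
            chi2 += (observed - expected) ** 2 / expected
    phi2 = chi2 / pop
    v = math.sqrt(phi2 / (len(matrix) - 1))
    return chi2, phi2, v

chi2, phi2, v = association_stats([[40, 10], [5, 45]])
print(round(chi2, 4), round(v, 4))  # 49.4949 0.7035
```
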
Notice : new in version 0.7 Standard error The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation [[31]](ref31).Wikipedia page $$SE_{ACC}=\sqrt{\frac{ACC\times (1-ACC)}{POP}}$$ | cm.SE | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.7 95% CI In statistics, a confidence interval (CI) is a type of interval estimate (of a population parameter) that is computed from the observed data. The confidence level is the frequency (i.e., the proportion) of possible confidence intervals that contain the true value of their corr... | cm.CI95 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.7 Notice : `CI` renamed to `CI95` in version 2.5 Bennett's S Bennett, Alpert & Goldstein’s S is a statistical measure of inter-rater agreement. It was created by Bennett et al. in 1954.Bennett et al. suggested adjusting inter-rater reliability to accommodate the percentage of ra... | cm.S | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.5 Scott's Pi Scott's pi (named after William A. Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and various measures are used to assess the extent of agreement... | cm.PI | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.5 Gwet's AC1 AC1 was originally introduced by Gwet in 2001 (Gwet, 2001). The interpretation of AC1 is similar to generalized kappa (Fleiss, 1971), which is used to assess inter-rater reliability when there are multiple raters. Gwet (2002) demonstrated that AC1 can overcome the limitati... | cm.AC1 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.5 Reference entropy The entropy of the decision problem itself as defined by the counts for the reference. The entropy of a distribution is the average negative log probability of outcomes [[30]](ref30). $$Likelihood_{Reference}=\frac{P_i}{POP}$$ $$Entropy_{Reference}=-\sum_{i=1}^{|C|}... | cm.ReferenceEntropy | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Response entropy The entropy of the response distribution. The entropy of a distribution is the average negative log probability of outcomes [[30]](ref30). $$Likelihood_{Response}=\frac{TOP_i}{POP}$$ $$Entropy_{Response}=-\sum_{i=1}^{|C|}Likelihood_{Response}(i)\times\log_{2}{Likel... | cm.ResponseEntropy | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Cross entropy The cross-entropy of the response distribution against the reference distribution. The cross-entropy is defined by the negative log probabilities of the response distribution weighted by the reference distribution [[30]](ref30).Wikipedia page $$Likelihood_{Reference}=... | cm.CrossEntropy | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Joint entropy The entropy of the joint reference and response distribution as defined by the underlying matrix [[30]](ref30). $$P^{'}(i,j)=\frac{Matrix(i,j)}{POP}$$ $$Entropy_{Joint}=-\sum_{i=1}^{|C|}\sum_{j=1}^{|C|}P^{'}(i,j)\times\log_{2}{P^{'}(i,j)}$$ $$0\times\log_{2}{0}\equiv0... | cm.JointEntropy | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Conditional entropy The entropy of the distribution of categories in the response given that the reference category was as specified [[30]](ref30).Wikipedia page $$P^{'}(j|i)=\frac{Matrix(j,i)}{P_i}$$ $$Entropy_{Conditional}=\sum_{i=1}^{|C|}\Bigg(Likelihood_{Reference}(i)\times\Big... | cm.ConditionalEntropy | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
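The entropy rows above all apply the same $-\sum q\log_2 q$ recipe to different marginals of the confusion matrix. A sketch of the reference, response, and joint entropies (illustrative helper, not the pycm API; the $0\log_2 0\equiv 0$ convention is handled by skipping zero cells):

```python
import math

# Illustrative sketch: reference, response and joint entropy of a
# confusion matrix (rows = reference classes, columns = response classes).
def entropy_measures(matrix):
    pop = sum(sum(row) for row in matrix)

    def entropy(counts):
        # 0 * log2(0) is defined as 0, so zero counts are skipped.
        return -sum(c / pop * math.log2(c / pop) for c in counts if c > 0)

    reference = entropy([sum(row) for row in matrix])
    response = entropy([sum(col) for col in zip(*matrix)])
    joint = entropy([cell for row in matrix for cell in row])
    return reference, response, joint

ref_h, resp_h, joint_h = entropy_measures([[40, 10], [5, 45]])
print(round(ref_h, 4))  # 1.0
```
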
Notice : new in version 0.8.1 Kullback-Leibler divergence In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution diverges from a second, expected probability distribution [[11]](ref11) [[30]](ref30).Wikipedia Page $$Likelihood... | cm.KL | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Mutual information Mutual information is defined as Kullback-Leibler divergence, between the product of the individual distributions and the joint distribution.Mutual information is symmetric. We could also subtract the conditional entropy of the reference given the response from t... | cm.MutualInformation | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Goodman & Kruskal's lambda A In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis [[12]](ref12).Wikipedia page $$\lambda_A=\frac{\sum_{j=1}^{|C|}Max\Big(Matrix(-,j)\Big)-Max(P)}{POP-Max(P)}$$ | cm.LambdaA | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 Goodman & Kruskal's lambda B In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis [[13]](ref13).Wikipedia Page $$\lambda_B=\frac{\sum_{i=1}^{|C|}Max\Big(Matrix(i,-)\Big)-Max(TOP)}{POP-Max(TOP)... | cm.LambdaB | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
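The two lambda formulas above differ only in which marginal plays the role of the predictor. A sketch over the same matrix layout as before (illustrative helper, not the pycm API):

```python
# Illustrative sketch: Goodman & Kruskal's lambda A and lambda B
# for a confusion matrix (rows = reference, columns = response).
def goodman_kruskal_lambdas(matrix):
    pop = sum(sum(row) for row in matrix)
    p = [sum(row) for row in matrix]          # reference (row) totals
    top = [sum(col) for col in zip(*matrix)]  # response (column) totals
    lambda_a = (sum(max(col) for col in zip(*matrix)) - max(p)) / (pop - max(p))
    lambda_b = (sum(max(row) for row in matrix) - max(top)) / (pop - max(top))
    return lambda_a, lambda_b

la, lb = goodman_kruskal_lambdas([[40, 10], [5, 45]])
print(la, round(lb, 4))  # 0.7 0.6667
```
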
Notice : new in version 0.8.1 SOA1 (Landis & Koch's benchmark) For more information visit [[1]](ref1). Kappa Strength of Agreement 0 > Poor 0 - 0.2 Slight 0.2 – 0.4 Fair ... | cm.SOA1 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.3 SOA2 (Fleiss' benchmark) For more information visit [[4]](ref4). Kappa Strength of Agreement 0.40 > Poor 0.40 - 0.75 Intermediate to Good More than 0.75 ... | cm.SOA2 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 SOA3 (Altman's benchmark) For more information visit [[5]](ref5). Kappa Strength of Agreement 0.2 > Poor 0.2 – 0.4 Fair 0.4 – 0.6 Moderate ... | cm.SOA3 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 SOA4 (Cicchetti's benchmark) For more information visit [[9]](ref9). Kappa Strength of Agreement 0.40 > Poor 0.40 – 0.59 Fair 0.59 – 0.74 Good ... | cm.SOA4 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.7 SOA5 (Cramer's benchmark) For more information visit [[47]](ref47). Cramer's V Strength of Association 0.1 > Negligible 0.1 – 0.2 Weak 0.2 – 0.4 Moder... | cm.SOA5 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.2 SOA6 (Matthews's benchmark) MCC is a confusion matrix method of calculating the Pearson product-moment correlation coefficient (not to be confused with Pearson's C). Therefore, it has the same interpretation [[2]](ref2).For more information visit [[49]](ref49). Over... | cm.SOA6 | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.2 Notice : only positive values are considered Overall_ACC For more information visit [[3]](ref3). $$ACC_{Overall}=\frac{\sum_{i=1}^{|C|}TP_i}{POP}$$ | cm.Overall_ACC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 Overall_RACC For more information visit [[24]](ref24). $$RACC_{Overall}=\sum_{i=1}^{|C|}RACC_i$$ | cm.Overall_RACC | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 Overall_RACCU For more information visit [[25]](ref25). $$RACCU_{Overall}=\sum_{i=1}^{|C|}RACCU_i$$ | cm.Overall_RACCU | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.8.1 PPV_Micro For more information visit [[3]](ref3). $$PPV_{Micro}=\frac{\sum_{i=1}^{|C|}TP_i}{\sum_{i=1}^{|C|}\left(TP_i+FP_i\right)}$$ | cm.PPV_Micro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 TPR_Micro For more information visit [[3]](ref3). $$TPR_{Micro}=\frac{\sum_{i=1}^{|C|}TP_i}{\sum_{i=1}^{|C|}\left(TP_i+FN_i\right)}$$ | cm.TPR_Micro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 TNR_Micro For more information visit [[3]](ref3). $$TNR_{Micro}=\frac{\sum_{i=1}^{|C|}TN_i}{\sum_{i=1}^{|C|}\left(TN_i+FP_i\right)}$$ | cm.TNR_Micro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.6 FPR_Micro For more information visit [[3]](ref3). $$FPR_{Micro}=\frac{\sum_{i=1}^{|C|}FP_i}{\sum_{i=1}^{|C|}\left(TN_i+FP_i\right)}$$ | cm.FPR_Micro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.6 FNR_Micro For more information visit [[3]](ref3). $$FNR_{Micro}=\frac{\sum_{i=1}^{|C|}FN_i}{\sum_{i=1}^{|C|}\left(TP_i+FN_i\right)}$$ | cm.FNR_Micro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.6 F1_Micro For more information visit [[3]](ref3). $$F_{1_{Micro}}=2\frac{\sum_{i=1}^{|C|}TPR_i\times PPV_i}{\sum_{i=1}^{|C|}TPR_i+PPV_i}$$ | cm.F1_Micro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 2.2 PPV_Macro For more information visit [[3]](ref3). $$PPV_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{TP_i}{TP_i+FP_i}$$ | cm.PPV_Macro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 TPR_Macro For more information visit [[3]](ref3). $$TPR_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{TP_i}{TP_i+FN_i}$$ | cm.TPR_Macro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
Notice : new in version 0.4 TNR_Macro For more information visit [[3]](ref3). $$TNR_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{TN_i}{TN_i+FP_i}$$ | cm.TNR_Macro | _____no_output_____ | MIT | Document/Document.ipynb | GeetDsa/pycm |
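The Micro/Macro distinction above is easiest to see side by side: micro-averaging pools the raw counts across classes, while macro-averaging computes the metric per class and then takes the unweighted mean. A sketch for PPV (illustrative helper and counts, not the pycm API):

```python
# Illustrative sketch: micro vs. macro averaging of precision (PPV).
def ppv_micro_macro(per_class):
    """per_class: list of (TP, FP) pairs, one per class."""
    micro = sum(tp for tp, _ in per_class) / sum(tp + fp for tp, fp in per_class)
    macro = sum(tp / (tp + fp) for tp, fp in per_class) / len(per_class)
    return micro, macro

micro, macro = ppv_micro_macro([(45, 10), (40, 5)])
print(micro, round(macro, 4))  # 0.85 0.8535
```

The two agree on balanced data but diverge when class sizes differ, since micro-averaging weights each sample equally while macro-averaging weights each class equally.
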