MattStammers committed
Commit 12cce40 · verified · 1 parent: cbe7f9a

Upload folder using huggingface_hub

Files changed (1): README.md (+12 −12)
README.md CHANGED
@@ -133,18 +133,18 @@ MedRxiv- [MedRxiv Paper](https://www.medrxiv.org/content/10.1101/2025.07.06.2533
 
 | Term | Description |
 |-------------------------------------|-------------|
- | **Accuracy** | The percentage of results that were correct among all results from the system. Calc: (TP + TN) / (TP + FP + TN + FN). |
- | **Precision (PPV)** | Also called positive predictive value (PPV), it is the percentage of true positive results among all results that the system flagged as positive. Calc: TP / (TP + FP). |
- | **Negative Predictive Value (NPV)** | The percentage of results that were true negative (TN) among all results that the system flagged as negative. Calc: TN / (TN + FN). |
- | **Recall** | Also called sensitivity. The percentage of results flagged positive among all results that should have been obtained. Calc: TP / (TP + FN). |
- | **Specificity** | The percentage of results that were flagged negative among all negative results. Calc: TN / (TN + FP). |
- | **F1-Score** | The harmonic mean of PPV/precision and sensitivity/recall. Calc: 2 × (Precision × Recall) / (Precision + Recall). Moderately useful in the context of class imbalance. |
- | **Matthews’ Correlation Coefficient (MCC)** | A statistical measure used to evaluate the quality of binary classifications. Unlike other metrics, MCC considers all four categories of a confusion matrix. Calc: (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)). |
- | **Precision / Recall AUC** | Represents the area under the Precision-Recall curve, which plots Precision against Recall at various threshold settings. It is more resistant to class imbalance than alternatives like AUROC. |
- | **Demographic Parity (DP)** | Demographic Parity, also known as Statistical Parity, requires that the probability of a positive prediction is the same across different demographic groups. Calc: DP = P(Ŷ=1∣A=a) = P(Ŷ=1∣A=b). This figure is given as an absolute difference where positive values suggest the more privileged group gains and negative values the reverse. |
- | **Equal Opportunity (EO)** | Equal Opportunity focuses on equalising the true positive rates across groups. Among those who truly belong to the positive class, the model should predict positive outcomes at equal rates across groups. Calc: EO = P(Ŷ=1∣Y=1, A=a) = P(Ŷ=1∣Y=1, A=b). A higher value indicates a bias against the more vulnerable group. |
- | **Disparate Impact (DI)** | Divides the protected group’s positive prediction rate by that of the most-favoured group. If the ratio is below 0.8 or above 1.25, disparate impact is considered present. Calc: DI = P(Ŷ=1∣A=unfavoured) / P(Ŷ=1∣A=favoured). Values outside 0.8–1.25 range suggest bias. |
- | **Execution Time / Energy / CO₂ Emissions** | Measured in minutes and total energy consumption in kilowatt-hours (kWh), which is then converted to CO₂ emissions using a factor of 0.20705 Kg CO₂e per kWh. |
+ | **Accuracy** | The percentage of results that were correct among all results from the system overall. Calc: (TP + TN) / (TP + FP + TN + FN). Very susceptible to class imbalance. |
+ | **Precision (PPV)** | Also called positive predictive value (PPV), it is the percentage of true positive results among all results that the system flagged as positive. Calc: TP / (TP + FP). A measure of the trustworthiness of positive results. |
+ | **Negative Predictive Value (NPV)** | The percentage of results that were true negatives (TN) among all results that the system flagged as negative. Calc: TN / (TN + FN). A measure of the trustworthiness of negative results. |
+ | **Recall** | Also called sensitivity. The percentage of results flagged positive among all results that should have been obtained. Calc: TP / (TP + FN). 100% recall means there are no false-negative results – useful in confidently screening out negative cases. |
+ | **Specificity** | The percentage of results that were flagged negative among all negative results. Calc: TN / (TN + FP). 100% specificity means there are no false-positive results, allowing the disease to be ruled in. |
+ | **F1-Score** | In this case, the unweighted harmonic mean of PPV/precision and sensitivity/recall. Calc: 2 × (Precision × Recall) / (Precision + Recall). More resistant to the effects of class imbalance. |
+ | **Matthews’ Correlation Coefficient (MCC)** | A statistical measure used to evaluate the quality of binary classifications. Unlike other metrics, MCC considers all four categories of a confusion matrix. Calc: (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)). The results are more abstract but highly resistant to the effects of class imbalance, which is why they are included in this study. |
+ | **Precision / Recall AUC** | Represents the area under the Precision-Recall curve, which plots Precision against Recall at various threshold settings. It is more resistant to class imbalance than AUROC (area under the receiver operating characteristic curve). Both were used in testing, but the confusion matrix components form the primary outcomes for this study because they are more human-interpretable and harder to statistically manipulate. |
+ | **Demographic Parity (DP)** | Demographic Parity, also known as Statistical Parity, requires that the probability of a positive prediction is the same across different demographic groups. Calc: DP = P(Ŷ=1∣A=a) = P(Ŷ=1∣A=b), where Ŷ is the predicted outcome and A represents the protected attribute (e.g., race, gender). This figure is given as an absolute difference, where positive values suggest the more privileged group gains and negative values the reverse. The rule of 10% is used here to look for significant biases. |
+ | **Equal Opportunity (EO)** | Equal Opportunity focuses on equalising the true positive rates across groups. Among those who truly belong to the positive class, the model should predict positive outcomes at equal rates across different groups. Calc: EO = P(Ŷ=1∣Y=1, A=a) = P(Ŷ=1∣Y=1, A=b), where Ŷ is the predicted outcome and Y is the actual outcome. If there is no bias, the value will be equal for all groups; a higher value indicates a bias against the group considered more vulnerable. The rule of 10% is used here to look for significant biases. |
+ | **Disparate Impact (DI)** | Divides the protected group’s positive prediction rate by that of the most-favoured group. Calc: DI = P(Ŷ=1∣A=unfavoured) / P(Ŷ=1∣A=favoured). Values outside the 0.8–1.25 range suggest disparate impact is present. |
+ | **Time / Energy / CO₂ Emissions** | Execution time is measured in minutes and total energy consumption in kilowatt-hours (kWh), which can then be extrapolated to CO₂ emissions via a conversion factor set at 0.20705 kg CO₂e per kWh for this study. |
 
 ## Model Card Authors
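For reference, the Calc columns in the confusion-matrix rows above can be sketched in Python. This is a minimal illustration of the formulas only, not the evaluation code used in the study; the counts passed in at the bottom are made up for demonstration.

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics as defined in the glossary table."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "npv": tn / (tn + fn),
        "recall": recall,
        "specificity": tn / (tn + fp),
        # unweighted harmonic mean of precision and recall
        "f1": 2 * (precision * recall) / (precision + recall),
        # uses all four confusion-matrix cells, so resistant to imbalance
        "mcc": (tp * tn - fp * fn)
        / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

print(binary_metrics(tp=90, fp=10, tn=80, fn=20))
```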
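The fairness definitions (DP, EO, DI) can likewise be illustrated. In this sketch group "a" is assumed to be the favoured group and "b" the protected group; the function name and the toy data are illustrative, not taken from the study.

```python
def fairness_metrics(y_true, y_pred, group):
    """DP difference, EO difference, and DI ratio between groups 'a' and 'b'."""
    def rate(vals):
        return sum(vals) / len(vals) if vals else 0.0

    pos_rate, tpr = {}, {}
    for g in ("a", "b"):
        # P(Y_hat=1 | A=g): positive prediction rate within the group
        pos_rate[g] = rate([p for p, gg in zip(y_pred, group) if gg == g])
        # P(Y_hat=1 | Y=1, A=g): true positive rate within the group
        tpr[g] = rate([p for p, t, gg in zip(y_pred, y_true, group)
                       if gg == g and t == 1])

    return {
        "dp_diff": pos_rate["a"] - pos_rate["b"],  # demographic parity gap
        "eo_diff": tpr["a"] - tpr["b"],            # equal opportunity gap
        "di": pos_rate["b"] / pos_rate["a"],       # disparate impact ratio
    }

m = fairness_metrics(y_true=[1, 0, 1, 1, 0, 1],
                     y_pred=[1, 0, 1, 0, 0, 1],
                     group=["a", "a", "a", "b", "b", "b"])
print(m)  # a DI below 0.8 would flag disparate impact under the 0.8–1.25 rule
```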
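The energy-to-emissions extrapolation in the last row is a single multiplication; the 0.20705 kg CO₂e/kWh factor below is the one stated in the table, and the 10 kWh input is an arbitrary example.

```python
CO2E_PER_KWH = 0.20705  # kg CO2e per kWh, as set for this study

def emissions_kg(energy_kwh):
    """Extrapolate measured energy consumption (kWh) to kg CO2e."""
    return energy_kwh * CO2E_PER_KWH

print(emissions_kg(10.0))  # 10 kWh -> 2.0705 kg CO2e
```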