Upload folder using huggingface_hub
- README.md +28 -28
- model.safetensors +1 -1

README.md CHANGED
@@ -28,7 +28,7 @@ As above. This is a model trained to detect IBD patients from clinical text
 - **Developed by:** Matt Stammers
 - **Funded by:** University Hospital Foundation NHS Trust
 - **Shared by:** Matt Stammers - SETT Data and AI Clinical Lead
- - **Model type:** BERT Transformer
 - **Language(s) (NLP):** English
 - **License:** cc-by-nc-4.0
 - **Finetuned from model:** emilyalsentzer/Bio_ClinicalBERT
@@ -38,30 +38,31 @@ As above. This is a model trained to detect IBD patients from clinical text
 - **Repository:** https://huggingface.co/MattStammers/BioClinicalBERT_IBD
 - **Paper:** MedRxiv- [MedRxiv Paper](https://www.medrxiv.org/content/10.1101/2025.07.06.25330961v1)
 - **Demo:** https://huggingface.co/spaces/MattStammers/IBD_Cohort_Identification

 ## Uses

- For document classification tasks to differentiate between documents likely to be

 ### Direct Use

-

 ### Downstream Use

- Others

 ### Out-of-Scope Use

- This model is less powerful (in terms of F1 Score) when making predictions at the patient level by 1-2%. It can be used for this purpose but with care.

 ## Bias, Risks, and Limitations

- This model contains substantial biases and is known to be biased against

 ### Recommendations

- It will work best in a predominantly

 ## How to Get Started with the Model
@@ -73,7 +74,7 @@ The model is best used with the transformers library.

 ### Training Data

- The model was trained on fully pseudonymised clinical information at UHSFT which was carefully labelled by a consultant (attending) physician and evaluated against a randomly selected internal holdout set.

 ### Training Procedure

@@ -89,7 +90,7 @@ This model (part of a set of models) took 213.55 minutes to train

 ## Evaluation

- The model was internally validated against a holdout set

 ### Testing Data, Factors & Metrics

@@ -107,21 +108,21 @@ Full evaluation metrics are available in the paper with a summary below

 ### Results

- | Model
-
-

 #### Summary

- Overall performance of the model is high

 ## Environmental Impact

 Training the model used 2.01 kWh of energy, emitting 416.73 grams of CO2

 - **Hardware Type:** L40S
- - **Hours used:**
- - **Carbon Emitted:** 0.

 ## Citation

@@ -132,19 +133,18 @@ MedRxiv- [MedRxiv Paper](https://www.medrxiv.org/content/10.1101/2025.07.06.25330961v1)

 | Term | Description |
 |-------------------------------------|-------------|
- | **Accuracy** | The percentage of results that were correct among all results from the system. Calc: (TP
- | **Precision (PPV)** | Also called positive predictive value (PPV), it is the percentage of true positive results among all results that the system flagged as positive. Calc: TP
- | **Negative Predictive Value (NPV)** | The percentage of results that were true
- | **Recall** | Also called sensitivity. The percentage of results flagged positive among all results that should have been obtained. Calc: TP
- | **Specificity** | The percentage of results that were flagged negative among all negative results. Calc: TN
- | **F1-Score** |
- | **Matthews’ Correlation Coefficient (MCC)** | A statistical measure used to evaluate the quality of binary classifications. Unlike other metrics, MCC considers all four categories of a confusion matrix. Calc: (
- | **Precision / Recall AUC**
- | **Demographic Parity (DP)**
- | **Equal Opportunity (EO)**
- | **Disparate Impact (DI)**
- | **
-

 ## Model Card Authors

@@ -28,7 +28,7 @@ As above. This is a model trained to detect IBD patients from clinical text
 - **Developed by:** Matt Stammers
 - **Funded by:** University Hospital Foundation NHS Trust
 - **Shared by:** Matt Stammers - SETT Data and AI Clinical Lead
+ - **Model type:** BERT Transformer (Finetuned)
 - **Language(s) (NLP):** English
 - **License:** cc-by-nc-4.0
 - **Finetuned from model:** emilyalsentzer/Bio_ClinicalBERT
@@ -38,30 +38,31 @@ As above. This is a model trained to detect IBD patients from clinical text
 - **Repository:** https://huggingface.co/MattStammers/BioClinicalBERT_IBD
 - **Paper:** MedRxiv- [MedRxiv Paper](https://www.medrxiv.org/content/10.1101/2025.07.06.25330961v1)
 - **Demo:** https://huggingface.co/spaces/MattStammers/IBD_Cohort_Identification
+ - **GitHub:** https://github.com/MattStammers/An_Open_Source_Collection_Of_IBD_Cohort_Identification_Models

 ## Uses

+ For document classification tasks: to differentiate between documents likely to be from patients with IBD and those not suggestive of IBD.

 ### Direct Use

+ A similar model can be tested directly at the [Cohort Identification Demo](https://huggingface.co/spaces/MattStammers/IBD_Cohort_Identification).

 ### Downstream Use

+ Others are encouraged to build on this model and improve it, but only for non-commercial purposes.

 ### Out-of-Scope Use

+ This model is 1-2% less powerful (in terms of F1 score) when making predictions at the patient level. It can be used for this purpose, but with care. Its biggest performance weakness is specificity, and it is very likely overfitted to the training data.

 ## Bias, Risks, and Limitations

+ This model contains substantial biases and is known to be biased against non-white patients, women, and the wealthy, so use it with care (see the paper for information on the training cohort).

 ### Recommendations

+ It will likely work best in a predominantly Caucasian/Western population.

 ## How to Get Started with the Model
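The card states the model is best used with the transformers library. Below is a minimal, hedged inference sketch: the repository name comes from this card, but the label count and mapping are assumptions (check the model's config.json `id2label` for the real class names), and the example sentence is invented. The model download is kept behind a main guard so the helper can be reused independently.

```python
# Hedged sketch: document-level classification with a fine-tuned
# sequence-classification checkpoint. The label mapping is an assumption;
# consult the model's config.json (id2label) for the actual classes.
import torch


def classify(texts, model, tokenizer):
    """Return per-document class probabilities for a batch of texts."""
    inputs = tokenizer(texts, padding=True, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Softmax turns raw logits into probabilities; each row sums to 1.
    return torch.softmax(logits, dim=-1)


if __name__ == "__main__":
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "MattStammers/BioClinicalBERT_IBD"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    probs = classify(["Colonoscopy showed active terminal ileitis."],
                     model, tokenizer)
    print(probs)  # shape: (1, num_labels)
```

Note that clinical documents often exceed the 512-token BERT limit; truncation here silently drops the tail of long notes, so chunking long documents may be worth considering.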
@@ -73,7 +74,7 @@ The model is best used with the transformers library.

 ### Training Data

+ The model was trained on fully pseudonymised clinical information at UHSFT, which was carefully labelled by a consultant (attending) physician and evaluated against a randomly selected internal holdout set. All non-IBD patients have now been removed.

 ### Training Procedure

@@ -89,7 +90,7 @@ This model (part of a set of models) took 213.55 minutes to train

 ## Evaluation

+ The model was internally validated against a holdout set (Type 2a validation according to TRIPOD).

 ### Testing Data, Factors & Metrics

@@ -107,21 +108,21 @@ Full evaluation metrics are available in the paper with a summary below

 ### Results

+ | Model | Doc Coverage | Accuracy | Precision | Recall | Specificity | NPV | F1 Score | MCC |
+ |------------|---------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
+ | DistilBERT | 768 (100.00%) | 89.67% (CI: 86.64% - 92.08%) | 90.31% (CI: 86.97% - 92.86%) | 96.72% (CI: 94.36% - 98.11%) | 67.80% (CI: 58.92% - 75.55%) | 86.96% (CI: 78.57% - 92.38%) | 93.40% (CI: 91.81% - 94.96%) | 0.7060 (CI: 0.6298 - 0.7795) |

 #### Summary

+ Overall performance of the model is high, but it has so far only been validated internally.

 ## Environmental Impact

 Training the model used 2.01 kWh of energy, emitting 416.73 grams of CO2

 - **Hardware Type:** L40S
+ - **Hours used:** 2
+ - **Carbon Emitted:** 0.41673 kg CO2
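As a quick sanity check on these figures, the glossary at the end of this card states a conversion factor of 0.20705 kg CO₂e per kWh. The sketch below (helper name is my own) applies it to the reported 2.01 kWh; the small gap versus the quoted 416.73 g suggests the 2.01 kWh figure is rounded.

```python
# Sketch: convert training energy (kWh) to CO2 emissions using the
# conversion factor this card states for the study.
KG_CO2E_PER_KWH = 0.20705  # factor quoted in the glossary


def energy_to_co2_kg(kwh: float, factor: float = KG_CO2E_PER_KWH) -> float:
    """Extrapolate energy consumption to kilograms of CO2e."""
    return kwh * factor


# 2.01 kWh is the card's reported energy use; the result (~416.2 g) is
# slightly below the quoted 416.73 g, consistent with a rounded input.
print(round(energy_to_co2_kg(2.01) * 1000, 1))  # prints 416.2 (grams)
```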

 ## Citation

@@ -132,19 +133,18 @@ MedRxiv- [MedRxiv Paper](https://www.medrxiv.org/content/10.1101/2025.07.06.25330961v1)

 | Term | Description |
 |-------------------------------------|-------------|
+ | **Accuracy** | The percentage of results that were correct among all results from the system overall. Calc: (TP+TN)/(TP+FP+TN+FN). Very susceptible to class imbalance. |
+ | **Precision (PPV)** | Also called positive predictive value (PPV), it is the percentage of true positive results among all results that the system flagged as positive. Calc: TP/(TP+FP). A measure of the trustworthiness of positive results. |
+ | **Negative Predictive Value (NPV)** | The percentage of results that were true negatives (TN) among all results that the system flagged as negative. Calc: TN/(TN+FN). A measure of the trustworthiness of negative results. |
+ | **Recall** | Also called sensitivity. The percentage of results flagged positive among all results that should have been flagged positive. Calc: TP/(TP+FN). 100% recall means there are no false negative results, which is useful for confidently screening out negative cases. |
+ | **Specificity** | The percentage of results that were flagged negative among all negative results. Calc: TN/(TN+FP). 100% specificity means there are no false positive results, which helps rule in the disease. |
+ | **F1-Score** | In this case, the unweighted harmonic mean of PPV/precision and sensitivity/recall. Calc: 2 × (Precision × Recall)/(Precision + Recall). More resistant to the effects of class imbalance. |
+ | **Matthews’ Correlation Coefficient (MCC)** | A statistical measure used to evaluate the quality of binary classifications. Unlike other metrics, MCC considers all four categories of a confusion matrix. Calc: ((TP×TN) − (FP×FN))/√((TP+FP)(TP+FN)(TN+FP)(TN+FN)). The results are more abstract but highly resistant to the effects of class imbalance, which is why they are included in this study. |
+ | **Precision / Recall AUC** | Represents the area under the Precision-Recall curve, which plots Precision against Recall at various threshold settings. It is more resistant to class imbalance than AUROC (area under the receiver operating characteristic curve). Both were used in testing, but the confusion matrix components form the primary outcomes for this study because they are more human-interpretable and harder to statistically manipulate. |
+ | **Demographic Parity (DP)** | Demographic Parity, also known as Statistical Parity, requires that the probability of a positive prediction is the same across different demographic groups. Calc: DP = P(Ŷ=1∣A=a) = P(Ŷ=1∣A=b), where Ŷ is the predicted outcome and A represents the protected attribute (e.g., race, gender). This figure is given as an absolute difference, where positive values suggest the more privileged group gains and negative values the reverse. The rule of 10% is used here to look for significant biases. |
+ | **Equal Opportunity (EO)** | Equal Opportunity focuses on equalising the true positive rates across groups. Among those who truly belong to the positive class, the model should predict positive outcomes at equal rates across different groups. Calc: EO = P(Ŷ=1∣Y=1, A=a) = P(Ŷ=1∣Y=1, A=b), where Ŷ is the predicted outcome and Y is the actual outcome. If there is no bias, the value will be equal for all groups; a higher value indicates a bias against the group considered more vulnerable. The rule of 10% is used here to look for significant biases. |
+ | **Disparate Impact (DI)** | Divides the protected group’s positive prediction rate by that of the most-favoured group. Calc: DI = P(Ŷ=1∣A=unfavoured) / P(Ŷ=1∣A=favoured). If the ratio is below 0.8 or above 1.25, disparate impact is considered present. |
+ | **Time / Energy / CO₂ Emissions** | Measured in minutes and total energy consumption in kilowatt-hours (kWh), which can then be extrapolated to CO₂ emissions via a conversion factor, set at 0.20705 kg CO₂e per kWh for this study. |
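To make the glossary formulas concrete, here is a small sketch computing the confusion-matrix metrics and the disparate-impact ratio. The counts and group rates are invented for illustration only; they are not taken from the study.

```python
# Sketch: the confusion-matrix metrics and DI ratio defined in the glossary,
# computed from hypothetical counts (not study data).
import math


def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, precision, recall, specificity, NPV, F1 and MCC."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    mcc_denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
        "f1": 2 * precision * recall / (precision + recall),
        "mcc": (tp * tn - fp * fn) / mcc_denom,
    }


def disparate_impact(p_unfavoured: float, p_favoured: float) -> float:
    """DI: unfavoured group's positive-prediction rate over the favoured group's."""
    return p_unfavoured / p_favoured


# Hypothetical confusion matrix for illustration:
m = confusion_metrics(tp=90, fp=10, tn=80, fn=20)
print({k: round(v, 4) for k, v in m.items()})
# A DI below 0.8 would be flagged as disparate impact under the 0.8-1.25 rule:
print(round(disparate_impact(0.30, 0.40), 2))  # prints 0.75
```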

 ## Model Card Authors

model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:
+ oid sha256:17b7d50b76f5b924e4abbe8d3b6c6b3749e152a778ce27fb11fe81ad4893471b
 size 433270768