MattStammers committed · verified
Commit 511cc5d · 1 Parent(s): c3f6bd6

Upload folder using huggingface_hub

Files changed (6)
  1. config.json +25 -0
  2. model.safetensors +3 -0
  3. readme.md +163 -0
  4. special_tokens_map.json +7 -0
  5. tokenizer_config.json +58 -0
  6. vocab.txt +0 -0
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "distilbert-base-uncased",
+   "activation": "gelu",
+   "architectures": [
+     "DistilBertForSequenceClassification"
+   ],
+   "attention_dropout": 0.1,
+   "dim": 768,
+   "dropout": 0.1,
+   "hidden_dim": 3072,
+   "initializer_range": 0.02,
+   "max_position_embeddings": 512,
+   "model_type": "distilbert",
+   "n_heads": 12,
+   "n_layers": 6,
+   "pad_token_id": 0,
+   "problem_type": "single_label_classification",
+   "qa_dropout": 0.1,
+   "seq_classif_dropout": 0.2,
+   "sinusoidal_pos_embds": false,
+   "tie_weights_": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.49.0",
+   "vocab_size": 30522
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3db78c17dcdf5f18d22c1d5a452612c439a77dc44bd28b344fa8065be82f3cc
+ size 267832560
readme.md ADDED
@@ -0,0 +1,163 @@
+ ---
+ license: cc-by-nc-4.0
+ language:
+ - en
+ metrics:
+ - precision
+ - recall
+ - brier_score
+ - f1
+ - matthews_correlation
+ base_model:
+ - distilbert/distilbert-base-uncased
+ tags:
+ - IBD
+ - cohort_identification
+ - case_finding
+ ---
+ # Model Card for DistilBERT IBD
+
+ The model classifies clinical documents as either IBD or not IBD.
+
+ ## Model Details
+
+ ### Model Description
+
+ This model is fine-tuned to detect patients with IBD from clinical text.
+
+ - **Developed by:** Matt Stammers
+ - **Funded by:** University Hospital Southampton NHS Foundation Trust
+ - **Shared by:** Matt Stammers - SETT Data and AI Clinical Lead
+ - **Model type:** BERT Transformer (fine-tuned)
+ - **Language(s) (NLP):** English
+ - **License:** cc-by-nc-4.0
+ - **Finetuned from model:** distilbert/distilbert-base-uncased
+
+ ### Model Sources
+
+ - **Repository:** https://huggingface.co/MattStammers/Distil_IBD_BERT
+ - **Paper:** [MedRxiv Paper](https://www.medrxiv.org/content/10.1101/2025.07.06.25330961v1)
+ - **Demo:** https://huggingface.co/spaces/MattStammers/IBD_Cohort_Identification
+ - **GitHub:** https://github.com/MattStammers/An_Open_Source_Collection_Of_IBD_Cohort_Identification_Models
+
+ ## Uses
+
+ For document classification tasks: differentiating documents likely to belong to patients with IBD from those not suggestive of IBD.
+
+ ### Direct Use
+
+ This model can be tried directly at the [Cohort Identification Demo](https://huggingface.co/spaces/MattStammers/IBD_Cohort_Identification).
+
+ ### Downstream Use
+
+ Others are encouraged to build on this model and improve it, but only for non-commercial purposes.
+
+ ### Out-of-Scope Use
+
+ This model is 1-2% less powerful (in terms of F1 score) when making predictions at the patient level. It can be used for that purpose, but with care. Its biggest performance weakness is specificity, and it is very likely overfitted to the training data.
+
+ ## Bias, Risks, and Limitations
+
+ This model contains substantial biases and is known to be biased against non-white patients, women and the wealthy, so use it with care (see the paper for information on the training cohort).
+
+ ### Recommendations
+
+ It will likely work best in a predominantly Caucasian/Western population.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model. The model is best used with the transformers library.
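A minimal sketch using the transformers `text-classification` pipeline (the output label names depend on the `id2label` mapping in `config.json`, and the example text is synthetic, not patient data):

```python
def classify_document(text: str, model_id: str = "MattStammers/Distil_IBD_BERT"):
    """Classify a clinical document as IBD / not IBD using the Hub model."""
    # Requires `pip install transformers torch`; downloads the model on first call
    from transformers import pipeline
    clf = pipeline("text-classification", model=model_id)
    return clf(text)


if __name__ == "__main__":
    # Synthetic example text, not real patient data
    print(classify_document(
        "Colonoscopy showed patchy terminal ileitis consistent with Crohn's disease."
    ))
```

The returned value is a list of `{"label": ..., "score": ...}` dictionaries; batching a list of documents through the same pipeline call is usually faster than looping one document at a time.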
+
+ ## Training Details
+
+ ### Training Data
+
+ The model was trained on fully pseudonymised clinical information at UHSFT, which was carefully labelled by a consultant (attending) physician and evaluated against a randomly selected internal holdout set. All non-IBD patients have now been removed.
+
+ ### Training Procedure
+
+ See the paper for more information on the training procedure.
+
+ #### Training Hyperparameters
+
+ - **Training regime:** fp32
+
+ #### Speeds, Sizes, Times
+
+ This model (part of a set of models) took 213.55 minutes to train.
+
+ ## Evaluation
+
+ The model was internally validated against a holdout set - Type 2a validation according to TRIPOD.
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ The testing data cannot be revealed due to IG regulations and to remain compliant with GDPR; only the resulting model can be shared.
+
+ #### Factors
+
+ IBD vs not-IBD.
+
+ #### Metrics
+
+ Full evaluation metrics are available in the paper, with a summary below.
+
+ ### Results
+
+ | Model | Doc Coverage | Accuracy | Precision | Recall | Specificity | NPV | F1 Score | MCC |
+ |------------|---------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
+ | DistilBERT | 768 (100.00%) | 89.88% (CI: 86.87% - 92.26%) | 90.33% (CI: 87.01% - 92.87%) | 96.99% (CI: 94.70% - 98.31%) | 67.80% (CI: 58.92% - 75.55%) | 87.91% (CI: 79.64% - 93.11%) | 93.54% (CI: 92.07% - 95.07%) | 0.7120 (CI: 0.6324 - 0.7857) |
+
+ #### Summary
+
+ Overall performance of the model is high, but it has so far only been validated internally.
+
+ ## Environmental Impact
+
+ Training the model used 2.01 kWh of energy, emitting 416.73 grams of CO2.
+
+ - **Hardware Type:** L40S
+ - **Hours used:** 2
+ - **Carbon Emitted:** 0.230 Kg CO2
+
+ ## Citation
+
+ Stammers M, Gwiggner M, Nouraei R, Metcalf C, Batchelor J. From Rule-Based to DeepSeek R1: A Robust Comparative Evaluation of Fifty Years of Natural Language Processing (NLP) Models To Identify Inflammatory Bowel Disease Cohorts. medRxiv. 2025:2025-07. [MedRxiv Paper](https://www.medrxiv.org/content/10.1101/2025.07.06.25330961v1)
+
+ ## Glossary
+
+ | Term | Description |
+ |-------------------------------------|-------------|
+ | **Accuracy** | The percentage of results that were correct among all results from the system. Calc: (TP + TN) / (TP + FP + TN + FN). |
+ | **Precision (PPV)** | Also called positive predictive value (PPV): the percentage of true positive results among all results that the system flagged as positive. Calc: TP / (TP + FP). |
+ | **Negative Predictive Value (NPV)** | The percentage of results that were true negative (TN) among all results that the system flagged as negative. Calc: TN / (TN + FN). |
+ | **Recall** | Also called sensitivity: the percentage of results flagged positive among all results that should have been obtained. Calc: TP / (TP + FN). |
+ | **Specificity** | The percentage of results that were flagged negative among all negative results. Calc: TN / (TN + FP). |
+ | **F1-Score** | The harmonic mean of PPV/precision and sensitivity/recall. Calc: 2 × (Precision × Recall) / (Precision + Recall). Moderately useful in the context of class imbalance. |
+ | **Matthews' Correlation Coefficient (MCC)** | A statistical measure used to evaluate the quality of binary classifications. Unlike other metrics, MCC considers all four categories of a confusion matrix. Calc: (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)). |
+ | **Precision / Recall AUC** | The area under the Precision-Recall curve, which plots Precision against Recall at various threshold settings. It is more resistant to class imbalance than alternatives like AUROC. |
+ | **Demographic Parity (DP)** | Demographic Parity, also known as Statistical Parity, requires that the probability of a positive prediction is the same across different demographic groups. Calc: DP = P(Ŷ=1∣A=a) = P(Ŷ=1∣A=b). This figure is given as an absolute difference, where positive values suggest the more privileged group gains and negative values the reverse. |
+ | **Equal Opportunity (EO)** | Equal Opportunity focuses on equalising the true positive rates across groups: among those who truly belong to the positive class, the model should predict positive outcomes at equal rates across groups. Calc: EO = P(Ŷ=1∣Y=1, A=a) = P(Ŷ=1∣Y=1, A=b). A higher value indicates a bias against the more vulnerable group. |
+ | **Disparate Impact (DI)** | Divides the protected group's positive prediction rate by that of the most-favoured group. Calc: DI = P(Ŷ=1∣A=unfavoured) / P(Ŷ=1∣A=favoured). Values outside the 0.8-1.25 range suggest bias. |
+ | **Execution Time / Energy / CO₂ Emissions** | Execution time is measured in minutes and total energy consumption in kilowatt-hours (kWh), which is then converted to CO₂ emissions using a factor of 0.20705 Kg CO₂e per kWh. |
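The confusion-matrix formulas in the glossary can be sketched directly; the counts below are illustrative, not the paper's data:

```python
import math

# Illustrative confusion-matrix counts (not taken from the paper)
tp, fp, tn, fn = 90, 10, 70, 5

accuracy    = (tp + tn) / (tp + fp + tn + fn)
precision   = tp / (tp + fp)   # PPV
recall      = tp / (tp + fn)   # sensitivity
specificity = tn / (tn + fp)
npv         = tn / (tn + fn)
f1          = 2 * precision * recall / (precision + recall)
mcc         = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

print(f"accuracy={accuracy:.3f} f1={f1:.3f} mcc={mcc:.3f}")
```

Note that F1 ignores true negatives entirely, whereas MCC uses all four cells, which is why the table above reports both.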
+
+ ## Model Card Authors
+
+ Matt Stammers - Computational Gastroenterologist
+
+ ## Model Card Contact
+
+ m.stammers@soton.ac.uk
+
+ ## Legal
+
+ 1. No guarantee is given of model performance in any production capacity whatsoever.
+ 2. These models should be used in full accordance with the EU AI Act - Regulation 2024/1689.
+ 3. These models are not CE-marked medical devices and are suitable at this point only for research and development / experimentation, at users' own discretion.
+ 4. They can be improved, but any improvements should be published and shared openly with the community.
+ 5. UHSFT and the author own the copyright and are choosing to share these models freely under a CC BY-NC 4.0 licence for the benefit of the wider research community. Commercial organisations that try to sell or market these models for profit are breaking copyright law and infringing upon NHS intellectual property.
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "DistilBertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff