MattStammers committed on
Commit
23bec39
·
1 Parent(s): daeac9c

full code and documentation uploaded

Browse files
Files changed (6) hide show
  1. README.md +60 -117
  2. config.json +26 -0
  3. model.safetensors +3 -0
  4. special_tokens_map.json +7 -0
  5. tokenizer_config.json +58 -0
  6. vocab.txt +0 -0
README.md CHANGED
@@ -9,203 +9,146 @@ metrics:
9
  - f1
10
  - matthews_correlation
11
  base_model:
12
- - distilbert/distilbert-base-uncased
13
  tags:
14
  - IBD
15
  - cohort_identification
16
  - case_finding
17
  ---
18
- # Model Card for Model ID
19
 
20
- <!-- Provide a quick summary of what the model is/does. -->
21
-
22
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
23
 
24
  ## Model Details
25
 
26
  ### Model Description
27
 
28
- <!-- Provide a longer summary of what this model is. -->
29
-
30
-
31
 
32
- - **Developed by:** [More Information Needed]
33
- - **Funded by [optional]:** [More Information Needed]
34
- - **Shared by [optional]:** [More Information Needed]
35
- - **Model type:** [More Information Needed]
36
- - **Language(s) (NLP):** [More Information Needed]
37
- - **License:** [More Information Needed]
38
- - **Finetuned from model [optional]:** [More Information Needed]
39
 
40
- ### Model Sources [optional]
41
 
42
- <!-- Provide the basic links for the model. -->
43
-
44
- - **Repository:** [More Information Needed]
45
- - **Paper [optional]:** [More Information Needed]
46
- - **Demo [optional]:** [More Information Needed]
47
 
48
  ## Uses
49
 
50
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
51
 
52
  ### Direct Use
53
 
54
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
55
-
56
- [More Information Needed]
57
 
58
- ### Downstream Use [optional]
59
 
60
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
61
-
62
- [More Information Needed]
63
 
64
  ### Out-of-Scope Use
65
 
66
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
67
-
68
- [More Information Needed]
69
 
70
  ## Bias, Risks, and Limitations
71
 
72
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
73
-
74
- [More Information Needed]
75
 
76
  ### Recommendations
77
 
78
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
79
-
80
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
81
 
82
  ## How to Get Started with the Model
83
 
84
  Use the code below to get started with the model.
85
 
86
- [More Information Needed]
87
 
88
  ## Training Details
89
 
90
  ### Training Data
91
 
92
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
93
-
94
- [More Information Needed]
95
 
96
  ### Training Procedure
97
 
98
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
99
-
100
- #### Preprocessing [optional]
101
-
102
- [More Information Needed]
103
-
104
 
105
  #### Training Hyperparameters
106
 
107
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
108
 
109
- #### Speeds, Sizes, Times [optional]
110
 
111
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
112
-
113
- [More Information Needed]
114
 
115
  ## Evaluation
116
 
117
- <!-- This section describes the evaluation protocols and provides the results. -->
118
 
119
  ### Testing Data, Factors & Metrics
120
 
121
  #### Testing Data
122
 
123
- <!-- This should link to a Dataset Card if possible. -->
124
-
125
- [More Information Needed]
126
 
127
  #### Factors
128
 
129
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
130
-
131
- [More Information Needed]
132
 
133
  #### Metrics
134
 
135
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
136
-
137
- [More Information Needed]
138
 
139
  ### Results
140
 
141
- [More Information Needed]
 
 
142
 
143
  #### Summary
144
 
145
-
146
-
147
- ## Model Examination [optional]
148
-
149
- <!-- Relevant interpretability work for the model goes here -->
150
-
151
- [More Information Needed]
152
 
153
  ## Environmental Impact
154
 
155
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
156
-
157
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
158
-
159
- - **Hardware Type:** [More Information Needed]
160
- - **Hours used:** [More Information Needed]
161
- - **Cloud Provider:** [More Information Needed]
162
- - **Compute Region:** [More Information Needed]
163
- - **Carbon Emitted:** [More Information Needed]
164
-
165
- ## Technical Specifications [optional]
166
-
167
- ### Model Architecture and Objective
168
-
169
- [More Information Needed]
170
-
171
- ### Compute Infrastructure
172
-
173
- [More Information Needed]
174
-
175
- #### Hardware
176
-
177
- [More Information Needed]
178
-
179
- #### Software
180
-
181
- [More Information Needed]
182
-
183
- ## Citation [optional]
184
-
185
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
186
-
187
- **BibTeX:**
188
-
189
- [More Information Needed]
190
-
191
- **APA:**
192
 
193
- [More Information Needed]
 
 
194
 
195
- ## Glossary [optional]
196
 
197
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
198
 
199
- [More Information Needed]
200
 
201
- ## More Information [optional]
 
202
 
203
- [More Information Needed]
204
 
205
- ## Model Card Authors [optional]
206
 
207
- [More Information Needed]
208
 
209
  ## Model Card Contact
210
 
211
- [More Information Needed]
 
9
  - f1
10
  - matthews_correlation
11
  base_model:
12
+ - emilyalsentzer/Bio_ClinicalBERT
13
  tags:
14
  - IBD
15
  - cohort_identification
16
  - case_finding
17
  ---
18
+ # Model Card for BioClinicalBERT IBD
19
 
20
+ The model classifies clinical documents as either IBD or Not IBD.
 
 
21
 
22
  ## Model Details
23
 
24
  ### Model Description
25
 
26
+ A fine-tuned BERT transformer trained to detect patients with IBD (inflammatory bowel disease) from clinical text.
 
 
27
 
28
+ - **Developed by:** Matt Stammers
29
+ - **Funded by:** University Hospital Southampton NHS Foundation Trust (UHSFT)
30
+ - **Shared by:** Matt Stammers - SETT Data and AI Clinical Lead
31
+ - **Model type:** BERT Transformer
32
+ - **Language(s) (NLP):** English
33
+ - **License:** cc-by-nc-4.0
34
+ - **Finetuned from model:** emilyalsentzer/Bio_ClinicalBERT
35
 
36
+ ### Model Sources
37
 
38
+ - **Repository:** https://huggingface.co/MattStammers/BioClinicalBERT_IBD
39
+ - **Paper:** arXiv (pending)
40
+ - **Demo:** https://huggingface.co/spaces/MattStammers/IBD_Cohort_Identification
 
 
41
 
42
  ## Uses
43
 
44
+ For document classification tasks to differentiate between documents likely to be diagnostic of IBD and those unlikely to be diagnostic of IBD.
45
 
46
  ### Direct Use
47
 
48
+ This model can be used directly at [Cohort Identification Demo](https://huggingface.co/spaces/MattStammers/IBD_Cohort_Identification)
 
 
49
 
50
+ ### Downstream Use
51
 
52
+ Others may build on and improve this model, but only for non-commercial purposes, in line with the CC BY-NC 4.0 licence.
 
 
53
 
54
  ### Out-of-Scope Use
55
 
56
+ The model is 1-2% less powerful (in terms of F1 score) when making predictions at the patient level rather than the document level. It can be used for patient-level prediction, but with care.
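For patient-level use, per-document predictions must be aggregated in some way. A minimal sketch using majority vote follows; the aggregation rule here is an assumption for illustration, not necessarily the rule evaluated in the paper:

```python
from collections import Counter

def patient_level_label(doc_labels):
    """Collapse per-document labels into one patient-level call by majority vote."""
    counts = Counter(doc_labels)
    # Tie-break toward "IBD" to favour recall (an assumed choice, not the paper's).
    return "IBD" if counts["IBD"] >= counts["Not IBD"] else "Not IBD"

print(patient_level_label(["IBD", "Not IBD", "IBD"]))  # IBD
```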
 
 
57
 
58
  ## Bias, Risks, and Limitations
59
 
60
+ This model contains substantial biases: it is known to be biased against older patients and non-white patients, so use it with care.
 
 
61
 
62
  ### Recommendations
63
 
64
+ It will work best in a predominantly younger, Caucasian population.
 
 
65
 
66
  ## How to Get Started with the Model
67
 
68
  Use the code below to get started with the model.
69
 
70
+ The model is best used with the transformers library.
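A minimal sketch of loading the model with `transformers`, assuming the standard sequence-classification API. The clinical snippet is hypothetical, and the exact `id2label` names are not published in this card, so the printed label may differ:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "MattStammers/BioClinicalBERT_IBD"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# Hypothetical snippet; real notes must be pseudonymised/de-identified first.
text = "Colonoscopy demonstrates continuous colonic inflammation consistent with ulcerative colitis."

# BERT accepts at most 512 tokens, so long documents are truncated.
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```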
71
 
72
  ## Training Details
73
 
74
  ### Training Data
75
 
76
+ The model was trained on fully pseudonymised clinical text from UHSFT, carefully labelled by a consultant (attending) physician, and evaluated against a randomly selected internal holdout set.
 
 
77
 
78
  ### Training Procedure
79
 
80
+ See the paper for more information on the training procedure.
 
 
81
 
82
  #### Training Hyperparameters
83
 
84
+ - **Training regime:** fp32
85
 
86
+ #### Speeds, Sizes, Times
87
 
88
+ This model (part of a set of models) took 213.55 minutes to train.
 
 
89
 
90
  ## Evaluation
91
 
92
+ The model was internally validated against a holdout set.
93
 
94
  ### Testing Data, Factors & Metrics
95
 
96
  #### Testing Data
97
 
98
+ The testing data cannot be released due to information governance (IG) regulations and to remain GDPR-compliant; only the resulting model can be shared.
 
 
99
 
100
  #### Factors
101
 
102
+ IBD vs Not-IBD
 
 
103
 
104
  #### Metrics
105
 
106
+ Full evaluation metrics are available in the paper; a summary is given below.
 
 
107
 
108
  ### Results
109
 
110
+ | Model | Doc Coverage | Accuracy | Precision | Recall | Specificity | NPV | F1 Score | MCC |
111
+ |------------------|------------------|----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|----------------------------------|----------------------------------|----------------------------------|
112
+ | BioclinicalBERT | 768 (100.00%) | 90.29% (CI: 87.33% - 92.62%) | 91.48% (CI: 88.39% - 93.81%) | 96.91% (CI: 94.67% - 98.22%) | 63.54% (CI: 53.57% - 72.48%) | 83.56% (CI: 73.43% - 90.34%) | 94.12% (CI: 92.79% - 95.48%) | 0.6735 (CI: 0.5892 - 0.7538) |
113
 
114
  #### Summary
115
 
116
+ Overall performance of the model is high, with an F1 score above 94% on our internal holdout set.
 
 
117
 
118
  ## Environmental Impact
119
 
120
+ Training the model used 2.01 kWh of energy, emitting 416.73 g of CO₂.
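Using the conversion factor given in the glossary (0.20705 kg CO₂e per kWh), the emissions figure can be checked; the small difference from the reported 416.73 g reflects rounding of the energy figure:

```python
KG_CO2E_PER_KWH = 0.20705  # grid conversion factor used in this card

energy_kwh = 2.01
co2_grams = energy_kwh * KG_CO2E_PER_KWH * 1000
print(round(co2_grams, 1))  # 416.2, close to the reported 416.73 g
```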
 
121
 
122
+ - **Hardware Type:** L40S
123
+ - **Hours used:** 3.55
124
+ - **Carbon Emitted:** 0.417 kg CO₂
125
 
126
+ ## Citation
127
 
128
+ arXiv (pending)
129
 
130
+ ## Glossary
131
 
132
+ | Term | Description |
133
+ |-------------------------------------|-------------|
134
+ | **Accuracy** | The percentage of results that were correct among all results from the system. Calc: (TP + TN) / (TP + FP + TN + FN). |
135
+ | **Precision (PPV)** | Also called positive predictive value (PPV), it is the percentage of true positive results among all results that the system flagged as positive. Calc: TP / (TP + FP). |
136
+ | **Negative Predictive Value (NPV)** | The percentage of results that were true negative (TN) among all results that the system flagged as negative. Calc: TN / (TN + FN). |
137
+ | **Recall** | Also called sensitivity. The percentage of results flagged positive among all results that should have been obtained. Calc: TP / (TP + FN). |
138
+ | **Specificity** | The percentage of results that were flagged negative among all negative results. Calc: TN / (TN + FP). |
139
+ | **F1-Score** | The harmonic mean of PPV/precision and sensitivity/recall. Calc: 2 × (Precision × Recall) / (Precision + Recall). Moderately useful in the context of class imbalance. |
140
+ | **Matthews’ Correlation Coefficient (MCC)** | A statistical measure used to evaluate the quality of binary classifications. Unlike other metrics, MCC considers all four categories of a confusion matrix. Calc: (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)). |
141
+ | **Precision / Recall AUC** | Represents the area under the Precision-Recall curve, which plots Precision against Recall at various threshold settings. It is more resistant to class imbalance than alternatives like AUROC. |
142
+ | **Demographic Parity (DP)** | Demographic Parity, also known as Statistical Parity, requires that the probability of a positive prediction is the same across different demographic groups. Calc: DP = P(Ŷ=1∣A=a) = P(Ŷ=1∣A=b). This figure is given as an absolute difference where positive values suggest the more privileged group gains and negative values the reverse. |
143
+ | **Equal Opportunity (EO)** | Equal Opportunity focuses on equalising the true positive rates across groups. Among those who truly belong to the positive class, the model should predict positive outcomes at equal rates across groups. Calc: EO = P(Ŷ=1∣Y=1, A=a) = P(Ŷ=1∣Y=1, A=b). A higher value indicates a bias against the more vulnerable group. |
144
+ | **Disparate Impact (DI)** | Divides the protected group’s positive prediction rate by that of the most-favoured group. If the ratio is below 0.8 or above 1.25, disparate impact is considered present. Calc: DI = P(Ŷ=1∣A=unfavoured) / P(Ŷ=1∣A=favoured). Values outside 0.8–1.25 range suggest bias. |
145
+ | **Execution Time / Energy / CO₂ Emissions** | Execution time is measured in minutes and energy consumption in kilowatt-hours (kWh), which is converted to CO₂ emissions using a factor of 0.20705 kg CO₂e per kWh. |
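The confusion-matrix and fairness formulas in the glossary can be computed directly from raw counts. A small sketch follows; the counts and rates are illustrative only, not the study's actual confusion matrix:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Glossary metrics from a 2x2 confusion matrix."""
    precision = tp / (tp + fp)  # PPV
    recall = tp / (tp + fn)     # sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
        "f1": 2 * precision * recall / (precision + recall),
        "mcc": (tp * tn - fp * fn)
        / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

def disparate_impact(p_unfavoured, p_favoured):
    """DI ratio of positive prediction rates; values outside 0.8-1.25 suggest bias."""
    return p_unfavoured / p_favoured

# Illustrative counts only.
print(classification_metrics(tp=50, fp=10, tn=30, fn=10))
print(disparate_impact(0.30, 0.40))  # ~0.75, outside the 0.8-1.25 band
```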
146
 
 
147
 
148
+ ## Model Card Authors
149
 
150
+ Matt Stammers - Computational Gastroenterologist
151
 
152
  ## Model Card Contact
153
 
154
+ m.stammers@soton.ac.uk
config.json ADDED
@@ -0,0 +1,26 @@
1
+ {
2
+ "_name_or_path": "emilyalsentzer/Bio_ClinicalBERT",
3
+ "architectures": [
4
+ "BertForSequenceClassification"
5
+ ],
6
+ "attention_probs_dropout_prob": 0.1,
7
+ "classifier_dropout": null,
8
+ "hidden_act": "gelu",
9
+ "hidden_dropout_prob": 0.1,
10
+ "hidden_size": 768,
11
+ "initializer_range": 0.02,
12
+ "intermediate_size": 3072,
13
+ "layer_norm_eps": 1e-12,
14
+ "max_position_embeddings": 512,
15
+ "model_type": "bert",
16
+ "num_attention_heads": 12,
17
+ "num_hidden_layers": 12,
18
+ "pad_token_id": 0,
19
+ "position_embedding_type": "absolute",
20
+ "problem_type": "single_label_classification",
21
+ "torch_dtype": "float32",
22
+ "transformers_version": "4.49.0",
23
+ "type_vocab_size": 2,
24
+ "use_cache": true,
25
+ "vocab_size": 28996
26
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8f3fd198328902ef862679f8c2e62491563faa2845cbf3c3d97585f12f36133
3
+ size 433270768
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "cls_token": "[CLS]",
3
+ "mask_token": "[MASK]",
4
+ "pad_token": "[PAD]",
5
+ "sep_token": "[SEP]",
6
+ "unk_token": "[UNK]"
7
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "0": {
4
+ "content": "[PAD]",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "100": {
12
+ "content": "[UNK]",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "101": {
20
+ "content": "[CLS]",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "102": {
28
+ "content": "[SEP]",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "103": {
36
+ "content": "[MASK]",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ }
43
+ },
44
+ "clean_up_tokenization_spaces": true,
45
+ "cls_token": "[CLS]",
46
+ "do_basic_tokenize": true,
47
+ "do_lower_case": true,
48
+ "extra_special_tokens": {},
49
+ "mask_token": "[MASK]",
50
+ "model_max_length": 1000000000000000019884624838656,
51
+ "never_split": null,
52
+ "pad_token": "[PAD]",
53
+ "sep_token": "[SEP]",
54
+ "strip_accents": null,
55
+ "tokenize_chinese_chars": true,
56
+ "tokenizer_class": "BertTokenizer",
57
+ "unk_token": "[UNK]"
58
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff