Siddharth63 committed on
Commit a7b0630 · verified · 1 Parent(s): cd1d7a2

Update README.md

Files changed (1):
  1. README.md +188 -22
README.md CHANGED
@@ -1,40 +1,206 @@
  ---
- datasets:
- - Siddharth63/biological_dataset
  license: apache-2.0
  ---
- # Bioul2-tiny-nl6

- Pretrained T5 model on a biological dataset using the UL2 (Mixture-of-Denoisers) objective. The T5 model was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer); the UL2 objective was introduced in [this paper](https://arxiv.org/abs/2205.05131).

- ## Model description

- T5 is an encoder-decoder model that treats all NLP problems in a text-to-text format.

- BioT5 is a transformers model pretrained on a very large corpus of biological data (25 million abstracts) in a self-supervised fashion. This means it was pretrained on raw texts only, with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and outputs from those texts.

- This model used the following T5 v1.1 improvements over the original T5 during pretraining:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU
- Dropout turned off in pretraining (quality win); dropout should be re-enabled during fine-tuning
- Pretrained on the self-supervised objective only, without mixing in downstream tasks
- No parameter sharing between the embedding and classifier layers

- This model also used the "efficient" T5 architecture findings presented in this paper. In a nutshell, that paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other architectures of similar parameter count. Here, model depth is defined as the number of transformer blocks stacked sequentially.

- This model uses the t5-efficient-tiny-nl6 architecture's layer depth, which means both the encoder and the decoder have 6 transformer layers, compared to the original T5 "tiny" model's 4 transformer layers.

- In total, this model has 31 million parameters.

- ## UL2 pretraining objective

- This model was pretrained with UL2's Mixture-of-Denoisers (MoD) objective, which combines diverse pre-training paradigms. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with a different configuration. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, the denoising tasks are sampled according to user-specified ratios.

- UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with a specific pre-training denoising task. During pretraining, a paradigm token is inserted into the input ([NLU] for R-denoising, [NLG] for X-denoising, or [S2S] for S-denoising) indicating the denoising task at hand. Then, during fine-tuning, the same token should be inserted to get the best performance on different downstream tasks.

- ## Intended uses & limitations

- This model was only pretrained in a self-supervised way, without any supervised training. Therefore, unlike Google's original T5 model, it has to be fine-tuned before it is usable on a downstream task such as text classification. Note: you most likely need to fine-tune these T5/UL2 models without mixed precision, i.e. with full fp32 precision. You can also find more fine-tuning tips from here, for example.

- Note: for fine-tuning, you can most likely get better results if you insert a prefix token of [NLU], [NLG], or [S2S] into your input texts. For general language understanding fine-tuning tasks, you could use the [NLU] token. For GPT-style causal language generation, you could use the [S2S] token. The [NLG] token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could potentially be used for language generation fine-tuning as well.

  ## Acknowledgements
- This project would not have been possible without compute generously provided by Google through the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective, associated task definitions and their guidance. Thanks to [Yeb Havinga](https://huggingface.co/yhavinga) for helping me get started with the t5x framework.

  ---
+ language:
+ - en
  license: apache-2.0
+ tags:
+ - biomedical
+ - clinical
+ - ul2
+ - t5
+ - encoder-decoder
+ - pretraining
+ - text2text-generation
+ - medical
  ---

+ # PubMedUL2 & MedUL2
+
+ ## Model Description
+
+ **PubMedUL2** and **MedUL2** are a family of **domain-specific UL2/T5-style encoder–decoder language models** pretrained on large-scale biomedical and medical corpora using the **UL2 (Mixture-of-Denoisers)** objective.
+
+ - **PubMedUL2** models are pretrained on **25 million PubMed abstracts**
+ - **MedUL2** models are pretrained on **PubMed abstracts + clinical notes + additional medical documents**
+ - All models use a **T5-efficient architecture**, inspired by Google’s efficient T5 variants
+
+ These checkpoints are **pretraining-only models** and **must be fine-tuned** before use on downstream tasks.
+
+ ---
+
+ ## Pretraining Objective: UL2 (Mixture-of-Denoisers)
+
+ These models were pretrained using **UL2**, a unified framework that formulates language modeling objectives as **denoising tasks**.
+
+ UL2 introduces a **Mixture-of-Denoisers (MoD)** approach that samples from multiple denoising paradigms during pretraining.
+
+ ### Denoising Tasks
+
+ UL2 pretraining uses a mixture of three denoising tasks (an illustrative example follows the list):
+
+ 1. **R-denoising (Regular Span Corruption)**
+    - Equivalent to standard T5 span corruption
+    - Optimized for language understanding tasks
+
+ 2. **X-denoising (Extreme Span Corruption)**
+    - Uses very large masked spans
+    - Encourages long-form generation and abstraction
+
+ 3. **S-denoising (Sequential / PrefixLM)**
+    - Prefix language modeling similar to causal LM
+    - Suitable for sequence-to-sequence and generative tasks
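+
+ To make this concrete, here is an illustrative (not official) sketch of what an R-denoising training pair might look like, assuming T5-style `<extra_id_N>` sentinel tokens and the `[NLU]` paradigm token described in the next subsection; the exact sentinel formatting depends on the tokenizer and pretraining configuration:
+
+ ```python
+ # Hypothetical R-denoising (span corruption) pair -- for illustration only.
+ original = "Insulin regulates glucose uptake in muscle and adipose tissue."
+
+ # Input: paradigm token + text with masked spans replaced by sentinel tokens.
+ inputs  = "[NLU] Insulin regulates <extra_id_0> in muscle and <extra_id_1> tissue."
+
+ # Target: the masked spans, each preceded by its sentinel token.
+ targets = "<extra_id_0> glucose uptake <extra_id_1> adipose <extra_id_2>"
+ ```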
+
+ ### Paradigm Tokens (Mode Switching)
+
+ During pretraining, a **paradigm token** is inserted at the beginning of each input:
+
+ | Token | Mode | Recommended Use |
+ |-------|------|-----------------|
+ | `[NLU]` | R-denoising | Classification, QA, retrieval |
+ | `[NLG]` | X-denoising | Mixed understanding & generation |
+ | `[S2S]` | S-denoising | Generative / causal tasks |
+
+ **Important:**
+ For best performance, the same token should be **prepended during fine-tuning and inference**.
+
+ ---
+
+ ## Architecture
+
+ - Encoder–decoder Transformer (T5-style)
+ - Uses the **T5-efficient architecture**
+ - Compatible with Hugging Face `T5ForConditionalGeneration` (see the loading sketch below)
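+
+ As a minimal sketch, the checkpoints should load with the standard `transformers` API. The repository id and generation settings below are assumptions for illustration, not a verified quick-start; an un-fine-tuned checkpoint will only produce denoising-style completions:
+
+ ```python
+ from transformers import AutoTokenizer, T5ForConditionalGeneration
+
+ model_id = "Siddharth63/pubmedul2-tiny-nl6"  # assumed repo id; adjust to the checkpoint you use
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = T5ForConditionalGeneration.from_pretrained(model_id)
+
+ # Prepend a paradigm token, exactly as during pretraining.
+ text = "[NLU] Aspirin inhibits <extra_id_0> synthesis."
+ inputs = tokenizer(text, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```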
+
+ ---
+
+ ## Intended Uses
+
+ These models are intended to be **fine-tuned** for:
+
+ - Biomedical and clinical **text classification**
+ - **Question answering**
+ - **Summarization** of medical literature or clinical notes
+ - **Text generation** in medical contexts
+
+ ---
+
+ ## Limitations
+
+ - ❌ Not instruction-tuned
+ - ❌ No supervised training
+ - ❌ Not suitable for zero-shot use
+
+ These checkpoints are **self-supervised pretraining models only** and require task-specific fine-tuning.
+
+ ---
+
+ ## Fine-Tuning Recommendations
+
+ - **Avoid mixed precision** (fp16 / bf16) initially
+ - Fine-tuning is more stable in **fp32**
+ - Always prepend one of `[NLU]`, `[NLG]`, or `[S2S]` to input texts
+ - Suggested defaults (see the sketch after this list):
+   - Classification / QA → `[NLU]`
+   - Causal or generative tasks → `[S2S]`
+   - Mixed tasks → `[NLG]`
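+
+ A minimal sketch of a fine-tuning setup following these recommendations; the repo id, column names, and hyperparameters are placeholders, not tested values:
+
+ ```python
+ # Sketch: fp32 fine-tuning with a paradigm-token prefix (illustrative only).
+ from transformers import (AutoTokenizer, T5ForConditionalGeneration,
+                           Seq2SeqTrainingArguments)
+
+ model_id = "Siddharth63/pubmedul2-tiny-nl6"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = T5ForConditionalGeneration.from_pretrained(model_id)
+
+ def preprocess(example):
+     # Prepend the paradigm token recommended for classification/QA-style tasks.
+     model_inputs = tokenizer("[NLU] " + example["text"], truncation=True, max_length=512)
+     labels = tokenizer(example["label_text"], truncation=True, max_length=32)
+     model_inputs["labels"] = labels["input_ids"]
+     return model_inputs
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="ft-out",
+     learning_rate=1e-4,
+     num_train_epochs=3,
+     per_device_train_batch_size=8,
+     fp16=False,   # keep full fp32 precision, as recommended above
+     bf16=False,
+ )
+ # Dataset mapping with `preprocess` and the Seq2SeqTrainer construction are omitted here.
+ ```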
+
+ ---
+
+ ## Model Parameter Summary
+
+ | Model Name | Parameter Count | Description | Access |
+ |-----------|----------------|-------------|--------|
+ | `pubmedul2-tiny-nl6` | **19.26M** | Tiny UL2-style model with 6 layers | Open |
+ | `pubmedul2-mini-nl8` | **50.12M** | Mini UL2 with 8 layers | Open |
+ | `pubmedul2-small` | **60.52M** | Small UL2 variant | Open |
+ | `pubmedul2-small-nl24` | **192.73M** | Small UL2 with 24 layers | Open |
+ | `medul2-base` | **222.93M** | Base UL2/T5-style model | Open |
+ | `pubmedul2-base` | **222.93M** | Base UL2/T5-style model | Open |
+ | `medul2-base-nl36` | **619.44M** | Base UL2 with 36 layers | Gated commercial |
+ | `pubmedul2-base-nl36` | **619.44M** | Base UL2 with 36 layers | Gated commercial |
+ | `medul2-large` | **737.72M** | Large UL2/T5-style model | Gated non-commercial |
+ | `pubmedul2-large` | **737.72M** | Large UL2/T5-style model | Gated non-commercial |
+ | `medul2-large-nl36` | **1090.14M** | Very large UL2 with 36 layers | Access on Request |
+
124
+ ---
125
+
126
+ ## Named Entity Recognition (NER) Evaluation
127
+
128
+ We evaluate PubMedUL2 and MedUL2 models on a biomedical **Named Entity Recognition (NER)** task using multiple matching criteria to better capture boundary-level performance.
129
+
130
+ The evaluation reports **entity-level F1 scores** across different biomedical entity types and model sizes.
131
+
132
+ ### Exact Match F1
133
+
134
+ An entity prediction is considered correct only if both the **entity span and label exactly match** the gold annotation.
135
+
136
+ | entity_type | medul2-base | pubmedul2-base | pubmedul2-mini-nl8 | pubmedul2-small | pubmedul2-tiny-nl6 |
137
+ |:--------------|--------------:|-----------------:|---------------------:|------------------:|---------------------:|
138
+ | cell_line | 0.42 | 0.43 | 0.44 | 0.43 | 0.35 |
139
+ | cell_type | 0.59 | 0.58 | 0.59 | 0.58 | 0.52 |
140
+ | chemical | 0.76 | 0.75 | 0.72 | 0.72 | 0.56 |
141
+ | disease | 0.7 | 0.73 | 0.7 | 0.68 | 0.63 |
142
+ | dna | 0.59 | 0.55 | 0.54 | 0.55 | 0.45 |
143
+ | gene | 0.62 | 0.59 | 0.6 | 0.59 | 0.55 |
144
+ | protein | 0.59 | 0.58 | 0.58 | 0.59 | 0.55 |
145
+ | rna | 0.6 | 0.56 | 0.55 | 0.6 | 0.56 |
146
+ | species | 0.66 | 0.67 | 0.58 | 0.63 | 0.54 |
147
+
148
+ ---
149
+
150
+ ### Partial Match F1
151
+
152
+ A prediction is counted as correct if it **partially overlaps** with a gold entity of the same type.
153
+
154
+ | entity_type | medul2-base | pubmedul2-base | pubmedul2-mini-nl8 | pubmedul2-small | pubmedul2-tiny-nl6 |
155
+ |:--------------|--------------:|-----------------:|---------------------:|------------------:|---------------------:|
156
+ | cell_line | 0.48 | 0.49 | 0.48 | 0.48 | 0.41 |
157
+ | cell_type | 0.66 | 0.64 | 0.66 | 0.65 | 0.59 |
158
+ | chemical | 0.79 | 0.78 | 0.76 | 0.75 | 0.6 |
159
+ | disease | 0.82 | 0.84 | 0.8 | 0.79 | 0.74 |
160
+ | dna | 0.65 | 0.61 | 0.6 | 0.61 | 0.53 |
161
+ | gene | 0.76 | 0.74 | 0.74 | 0.73 | 0.68 |
162
+ | protein | 0.66 | 0.66 | 0.66 | 0.67 | 0.64 |
163
+ | rna | 0.68 | 0.63 | 0.64 | 0.66 | 0.65 |
164
+ | species | 0.68 | 0.7 | 0.61 | 0.65 | 0.56 |
+
+ ---
+
+ ### IoU Match F1
+
+ Predictions are evaluated using **Intersection-over-Union (IoU)** overlap between predicted and gold spans, providing a softer boundary-based metric.
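+
+ As a rough sketch, and assuming token-index spans and an IoU threshold of 0.5 (neither of which is stated here), the matching criterion can be thought of as follows:
+
+ ```python
+ # Illustrative IoU span-matching criterion (the threshold is an assumption, not the official setting).
+ def spans_match(pred, gold, threshold=0.5):
+     """pred/gold are (start, end) token spans, end exclusive."""
+     inter = max(0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
+     union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
+     return union > 0 and inter / union >= threshold
+
+ # Example: predicted span (3, 6) vs gold span (4, 7) -> IoU = 2/4 = 0.5 -> counts as a match.
+ print(spans_match((3, 6), (4, 7)))  # True
+ ```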
+
+ | entity_type | medul2-base | pubmedul2-base | pubmedul2-mini-nl8 | pubmedul2-small | pubmedul2-tiny-nl6 |
+ |:--------------|--------------:|-----------------:|---------------------:|------------------:|---------------------:|
+ | cell_line | 0.50 | 0.50 | 0.50 | 0.50 | 0.42 |
+ | cell_type | 0.67 | 0.66 | 0.68 | 0.67 | 0.62 |
+ | chemical | 0.83 | 0.83 | 0.82 | 0.82 | 0.72 |
+ | disease | 0.85 | 0.86 | 0.86 | 0.85 | 0.82 |
+ | dna | 0.65 | 0.62 | 0.62 | 0.62 | 0.55 |
+ | gene | 0.76 | 0.75 | 0.75 | 0.74 | 0.71 |
+ | protein | 0.67 | 0.66 | 0.67 | 0.67 | 0.66 |
+ | rna | 0.68 | 0.65 | 0.66 | 0.67 | 0.67 |
+ | species | 0.72 | 0.74 | 0.65 | 0.69 | 0.58 |
+
+ ---
+
+ ### Observations
+
+ - **MedUL2** models generally outperform PubMedUL2 on clinical-heavy entity types such as *disease* and *chemical*
+ - Performance improves consistently from **tiny → base** models
+ - Boundary-sensitive metrics (Partial / IoU) show significantly higher scores than Exact Match, highlighting boundary ambiguity in biomedical NER
+
+ ---

  ## Acknowledgements
+
+ This project would not have been possible without compute generously provided by the **Google TPU Research Cloud**.
+
+ Thanks to:
+ - The **Finnish-NLP** authors for releasing the UL2 objective code, task definitions, and guidance
+ - **Yeb Havinga** for help getting started with the **t5x** framework
+
+ ---
+
+ ## License
+
+ Please refer to the individual model repositories for **license and access details**, which may vary depending on training data sources.