---
license: artistic-2.0
---

# Bioul2-tiny-nl6

Pretrained T5 model on a biological dataset using a UL2 (Mixture-of-Denoisers) objective. The T5 model was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer). The UL2 objective was introduced in [this paper](https://arxiv.org/abs/2205.05131).

Note: The Hugging Face inference widget is deactivated because this model needs text-to-text fine-tuning on a specific downstream task to be useful in practice.
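
As a minimal sketch of what that fine-tuning setup could look like (the repo id below is a placeholder, not necessarily this model's actual name), the checkpoint loads with the standard `transformers` text-to-text classes:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Placeholder repo id; substitute the actual Hugging Face id of this model.
checkpoint = "user/bioul2-tiny-nl6"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# One supervised text-to-text pair; a real run would loop over a whole dataset.
inputs = tokenizer("classify abstract: We measured enzyme kinetics ...", return_tensors="pt")
labels = tokenizer("biochemistry", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()  # then step an optimizer as usual
```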

## Model description

T5 is an encoder-decoder model that treats all NLP problems in a text-to-text format.

BioT5 is a transformers model pretrained on a very large corpus of biological data (25 million abstracts) in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and outputs from those texts.
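
For illustration, here is what one such automatically generated pair can look like in the sentinel-token denoising format used by T5-style models (the sentence is invented, and the exact corruption spans vary across the UL2 denoisers):

```python
# Invented example of a self-supervised denoising pair: spans of the raw text
# are replaced by sentinel tokens in the input, and the target reconstructs them.
corrupted_input = "Insulin <extra_id_0> glucose uptake in <extra_id_1> tissue."
denoising_target = "<extra_id_0> stimulates <extra_id_1> muscle"
```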

This model used the T5 v1.1 improvements over the original T5 model during pretraining:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU
- Dropout was turned off in pretraining (quality win); dropout should be re-enabled during fine-tuning
- Pretrained on the self-supervised objective only, without mixing in the downstream tasks
- No parameter sharing between the embedding and classifier layer

This model was only pretrained in a self-supervised way, excluding any supervised training.

Note: For fine-tuning, you can most likely get better results by inserting a prefix token of [NLU], [NLG], or [S2S] into your input texts. For general language understanding fine-tuning tasks, use the [NLU] token. For GPT-style causal language generation, use the [S2S] token. The [NLG] token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it may also be worth trying for language generation fine-tuning.
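
As a quick sketch of that convention (again with a placeholder repo id), the mode token is simply prepended to the input text before tokenization:

```python
from transformers import AutoTokenizer

checkpoint = "user/bioul2-tiny-nl6"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Prepend the UL2 mode token that matches the downstream task: [NLU] for
# language understanding, [S2S] for causal generation, and possibly [NLG]
# for generation tasks as well.
text = "[NLU] Aspirin irreversibly inhibits cyclooxygenase enzymes."
batch = tokenizer(text, return_tensors="pt")
```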

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective and the associated task definitions, and for their guidance. Thanks to [Yeb Havinga](https://huggingface.co/yhavinga) for helping me get started with the t5x framework.