Update README.md
# checkpoints

- This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the `vblagoje/lfqa` dataset, trained for 2 epochs to allow a (_somewhat_) apples-to-apples comparison with [t5-base](https://huggingface.co/pszemraj/t5-base-askscience) fine-tuned on the standard eli5 dataset.
- This checkpoint does seem to produce more coherent text than t5-base trained on the original dataset.
- Compared to [bart on lfqa](https://huggingface.co/vblagoje/bart_lfqa), it seems able to answer some questions on its own, independently of a retrieval pipeline.
## Model description

More information needed

## Intended uses & limitations

- Q&A, information retrieval
- It is probably better to use this model with a [retrieval pipeline](https://github.com/deepset-ai/haystack) than on its own.
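As a minimal sketch of standalone (retrieval-free) usage with `transformers`: this card does not state the checkpoint's Hub id, so `model_dir` below is a placeholder for the repository id or a local path; the generation settings are illustrative, not the card's recommendation.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


def answer_question(model_dir: str, question: str, max_length: int = 256) -> str:
    """Generate a long-form answer with the fine-tuned T5 checkpoint.

    model_dir is a placeholder: pass this repository's Hub id or a local
    checkpoint path (the actual id is not stated in this card).
    """
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    output_ids = model.generate(
        **inputs,
        max_length=max_length,
        no_repeat_ngram_size=3,  # curb verbatim repetition in long answers
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Called as, e.g., `answer_question("<hub-id-or-path>", "why does the sky appear blue?")`.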
## Training and evaluation data