Update README.md
README.md CHANGED
@@ -46,7 +46,7 @@ model-index:
 
 ---
 
-# mt5-
+# mt5-small
 
 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an enhanced version of the Natural Questions dataset.
 It achieves the following results on the evaluation set:
@@ -83,6 +83,23 @@ It can give false negatives and false positives on occasion (see Training Results)
 
 More information needed
 
+## Usage
+```python
+from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+# Load model and tokenizer (mT5 is an encoder-decoder model, so it needs the Seq2SeqLM auto class, not CausalLM)
+model_name = "username/model-name"
+model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+# Generate an answer to a Natural Questions-style question
+input_text = "where is the eiffel tower located"
+input_ids = tokenizer.encode(input_text, return_tensors="pt")
+output = model.generate(input_ids, max_length=50)
+
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
+
 ## Training procedure
 
 ### Training hyperparameters
@@ -105,14 +122,15 @@ The following hyperparameters were used during training:
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | Meteor | True negatives | False negatives | Cosine Sim |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:------:|:--------------:|:---------------:|:----------:|
 | 2.5724 | 1.0 | 175 | 0.9876 | 18.7781 | 15.6002 | 18.22 | 18.2686 | 7.6676 | 7.7661 | 0.1628 | 72.8701 | 56.677 | 0.4003 |
-| 1.1469 | 1.99 | 350 | 0.8580 | 36.8209 | 31.2514 | 35.5008 | 35.5462 | 25.7137 | 12.0014 | 0.3311 | 62.8399 | 20.3934 | 0. |
+| 1.1469 | 1.99 | 350 | 0.8580 | 36.8209 | 31.2514 | 35.5008 | 35.5462 | 25.7137 | 12.0014 | 0.3311 | 62.8399 | 20.3934 | 0.66 |
 | 0.9468 | 2.99 | 525 | 0.7997 | 40.4128 | 34.716 | 39.0867 | 39.0972 | 29.3028 | 12.4287 | 0.3656 | 63.4441 | 15.295 | 0.7114 |
 | 0.8129 | 3.98 | 700 | 0.7733 | 42.6764 | 36.7266 | 41.2465 | 41.2833 | 32.0644 | 12.9002 | 0.3871 | 62.1752 | 11.413 | 0.7425 |
 | 0.7228 | 4.98 | 875 | 0.7483 | 42.9082 | 36.957 | 41.482 | 41.5233 | 32.4942 | 12.8866 | 0.3906 | 63.3233 | 11.5166 | 0.747 |
 | 0.6493 | 5.97 | 1050 | 0.7293 | 40.3205 | 34.9632 | 39.1111 | 39.1168 | 28.8249 | 11.6867 | 0.3674 | 73.8973 | 17.9865 | 0.7068 |
 | 0.5883 | 6.97 | 1225 | 0.7172 | 42.7342 | 37.0855 | 41.4069 | 41.424 | 32.1296 | 12.48 | 0.3887 | 70.0302 | 12.7847 | 0.7392 |
 | 0.5409 | 7.96 | 1400 | 0.7387 | 44.6657 | 38.8426 | 43.3276 | 43.3496 | 34.4773 | 12.9395 | 0.4084 | 66.3444 | 9.5238 | 0.7658 |
-| 0.5035 | 8.96 | 1575 | 0.7330 | 43.4925 | 38.0013 | 42.2697 | 42.2372 | 32.6131 | 12.2789 | 0.3979 | 72.6284 | 12.8364 | 0. |
+| 0.5035 | 8.96 | 1575 | 0.7330 | 43.4925 | 38.0013 | 42.2697 | 42.2372 | 32.6131 | 12.2789 | 0.3979 | 72.6284 | 12.8364 | 0.7…1 |
 | 0.4652 | 9.95 | 1750 | 0.7291 | 44.4366 | 38.8202 | 43.113 | 43.1423 | 34.1596 | 12.6724 | 0.4049 | 69.7281 | 10.4037 | 0.763 |
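
The results table reports ROUGE, BLEU, and METEOR alongside the validation loss. The card does not state how these scores were computed; below is a minimal sketch of how they might be reproduced with the Hugging Face `evaluate` library. The prediction and reference strings are placeholders, and note that `evaluate` returns ROUGE and BLEU on a 0-1 scale, so the table's values appear to be scaled by 100 while Meteor is left on the 0-1 scale.

```python
import evaluate

# Placeholder generated answers and gold answers; in practice these would
# come from running the model over the evaluation split.
predictions = ["paris is the capital of france"]
references = ["the capital of france is paris"]

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
meteor = evaluate.load("meteor")

scores = {
    **rouge.compute(predictions=predictions, references=references),
    **bleu.compute(predictions=predictions, references=[[r] for r in references]),
    **meteor.compute(predictions=predictions, references=references),
}
print(scores)  # keys include rouge1, rouge2, rougeL, rougeLsum, bleu, meteor
```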
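The "Cosine Sim" column presumably measures embedding similarity between generated and reference answers; the card does not name the embedding model behind its numbers. The sketch below uses `sentence-transformers` with `all-MiniLM-L6-v2` purely as an illustrative assumption.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical embedding model; the card does not say which one was used.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

prediction = "paris"                        # placeholder model output
reference = "Paris, the capital of France"  # placeholder gold answer

# Encode both strings and take the cosine similarity of their embeddings.
embeddings = embedder.encode([prediction, reference], convert_to_tensor=True)
cosine_sim = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Cosine Sim: {cosine_sim:.4f}")
```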
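The hyperparameter list referenced by the last hunk ("The following hyperparameters were used during training") falls outside the diff context. For orientation only, here is a generic skeleton of how an mt5-small fine-tune like this is typically wired up with `Seq2SeqTrainer`; every value below is a placeholder, not the card's actual setting.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy stand-in for the enhanced Natural Questions data; the card does not
# describe the actual preprocessing.
raw = Dataset.from_dict({
    "question": ["where is the eiffel tower located"],
    "answer": ["paris"],
})

def preprocess(batch):
    inputs = tokenizer(batch["question"], truncation=True)
    inputs["labels"] = tokenizer(text_target=batch["answer"], truncation=True)["input_ids"]
    return inputs

dataset = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-nq",      # placeholder name
    num_train_epochs=10,            # the results table runs to roughly epoch 10
    per_device_train_batch_size=8,  # placeholder, not the card's actual value
    learning_rate=5e-5,             # placeholder, not the card's actual value
    predict_with_generate=True,     # evaluation must call generate() for ROUGE/BLEU
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```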