---
datasets:
- shay681/Precedents
language:
- he
base_model:
- google/mt5-small
pipeline_tag: text2text-generation
---
# Text2Text Precedents Finetuned Model
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [shay681/Precedents](https://huggingface.co/datasets/shay681/Precedents) dataset.
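A minimal usage sketch with the 🤗 Transformers `pipeline` API follows. The repository ID and the Hebrew example input are illustrative assumptions; only the `text2text-generation` task comes from the card metadata.

```python
from transformers import pipeline

# Hypothetical Hub ID for this model; replace with the actual repository ID.
model_id = "shay681/Text2Text_Precedents_Finetuned_Model"

# "text2text-generation" matches the pipeline_tag declared in the card metadata.
generator = pipeline("text2text-generation", model=model_id)

# Illustrative Hebrew input ("example verdict text"); the expected prompt
# format depends on how the training examples were constructed.
result = generator("טקסט פסק דין לדוגמה", max_length=128)
print(result[0]["generated_text"])
```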
## Training and evaluation data
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| Precedents | train | 473,204 |
| Precedents | validation | 118,302 |
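The split sizes above can be checked directly with the 🤗 Datasets library; a minimal sketch, assuming the split names match the table:

```python
from datasets import load_dataset

# Load the Precedents dataset from the Hugging Face Hub.
dataset = load_dataset("shay681/Precedents")

# Split names and sizes as reported in the table above.
print(dataset["train"].num_rows)       # expected: 473,204
print(dataset["validation"].num_rows)  # expected: 118,302
```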
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- evaluation_strategy: "epoch"
- learning_rate: 5e-5
- train_batch_size: 4
- eval_batch_size: 4
- num_train_epochs: 5
- weight_decay: 0.01
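For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments` in Transformers 4.17.0 is shown below. The output directory is hypothetical, and mapping the listed batch sizes to the per-device arguments is an assumption (the card does not state the device count).

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters copied from the list above; output_dir is a hypothetical name.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-precedents",
    evaluation_strategy="epoch",     # evaluate at the end of every epoch
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # assumed mapping of train_batch_size
    per_device_eval_batch_size=4,    # assumed mapping of eval_batch_size
    num_train_epochs=5,
    weight_decay=0.01,
)
```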
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
### Results
| Metric | Value |
| ------ | ----- |
| Accuracy | 0.075 |
| F1 | 0.024 |
## About Me
Created by Shay Doner.
This is my final project for the Intelligent Systems M.Sc. program at Afeka College in Tel Aviv.
For collaboration inquiries, please contact me at shay681@gmail.com.