---
datasets:
- shay681/Precedents
language:
- he
base_model:
- google/mt5-small
pipeline_tag: text2text-generation
---

# Text2Text Precedents Finetuned Model

This model is a fine-tuned version of google/mt5-small on the shay681/Precedents dataset.
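
The metadata above declares a `text2text-generation` pipeline, so inference would follow the standard Transformers pattern. A minimal sketch — the repository id below is a placeholder, since the card does not state the model's actual Hub path:

```python
from transformers import pipeline

# Placeholder repository id -- replace with this model's actual Hub path.
model_id = "shay681/text2text-precedents"

# pipeline_tag in the card metadata is text2text-generation.
generator = pipeline("text2text-generation", model=model_id)

# Hebrew input, since the card lists `he` as the model language.
result = generator("טקסט לדוגמה", max_length=64)
print(result[0]["generated_text"])
```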

## Training and evaluation data

| Dataset    | Split      | # Samples |
| ---------- | ---------- | --------- |
| Precedents | train      | 473,204   |
| Precedents | validation | 118,302   |

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- evaluation_strategy: "epoch"
- learning_rate: 5e-5
- train_batch_size: 4
- eval_batch_size: 4
- num_train_epochs: 5
- weight_decay: 0.01
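
For reference, here is how the hyperparameters above would map onto the Hugging Face Trainer API (`Seq2SeqTrainingArguments` in Transformers 4.17, the version listed below) — a sketch, not the author's actual training script; the batch-size keys are the per-device variants and `output_dir` is a placeholder:

```python
# Hyperparameters from the card, keyed by their Trainer-API argument names.
hyperparams = {
    "evaluation_strategy": "epoch",
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "num_train_epochs": 5,
    "weight_decay": 0.01,
}

# Usage (placeholder output_dir):
# from transformers import Seq2SeqTrainingArguments
# args = Seq2SeqTrainingArguments(output_dir="mt5-small-precedents", **hyperparams)
```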

### Framework versions

- Transformers 4.17.0
- PyTorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6

### Results

| Metric       | Value     |
| ------------ | --------- |
| **Accuracy** | **0.075** |
| **F1**       | **0.024** |
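
The card does not state how Accuracy and F1 were computed for generated text. One common convention for text2text outputs, sketched here purely as an illustration of that assumption, is exact-match accuracy over whole strings plus token-overlap F1 per example:

```python
from collections import Counter

def exact_match_accuracy(preds, refs):
    """Fraction of predictions that match their reference exactly."""
    assert len(preds) == len(refs)
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def token_f1(pred, ref):
    """F1 over whitespace tokens for one prediction/reference pair."""
    pred_tokens, ref_tokens = pred.split(), ref.split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

preds = ["פסק דין", "בית משפט"]
refs = ["פסק דין", "בית המשפט"]
print(exact_match_accuracy(preds, refs))  # 0.5
```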

### About Me

Created by Shay Doner.
This model is my final project for the Intelligent Systems M.Sc. program at Afeka College in Tel Aviv.
For cooperation inquiries, please contact: shay681@gmail.com