ptran74 committed
Commit
f3ae04b
1 Parent(s): 75a3902

Update README.md

Files changed (1):
  1. README.md +69 -3
README.md CHANGED

# DSPFirst-Finetuning-5

This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on a Questions and Answers dataset generated from the DSPFirst textbook, following the SQuAD 2.0 format.

It achieves the following results on the evaluation set:
- Loss: 0.9496
- Exact: 64.0557
 
More information needed

## More accurate metrics

### Before fine-tuning

```
{'HasAns_exact': 53.09537088678193,
 'HasAns_f1': 58.61604504258551,
 'HasAns_total': 1793,
 'NoAns_exact': 86.11111111111111,
 'NoAns_f1': 86.11111111111111,
 'NoAns_total': 288,
 'best_exact': 57.66458433445459,
 'best_exact_thresh': 0.0,
 'best_f1': 62.42122477720136,
 'best_f1_thresh': 0.0,
 'exact': 57.66458433445459,
 'f1': 62.42122477720133,
 'total': 2081}
```

### After fine-tuning

```
{'HasAns_exact': 64.138315672058,
 'HasAns_f1': 71.25733612355444,
 'HasAns_total': 1793,
 'NoAns_exact': 63.19444444444444,
 'NoAns_f1': 63.19444444444444,
 'NoAns_total': 288,
 'best_exact': 63.95963479096588,
 'best_exact_thresh': 0.0,
 'best_f1': 70.09341838997268,
 'best_f1_thresh': 0.0,
 'exact': 64.00768861124459,
 'f1': 70.14147221025135,
 'total': 2081}
```
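
These fields follow the SQuAD 2.0 evaluation output (`exact`, `f1`, plus `HasAns`/`NoAns` breakdowns for answerable and unanswerable questions). As a minimal sketch, not the official evaluation script, this is roughly how per-answer exact match and token-level F1 are computed under SQuAD-style normalization:

```python
import collections
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-overlap F1 between the normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = collections.Counter(pred_tokens) & collections.Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# A partially correct span gets partial F1 credit:
print(exact_match("the Fourier transform", "Fourier transform"))            # 1.0
print(round(f1_score("discrete Fourier transform", "Fourier transform"), 2))  # 0.8
```

The corpus-level `exact` and `f1` numbers above are the averages of these per-answer scores (taking the max over gold answers for each question).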

# Dataset
A visualization of the dataset can be found [here](https://github.gatech.edu/pages/VIP-ITS/textbook_SQuAD_explore/explore/textbookv1.0/textbook/).<br />
The split between train and test is 65% and 35% respectively.
```
DatasetDict({
    train: Dataset({
        features: ['id', 'title', 'context', 'question', 'answers'],
        num_rows: 3863
    })
    test: Dataset({
        features: ['id', 'title', 'context', 'question', 'answers'],
        num_rows: 2081
    })
})
```
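
A split like this is typically produced with the `datasets` library's `Dataset.train_test_split(test_size=0.35)`. A dependency-free sketch of the same idea (the example records are hypothetical stand-ins for the SQuAD-format rows):

```python
import random

def train_test_split(rows, test_size=0.35, seed=42):
    """Shuffle a list of examples and split it into (train, test) partitions."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # deterministic shuffle for reproducibility
    n_test = round(len(rows) * test_size)
    return rows[n_test:], rows[:n_test]

# 5944 toy examples, matching the 3863 + 2081 total above
examples = [{"id": str(i), "question": f"q{i}"} for i in range(5944)]
train, test = train_test_split(examples)
print(len(train), len(test))  # 3864 2080 -- within rounding of the 3863/2081 split
```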

## Intended uses & limitations

This model is fine-tuned to answer questions from the DSPFirst textbook. It is an experimental fine-tune, so review its answers before relying on them.<br />
The dataset could also be improved, either by using a **better question-and-answer generation model** (it currently uses https://github.com/patil-suraj/question_generation) or by performing **data augmentation** to increase the dataset size.
 
## Training and evaluation data

- A `batch_size` of 6 uses 14.82 GB of VRAM
- `gradient_accumulation_steps` is used to bring the total batch size to 514 (the batch size should be at least 256)
- 4.52 GB of RAM
- 30% of the total questions are dedicated to evaluation
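
The effective batch size is the per-device batch size times the accumulation steps, so 514 above is presumably rounded (it is not an exact multiple of 6). As a sketch, a hypothetical helper for choosing the smallest step count that reaches the recommended minimum of 256:

```python
import math

def accumulation_steps(target_batch: int, per_device_batch: int) -> int:
    """Smallest gradient_accumulation_steps whose effective batch reaches the target."""
    return math.ceil(target_batch / per_device_batch)

per_device = 6                                 # fits in ~14.82 GB of VRAM per the notes above
steps = accumulation_steps(256, per_device)    # minimum recommended effective batch size
print(steps, per_device * steps)               # 43 258
```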
 
## Training procedure

- The model was trained in [Google Colab](https://colab.research.google.com/drive/1dJXNstk2NSenwzdtl9xA8AqjP4LL-Ks_?usp=sharing)
- Training ran on a Tesla P100 (16 GB) and took 6.3 hours
- `load_best_model_at_end` is enabled in `TrainingArguments`
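
A sketch of the relevant `transformers.TrainingArguments`; values not stated in this card are assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="DSPFirst-Finetuning-5",   # assumed output directory name
    num_train_epochs=10,
    per_device_train_batch_size=6,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # load_best_model_at_end requires matching
    save_strategy="epoch",        # evaluation and save strategies
    load_best_model_at_end=True,  # restore the checkpoint with the best eval metric
)
```

With `load_best_model_at_end=True`, the trainer reloads the best checkpoint seen during evaluation instead of keeping the weights from the final step.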
 
### Training hyperparameters

The following hyperparameters were used during training:
- lr_scheduler_type: linear
- num_epochs: 10
 
### Model hyperparameters

- hidden_dropout_prob: 0.36
- attention_probs_dropout_prob: 0.36
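
These dropout probabilities can be raised from the ELECTRA default of 0.1 by overriding the config before loading the model; a sketch, assuming the `AutoConfig`/`AutoModelForQuestionAnswering` route:

```python
from transformers import AutoConfig, AutoModelForQuestionAnswering

# Override both dropout probabilities to 0.36 (a stronger regularization
# setting than the 0.1 default) before instantiating the model.
config = AutoConfig.from_pretrained(
    "ahotrod/electra_large_discriminator_squad2_512",
    hidden_dropout_prob=0.36,
    attention_probs_dropout_prob=0.36,
)
model = AutoModelForQuestionAnswering.from_pretrained(
    "ahotrod/electra_large_discriminator_squad2_512", config=config
)
```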

### Training results

| Training Loss | Epoch | Step | Validation Loss | Exact | F1 | Combined |