ptran74 committed on
Commit
d008c8c
1 Parent(s): 7d6ea93

Update README.md

Files changed (1): README.md +71 -6
README.md CHANGED
@@ -11,28 +11,93 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset.
- ## Model description
-
- More information needed
- More information needed
- More information needed
 
+ # Important Note:
+ I created the `combined` metric (55% F1 score + 45% exact match score) to select the best checkpoint. Here is the relevant setting in `TrainingArguments`:
+ ```
+ load_best_model_at_end=True,
+ metric_for_best_model='combined',
+ greater_is_better=True,
+ ```
+
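The weighted metric is simple to reproduce; a minimal sketch in plain Python (the `combined_metric` helper and its wiring are assumptions for illustration, not the actual training code):

```python
# Minimal sketch of the "combined" metric: 55% F1 + 45% exact match.
# `combined_metric` is a hypothetical helper, not the original training code.
def combined_metric(f1: float, exact: float) -> float:
    """Weighted score used with `metric_for_best_model='combined'`."""
    return 0.55 * f1 + 0.45 * exact

# With the evaluation results reported for this model:
score = combined_metric(f1=73.4039, exact=66.3117)
print(round(score, 4))  # 70.2124
```

This reproduces the reported `Combined: 70.2124` from the F1 and exact scores.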
  # DSPFirst-Finetuning-5

+ This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on a Questions and Answers dataset generated from the DSPFirst textbook in the SQuAD 2.0 format.<br />
  It achieves the following results on the evaluation set:
  - Loss: 0.8529
  - Exact: 66.3117
  - F1: 73.4039
  - Combined: 70.2124

+ ## More accurate metrics:
+
+ ### Before fine-tuning:
+
+ ```
+ 'HasAns_exact': 54.71817606079797,
+ 'HasAns_f1': 61.08672724332754,
+ 'HasAns_total': 1579,
+ 'NoAns_exact': 88.78048780487805,
+ 'NoAns_f1': 88.78048780487805,
+ 'NoAns_total': 205,
+ 'best_exact': 58.63228699551569,
+ 'best_exact_thresh': 0.0,
+ 'best_f1': 64.26902596256402,
+ 'best_f1_thresh': 0.0,
+ 'exact': 58.63228699551569,
+ 'f1': 64.26902596256404,
+ 'total': 1784
+ ```
+
+ ### After fine-tuning:
+
+ ```
+ 'HasAns_exact': 67.57441418619379,
+ 'HasAns_f1': 75.92137683558988,
+ 'HasAns_total': 1579,
+ 'NoAns_exact': 63.41463414634146,
+ 'NoAns_f1': 63.41463414634146,
+ 'NoAns_total': 205,
+ 'best_exact': 67.0964125560538,
+ 'best_exact_thresh': 0.0,
+ 'best_f1': 74.48422310728503,
+ 'best_f1_thresh': 0.0,
+ 'exact': 67.0964125560538,
+ 'f1': 74.48422310728503,
+ 'total': 1784
+ ```
+
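As a consistency check, the overall `exact` score is the size-weighted average of the `HasAns` and `NoAns` sub-scores; a quick sketch (the `overall` helper is illustrative, not the SQuAD evaluation script):

```python
# Overall score = size-weighted average of answerable and unanswerable subsets.
# `overall` is a hypothetical helper for illustration.
def overall(has_ans_score: float, has_ans_total: int,
            no_ans_score: float, no_ans_total: int) -> float:
    total = has_ans_total + no_ans_total
    return (has_ans_score * has_ans_total + no_ans_score * no_ans_total) / total

# Reproduce the after-fine-tuning 'exact' score from its subset scores:
exact_after = overall(67.57441418619379, 1579, 63.41463414634146, 205)
print(round(exact_after, 4))  # 67.0964
```

The same arithmetic on the before-fine-tuning numbers recovers 58.6323, matching the reported `exact` values.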
+ # Dataset
+ A visualization of the dataset can be found [here](https://github.gatech.edu/pages/VIP-ITS/textbook_SQuAD_explore/explore/textbookv1.0/textbook/).<br />
+ The split between train and test is 70% and 30%, respectively.
+ ```
+ DatasetDict({
+     train: Dataset({
+         features: ['id', 'title', 'context', 'question', 'answers'],
+         num_rows: 4160
+     })
+     test: Dataset({
+         features: ['id', 'title', 'context', 'question', 'answers'],
+         num_rows: 1784
+     })
+ })
+ ```
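The split ratio can be sanity-checked from the `num_rows` values above (plain arithmetic, not the actual preprocessing code):

```python
# Verify the reported 70/30 train/test split from the row counts above.
train_rows, test_rows = 4160, 1784
total = train_rows + test_rows  # 5944 questions overall
train_frac = train_rows / total
test_frac = test_rows / total
print(f"train: {train_frac:.1%}, test: {test_frac:.1%}")  # train: 70.0%, test: 30.0%
```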
 
85
  ## Intended uses & limitations

+ This model is fine-tuned to answer questions from the DSPFirst textbook. It has not been thoroughly validated, so review its answers before relying on them.<br />
+ Also, the dataset could be improved either by using a **better question-and-answer generation model** (currently using https://github.com/patil-suraj/question_generation) or by performing **data augmentation** to increase the dataset size.
 
90
  ## Training and evaluation data

+ - A `batch_size` of 6 uses 14.03 GB of VRAM
+ - Uses `gradient_accumulation_steps` to raise the effective batch size to 514 (it should be at least 256)
+ - 4.52 GB of RAM
+ - 30% of the total questions are reserved for evaluation
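The gradient-accumulation arithmetic behind those bullets can be sketched as follows; the step count here is inferred from the stated 256 minimum, not taken from the actual notebook:

```python
import math

# Effective batch size = per-device batch size x gradient accumulation steps.
# per_device_batch = 6 comes from the README; the step count is an inference.
per_device_batch = 6
min_total_batch = 256  # README: total batch size should be at least 256

accum_steps = math.ceil(min_total_batch / per_device_batch)
effective_batch = per_device_batch * accum_steps
print(accum_steps, effective_batch)  # 43 258
```

Gradient accumulation sums gradients over several small forward/backward passes before each optimizer step, so the optimizer sees a large batch while VRAM only holds a small one.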
 
97
  ## Training procedure
+ - The model was trained in [Google Colab](https://colab.research.google.com/drive/1dJXNstk2NSenwzdtl9xA8AqjP4LL-Ks_?usp=sharing)
+ - Training on a Tesla P100 16 GB took 6.3 hours
+ - `load_best_model_at_end` is enabled in `TrainingArguments`
 
  ### Training hyperparameters