am5uc committed on
Commit 710351f · verified · 1 Parent(s): b62bc7a

Update README.md

Files changed (1): README.md +1 -1

README.md CHANGED
@@ -41,7 +41,7 @@ These were the arguments/hyperparameters, I used. I tried using higher epochs, b
 ```
 
 ## Evaluation
-I had three benchmarks, the WikiTableQuestions dataset, the TabFact dataset, and the Synthetic Validation set. Fine-tuning did not harm the results of on the WTQ Validation Set and the TabFact Dataset, in which I got accuracies of .3405 and .5005, respectively for both the pre-trained and fine-tuned model. There were improvements in the validation and test results after training though. On Validation, there was a jump from 0.4000 to 0.4222. On the test set, there was quite a larger jump in accuracy from 0.2033 to 0.4667 after fine-tuning.
+I used three benchmarks: the WikiTableQuestions (WTQ) dataset, the TabFact dataset, and SQA. Fine-tuning did not harm the results on the WTQ validation set or the TabFact dataset, where both the pre-trained and fine-tuned models reached accuracies of 0.3405 and 0.5005, respectively, and it slightly improved the results on the SQA dataset. On the test set of the synthetic dataset, there was a large jump in accuracy after fine-tuning, from 0.2033 to 0.4667.
 
 | Model | Test Set of Synthetic Dataset | Benchmark 1 (WTQ Validation Set) | Benchmark 2 (TabFact) | Benchmark 3 (SQA) |
 |------------------------------------------------------|-------------------------------|------------------------------------------|-----------------------|-------------------|
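
The accuracy figures in the changed paragraph are plain fractions of correctly answered examples. As a hedged illustration (this is not the author's actual evaluation code, and the normalization details are an assumption), exact-match accuracy over a set of predictions could be sketched as:

```python
# Hypothetical sketch, not the evaluation script behind the numbers above:
# accuracy = (number of predictions matching the reference) / (total examples).
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match their reference answer.

    Matching here is case-insensitive after stripping whitespace; the
    real benchmarks may normalize answers differently.
    """
    assert len(predictions) == len(references)
    correct = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return correct / len(references)

# Toy example: 2 of 3 answers match, so accuracy is 2/3.
preds = ["Paris", "42", "blue"]
refs = ["paris", "42", "red"]
print(round(exact_match_accuracy(preds, refs), 4))  # 0.6667
```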