# gpt2-qa

A question-answering model obtained by fine-tuning a GPT-2 text-generation model on the Catalan dataset "projecte-aina/catalanqa".
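A minimal inference sketch with the `transformers` library. The repo id `danitamayo/gpt2-qa` and the Catalan prompt layout below are assumptions for illustration, not details documented by this card:

```python
def build_prompt(context: str, question: str) -> str:
    """Format a context/question pair into a single prompt.

    The layout (context, "Pregunta:", "Resposta:") is an assumed
    convention, not the documented training format.
    """
    return f"{context}\nPregunta: {question}\nResposta:"


def answer(context: str, question: str, model_id: str = "danitamayo/gpt2-qa") -> str:
    """Generate an answer with the fine-tuned checkpoint (hypothetical repo id)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred: heavy import

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(build_prompt(context, question), return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=32,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ).strip()
```

Because GPT-2 is a plain causal language model, answers are produced by continuation rather than span extraction, so trimming the echoed prompt before decoding is essential.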

Training log for the first epoch:

```
200it   [01:14, 2.29it/s]   Train: wpb=10, num_updates=200, accuracy=2.5, loss=0.97
500it   [02:57, 3.06it/s]   Train: wpb=10, num_updates=500, accuracy=3.1, loss=0.98
1000it  [05:47, 2.72it/s]   Train: wpb=10, num_updates=1000, accuracy=3.7, loss=0.91
2000it  [11:29, 3.32it/s]   Train: wpb=10, num_updates=2000, accuracy=3.7, loss=0.85
3000it  [16:48, 3.90it/s]   Train: wpb=10, num_updates=3000, accuracy=3.7, loss=0.82
4000it  [22:10, 3.06it/s]   Train: wpb=10, num_updates=4000, accuracy=3.9, loss=0.79
5000it  [27:24, 3.50it/s]   Train: wpb=10, num_updates=5000, accuracy=4.1, loss=0.77
6000it  [32:41, 2.19it/s]   Train: wpb=10, num_updates=6000, accuracy=4.5, loss=0.76
7000it  [37:56, 3.03it/s]   Train: wpb=10, num_updates=7000, accuracy=4.6, loss=0.75
8000it  [43:06, 3.73it/s]   Train: wpb=10, num_updates=8000, accuracy=4.8, loss=0.74
9000it  [48:28, 2.85it/s]   Train: wpb=10, num_updates=9000, accuracy=4.9, loss=0.73
10000it [53:43, 2.89it/s]   Train: wpb=10, num_updates=10000, accuracy=5.1, loss=0.73
11000it [59:09, 3.10it/s]   Train: wpb=10, num_updates=11000, accuracy=5.2, loss=0.73
12000it [1:04:37, 2.64it/s] Train: wpb=10, num_updates=12000, accuracy=5.3, loss=0.72
13000it [1:10:02, 2.66it/s] Train: wpb=10, num_updates=13000, accuracy=5.4, loss=0.72
14000it [1:15:15, 2.68it/s] Train: wpb=10, num_updates=14000, accuracy=5.4, loss=0.72
14150it [1:16:05, 3.10it/s] Train: wpb=9, num_updates=14150, accuracy=5.4, loss=0.72

| epoch 000 | train accuracy=5.4%, train loss=0.72
| epoch 000 | valid accuracy=7.6%, valid loss=0.69
```
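The per-update log lines above follow a fixed pattern, so they are easy to turn into structured records, e.g. for plotting a loss curve. The helper below is an illustration, not part of this repository:

```python
import re

# Matches the trailing "Train: wpb=…, num_updates=…, accuracy=…, loss=…" fields.
LOG_RE = re.compile(
    r"wpb=(?P<wpb>\d+), num_updates=(?P<updates>\d+), "
    r"accuracy=(?P<acc>[\d.]+), loss=(?P<loss>[\d.]+)"
)


def parse_log_line(line: str) -> dict:
    """Extract the numeric fields from one training-log line."""
    m = LOG_RE.search(line)
    if m is None:
        raise ValueError(f"unrecognized log line: {line!r}")
    return {
        "wpb": int(m["wpb"]),
        "num_updates": int(m["updates"]),
        "accuracy": float(m["acc"]),
        "loss": float(m["loss"]),
    }
```

Running it over the log yields a list of dicts that can be fed directly to a plotting library or a DataFrame constructor.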