---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m030_tiny-stories_1024
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: roneneldan/TinyStories
      type: roneneldan/TinyStories
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6756425005551174
---


[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/t3jfpuq6)
# gpt2_m030_tiny-stories_1024

This model was trained on the [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2217
- Accuracy: 0.6756
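
A validation loss of 1.2217 corresponds to a perplexity of exp(1.2217) ≈ 3.39. As a quick-start illustration, here is a minimal generation sketch using `transformers`; the repository id is a placeholder inferred from the model name and the W&B project, not confirmed by this card.

```python
# Minimal generation sketch. The repo id is a placeholder guess based on
# the model name and W&B project; replace it with the actual Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "scads-nlp/gpt2_m030_tiny-stories_1024"  # assumed, not confirmed
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```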

## Model description

A GPT-2-architecture causal language model trained on TinyStories. Judging by the `_1024` suffix in the model name, the context length is likely 1024 tokens; other architectural details are not recorded in this card.

## Intended uses & limitations

The model is intended for generating short, simple stories in the style of TinyStories and for research on small language models. It is not suited to factual question answering, long-form writing, or any safety-critical use.

## Training and evaluation data

Training and evaluation both used the roneneldan/TinyStories dataset; the metrics above are computed on a held-out evaluation set. Preprocessing details are not recorded in this card.
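
For reference, the dataset can be loaded with the `datasets` library; the split layout follows the TinyStories Hub page rather than anything recorded in this card.

```python
from datasets import load_dataset

# TinyStories as published on the Hub; split names ("train"/"validation")
# are assumed from the dataset page.
dataset = load_dataset("roneneldan/TinyStories")
print(dataset)  # shows available splits and example counts
```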

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch below):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
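
For reproducibility, these settings map onto `TrainingArguments` roughly as sketched below, assuming single-device training; `output_dir` and any option not listed above are assumptions.

```python
from transformers import TrainingArguments

# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# output_dir is assumed; batch sizes are per device under a single-GPU assumption.
training_args = TrainingArguments(
    output_dir="gpt2_m030_tiny-stories_1024",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    adam_beta1=0.9,     # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```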

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.9308        | 0.0525 | 1000  | 2.4752          | 0.4408   |
| 1.9919        | 0.1050 | 2000  | 1.8136          | 0.5648   |
| 1.7406        | 0.1575 | 3000  | 1.6235          | 0.5984   |
| 1.6185        | 0.2101 | 4000  | 1.5258          | 0.6165   |
| 1.5461        | 0.2626 | 5000  | 1.4625          | 0.6282   |
| 1.4955        | 0.3151 | 6000  | 1.4170          | 0.6368   |
| 1.4553        | 0.3676 | 7000  | 1.3824          | 0.6433   |
| 1.4218        | 0.4201 | 8000  | 1.3532          | 0.6492   |
| 1.3986        | 0.4726 | 9000  | 1.3305          | 0.6537   |
| 1.3722        | 0.5252 | 10000 | 1.3100          | 0.6575   |
| 1.3573        | 0.5777 | 11000 | 1.2934          | 0.6608   |
| 1.3448        | 0.6302 | 12000 | 1.2785          | 0.6639   |
| 1.3291        | 0.6827 | 13000 | 1.2657          | 0.6665   |
| 1.3174        | 0.7352 | 14000 | 1.2551          | 0.6686   |
| 1.3052        | 0.7877 | 15000 | 1.2463          | 0.6704   |
| 1.2968        | 0.8402 | 16000 | 1.2366          | 0.6725   |
| 1.2856        | 0.8928 | 17000 | 1.2308          | 0.6735   |
| 1.2817        | 0.9453 | 18000 | 1.2249          | 0.6749   |
| 1.2814        | 0.9978 | 19000 | 1.2216          | 0.6757   |
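
Because the validation loss is cross-entropy in nats, the perplexity at any checkpoint is simply `exp(loss)`; a short sketch over a few rows of the table above:

```python
import math

# Validation losses copied from selected rows of the table above.
val_losses = {1000: 2.4752, 5000: 1.4625, 10000: 1.3100, 19000: 1.2216}
for step, loss in sorted(val_losses.items()):
    print(f"step {step:>5}: perplexity = {math.exp(loss):.2f}")
```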


### Framework versions

- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1