---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 0ec236299b317d76c0f94de06fc85471
  results: []
---

# 0ec236299b317d76c0f94de06fc85471

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the contemmcm/cls_mmlu dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8746
- Data Size: 1.0
- Epoch Runtime: 78.5633
- Accuracy: 0.2779
- F1 Macro: 0.2241

## Model description

This checkpoint is [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) fine-tuned on the contemmcm/cls_mmlu dataset. The reported metrics (accuracy and macro F1) indicate a classification objective, presumably selecting answers for MMLU-style multiple-choice questions.

## Intended uses & limitations

No intended-use statement was provided by the trainer. Given the dataset and metrics, the checkpoint appears intended for classifying MMLU-style multiple-choice questions rather than for open-ended generation. Note that the final accuracy (0.2779) is only slightly above the 25% chance level of a four-option multiple-choice task, so validate the model carefully before any downstream use. The Llama 3.2 license of the base model applies.
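
A minimal loading sketch, assuming the checkpoint was saved with a sequence-classification head (the repository id below is the bare model name from this card; prepend the actual namespace):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder: prepend the owning namespace, e.g. "<user>/0ec236299b317d76c0f94de06fc85471".
repo_id = "0ec236299b317d76c0f94de06fc85471"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Classify one MMLU-style question (label meanings depend on the dataset's encoding).
text = "Which planet is known as the Red Planet? (A) Venus (B) Mars (C) Jupiter (D) Saturn"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```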

## Training and evaluation data

Training and evaluation used the contemmcm/cls_mmlu dataset; no further details were provided by the trainer. The Data Size column in the results table shows that training started on a small fraction of the data (~0.78%) and roughly doubled that fraction each epoch until the full dataset was reached at epoch 8, which suggests a data-scaling curriculum. A sketch of that apparent schedule follows.
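
The schedule below is read off the Data Size column (an assumption, not confirmed by the trainer):

```python
def data_fraction(epoch: int) -> float:
    """Assumed fraction of the training set used at a given epoch:
    start at 1/128 and double each epoch, capping at the full dataset."""
    return min(1.0, 2.0 ** (epoch - 8))

# Epochs 1-10 -> 0.0078, 0.0156, 0.0312, 0.0625, 0.125, 0.25, 0.5, 1.0, 1.0, 1.0
print([round(data_fraction(e), 4) for e in range(1, 11)])
```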

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
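
For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments` (a reconstruction under the standard Trainer API, not the trainer's actual script; the 4-GPU launch is assumed to come from `torchrun --nproc_per_node 4`):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",          # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,  # x 4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,   # x 4 GPUs -> total eval batch size 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```

Note that the results table below stops at epoch 10 even though `num_epochs` is 50, so training apparently ended early; the card does not say why.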

### Training results

| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------------:|:--------:|:--------:|
| No log        | 0     | 0    | 9.9364          | 0         | 2.9379        | 0.2420   | 0.1876   |
| No log        | 1     | 438  | 11.6798         | 0.0078    | 3.5674        | 0.2453   | 0.0998   |
| No log        | 2     | 876  | 6.0949          | 0.0156    | 5.5874        | 0.2374   | 0.1504   |
| No log        | 3     | 1314 | 5.8527          | 0.0312    | 8.3921        | 0.2487   | 0.0996   |
| No log        | 4     | 1752 | 5.9142          | 0.0625    | 11.6323       | 0.2354   | 0.1503   |
| 0.3881        | 5     | 2190 | 6.0489          | 0.125     | 17.1439       | 0.2620   | 0.1625   |
| 0.7521        | 6     | 2628 | 5.5870          | 0.25      | 26.2689       | 0.2666   | 0.1350   |
| 5.6417        | 7     | 3066 | 5.6066          | 0.5       | 44.1598       | 0.2460   | 0.0999   |
| 5.6381        | 8     | 3504 | 5.6027          | 1.0       | 81.3182       | 0.2527   | 0.1008   |
| 5.5548        | 9     | 3942 | 5.7473          | 1.0       | 80.5339       | 0.2533   | 0.1011   |
| 5.1021        | 10    | 4380 | 5.8746          | 1.0       | 78.5633       | 0.2779   | 0.2241   |
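
The Accuracy and F1 Macro columns are consistent with a standard `compute_metrics` hook; a minimal sketch using scikit-learn (an assumption about the implementation, which the card does not show):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Derive accuracy and macro-averaged F1 from Trainer predictions."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_score(labels, preds, average="macro"),
    }
```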


### Framework versions

- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.22.1