ashabrawy committed
Commit b5889f3 · verified · 1 Parent(s): c443547

NLP702-bert-base-uncased_finetuning-distillation_hs768-nh32-nl12

Files changed (5)
  1. README.md +6 -6
  2. best/config.json +2 -2
  3. best/model.safetensors +2 -2
  4. config.json +2 -2
  5. model.safetensors +2 -2
README.md CHANGED
@@ -15,8 +15,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.4056
- - Accuracy: 0.8416
+ - Loss: 0.4298
+ - Accuracy: 0.8332
 
 ## Model description
 
@@ -50,10 +50,10 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 1.495 | 1.39 | 500 | 0.6461 | 0.7467 |
- | 0.437 | 2.78 | 1000 | 0.4789 | 0.8131 |
- | 0.2094 | 4.17 | 1500 | 0.4312 | 0.8337 |
- | 0.1147 | 5.56 | 2000 | 0.3926 | 0.8515 |
+ | 1.5966 | 1.39 | 500 | 0.7974 | 0.6719 |
+ | 0.5528 | 2.78 | 1000 | 0.5465 | 0.7900 |
+ | 0.2857 | 4.17 | 1500 | 0.4594 | 0.8264 |
+ | 0.1546 | 5.56 | 2000 | 0.4235 | 0.8401 |
 
 
 ### Framework versions
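Note that the README's headline metrics (Loss 0.4298, Accuracy 0.8332) come from the final evaluation, while the table's best validation loss is 0.4235 at step 2000 — which presumably corresponds to the `best/` checkpoint also updated in this commit. A minimal standalone sketch (the rows are copied from the updated table; the selection logic is illustrative, not part of this repo) of picking the best row:

```python
# Rows from the updated training table:
# (training_loss, epoch, step, validation_loss, accuracy)
rows = [
    (1.5966, 1.39, 500, 0.7974, 0.6719),
    (0.5528, 2.78, 1000, 0.5465, 0.7900),
    (0.2857, 4.17, 1500, 0.4594, 0.8264),
    (0.1546, 5.56, 2000, 0.4235, 0.8401),
]

# Select the logged checkpoint with the lowest validation loss.
best = min(rows, key=lambda r: r[3])
print(f"best: step={best[2]}, val_loss={best[3]}, accuracy={best[4]}")
```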
best/config.json CHANGED
@@ -136,8 +136,8 @@
 "layer_norm_eps": 1e-12,
 "max_position_embeddings": 512,
 "model_type": "bert",
- "num_attention_heads": 16,
- "num_hidden_layers": 8,
+ "num_attention_heads": 32,
+ "num_hidden_layers": 12,
 "pad_token_id": 0,
 "position_embedding_type": "absolute",
 "torch_dtype": "float32",
best/model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:3a7f3f5f6c30da89ae2f557f15b5d424e5f0a660d0490be1077e2cf04b60a0d8
- size 324723536
+ oid sha256:bc07f53d1945a88e44b1167465147924ab30a65231d714f855e4e5a74e19178d
+ size 438137056
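The model.safetensors entries shown in this diff are Git LFS pointer files, not the weights themselves: each pointer records the spec version, the blob's SHA-256, and its byte size. A minimal sketch (the pointer text is the new side of this diff; the parser is a standalone illustration) of reading one:

```python
# Git LFS pointer text as it appears on the new side of the diff.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:bc07f53d1945a88e44b1167465147924ab30a65231d714f855e4e5a74e19178d
size 438137056
"""

# Each pointer line is "key value"; split once per line into a dict.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())

oid_algo, oid_hex = fields["oid"].split(":", 1)  # algorithm name, hex digest
size_bytes = int(fields["size"])

print(oid_algo, oid_hex[:8], size_bytes)
```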
config.json CHANGED
@@ -136,8 +136,8 @@
 "layer_norm_eps": 1e-12,
 "max_position_embeddings": 512,
 "model_type": "bert",
- "num_attention_heads": 16,
- "num_hidden_layers": 8,
+ "num_attention_heads": 32,
+ "num_hidden_layers": 12,
 "pad_token_id": 0,
 "position_embedding_type": "absolute",
 "torch_dtype": "float32",
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:3a7f3f5f6c30da89ae2f557f15b5d424e5f0a660d0490be1077e2cf04b60a0d8
- size 324723536
+ oid sha256:bc07f53d1945a88e44b1167465147924ab30a65231d714f855e4e5a74e19178d
+ size 438137056
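The config change (12 layers and 32 heads at hidden size 768, matching the hs768-nh32-nl12 commit title) is consistent with the larger model.safetensors: in float32, file size is roughly 4 bytes per parameter. A back-of-the-envelope sketch, assuming standard BERT shapes and the bert-base-uncased vocabulary sizes (vocab_size=30522, type_vocab_size=2), which this config otherwise matches:

```python
# Rough BERT encoder parameter count from the config values in this diff.
# Assumed (not shown in the diff): vocab_size=30522, type_vocab_size=2.
hidden, layers, inter = 768, 12, 3072
vocab, max_pos, types = 30522, 512, 2

# Embeddings: word + position + token-type tables, plus one LayerNorm (weight+bias).
embeddings = (vocab + max_pos + types) * hidden + 2 * hidden

# Per encoder layer: Q/K/V/output projections (weight+bias), FFN up/down, two LayerNorms.
attention = 4 * (hidden * hidden + hidden)
ffn = (hidden * inter + inter) + (inter * hidden + hidden)
per_layer = attention + ffn + 2 * (2 * hidden)

# Pooler: one dense hidden->hidden layer.
pooler = hidden * hidden + hidden

params = embeddings + layers * per_layer + pooler
approx_bytes = params * 4  # float32

# New model.safetensors size from the diff; the remaining gap is the
# safetensors header plus the task head, neither of which is modeled here.
reported = 438137056
print(params, approx_bytes, reported)
```

The estimate lands within about 0.05% of the reported 438,137,056 bytes; the previous 324,723,536-byte file similarly matches the old 8-layer configuration.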