gjonesQ02 committed
Commit cdd4455 (verified)
Parent(s): a627678

End of training
README.md CHANGED
@@ -3,8 +3,6 @@ license: apache-2.0
 base_model: distilbert-base-uncased
 tags:
 - generated_from_trainer
-datasets:
-- squad
 model-index:
 - name: testLLMModels
   results: []
@@ -15,7 +13,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # testLLMModels
 
-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
+This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.7927
 
 ## Model description
 
@@ -40,18 +40,20 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 250  | 2.9686          |
+| No log        | 1.0   | 250  | 2.3645          |
+| 2.7423        | 2.0   | 500  | 1.8809          |
+| 2.7423        | 3.0   | 750  | 1.7927          |
 
 
 ### Framework versions
 
-- Transformers 4.36.1
-- Pytorch 2.1.0+cu118
-- Datasets 2.15.0
-- Tokenizers 0.15.0
+- Transformers 4.37.2
+- Pytorch 2.1.0+cu121
+- Datasets 2.16.1
+- Tokenizers 0.15.1
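The updated results table evaluates once per epoch (every 250 steps), and the README's reported evaluation loss is the final-epoch validation loss. A minimal sketch, using only the values from the table above, that checks this consistency:

```python
# Values copied from the updated README training-results table.
results = [
    {"epoch": 1.0, "step": 250, "val_loss": 2.3645},
    {"epoch": 2.0, "step": 500, "val_loss": 1.8809},
    {"epoch": 3.0, "step": 750, "val_loss": 1.7927},
]

# Evaluation runs once per epoch, so steps should be consecutive
# multiples of the first evaluation step.
steps_per_epoch = results[0]["step"]
assert all(
    row["step"] == steps_per_epoch * i
    for i, row in enumerate(results, start=1)
)

# The README's "Loss: 1.7927" should equal the last row of the table.
reported_eval_loss = results[-1]["val_loss"]
print(reported_eval_loss)  # 1.7927
```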
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "./testLLMModels",
+  "_name_or_path": "distilbert-base-uncased",
   "activation": "gelu",
   "architectures": [
     "DistilBertForQuestionAnswering"
@@ -19,6 +19,6 @@
   "sinusoidal_pos_embds": false,
   "tie_weights_": true,
   "torch_dtype": "float32",
-  "transformers_version": "4.36.1",
+  "transformers_version": "4.37.2",
   "vocab_size": 30522
 }
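Since config.json is plain JSON, the two fields touched by this commit can be sanity-checked without loading the model. A minimal sketch over a hypothetical excerpt containing only the fields visible in the diff above:

```python
import json

# Hypothetical excerpt of the updated config.json, limited to the
# fields shown in this commit's diff.
config_text = """
{
  "_name_or_path": "distilbert-base-uncased",
  "architectures": ["DistilBertForQuestionAnswering"],
  "torch_dtype": "float32",
  "transformers_version": "4.37.2",
  "vocab_size": 30522
}
"""

config = json.loads(config_text)
# The commit re-points the checkpoint at the base model and bumps
# the Transformers version used to save it.
assert config["_name_or_path"] == "distilbert-base-uncased"
assert config["transformers_version"] == "4.37.2"
```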
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2e44e3d195afc0cffc69d78325de05e74d5f2efb9e1a8a5257a15f3c1e12ac48
+oid sha256:3ee398de1eb2a3292232d24cc186cbd0c3a91f9239f131b3966fbb816ec0a04d
 size 265470032
runs/Jan30_17-54-17_075e53ae2159/events.out.tfevents.1706637257.075e53ae2159.3867.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f0d17dc097259c73710229bd276a6f49dcb6f17da0eb9e1814347d76a7a595b
+size 5548
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:743c2013c16b44c836448bf1f34f3efe9f525161ec47b82d49f12f9a298ba3dd
+oid sha256:919a16a65397f9f20e2d87ed0dff5c1566938f63abfbf776f4ee5a8c1c3370d7
 size 4664
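The binary files in this commit (model.safetensors, training_args.bin, the tfevents file) are stored as Git LFS pointer files in the "version / oid / size" format shown above. A minimal sketch of parsing such a spec-v1 pointer, using the updated training_args.bin pointer as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse the 'key value' lines of a Git LFS pointer file (spec v1)."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    # oid has the form "sha256:<hex digest>"; size is a decimal byte count.
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }


# Pointer contents copied from the updated training_args.bin above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:919a16a65397f9f20e2d87ed0dff5c1566938f63abfbf776f4ee5a8c1c3370d7
size 4664
"""

info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])  # sha256 4664
```

Note that the diffs above change only the oid (the content hash) while the size stays constant for training_args.bin and model.safetensors, which is what you would expect from retraining: same serialized structure, different weights.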