3v324v23 committed
Commit c9e9926
1 Parent(s): ab425d2

add model

Files changed (3)
  1. README.md +60 -0
  2. config.json +23 -0
  3. tf_model.h5 +3 -0
README.md ADDED
@@ -0,0 +1,60 @@
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: distilbertbaseuncasedz
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # distilbertbaseuncasedz
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
+ It achieves the following results at the end of training (epoch 3):
+ - Train Loss: 0.5368
+ - Train End Logits Accuracy: 0.8401
+ - Train Start Logits Accuracy: 0.8078
+ - Validation Loss: 1.2427
+ - Validation End Logits Accuracy: 0.7050
+ - Validation Start Logits Accuracy: 0.6725
+ - Epoch: 3
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
+ |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
+ | 1.3338 | 0.6448 | 0.6045 | 1.1322 | 0.6906 | 0.6563 | 0 |
+ | 0.9044 | 0.7466 | 0.7090 | 1.0996 | 0.7032 | 0.6720 | 1 |
+ | 0.6756 | 0.8042 | 0.7680 | 1.1416 | 0.7047 | 0.6718 | 2 |
+ | 0.5368 | 0.8401 | 0.8078 | 1.2427 | 0.7050 | 0.6725 | 3 |
+
+ ### Framework versions
+
+ - Transformers 4.20.1
+ - TensorFlow 2.6.4
+ - Datasets 2.1.0
+ - Tokenizers 0.12.1
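
The serialized optimizer entry in the training hyperparameters above maps directly onto standard Keras objects. A minimal sketch of how to rebuild it, assuming TensorFlow 2.6 as listed under framework versions; every value below is taken verbatim from the card:

```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay: 2e-5 down to 0.0
# over the 29508 training steps recorded in the card.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=29508,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the remaining serialized settings from the hyperparameters list.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```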
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "_name_or_path": "distilbert-base-uncased",
+   "activation": "gelu",
+   "architectures": [
+     "DistilBertForQuestionAnswering"
+   ],
+   "attention_dropout": 0.1,
+   "dim": 768,
+   "dropout": 0.1,
+   "hidden_dim": 3072,
+   "initializer_range": 0.02,
+   "max_position_embeddings": 512,
+   "model_type": "distilbert",
+   "n_heads": 12,
+   "n_layers": 6,
+   "pad_token_id": 0,
+   "qa_dropout": 0.1,
+   "seq_classif_dropout": 0.2,
+   "sinusoidal_pos_embds": false,
+   "tie_weights_": true,
+   "transformers_version": "4.20.1",
+   "vocab_size": 30522
+ }
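
The `DistilBertForQuestionAnswering` architecture in config.json is an extractive QA head: it predicts start and end logits over the context tokens, which is why the model card reports separate start/end logits accuracies. A minimal inference sketch under two stated assumptions: the repo id is inferred from the committer and model name and may differ, and the tokenizer is loaded from the base checkpoint because this commit adds no tokenizer files:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

repo_id = "3v324v23/distilbertbaseuncasedz"  # assumed repo id; adjust to the actual one

# This commit ships only config.json and tf_model.h5, so reuse the base tokenizer.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForQuestionAnswering.from_pretrained(repo_id)

question = "What is DistilBERT distilled from?"
context = "DistilBERT is a smaller, faster language model distilled from BERT."

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Take the most likely start/end positions and decode the answer span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```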
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73c2aae9e0bc60461a6717a02cef823abea71590cb61b7c30a060b0cd6f25310
+ size 265583592
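
tf_model.h5 is committed as a Git LFS pointer rather than the weights themselves: the three lines above record the LFS spec version, the SHA-256 of the real file, and its size (265583592 bytes, roughly 253 MiB), while the binary lives in LFS storage. A small standard-library sketch for checking a downloaded copy against this pointer:

```python
import hashlib

EXPECTED_OID = "73c2aae9e0bc60461a6717a02cef823abea71590cb61b7c30a060b0cd6f25310"
EXPECTED_SIZE = 265583592

def verify_lfs_file(path: str) -> bool:
    """Compare a file's sha256 and size against the values in the LFS pointer."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == EXPECTED_OID and size == EXPECTED_SIZE

print(verify_lfs_file("tf_model.h5"))
```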