ayshi committed
Commit 624f757 · 1 Parent(s): 93dd7c8

Training in progress epoch 0

Files changed (5):
  1. README.md +8 -18
  2. config.json +1 -1
  3. special_tokens_map.json +0 -7
  4. tf_model.h5 +1 -1
  5. tokenizer_config.json +1 -11
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  license: mit
- base_model: ayshi/basic_roberta
+ base_model: xlm-roberta-base
  tags:
  - generated_from_keras_callback
  model-index:
@@ -13,12 +13,12 @@ probably proofread and complete it, then remove this comment. -->

  # ayshi/basic_roberta

- This model is a fine-tuned version of [ayshi/basic_roberta](https://huggingface.co/ayshi/basic_roberta) on an unknown dataset.
+ This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.0661
- - Validation Loss: 0.8073
- - Train Accuracy: 0.8
- - Epoch: 10
+ - Train Loss: 1.3581
+ - Validation Loss: 1.1546
+ - Train Accuracy: 0.6667
+ - Epoch: 0

  ## Model description

@@ -37,24 +37,14 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 320, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  - training_precision: float32

  ### Training results

  | Train Loss | Validation Loss | Train Accuracy | Epoch |
  |:----------:|:---------------:|:--------------:|:-----:|
- | 0.5693 | 0.8997 | 0.7689 | 0 |
- | 0.4191 | 0.8351 | 0.7778 | 1 |
- | 0.3566 | 0.8170 | 0.7733 | 2 |
- | 0.2794 | 0.8220 | 0.8089 | 3 |
- | 0.2342 | 0.8998 | 0.7867 | 4 |
- | 0.1893 | 0.8396 | 0.8089 | 5 |
- | 0.1335 | 0.9013 | 0.7689 | 6 |
- | 0.1115 | 0.8215 | 0.8089 | 7 |
- | 0.0790 | 0.7676 | 0.8222 | 8 |
- | 0.0848 | 0.7698 | 0.8222 | 9 |
- | 0.0661 | 0.8073 | 0.8 | 10 |
+ | 1.3581 | 1.1546 | 0.6667 | 0 |


  ### Framework versions
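The optimizer entry above is just a serialized Keras config: Adam driving a linear (power=1.0) learning-rate decay from 2e-05 to 0.0 over the 320 steps of this one-epoch run. A minimal sketch of rebuilding it directly, with all values taken from the new README (variable names are illustrative):

```python
import tensorflow as tf

# Linear decay (power=1.0, no cycling) from 2e-05 to 0.0 over the
# 320 training steps recorded for this single-epoch run.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=320,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```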
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "ayshi/basic_roberta",
+ "_name_or_path": "xlm-roberta-base",
  "architectures": [
    "XLMRobertaForSequenceClassification"
  ],
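Only `_name_or_path` is repointed; the `XLMRobertaForSequenceClassification` architecture is unchanged, so the checkpoint should still load through the standard `transformers` TF auto classes. A sketch, assuming the repo id from the model card:

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo_id = "ayshi/basic_roberta"  # assumed: the repo this commit belongs to

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("An example sentence.", return_tensors="tf")
logits = model(**inputs).logits  # shape: (1, num_labels)
```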
special_tokens_map.json CHANGED
@@ -1,11 +1,4 @@
  {
- "additional_special_tokens": [
-   "<s>",
-   "<pad>",
-   "</s>",
-   "<unk>",
-   "<mask>"
- ],
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:df90eeb472c747dca6fbc1544c1bce75ad6bd220b4468095742ab300d55217e9
+ oid sha256:2b9dc23e8424daf17dc7653c7892a28c93ce9039bee5b0927e2868fc9bab3dbc
  size 1112482624
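Only the Git LFS pointer changes here: identical size, new content hash, i.e. retrained weights. A downloaded tf_model.h5 can be verified against the pointer's oid; a sketch, with the local path as an assumption:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash in 1 MiB chunks so the ~1.1 GB checkpoint never sits fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Should equal the oid recorded in the new LFS pointer.
expected = "2b9dc23e8424daf17dc7653c7892a28c93ce9039bee5b0927e2868fc9bab3dbc"
assert sha256_of("tf_model.h5") == expected  # path assumed
```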
tokenizer_config.json CHANGED
@@ -41,25 +41,15 @@
    "special": true
    }
  },
- "additional_special_tokens": [
-   "<s>",
-   "<pad>",
-   "</s>",
-   "<unk>",
-   "<mask>"
- ],
+ "additional_special_tokens": [],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": "<mask>",
- "max_length": 512,
  "model_max_length": 512,
  "pad_token": "<pad>",
  "sep_token": "</s>",
- "stride": 0,
  "tokenizer_class": "XLMRobertaTokenizer",
- "truncation_side": "right",
- "truncation_strategy": "longest_first",
  "unk_token": "<unk>"
  }