selsar committed · Commit e698487 · verified · 1 Parent(s): db38dfc

Train target-abroad-de (best val threshold=0.05)

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: target-abroad-de
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # target-abroad-de
+
+ This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) on the None dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 41
+ - num_epochs: 3
+
+ ### Framework versions
+
+ - Transformers 5.0.0
+ - Pytorch 2.9.0+cu126
+ - Datasets 4.5.0
+ - Tokenizers 0.22.2
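
A minimal inference sketch for the checkpoint added in this commit. It assumes the files are published under the repo id `selsar/target-abroad-de` and that the head is a standard sequence-classification head; both are assumptions, not confirmed by this commit.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "selsar/target-abroad-de"  # hypothetical repo id, not stated in this commit

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "Beispieltext mit möglichem Auslandsbezug."  # illustrative German input
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map class probabilities back to the labels stored in the model config.
probs = torch.softmax(logits, dim=-1).squeeze(0)
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```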
test_predictions.csv ADDED
The diff for this file is too large to render. See the raw file in the repository.
 
threshold_selection.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "best_threshold": 0.05,
+   "validation_stats": {
+     "precision": 0.0,
+     "recall": 0.0,
+     "f1": 0.0
+   }
+ }
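
This file records the decision threshold (0.05) selected on the validation split, as referenced in the commit message. A short sketch of how such a cutoff would typically be applied to positive-class probabilities follows; the positive-class index (1) and the use of softmax probabilities are assumptions for illustration only.

```python
import json
import torch

# Load the stored cutoff (0.05 in this commit).
with open("threshold_selection.json") as f:
    best_threshold = json.load(f)["best_threshold"]

def predict_positive(logits: torch.Tensor, threshold: float = best_threshold) -> torch.Tensor:
    """Return a boolean mask of examples whose positive-class probability meets the threshold."""
    probs = torch.softmax(logits, dim=-1)[:, 1]  # assumed positive class at index 1
    return probs >= threshold
```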
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6157112cd4c487f0cf0eafa49bf8395fe7199fdd12983d59380d1056800fa946
+ size 16014723
tokenizer_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "add_prefix_space": true,
+   "backend": "tokenizers",
+   "bos_token": "[CLS]",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "eos_token": "[SEP]",
+   "is_local": false,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "sp_model_kwargs": {},
+   "split_by_punct": false,
+   "tokenizer_class": "DebertaV2Tokenizer",
+   "unk_id": 3,
+   "unk_token": "[UNK]",
+   "vocab_type": "spm"
+ }
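
The config pins `DebertaV2Tokenizer` with a SentencePiece vocabulary and a 512-token limit. A small sketch of loading it from a local clone and checking that the special tokens and length limit match the keys above; the local path is illustrative.

```python
from transformers import AutoTokenizer

# Load from a local checkout of this repo (path is an assumption for the example).
tok = AutoTokenizer.from_pretrained("./target-abroad-de")

print(type(tok).__name__)    # DebertaV2TokenizerFast or DebertaV2Tokenizer, per "tokenizer_class"
print(tok.model_max_length)  # 512, from "model_max_length"
print(tok.cls_token, tok.sep_token, tok.pad_token, tok.mask_token)  # [CLS] [SEP] [PAD] [MASK]
```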