Dataset schema (column names, types, and value ranges as shown in the preview):

| Column | Type | Range |
|:--|:--|:--|
| modelId | string | length 4–111 |
| lastModified | string | length 24–24 |
| tags | list | |
| pipeline_tag | string | length 5–30 |
| author | string | length 2–34 |
| config | null | |
| securityStatus | null | |
| id | string | length 4–111 |
| likes | int64 | 0–9.53k |
| downloads | int64 | 2–73.6M |
| library_name | string | length 2–84 |
| created | timestamp[us] | |
| card | string | length 101–901k |
| card_len | int64 | 101–901k |
| embeddings | list | |

The rows below are individual records from this preview; each `card` value is a model card flattened onto a single line, and each `embeddings` value is truncated by the viewer.
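As a minimal sketch of how a dump with this schema could be loaded and inspected with the `datasets` library (the dataset id below is a placeholder, not the actual source of this preview):

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute wherever this models-metadata dump is hosted.
ds = load_dataset("your-org/hf-models-metadata", split="train")

row = ds[0]  # each row follows the schema above
print(row["modelId"], row["pipeline_tag"], row["downloads"])
print(row["card"][:120])           # start of the flattened model-card text
print(len(row["embeddings"][0]))   # dimensionality of the card embedding
```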
modelId: YakovElm/Jira5Classic_512
lastModified: 2023-05-28T19:57:41.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/Jira5Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T19:57:07
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira5Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira5Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4916 - Train Accuracy: 0.7775 - Validation Loss: 0.7670 - Validation Accuracy: 0.4953 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5471 | 0.7597 | 0.8190 | 0.4858 | 0 | | 0.4951 | 0.7566 | 0.7401 | 0.5237 | 1 | | 0.4916 | 0.7775 | 0.7670 | 0.4953 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,776
embeddings:
[ [ -0.04144287109375, -0.039154052734375, 0.0209503173828125, -0.00028896331787109375, -0.0343017578125, -0.02752685546875, -0.01629638671875, -0.026214599609375, 0.01470947265625, 0.01128387451171875, -0.05194091796875, -0.048919677734375, -0.05047607421875, -...
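Each record pairs a `library_name` with a `pipeline_tag`, which is enough to load the model. A sketch for the record above, assuming the checkpoint is still hosted on the Hub (it is tagged `tf`, so TensorFlow must be installed for the weights to load; `pipeline` picks the framework automatically):

```python
from transformers import pipeline

# The record above: a BERT fine-tune tagged text-classification.
clf = pipeline("text-classification", model="YakovElm/Jira5Classic_512")
print(clf("Build fails after upgrading the Jira plugin."))
```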
modelId: tobro/distilbert-base-uncased-finetuned-emotion
lastModified: 2023-05-28T20:39:06.000Z
tags:
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: tobro
config: null
securityStatus: null
id: tobro/distilbert-base-uncased-finetuned-emotion
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T20:16:55
card:
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9240544367354029 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8285 | 1.0 | 250 | 0.3046 | 0.905 | 0.9021 | | 0.2469 | 2.0 | 500 | 0.2170 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,846
embeddings:
[ [ -0.038482666015625, -0.041839599609375, 0.01511383056640625, 0.0220794677734375, -0.0264434814453125, -0.019317626953125, -0.013275146484375, -0.0089569091796875, 0.0104827880859375, 0.00858306884765625, -0.056610107421875, -0.051849365234375, -0.059417724609375...
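The `embeddings` column holds one vector per model card, so cards can be compared directly. A small numpy sketch of cosine similarity between the first two records, assuming `ds` was loaded as in the earlier sketch (the full vectors, not the truncated preview values):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two card-embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.asarray(ds[0]["embeddings"][0])
b = np.asarray(ds[1]["embeddings"][0])
print(cosine_similarity(a, b))
```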
modelId: YakovElm/IntelDAOS10Classic_512
lastModified: 2023-05-28T20:29:54.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/IntelDAOS10Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T20:29:21
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS10Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS10Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2776 - Train Accuracy: 0.9200 - Validation Loss: 0.3822 - Validation Accuracy: 0.8739 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3159 | 0.8920 | 0.4005 | 0.8739 | 0 | | 0.2834 | 0.9200 | 0.3910 | 0.8739 | 1 | | 0.2776 | 0.9200 | 0.3822 | 0.8739 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,788
embeddings:
[ [ -0.044464111328125, -0.040496826171875, 0.0210113525390625, 0.0008249282836914062, -0.032867431640625, -0.02789306640625, -0.018768310546875, -0.0276641845703125, 0.01404571533203125, 0.01036834716796875, -0.052886962890625, -0.048065185546875, -0.05154418945312...
modelId: YakovElm/MariaDB10Classic_512
lastModified: 2023-05-28T20:50:36.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/MariaDB10Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T20:49:40
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB10Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB10Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2262 - Train Accuracy: 0.9180 - Validation Loss: 0.1954 - Validation Accuracy: 0.9523 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3315 | 0.8787 | 0.1847 | 0.9523 | 0 | | 0.2425 | 0.9163 | 0.1867 | 0.9523 | 1 | | 0.2262 | 0.9180 | 0.1954 | 0.9523 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,784
embeddings:
[ [ -0.04290771484375, -0.042633056640625, 0.021759033203125, 0.0036373138427734375, -0.034515380859375, -0.030364990234375, -0.0149993896484375, -0.0253143310546875, 0.016632080078125, 0.013702392578125, -0.0545654296875, -0.0479736328125, -0.052490234375, -0.0...
modelId: YakovElm/IntelDAOS15Classic_512
lastModified: 2023-05-28T21:46:54.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/IntelDAOS15Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T21:46:21
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS15Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS15Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1928 - Train Accuracy: 0.9460 - Validation Loss: 0.3544 - Validation Accuracy: 0.8859 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2386 | 0.9410 | 0.3525 | 0.8859 | 0 | | 0.2091 | 0.9460 | 0.3540 | 0.8859 | 1 | | 0.1928 | 0.9460 | 0.3544 | 0.8859 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,788
embeddings:
[ [ -0.044464111328125, -0.042327880859375, 0.020660400390625, 0.0022735595703125, -0.034881591796875, -0.028167724609375, -0.0184326171875, -0.0264892578125, 0.014495849609375, 0.01070404052734375, -0.0540771484375, -0.0482177734375, -0.0518798828125, -0.024963...
modelId: YakovElm/Jira10Classic_512
lastModified: 2023-05-28T22:07:50.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/Jira10Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T22:07:15
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira10Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira10Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3196 - Train Accuracy: 0.8730 - Validation Loss: 0.7679 - Validation Accuracy: 0.6278 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5012 | 0.7880 | 0.6635 | 0.6215 | 0 | | 0.4297 | 0.8174 | 0.6604 | 0.6562 | 1 | | 0.3196 | 0.8730 | 0.7679 | 0.6278 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,778
embeddings:
[ [ -0.040924072265625, -0.04150390625, 0.0205841064453125, 0.0007038116455078125, -0.03302001953125, -0.0290985107421875, -0.0169525146484375, -0.0254974365234375, 0.015716552734375, 0.01230621337890625, -0.050750732421875, -0.04766845703125, -0.0509033203125, ...
modelId: YakovElm/MariaDB15Classic_512
lastModified: 2023-05-28T22:18:49.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/MariaDB15Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T22:18:15
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB15Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB15Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2040 - Train Accuracy: 0.9347 - Validation Loss: 0.1593 - Validation Accuracy: 0.9598 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2696 | 0.9172 | 0.1661 | 0.9598 | 0 | | 0.2249 | 0.9297 | 0.1700 | 0.9598 | 1 | | 0.2040 | 0.9347 | 0.1593 | 0.9598 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,784
embeddings:
[ [ -0.04364013671875, -0.042266845703125, 0.021270751953125, 0.003101348876953125, -0.03387451171875, -0.0310821533203125, -0.016510009765625, -0.0265350341796875, 0.01485443115234375, 0.01413726806640625, -0.05474853515625, -0.04840087890625, -0.0518798828125, ...
modelId: YakovElm/Qt5Classic_512
lastModified: 2023-05-28T22:23:51.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/Qt5Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T22:23:04
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt5Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt5Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2969 - Train Accuracy: 0.8951 - Validation Loss: 0.2446 - Validation Accuracy: 0.9294 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3406 | 0.8918 | 0.2640 | 0.9294 | 0 | | 0.3195 | 0.8940 | 0.2617 | 0.9294 | 1 | | 0.2969 | 0.8951 | 0.2446 | 0.9294 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,772
embeddings:
[ [ -0.040496826171875, -0.034881591796875, 0.022705078125, 0.0007238388061523438, -0.034271240234375, -0.0253143310546875, -0.011199951171875, -0.023468017578125, 0.00717926025390625, 0.01215362548828125, -0.053802490234375, -0.049468994140625, -0.049560546875, ...
modelId: YakovElm/IntelDAOS20Classic_512
lastModified: 2023-05-28T23:03:42.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/IntelDAOS20Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T23:03:09
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS20Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS20Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1413 - Train Accuracy: 0.9610 - Validation Loss: 0.3492 - Validation Accuracy: 0.9099 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2049 | 0.9540 | 0.3516 | 0.9099 | 0 | | 0.1533 | 0.9610 | 0.3182 | 0.9099 | 1 | | 0.1413 | 0.9610 | 0.3492 | 0.9099 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,788
embeddings:
[ [ -0.04498291015625, -0.040802001953125, 0.02130126953125, 0.0011682510375976562, -0.032501220703125, -0.02813720703125, -0.018280029296875, -0.0283660888671875, 0.01464080810546875, 0.011077880859375, -0.054901123046875, -0.04840087890625, -0.051666259765625, ...
modelId: YakovElm/MariaDB20Classic_512
lastModified: 2023-05-28T23:49:01.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/MariaDB20Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-28T23:48:27
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB20Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB20Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2062 - Train Accuracy: 0.9347 - Validation Loss: 0.1332 - Validation Accuracy: 0.9698 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2920 | 0.9054 | 0.1423 | 0.9698 | 0 | | 0.2138 | 0.9356 | 0.1391 | 0.9698 | 1 | | 0.2062 | 0.9347 | 0.1332 | 0.9698 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,784
embeddings:
[ [ -0.043548583984375, -0.043121337890625, 0.0213165283203125, 0.0035266876220703125, -0.0335693359375, -0.030853271484375, -0.0165252685546875, -0.0267486572265625, 0.0152587890625, 0.0141448974609375, -0.05560302734375, -0.049591064453125, -0.0517578125, -0.0...
modelId: YakovElm/Jira15Classic_512
lastModified: 2023-05-29T00:33:18.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/Jira15Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T00:32:35
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira15Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira15Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4218 - Train Accuracy: 0.8048 - Validation Loss: 0.8710 - Validation Accuracy: 0.5773 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5326 | 0.7681 | 0.7742 | 0.5205 | 0 | | 0.4878 | 0.7870 | 0.7395 | 0.5205 | 1 | | 0.4218 | 0.8048 | 0.8710 | 0.5773 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,778
embeddings:
[ [ -0.041748046875, -0.042633056640625, 0.019989013671875, 0.001056671142578125, -0.034698486328125, -0.02923583984375, -0.017486572265625, -0.025482177734375, 0.01531219482421875, 0.01255035400390625, -0.05169677734375, -0.0479736328125, -0.051300048828125, -0...
modelId: YakovElm/Qt10Classic_512
lastModified: 2023-05-29T02:45:42.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/Qt10Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T02:40:35
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt10Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt10Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2304 - Train Accuracy: 0.9200 - Validation Loss: 0.2101 - Validation Accuracy: 0.9416 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2779 | 0.9191 | 0.2090 | 0.9416 | 0 | | 0.2541 | 0.9210 | 0.2225 | 0.9416 | 1 | | 0.2304 | 0.9200 | 0.2101 | 0.9416 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,774
embeddings:
[ [ -0.04052734375, -0.03631591796875, 0.0232391357421875, 0.0021343231201171875, -0.033660888671875, -0.02532958984375, -0.01177978515625, -0.021636962890625, 0.0091705322265625, 0.012054443359375, -0.053497314453125, -0.0478515625, -0.049163818359375, -0.02633...
modelId: YakovElm/Jira20Classic_512
lastModified: 2023-05-29T03:02:53.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/Jira20Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T02:58:12
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira20Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira20Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2224 - Train Accuracy: 0.9182 - Validation Loss: 0.2787 - Validation Accuracy: 0.9306 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3831 | 0.8678 | 0.2569 | 0.9338 | 0 | | 0.2901 | 0.8793 | 0.2538 | 0.9338 | 1 | | 0.2224 | 0.9182 | 0.2787 | 0.9306 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,778
embeddings:
[ [ -0.0408935546875, -0.041290283203125, 0.0206451416015625, 0.0018434524536132812, -0.032745361328125, -0.02801513671875, -0.01715087890625, -0.025177001953125, 0.01464080810546875, 0.01265716552734375, -0.05224609375, -0.048248291015625, -0.0501708984375, -0....
modelId: cesullivan99/sms-spam-weighted
lastModified: 2023-05-29T07:07:49.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: cesullivan99
config: null
securityStatus: null
id: cesullivan99/sms-spam-weighted
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T04:15:10
card:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: sms-spam-weighted results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sms-spam-weighted This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2336 - Accuracy: 0.989 - F1: 0.9575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0009 | 1.0 | 125 | 0.1323 | 0.987 | 0.9494 | | 0.0034 | 2.0 | 250 | 0.1401 | 0.988 | 0.9531 | | 0.0001 | 3.0 | 375 | 0.2087 | 0.991 | 0.9647 | | 0.0001 | 4.0 | 500 | 0.2121 | 0.988 | 0.9538 | | 0.0001 | 5.0 | 625 | 0.2129 | 0.988 | 0.9538 | | 0.0 | 6.0 | 750 | 0.2242 | 0.99 | 0.9612 | | 0.0 | 7.0 | 875 | 0.2285 | 0.989 | 0.9575 | | 0.0 | 8.0 | 1000 | 0.2314 | 0.989 | 0.9575 | | 0.0 | 9.0 | 1125 | 0.2330 | 0.989 | 0.9575 | | 0.0 | 10.0 | 1250 | 0.2336 | 0.989 | 0.9575 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,008
embeddings:
[ [ -0.03460693359375, -0.03515625, 0.00971221923828125, 0.0178070068359375, -0.0152587890625, -0.0263519287109375, -0.00641632080078125, -0.0079803466796875, 0.0229644775390625, 0.024200439453125, -0.055755615234375, -0.0506591796875, -0.057861328125, -0.016708...
modelId: gokuls/hBERTv1_no_pretrain_cola
lastModified: 2023-05-29T04:31:54.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: gokuls
config: null
securityStatus: null
id: gokuls/hBERTv1_no_pretrain_cola
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T04:18:06
card:
--- language: - en tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation - accuracy model-index: - name: hBERTv1_no_pretrain_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 - name: Accuracy type: accuracy value: 0.6912751793861389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv1_no_pretrain_cola This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6184 - Matthews Correlation: 0.0 - Accuracy: 0.6913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:| | 0.8952 | 1.0 | 67 | 0.6664 | 0.0 | 0.6913 | | 0.6234 | 2.0 | 134 | 0.6184 | 0.0 | 0.6913 | | 0.6127 | 3.0 | 201 | 0.6197 | 0.0 | 0.6913 | | 0.6115 | 4.0 | 268 | 0.6209 | 0.0 | 0.6913 | | 0.6096 | 5.0 | 335 | 0.6237 | 0.0 | 0.6913 | | 0.6104 | 6.0 | 402 | 0.6209 | 0.0 | 0.6913 | | 0.6123 | 7.0 | 469 | 0.6185 | 0.0 | 0.6913 | ### Framework versions - Transformers 4.29.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,384
embeddings:
[ [ -0.026824951171875, -0.043853759765625, 0.004024505615234375, 0.0176544189453125, -0.01538848876953125, -0.005733489990234375, 0.0011692047119140625, -0.006488800048828125, 0.036468505859375, 0.0096282958984375, -0.057037353515625, -0.041168212890625, -0.0589294...
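Note that this record, like the other `no_pretrain` CoLA runs below, reports a Matthews correlation of exactly 0.0 alongside accuracy 0.6913: the signature of a classifier that collapses to always predicting the majority class. A quick scikit-learn check of why those two numbers co-occur (the 691/309 split is an approximation of the CoLA validation label balance):

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# CoLA validation is roughly 69% "acceptable"; a constant majority-class
# predictor reproduces the reported pair (accuracy ~0.691, MCC = 0.0).
y_true = [1] * 691 + [0] * 309
y_pred = [1] * 1000
print(accuracy_score(y_true, y_pred))     # 0.691
print(matthews_corrcoef(y_true, y_pred))  # 0.0
```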
modelId: gokuls/hBERTv2_new_no_pretrain_cola
lastModified: 2023-06-14T13:14:18.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: gokuls
config: null
securityStatus: null
id: gokuls/hBERTv2_new_no_pretrain_cola
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T04:29:32
card:
--- language: - en tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation - accuracy model-index: - name: hBERTv2_new_no_pretrain_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 - name: Accuracy type: accuracy value: 0.6912751793861389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv2_new_no_pretrain_cola This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6181 - Matthews Correlation: 0.0 - Accuracy: 0.6913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:| | 0.6421 | 1.0 | 67 | 0.6186 | 0.0 | 0.6913 | | 0.6181 | 2.0 | 134 | 0.6403 | 0.0 | 0.6913 | | 0.6176 | 3.0 | 201 | 0.6252 | 0.0 | 0.6913 | | 0.6185 | 4.0 | 268 | 0.6313 | 0.0 | 0.6913 | | 0.6163 | 5.0 | 335 | 0.6181 | 0.0 | 0.6913 | | 0.6118 | 6.0 | 402 | 0.6182 | 0.0 | 0.6913 | | 0.6516 | 7.0 | 469 | 0.6316 | 0.0 | 0.6913 | | 0.6363 | 8.0 | 536 | 0.6240 | 0.0 | 0.6913 | | 0.6235 | 9.0 | 603 | 0.6310 | 0.0 | 0.6913 | | 0.6152 | 10.0 | 670 | 0.6441 | 0.0 | 0.6913 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,607
embeddings:
[ [ -0.027008056640625, -0.04217529296875, 0.005451202392578125, 0.01552581787109375, -0.0116729736328125, -0.005462646484375, 0.0009822845458984375, -0.005123138427734375, 0.03265380859375, 0.00995635986328125, -0.052490234375, -0.040252685546875, -0.0582275390625,...
modelId: gokuls/sa_BERT_no_pretrain_cola
lastModified: 2023-05-29T04:50:30.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: gokuls
config: null
securityStatus: null
id: gokuls/sa_BERT_no_pretrain_cola
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T04:37:10
card:
--- language: - en tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation - accuracy model-index: - name: sa_BERT_no_pretrain_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 - name: Accuracy type: accuracy value: 0.6912751793861389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sa_BERT_no_pretrain_cola This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6180 - Matthews Correlation: 0.0 - Accuracy: 0.6913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:| | 0.8826 | 1.0 | 67 | 0.6624 | 0.0 | 0.6913 | | 0.616 | 2.0 | 134 | 0.6358 | 0.0 | 0.6913 | | 0.6134 | 3.0 | 201 | 0.6195 | 0.0 | 0.6913 | | 0.6139 | 4.0 | 268 | 0.6285 | 0.0 | 0.6913 | | 0.6117 | 5.0 | 335 | 0.6180 | 0.0 | 0.6913 | | 0.6099 | 6.0 | 402 | 0.6183 | 0.0 | 0.6913 | | 0.6113 | 7.0 | 469 | 0.6232 | 0.0 | 0.6913 | | 0.6135 | 8.0 | 536 | 0.6182 | 0.0 | 0.6913 | | 0.6094 | 9.0 | 603 | 0.6221 | 0.0 | 0.6913 | | 0.6096 | 10.0 | 670 | 0.6310 | 0.0 | 0.6913 | ### Framework versions - Transformers 4.29.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,639
embeddings:
[ [ -0.03167724609375, -0.044219970703125, 0.0042724609375, 0.01486968994140625, -0.01389312744140625, -0.00823211669921875, -0.0007786750793457031, -0.01132965087890625, 0.039642333984375, 0.00907135009765625, -0.056976318359375, -0.03955078125, -0.055755615234375,...
modelId: gokuls/sa_BERT_no_pretrain_mrpc
lastModified: 2023-06-14T16:04:48.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: gokuls
config: null
securityStatus: null
id: gokuls/sa_BERT_no_pretrain_mrpc
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T04:50:48
card:
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: sa_BERT_no_pretrain_mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.6813725490196079 - name: F1 type: f1 value: 0.7781569965870307 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sa_BERT_no_pretrain_mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6003 - Accuracy: 0.6814 - F1: 0.7782 - Combined Score: 0.7298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6845 | 1.0 | 39 | 0.6307 | 0.6838 | 0.8122 | 0.7480 | | 0.6398 | 2.0 | 78 | 0.6313 | 0.6838 | 0.8122 | 0.7480 | | 0.6384 | 3.0 | 117 | 0.6247 | 0.6838 | 0.8122 | 0.7480 | | 0.6428 | 4.0 | 156 | 0.6467 | 0.6667 | 0.7806 | 0.7237 | | 0.6021 | 5.0 | 195 | 0.6003 | 0.6814 | 0.7782 | 0.7298 | | 0.5125 | 6.0 | 234 | 0.6875 | 0.6863 | 0.7874 | 0.7368 | | 0.3735 | 7.0 | 273 | 0.8672 | 0.6422 | 0.7355 | 0.6888 | | 0.2662 | 8.0 | 312 | 0.9928 | 0.6765 | 0.7857 | 0.7311 | | 0.2247 | 9.0 | 351 | 0.9605 | 0.6789 | 0.7798 | 0.7294 | | 0.1655 | 10.0 | 390 | 1.0684 | 0.6275 | 0.7206 | 0.6740 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,658
embeddings:
[ [ -0.04644775390625, -0.034576416015625, 0.005889892578125, 0.007793426513671875, -0.0183258056640625, -0.0175628662109375, -0.006389617919921875, -0.013153076171875, 0.027862548828125, 0.018157958984375, -0.059722900390625, -0.04364013671875, -0.050811767578125, ...
modelId: gokuls/add_BERT_no_pretrain_cola
lastModified: 2023-06-14T12:46:24.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: gokuls
config: null
securityStatus: null
id: gokuls/add_BERT_no_pretrain_cola
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T04:55:14
card:
--- language: - en tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation - accuracy model-index: - name: add_BERT_no_pretrain_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 - name: Accuracy type: accuracy value: 0.6912751793861389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # add_BERT_no_pretrain_cola This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6181 - Matthews Correlation: 0.0 - Accuracy: 0.6913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:| | 0.6339 | 1.0 | 67 | 0.6182 | 0.0 | 0.6913 | | 0.6177 | 2.0 | 134 | 0.6421 | 0.0 | 0.6913 | | 0.6204 | 3.0 | 201 | 0.6295 | 0.0 | 0.6913 | | 0.6182 | 4.0 | 268 | 0.6268 | 0.0 | 0.6913 | | 0.6149 | 5.0 | 335 | 0.6181 | 0.0 | 0.6913 | | 0.612 | 6.0 | 402 | 0.6189 | 0.0 | 0.6913 | | 0.6132 | 7.0 | 469 | 0.6292 | 0.0 | 0.6913 | | 0.6125 | 8.0 | 536 | 0.6185 | 0.0 | 0.6913 | | 0.6108 | 9.0 | 603 | 0.6280 | 0.0 | 0.6913 | | 0.6092 | 10.0 | 670 | 0.6310 | 0.0 | 0.6913 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,601
embeddings:
[ [ -0.0305328369140625, -0.042755126953125, 0.00800323486328125, 0.014495849609375, -0.01114654541015625, -0.0095367431640625, -0.0007734298706054688, -0.0101470947265625, 0.035980224609375, 0.007320404052734375, -0.055572509765625, -0.038665771484375, -0.055847167...
modelId: gokuls/sa_BERT_no_pretrain_qnli
lastModified: 2023-06-14T18:49:41.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: gokuls
config: null
securityStatus: null
id: gokuls/sa_BERT_no_pretrain_qnli
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T05:01:29
card:
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: sa_BERT_no_pretrain_qnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue config: qnli split: validation args: qnli metrics: - name: Accuracy type: accuracy value: 0.6058941973274757 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sa_BERT_no_pretrain_qnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6547 - Accuracy: 0.6059 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6847 | 1.0 | 1092 | 0.6580 | 0.6068 | | 0.6491 | 2.0 | 2184 | 0.6547 | 0.6059 | | 0.6223 | 3.0 | 3276 | 0.6778 | 0.6021 | | 0.5814 | 4.0 | 4368 | 0.7237 | 0.5843 | | 0.5176 | 5.0 | 5460 | 0.7387 | 0.5757 | | 0.4447 | 6.0 | 6552 | 0.8224 | 0.5733 | | 0.3761 | 7.0 | 7644 | 0.9915 | 0.5598 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,055
embeddings:
[ [ -0.029266357421875, -0.0290069580078125, 0.007595062255859375, 0.007686614990234375, -0.02435302734375, -0.0238800048828125, -0.00525665283203125, -0.013427734375, 0.0185699462890625, 0.01322174072265625, -0.0638427734375, -0.04254150390625, -0.042236328125, ...
modelId: YakovElm/Qt15Classic_512
lastModified: 2023-05-29T06:41:14.000Z
tags:
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: YakovElm
config: null
securityStatus: null
id: YakovElm/Qt15Classic_512
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T06:40:38
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt15Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt15Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2050 - Train Accuracy: 0.9367 - Validation Loss: 0.2113 - Validation Accuracy: 0.9505 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2401 | 0.9365 | 0.1849 | 0.9505 | 0 | | 0.2232 | 0.9367 | 0.1818 | 0.9505 | 1 | | 0.2050 | 0.9367 | 0.2113 | 0.9505 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,774
embeddings:
[ [ -0.041351318359375, -0.038818359375, 0.02154541015625, 0.00391387939453125, -0.035888671875, -0.0275115966796875, -0.01354217529296875, -0.0222320556640625, 0.01038360595703125, 0.0124053955078125, -0.053863525390625, -0.04888916015625, -0.050445556640625, -...
modelId: xavidejuan/dqn-SpaceInvadersNoFrameskip-v4
lastModified: 2023-05-29T07:39:43.000Z
tags:
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
pipeline_tag: reinforcement-learning
author: xavidejuan
config: null
securityStatus: null
id: xavidejuan/dqn-SpaceInvadersNoFrameskip-v4
likes: 0
downloads: 2
library_name: stable-baselines3
created: 2023-05-29T07:39:22
card:
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 614.00 +/- 187.23 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xavidejuan -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xavidejuan -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga xavidejuan ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
card_len: 2,698
embeddings:
[ [ -0.04107666015625, -0.032745361328125, 0.0204925537109375, 0.02203369140625, -0.0101470947265625, -0.015899658203125, 0.01123809814453125, -0.01407623291015625, 0.01447296142578125, 0.0248260498046875, -0.0704345703125, -0.034149169921875, -0.026092529296875, ...
modelId: gokuls/sa_BERT_no_pretrain_qqp
lastModified: 2023-06-15T05:40:30.000Z
tags:
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: gokuls
config: null
securityStatus: null
id: gokuls/sa_BERT_no_pretrain_qqp
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T07:55:56
card:
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: sa_BERT_no_pretrain_qqp results: - task: name: Text Classification type: text-classification dataset: name: GLUE QQP type: glue config: qqp split: validation args: qqp metrics: - name: Accuracy type: accuracy value: 0.7934207271827851 - name: F1 type: f1 value: 0.6836123948783999 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sa_BERT_no_pretrain_qqp This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.4355 - Accuracy: 0.7934 - F1: 0.6836 - Combined Score: 0.7385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.5241 | 1.0 | 3791 | 0.4947 | 0.7638 | 0.6550 | 0.7094 | | 0.4527 | 2.0 | 7582 | 0.4524 | 0.7853 | 0.7027 | 0.7440 | | 0.404 | 3.0 | 11373 | 0.4355 | 0.7934 | 0.6836 | 0.7385 | | 0.3675 | 4.0 | 15164 | 0.4407 | 0.8038 | 0.7438 | 0.7738 | | 0.3315 | 5.0 | 18955 | 0.4426 | 0.8060 | 0.7368 | 0.7714 | | 0.3031 | 6.0 | 22746 | 0.4437 | 0.8067 | 0.7444 | 0.7755 | | 0.2747 | 7.0 | 26537 | 0.4359 | 0.8046 | 0.7523 | 0.7785 | | 0.2441 | 8.0 | 30328 | 0.4718 | 0.8074 | 0.7547 | 0.7811 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,486
embeddings:
[ [ -0.03509521484375, -0.0290679931640625, 0.009735107421875, 0.00777435302734375, -0.020904541015625, -0.0218963623046875, -0.0017633438110351562, -0.01372528076171875, 0.0177764892578125, 0.018096923828125, -0.05816650390625, -0.044219970703125, -0.04736328125, ...
modelId: rzhu/distilbert-base-uncased_emotion_ft_0529
lastModified: 2023-05-29T09:01:08.000Z
tags:
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: rzhu
config: null
securityStatus: null
id: rzhu/distilbert-base-uncased_emotion_ft_0529
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-29T08:09:23
card:
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 - precision model-index: - name: distilbert-base-uncased_emotion_ft_0529 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9375 - name: F1 type: f1 value: 0.9378132226886893 - name: Precision type: precision value: 0.9124034390576776 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0529 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1485 - Accuracy: 0.9375 - F1: 0.9378 - Precision: 0.9124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.8109 | 1.0 | 250 | 0.2686 | 0.913 | 0.9111 | 0.8958 | | 0.2078 | 2.0 | 500 | 0.1663 | 0.931 | 0.9309 | 0.9148 | | 0.1383 | 3.0 | 750 | 0.1562 | 0.9365 | 0.9366 | 0.9170 | | 0.114 | 4.0 | 1000 | 0.1485 | 0.9375 | 0.9378 | 0.9124 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
2,166
[ [ -0.035980224609375, -0.03607177734375, 0.013641357421875, 0.0198822021484375, -0.022369384765625, -0.0168914794921875, -0.0092926025390625, -0.006992340087890625, 0.01342010498046875, 0.00881195068359375, -0.0535888671875, -0.051910400390625, -0.059814453125, ...
rzhu/distilbert-base-uncased_emotion_ft_0416
2023-05-29T08:16:34.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
rzhu
null
null
rzhu/distilbert-base-uncased_emotion_ft_0416
0
2
transformers
2023-05-29T08:13:30
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft_0416
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9385
    - name: F1
      type: f1
      value: 0.9386825580211653
    - name: Precision
      type: precision
      value: 0.9103398923984992
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased_emotion_ft_0416

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1481
- Accuracy: 0.9385
- F1: 0.9387
- Precision: 0.9103

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.7769        | 1.0   | 250  | 0.2467          | 0.9205   | 0.9196 | 0.8974    |
| 0.2029        | 2.0   | 500  | 0.1649          | 0.9325   | 0.9321 | 0.9162    |
| 0.1382        | 3.0   | 750  | 0.1523          | 0.935    | 0.9355 | 0.9023    |
| 0.1121        | 4.0   | 1000 | 0.1481          | 0.9385   | 0.9387 | 0.9103    |

### Framework versions

- Transformers 4.27.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
2,166
[ [ -0.036041259765625, -0.03631591796875, 0.01367950439453125, 0.0200042724609375, -0.0229339599609375, -0.017181396484375, -0.00917816162109375, -0.006591796875, 0.01364898681640625, 0.0093231201171875, -0.05364990234375, -0.051544189453125, -0.060394287109375, ...
L3tsG0/distilbert-base-uncased-finetuned-emotion
2023-05-29T09:40:23.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
L3tsG0
null
null
L3tsG0/distilbert-base-uncased-finetuned-emotion
0
2
transformers
2023-05-29T08:58:56
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9415
    - name: F1
      type: f1
      value: 0.9418231040913105
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1351
- Accuracy: 0.9415
- F1: 0.9418

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5238        | 1.0   | 250  | 0.1800          | 0.928    | 0.9270 |
| 0.141         | 2.0   | 500  | 0.1351          | 0.9415   | 0.9418 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
1,848
[ [ -0.037628173828125, -0.04193115234375, 0.01416778564453125, 0.02191162109375, -0.0259246826171875, -0.019378662109375, -0.01378631591796875, -0.008880615234375, 0.01120758056640625, 0.00791168212890625, -0.056610107421875, -0.051788330078125, -0.059478759765625,...
tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets-balanced-12000
2023-05-29T09:16:04.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
tonirodriguez
null
null
tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets-balanced-12000
0
2
transformers
2023-05-29T09:13:30
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-toxicity-tweets-balanced-12000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-bne-finetuned-toxicity-tweets-balanced-12000

This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.906

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3404        | 1.0   | 73   | 0.2558          | 0.9      |
| 0.1613        | 2.0   | 146  | 0.2973          | 0.906    |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,483
[ [ -0.026153564453125, -0.04754638671875, 0.01291656494140625, 0.01215362548828125, -0.0267486572265625, -0.036651611328125, -0.01175689697265625, -0.01377105712890625, 0.01006317138671875, 0.0294342041015625, -0.04571533203125, -0.056488037109375, -0.0523071289062...
vsrinivas/ppo-LunarLander-v2-vs-ver3
2023-05-31T06:32:32.000Z
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
vsrinivas
null
null
vsrinivas/ppo-LunarLander-v2-vs-ver3
0
2
stable-baselines3
2023-05-29T10:31:52
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 285.54 +/- 18.52
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename inside the repository is an assumption, so check the repo's files if it differs:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is assumed).
checkpoint = load_from_hub("vsrinivas/ppo-LunarLander-v2-vs-ver3", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
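To sanity-check the download, the policy can be re-evaluated locally. A sketch assuming an SB3 version built on `gymnasium` (older releases use `gym` instead) and the `model` loaded above:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Re-run the evaluation protocol on the freshly loaded policy.
env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```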
784
[ [ -0.00023484230041503906, -0.02716064453125, 0.017059326171875, 0.023345947265625, -0.00606536865234375, 0.002735137939453125, 0.034454345703125, -0.012115478515625, 0.019866943359375, 0.06500244140625, -0.043212890625, -0.035247802734375, -0.0343017578125, -...
bitextor/bicleaner-ai-full-en-sw
2023-08-24T10:28:26.000Z
[ "transformers", "tf", "xlm-roberta", "bicleaner-ai", "en", "sw", "multilingual", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
bitextor
null
null
bitextor/bicleaner-ai-full-en-sw
0
2
transformers
2023-05-29T11:46:21
---
language:
- en
- sw
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---

# Bicleaner AI full model for en-sw

Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus.
It indicates the likelihood of a pair of sentences being mutual translations (with a value near to 1) or not (with a value near to 0).
Sentence pairs considered very noisy are scored with 0.

See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
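For a quick start, the model can be applied with the `bicleaner-ai-classify` command from the `bicleaner-ai` package. A sketch under assumptions: the argument order and the ability to pass a Hub model id directly may vary between versions, so treat the repository's README as authoritative:

```bash
pip install bicleaner-ai

# Score tab-separated en<TAB>sw sentence pairs between 0 (noisy) and 1 (parallel).
# Argument order (input, output, model) is an assumption; see the README.
bicleaner-ai-classify corpus.en-sw.tsv scored.en-sw.tsv bitextor/bicleaner-ai-full-en-sw
```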
554
[ [ -0.029205322265625, -0.07476806640625, 0.025634765625, 0.020751953125, -0.0257720947265625, 0.01128387451171875, -0.0164031982421875, -0.046417236328125, 0.020477294921875, 0.0266876220703125, -0.0264434814453125, -0.0279388427734375, -0.053314208984375, 0.0...
vnykr/dqn-SpaceInvadersNoFrameskip-v4
2023-05-29T12:07:20.000Z
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
vnykr
null
null
vnykr/dqn-SpaceInvadersNoFrameskip-v4
0
2
stable-baselines3
2023-05-29T12:06:41
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 508.50 +/- 81.46
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vnykr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vnykr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vnykr
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
2,681
[ [ -0.041534423828125, -0.0367431640625, 0.0214385986328125, 0.0245208740234375, -0.0099029541015625, -0.0175323486328125, 0.01291656494140625, -0.01427459716796875, 0.01297760009765625, 0.024749755859375, -0.0706787109375, -0.0352783203125, -0.0265655517578125, ...
partypress/partypress-monolingual-austria
2023-09-14T13:51:31.000Z
[ "transformers", "pytorch", "tf", "bert", "text-classification", "partypress", "political science", "parties", "press releases", "de", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
text-classification
partypress
null
null
partypress/partypress-monolingual-austria
0
2
transformers
2023-05-29T12:10:33
---
license: cc-by-sa-4.0
language:
- de
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- partypress
- political science
- parties
- press releases
widget:
- text: 'Immissionsschutzgesetz muss ein Klagerecht für BürgerInnen beinhalten: "Es ist seit Jahren bekannt, welche Maßnahmen zur Reduktion der Feinstaubbelastung gesetzt werden müssen. Diese neuerlich bloß aufzuzählen, wie es jetzt Minister Berlakovich tut, hilft den Betroffenen nicht", kritisiert die Grüne Umweltsprecherin Christiane Brunner die jüngsten Aussagen des Umweltministers zur Problematik Feinstaub.'
---

# PARTYPRESS monolingual Austria

Fine-tuned model, based on [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased). Used in Erfort et al. (2023), building on the PARTYPRESS database. For the downstream task of classifying press releases from political parties into 23 unique policy areas, we achieve a performance comparable to expert human coders.

## Model description

The PARTYPRESS monolingual model builds on [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) but has a supervised component. This means it was fine-tuned using texts labeled by humans. The labels indicate 23 different political issue categories derived from the Comparative Agendas Project (CAP):

| Code | Issue |
|--|-------|
| 1 | Macroeconomics |
| 2 | Civil Rights |
| 3 | Health |
| 4 | Agriculture |
| 5 | Labor |
| 6 | Education |
| 7 | Environment |
| 8 | Energy |
| 9 | Immigration |
| 10 | Transportation |
| 12 | Law and Crime |
| 13 | Social Welfare |
| 14 | Housing |
| 15 | Domestic Commerce |
| 16 | Defense |
| 17 | Technology |
| 18 | Foreign Trade |
| 19.1 | International Affairs |
| 19.2 | European Union |
| 20 | Government Operations |
| 23 | Culture |
| 98 | Non-thematic |
| 99 | Other |

## Model variations

There are several monolingual models for different countries, and a multilingual model. The multilingual model can be easily extended to other languages, country contexts, or time periods by fine-tuning it with minimal additional labeled texts.

## Intended uses & limitations

The main use of the model is for text classification of press releases from political parties. It may also be useful for other political texts. The classification can then be used to measure which issues parties are discussing in their communication.

### How to use

This model can be used directly with a pipeline for text classification:

```python
>>> from transformers import pipeline
>>> tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}
>>> partypress = pipeline("text-classification",
...                       model="cornelius/partypress-monolingual-austria",
...                       tokenizer="cornelius/partypress-monolingual-austria",
...                       **tokenizer_kwargs)
>>> partypress("Your text here.")
```

### Limitations and bias

The model was trained with data from parties in Austria. For use in other countries, the model may be further fine-tuned. Without further fine-tuning, the performance of the model may be lower.

The model may have biased predictions. We discuss some biases by country, party, and over time in the release paper for the PARTYPRESS database. For example, the performance is highest for press releases from Ireland (75%) and lowest for Poland (55%).

## Training data

The PARTYPRESS monolingual model was fine-tuned with about 3,000 press releases from parties in Austria. The press releases were labeled by two expert human coders.

For the training data of the underlying model, please refer to [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased).

## Training procedure

### Preprocessing

For the preprocessing, please refer to [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased).

### Pretraining

For the pretraining, please refer to [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased).

### Fine-tuning

We fine-tuned the model using about 3,000 labeled press releases from political parties in Austria.

#### Training Hyperparameters

The batch size for training was 12, for testing 2, with four epochs. All other hyperparameters were the standard from the transformers library.

#### Framework versions

- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3

## Evaluation results

Fine-tuned on our downstream task, this model achieves the following results in a five-fold cross validation that are comparable to the performance of our expert human coders. Please refer to Erfort et al. (2023).

### BibTeX entry and citation info

```bibtex
@article{erfort_partypress_2023,
  author  = {Cornelius Erfort and Lukas F. Stoetzer and Heike Klüver},
  title   = {The PARTYPRESS Database: A new comparative database of parties’ press releases},
  journal = {Research and Politics},
  volume  = {10},
  number  = {3},
  year    = {2023},
  doi     = {10.1177/20531680231183512},
  URL     = {https://doi.org/10.1177/20531680231183512}
}
```

Erfort, C., Stoetzer, L. F., & Klüver, H. (2023). The PARTYPRESS Database: A new comparative database of parties’ press releases. Research & Politics, 10(3). [https://doi.org/10.1177/20531680231183512](https://doi.org/10.1177/20531680231183512)

### Further resources

Github: [cornelius-erfort/partypress](https://github.com/cornelius-erfort/partypress)

Research and Politics Dataverse: [Replication Data for: The PARTYPRESS Database: A New Comparative Database of Parties’ Press Releases](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi%3A10.7910%2FDVN%2FOINX7Q)

## Acknowledgements

Research for this contribution is part of the Cluster of Excellence "Contestations of the Liberal Script" (EXC 2055, Project-ID: 390715649), funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy. Cornelius Erfort is moreover grateful for generous funding provided by the DFG through the Research Training Group DYNAMICS (GRK 2458/1).

## Contact

Cornelius Erfort

Humboldt-Universität zu Berlin

[corneliuserfort.de](corneliuserfort.de)
6,168
[ [ -0.0295867919921875, -0.0305633544921875, 0.01251983642578125, 0.03302001953125, -0.0242919921875, 0.0010461807250976562, -0.041656494140625, -0.014007568359375, 0.0106964111328125, 0.033477783203125, -0.0445556640625, -0.06646728515625, -0.054412841796875, ...
gokuls/sa_BERT_no_pretrain_rte
2023-06-15T05:47:43.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gokuls
null
null
gokuls/sa_BERT_no_pretrain_rte
0
2
transformers
2023-05-29T13:29:15
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_rte
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE RTE
      type: glue
      config: rte
      split: validation
      args: rte
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5306859205776173
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sa_BERT_no_pretrain_rte

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6909
- Accuracy: 0.5307

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7596        | 1.0   | 26   | 0.6909          | 0.5307   |
| 0.6968        | 2.0   | 52   | 0.6914          | 0.5235   |
| 0.7026        | 3.0   | 78   | 0.6911          | 0.5307   |
| 0.6961        | 4.0   | 104  | 0.6928          | 0.5379   |
| 0.7114        | 5.0   | 130  | 0.6917          | 0.5271   |
| 0.7005        | 6.0   | 156  | 0.7069          | 0.4729   |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
1,987
[ [ -0.035675048828125, -0.04425048828125, 0.0081024169921875, 0.01471710205078125, -0.027587890625, -0.0338134765625, -0.01366424560546875, -0.0167083740234375, 0.0183868408203125, 0.0200042724609375, -0.059112548828125, -0.042724609375, -0.05535888671875, -0.0...
gokuls/sa_BERT_no_pretrain_sst2
2023-06-15T07:48:32.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gokuls
null
null
gokuls/sa_BERT_no_pretrain_sst2
0
2
transformers
2023-05-29T13:35:42
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE SST2
      type: glue
      config: sst2
      split: validation
      args: sst2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8027522935779816
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sa_BERT_no_pretrain_sst2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4637
- Accuracy: 0.8028

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4863        | 1.0   | 702  | 0.4747          | 0.7890   |
| 0.2723        | 2.0   | 1404 | 0.4974          | 0.7901   |
| 0.2219        | 3.0   | 2106 | 0.4637          | 0.8028   |
| 0.1848        | 4.0   | 2808 | 0.7501          | 0.7833   |
| 0.1591        | 5.0   | 3510 | 0.5357          | 0.8005   |
| 0.1346        | 6.0   | 4212 | 0.5450          | 0.7833   |
| 0.1148        | 7.0   | 4914 | 0.8002          | 0.7741   |
| 0.1034        | 8.0   | 5616 | 0.8853          | 0.7821   |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
2,117
[ [ -0.0244140625, -0.0390625, 0.01160430908203125, 0.00963592529296875, -0.031890869140625, -0.02178955078125, -0.01288604736328125, -0.01435089111328125, 0.0152740478515625, 0.01551055908203125, -0.05767822265625, -0.03570556640625, -0.054473876953125, -0.0277...
gokuls/sa_BERT_no_pretrain_stsb
2023-06-15T08:03:24.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gokuls
null
null
gokuls/sa_BERT_no_pretrain_stsb
0
2
transformers
2023-05-29T14:26:57
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: sa_BERT_no_pretrain_stsb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE STSB
      type: glue
      config: stsb
      split: validation
      args: stsb
    metrics:
    - name: Spearmanr
      type: spearmanr
      value: 0.12459536879199183
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sa_BERT_no_pretrain_stsb

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5396
- Pearson: 0.1394
- Spearmanr: 0.1246
- Combined Score: 0.1320

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.257         | 1.0   | 60   | 3.1111          | 0.0528  | 0.0709    | 0.0619         |
| 2.0476        | 2.0   | 120  | 2.5396          | 0.1394  | 0.1246    | 0.1320         |
| 1.8905        | 3.0   | 180  | 2.5928          | 0.1553  | 0.1593    | 0.1573         |
| 1.5383        | 4.0   | 240  | 3.1130          | 0.1930  | 0.2086    | 0.2008         |
| 1.3384        | 5.0   | 300  | 2.8651          | 0.1788  | 0.2014    | 0.1901         |
| 1.1299        | 6.0   | 360  | 2.9651          | 0.1818  | 0.1947    | 0.1883         |
| 1.0952        | 7.0   | 420  | 2.6404          | 0.2100  | 0.2124    | 0.2112         |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
2,355
[ [ -0.038818359375, -0.041015625, 0.0099639892578125, 0.0157318115234375, -0.0265655517578125, -0.02239990234375, -0.0125274658203125, -0.01549530029296875, 0.021270751953125, 0.0161895751953125, -0.05645751953125, -0.045684814453125, -0.0545654296875, -0.02005...
gokuls/sa_BERT_no_pretrain_wnli
2023-06-15T08:08:30.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gokuls
null
null
gokuls/sa_BERT_no_pretrain_wnli
0
2
transformers
2023-05-29T14:36:04
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_wnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE WNLI
      type: glue
      config: wnli
      split: validation
      args: wnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5633802816901409
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sa_BERT_no_pretrain_wnli

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6866
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0074        | 1.0   | 7    | 0.6958          | 0.4366   |
| 0.6986        | 2.0   | 14   | 0.7035          | 0.4366   |
| 0.7007        | 3.0   | 21   | 0.6866          | 0.5634   |
| 0.7052        | 4.0   | 28   | 0.7037          | 0.4366   |
| 0.7008        | 5.0   | 35   | 0.6951          | 0.4366   |
| 0.7107        | 6.0   | 42   | 0.6908          | 0.5634   |
| 0.6963        | 7.0   | 49   | 0.6945          | 0.4366   |
| 0.7012        | 8.0   | 56   | 0.6894          | 0.5634   |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
2,117
[ [ -0.038604736328125, -0.034027099609375, 0.006427764892578125, 0.007068634033203125, -0.0207061767578125, -0.027008056640625, -0.01230621337890625, -0.0228729248046875, 0.020172119140625, 0.01221466064453125, -0.064453125, -0.039703369140625, -0.04754638671875, ...
gokuls/sa_BERT_no_pretrain_mnli
2023-06-15T22:16:47.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gokuls
null
null
gokuls/sa_BERT_no_pretrain_mnli
0
2
transformers
2023-05-29T14:41:40
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_mnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MNLI
      type: glue
      config: mnli
      split: validation_matched
      args: mnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6700569568755086
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sa_BERT_no_pretrain_mnli

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
- Accuracy: 0.6701

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9765        | 1.0   | 4091  | 0.9090          | 0.5823   |
| 0.8799        | 2.0   | 8182  | 0.8625          | 0.6123   |
| 0.8193        | 3.0   | 12273 | 0.8227          | 0.6362   |
| 0.7551        | 4.0   | 16364 | 0.7929          | 0.6542   |
| 0.6961        | 5.0   | 20455 | 0.7901          | 0.6643   |
| 0.6403        | 6.0   | 24546 | 0.8298          | 0.6687   |
| 0.5831        | 7.0   | 28637 | 0.8135          | 0.6701   |
| 0.5224        | 8.0   | 32728 | 0.8831          | 0.6718   |
| 0.4602        | 9.0   | 36819 | 0.9055          | 0.6652   |
| 0.4003        | 10.0  | 40910 | 0.9812          | 0.6603   |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
2,261
[ [ -0.038604736328125, -0.03887939453125, 0.00627899169921875, 0.0081329345703125, -0.0218048095703125, -0.0269012451171875, -0.009429931640625, -0.0109405517578125, 0.02154541015625, 0.01953125, -0.060791015625, -0.0430908203125, -0.049072265625, -0.0180511474...
HassanCS/ChemBERTa-77M-MLM-finetuned-4M
2023-05-29T17:43:23.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
HassanCS
null
null
HassanCS/ChemBERTa-77M-MLM-finetuned-4M
0
2
transformers
2023-05-29T15:19:41
---
tags:
- generated_from_trainer
model-index:
- name: ChemBERTa-77M-MLM-finetuned-4M
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ChemBERTa-77M-MLM-finetuned-4M

This model is a fine-tuned version of [DeepChem/ChemBERTa-77M-MLM](https://huggingface.co/DeepChem/ChemBERTa-77M-MLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.5648        | 1.0   | 61007  | 0.5632          |
| 0.4649        | 2.0   | 122014 | 0.4801          |
| 0.4338        | 3.0   | 183021 | 0.4601          |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
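Because this is a masked-language model, the natural smoke test is the `fill-mask` pipeline. A minimal sketch; the SMILES fragment below is illustrative only, and the mask token is read off the tokenizer rather than hard-coded:

```python
from transformers import pipeline

# Load the fine-tuned ChemBERTa masked-language model from the Hub.
fill = pipeline("fill-mask", model="HassanCS/ChemBERTa-77M-MLM-finetuned-4M")

# Mask one position in an (illustrative) SMILES string and ask for completions.
masked = f"CC(=O)O{fill.tokenizer.mask_token}"
for pred in fill(masked):
    print(pred["token_str"], round(pred["score"], 4))
```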
1,390
[ [ -0.0266571044921875, -0.0330810546875, 0.0280303955078125, -0.002216339111328125, -0.02166748046875, -0.01215362548828125, -0.00806427001953125, -0.012237548828125, 0.00798797607421875, 0.040069580078125, -0.0615234375, -0.05511474609375, -0.037689208984375, ...
fredymad/distilbert_estricto_2e-5_16_2
2023-05-29T16:05:04.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/distilbert_estricto_2e-5_16_2
0
2
transformers
2023-05-29T16:00:19
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_estricto_2e-5_16_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_estricto_2e-5_16_2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- Accuracy: 0.8712

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 400  | 0.3638          | 0.8399   |
| 0.4495        | 2.0   | 800  | 0.3345          | 0.8712   |

### Framework versions

- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
1,425
[ [ -0.02862548828125, -0.04638671875, 0.01471710205078125, 0.01837158203125, -0.030609130859375, -0.0277862548828125, -0.01062774658203125, -0.0147552490234375, 0.002193450927734375, 0.0149688720703125, -0.046356201171875, -0.047271728515625, -0.058319091796875, ...
sitthichok0230/finetuned-bert
2023-05-29T19:28:45.000Z
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
sitthichok0230
null
null
sitthichok0230/finetuned-bert
0
2
transformers
2023-05-29T16:06:14
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: mrpc
      split: validation
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8627450980392157
    - name: F1
      type: f1
      value: 0.9037800687285222
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-bert

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4431
- Accuracy: 0.8627
- F1: 0.9038

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5331        | 1.0   | 230  | 0.3900          | 0.8333   | 0.8870 |
| 0.2878        | 2.0   | 460  | 0.3675          | 0.8505   | 0.8935 |
| 0.1395        | 3.0   | 690  | 0.4431          | 0.8627   | 0.9038 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,849
[ [ -0.036041259765625, -0.055206298828125, 0.0081329345703125, 0.0125885009765625, -0.0252838134765625, -0.034454345703125, -0.018951416015625, -0.0119171142578125, 0.018646240234375, 0.0188140869140625, -0.05926513671875, -0.04168701171875, -0.04986572265625, ...
edata/dqn-SpaceInvadersNoFrameskip-v4
2023-05-29T18:00:11.000Z
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
edata
null
null
edata/dqn-SpaceInvadersNoFrameskip-v4
0
2
stable-baselines3
2023-05-29T17:08:15
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 15.50 +/- 12.54
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga edata -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga edata -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga edata
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 100000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
2,679
[ [ -0.04168701171875, -0.03717041015625, 0.0220947265625, 0.02459716796875, -0.01018524169921875, -0.0182037353515625, 0.013427734375, -0.01337432861328125, 0.01312255859375, 0.0245513916015625, -0.0706787109375, -0.035400390625, -0.0261383056640625, -0.0046157...
fredymad/distilbert_laxo_2e-5_16_2
2023-05-29T19:06:30.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/distilbert_laxo_2e-5_16_2
0
2
transformers
2023-05-29T18:55:04
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_laxo_2e-5_16_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_laxo_2e-5_16_2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2674
- Accuracy: 0.9106

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 400  | 0.2416          | 0.9068   |
| 0.3067        | 2.0   | 800  | 0.2674          | 0.9106   |

### Framework versions

- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
1,417
[ [ -0.024444580078125, -0.0455322265625, 0.01262664794921875, 0.0209808349609375, -0.0273895263671875, -0.026580810546875, -0.00626373291015625, -0.0132904052734375, -0.0013189315795898438, 0.015777587890625, -0.044036865234375, -0.04302978515625, -0.05404663085937...
fredymad/bert_estricto_2e-5_16_2
2023-05-29T20:18:12.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/bert_estricto_2e-5_16_2
0
2
transformers
2023-05-29T19:41:32
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_estricto_2e-5_16_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert_estricto_2e-5_16_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3195
- Accuracy: 0.8693

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 400  | 0.3410          | 0.8518   |
| 0.4447        | 2.0   | 800  | 0.3195          | 0.8693   |

### Framework versions

- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
1,401
[ [ -0.031097412109375, -0.045562744140625, 0.01396942138671875, 0.01751708984375, -0.034027099609375, -0.03961181640625, -0.0229034423828125, -0.031585693359375, 0.00811767578125, 0.020416259765625, -0.051788330078125, -0.045318603515625, -0.047515869140625, -0...
sperera/bert-finetuned-ner
2023-05-29T22:36:20.000Z
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
sperera
null
null
sperera/bert-finetuned-ner
0
2
transformers
2023-05-29T19:56:35
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: sperera/bert-finetuned-ner
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# sperera/bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1878
- Validation Loss: 0.0689
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1878     | 0.0689          | 0     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
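A minimal inference sketch using the `transformers` pipeline. Two assumptions worth flagging: the repository ships TensorFlow weights (so TensorFlow must be installed and `framework="tf"` is passed explicitly), and the entity label set is whatever the checkpoint's config defines:

```python
from transformers import pipeline

# Token classification with word pieces aggregated into whole entities.
ner = pipeline(
    "token-classification",
    model="sperera/bert-finetuned-ner",
    framework="tf",                  # the repo hosts TF weights
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```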
1,473
[ [ -0.049041748046875, -0.055419921875, 0.0200958251953125, 0.013671875, -0.03680419921875, -0.039337158203125, -0.02484130859375, -0.01479339599609375, 0.01012420654296875, 0.0167388916015625, -0.058746337890625, -0.04156494140625, -0.057861328125, -0.02003479...
a-grishman/bert-base-banking77-pt2
2023-05-30T07:17:48.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:banking77", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
a-grishman
null
null
a-grishman/bert-base-banking77-pt2
0
2
transformers
2023-05-29T20:14:10
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: banking77
      type: banking77
      config: default
      split: test
      args: default
    metrics:
    - name: F1
      type: f1
      value: 0.9368591300797698
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-banking77-pt2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2758
- F1: 0.9369

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7116        | 1.0   | 1251 | 0.5905          | 0.8722 |
| 0.2675        | 2.0   | 2502 | 0.3136          | 0.9229 |
| 0.16          | 3.0   | 3753 | 0.2758          | 0.9369 |

### Framework versions

- Transformers 4.27.1
- Pytorch 2.0.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
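A quick inference sketch with the `transformers` pipeline, returning the three most likely banking77 intents; the example utterance is illustrative only:

```python
from transformers import pipeline

# Intent classification over the 77 banking77 labels; top_k=3 keeps the
# three highest-scoring intents instead of only the argmax.
clf = pipeline("text-classification", model="a-grishman/bert-base-banking77-pt2", top_k=3)
print(clf("I still have not received my new card, what should I do?"))
```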
1,727
[ [ -0.02972412109375, -0.039398193359375, 0.01045989990234375, 0.01326751708984375, -0.042205810546875, -0.0263824462890625, -0.00951385498046875, -0.01739501953125, -0.003925323486328125, 0.040771484375, -0.04254150390625, -0.043731689453125, -0.05364990234375, ...
fredymad/bert_laxo_2e-5_16_2
2023-05-29T20:46:36.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/bert_laxo_2e-5_16_2
0
2
transformers
2023-05-29T20:23:59
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_laxo_2e-5_16_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert_laxo_2e-5_16_2

This model is a fine-tuned version of [fredymad/bert_estricto_2e-5_16_2](https://huggingface.co/fredymad/bert_estricto_2e-5_16_2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2882
- Accuracy: 0.9187

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 400  | 0.2139          | 0.9162   |
| 0.2436        | 2.0   | 800  | 0.2882          | 0.9187   |

### Framework versions

- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
1,423
[ [ -0.03106689453125, -0.045196533203125, 0.01103973388671875, 0.02001953125, -0.0290679931640625, -0.039154052734375, -0.0158233642578125, -0.037567138671875, 0.00955963134765625, 0.0211181640625, -0.05194091796875, -0.03997802734375, -0.042724609375, -0.00549...
ga21902298/dbert-finetuned-433-1
2023-05-29T21:58:42.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
ga21902298
null
null
ga21902298/dbert-finetuned-433-1
0
2
transformers
2023-05-29T20:27:14
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dbert-finetuned-433-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dbert-finetuned-433-1

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5437
- Accuracy: 0.8438

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3563        | 1.0   | 6250  | 0.3636          | 0.8400   |
| 0.2989        | 2.0   | 12500 | 0.3517          | 0.8490   |
| 0.2287        | 3.0   | 18750 | 0.3928          | 0.8486   |
| 0.1646        | 4.0   | 25000 | 0.4724          | 0.8458   |
| 0.1383        | 5.0   | 31250 | 0.5437          | 0.8438   |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,601
[ [ -0.03338623046875, -0.0447998046875, 0.01052093505859375, 0.00763702392578125, -0.0256195068359375, -0.0262908935546875, -0.01042938232421875, -0.008270263671875, 0.00505828857421875, 0.020294189453125, -0.049713134765625, -0.04840087890625, -0.05401611328125, ...
platzi/platzi-distilroberta-base-mrpc-glue-luis-rascon
2023-05-29T21:52:52.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
platzi
null
null
platzi/platzi-distilroberta-base-mrpc-glue-luis-rascon
0
2
transformers
2023-05-29T20:29:18
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.", "Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
  example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
  example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-luis-rascon
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: mrpc
      split: validation
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8235294117647058
    - name: F1
      type: f1
      value: 0.8641509433962264
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# platzi-distilroberta-base-mrpc-glue-luis-rascon

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5052
- Accuracy: 0.8235
- F1: 0.8642

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5201        | 1.09  | 500  | 0.6599          | 0.8382   | 0.8842 |
| 0.3684        | 2.18  | 1000 | 0.5052          | 0.8235   | 0.8642 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
2,424
[ [ -0.03271484375, -0.03985595703125, 0.007541656494140625, 0.0196685791015625, -0.0308074951171875, -0.0240631103515625, -0.0127105712890625, -0.00452423095703125, 0.007778167724609375, 0.010498046875, -0.04864501953125, -0.042449951171875, -0.059478759765625, ...
pandma/es_billynator_ah
2023-05-29T20:31:32.000Z
[ "spacy", "token-classification", "es", "model-index", "region:us" ]
token-classification
pandma
null
null
pandma/es_billynator_ah
0
2
spacy
2023-05-29T20:31:07
---
tags:
- spacy
- token-classification
language:
- es
model-index:
- name: es_billynator_ah
  results:
  - task:
      name: NER
      type: token-classification
    metrics:
    - name: NER Precision
      type: precision
      value: 0.9998979071
    - name: NER Recall
      type: recall
      value: 0.9998979071
    - name: NER F Score
      type: f_score
      value: 0.9998979071
---

| Feature | Description |
| --- | --- |
| **Name** | `es_billynator_ah` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>

<summary>View label scheme (29 labels for 1 components)</summary>

| Component | Labels |
| --- | --- |
| **`ner`** | `BILLING_PERIOD_END`, `BILLING_PERIOD_START`, `BILL_OWNER`, `COMPANY_NAME`, `CUPS`, `DIRECTION`, `DISCOUNT_TOTAL`, `END_CONTRACT`, `ENERGY_P1_PRICE`, `ENERGY_P2_PRICE`, `ENERGY_P3_PRICE`, `FISCAL_DIRECTION`, `IBAN`, `NIF`, `POWER_EXCESSES_P1`, `POWER_EXCESSES_P2`, `POWER_EXCESSES_P3`, `POWER_P1_PRICE`, `POWER_P2_PRICE`, `POWER_P3_PRICE`, `POWER_P4_PRICE`, `POWER_P5_PRICE`, `POWER_P6_PRICE`, `REACTIVE_P1`, `REACTIVE_P2`, `REACTIVE_P3`, `TOP_GAS_PRICE`, `TOP_GAS_TOTAL`, `TOTAL_IMPORTE` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `ENTS_F` | 99.99 |
| `ENTS_P` | 99.99 |
| `ENTS_R` | 99.99 |
| `TRANSFORMER_LOSS` | 282.40 |
| `NER_LOSS` | 18900.94 |
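Once the packaged pipeline is installed, it loads like any other spaCy model. A minimal sketch; the wheel URL follows the usual spacy-huggingface-hub layout (check the repo's files if it 404s) and the sample sentence is illustrative only:

```python
# First install the packaged pipeline from the Hub, e.g.:
#   pip install "https://huggingface.co/pandma/es_billynator_ah/resolve/main/es_billynator_ah-any-py3-none-any.whl"
import spacy

# Load the installed pipeline and extract billing entities.
nlp = spacy.load("es_billynator_ah")
doc = nlp("Importe total: 54,32 EUR. Periodo de facturación: 01/01/2023 - 31/01/2023.")
for ent in doc.ents:
    print(ent.label_, "->", ent.text)
```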
1,560
[ [ -0.04486083984375, -0.0093994140625, 0.013214111328125, 0.0157928466796875, -0.0300445556640625, 0.0105438232421875, -0.00238037109375, -0.004810333251953125, 0.040283203125, 0.04986572265625, -0.061492919921875, -0.061676025390625, -0.035064697265625, -0.00...
danieliser/ppo-PyramidsRND-v1
2023-05-29T21:47:45.000Z
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
danieliser
null
null
danieliser/ppo-PyramidsRND-v1
0
2
ml-agents
2023-05-29T21:47:39
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: danieliser/ppo-PyramidsRND-v1 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
959
[ [ -0.0266876220703125, -0.0196990966796875, 0.00016498565673828125, 0.025970458984375, -0.01026153564453125, 0.005550384521484375, 0.027069091796875, -0.00281524658203125, 0.035797119140625, 0.035003662109375, -0.03607177734375, -0.051483154296875, -0.035888671875...
le1andonly/universityexerciseanothertry
2023-05-29T23:10:48.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
le1andonly
null
null
le1andonly/universityexerciseanothertry
0
2
transformers
2023-05-29T21:53:09
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: universityexerciseanothertry results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # universityexerciseanothertry This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4735 - Accuracy: 0.7773 - F1: 0.7992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,192
[ [ -0.0301971435546875, -0.046112060546875, 0.0207366943359375, 0.0072021484375, -0.03094482421875, -0.0240478515625, -0.01212310791015625, -0.00926971435546875, 0.00604248046875, 0.016876220703125, -0.0433349609375, -0.0450439453125, -0.044586181640625, -0.002...
pedroplanel/bert-base-banking77
2023-05-29T22:50:01.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:banking77", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
pedroplanel
null
null
pedroplanel/bert-base-banking77
0
2
transformers
2023-05-29T22:37:08
--- license: apache-2.0 tags: - generated_from_trainer datasets: - banking77 metrics: - f1 model-index: - name: bert-base-banking77 results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 config: default split: test args: default metrics: - name: F1 type: f1 value: 0.9292916887388843 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-banking77 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset. It achieves the following results on the evaluation set: - Loss: 0.3032 - F1: 0.9293 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0309 | 1.0 | 626 | 0.7660 | 0.8508 | | 0.3691 | 2.0 | 1252 | 0.3553 | 0.9234 | | 0.1738 | 3.0 | 1878 | 0.3032 | 0.9293 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.0.0+cu117 - Datasets 2.9.0 - Tokenizers 0.13.3
1,720
[ [ -0.033355712890625, -0.041961669921875, 0.00952911376953125, 0.01251983642578125, -0.03997802734375, -0.0257568359375, -0.01171875, -0.01611328125, 0.00046706199645996094, 0.04248046875, -0.04522705078125, -0.046966552734375, -0.049041748046875, -0.025817871...
neiz/distilbert-base-uncased-finetuned-sst-2-english
2023-05-29T22:53:08.000Z
[ "transformers", "onnx", "distilbert", "text-classification", "en", "dataset:sst2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
neiz
null
null
neiz/distilbert-base-uncased-finetuned-sst-2-english
0
2
transformers
2023-05-29T22:39:04
--- language: en license: apache-2.0 datasets: - sst2 --- # ONNX conversion of DistilBERT base uncased finetuned SST-2 ## Conversion of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) This model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2. This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7). For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased). # Fine-tuning hyper-parameters - learning_rate = 1e-5 - batch_size = 32 - warmup = 600 - max_seq_length = 128 - num_train_epochs = 3.0 # Bias Based on a few experiments, we observed that this model could produce biased predictions that target underrepresented populations. For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country. <img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/> We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
2,053
[ [ -0.0205535888671875, -0.0509033203125, 0.0234527587890625, 0.0213470458984375, -0.03753662109375, -0.0119476318359375, -0.01410675048828125, -0.025238037109375, 0.0029582977294921875, 0.036468505859375, -0.043182373046875, -0.038970947265625, -0.06011962890625, ...
pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e
2023-05-30T18:31:50.000Z
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "LinkedCringe", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
pszemraj
null
null
pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e
0
2
sentence-transformers
2023-05-30T03:11:29
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification - LinkedCringe pipeline_tag: text-classification thumbnail: https://i.ibb.co/SPVBJrz/model-card.jpg --- # LinkedCringe v0.2: e5-small > fine-tuned on LinkedCringe v0.2 from [intfloat/e5-small](https://huggingface.co/intfloat/e5-small) <a href="https://ibb.co/VMJPTwK"><img src="https://i.ibb.co/XFjvtYw/carbon.png" alt="carbon" border="0"></a> <!-- alternate --> <!-- <a href="https://ibb.co/hR49z8Q"><img src="https://i.ibb.co/991g5YK/image.png" alt="image" border="0"></a> --> <a href="https://colab.research.google.com/gist/pszemraj/0b0c2663aa38f3b5f2d923010cfda5a8/scratchpad.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> This is an initial test/work-in-progress, but not bad thus far. ## Model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ### Labels This model has been trained (_using methods described above_) to predict a single class label for `<text>` from the following: ``` # numeric id: text label { 1: 'cringe', 2: 'relevant', 3: 'info', 4: 'noise' } ``` --- ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` ### basic inference You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) # manually refer to labels above preds ``` ### Class object with utils create a "custom" wrapper class with the labels: ```python from setfit import SetFitModel from typing import List, Dict class PostClassifier: DEFAULT_ID2LABEL = {1: "cringe", 2: "relevant", 3: "info", 4: "noise"} def __init__( self, model_id: str = "pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e", id2label: Dict[int, str] = None, ): """Initialize PostClassifier with model name and/or label mapping.""" self.model = SetFitModel.from_pretrained(model_id) self.id2label = id2label if id2label else self.DEFAULT_ID2LABEL def classify(self, texts: List[str]) -> List[str]: """Classify list of texts, return list of corresponding labels.""" preds = self.model(texts) return [self.id2label[int(pred)] for pred in preds] def predict_proba(self, texts: List[str]) -> List[Dict[str, float]]: """Predict label probabilities for a list of texts, return a list of probability dictionaries.""" proba = self.model.predict_proba(texts) return [ {self.id2label.get(i + 1, "Unknown"): float(p) for i, p in enumerate(row)} for row in proba ] def __call__(self, texts: List[str]) -> List[str]: """Enable class instance to act as a function for text classification.""" return self.classify(texts) ``` instantiate & classify: ```python # import PostClassifier if you defined it in another script, etc. model_name = "pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e" classifier = PostClassifier(model_name) # classify some posts (these should all be cringe, maaaaybe noise) posts = [ "🚀 Innovation is our middle name! 
We're taking synergy to new heights and disrupting the market with our game-changing solutions. Stay tuned for the next paradigm shift! 💥 #CorporateRevolution #SynergisticSolutions", "🌟 Attention all trailblazers! Our cutting-edge product is the epitome of excellence. It's time to elevate your success and ride the wave of unparalleled achievements. Join us on this journey towards greatness! 🚀 #UnleashYourPotential #SuccessRevolution", "🌍 We're not just a company, we're a global force for change! Our world-class team is committed to revolutionizing industries and making a lasting impact. Together, let's reshape the future and leave a legacy that will be remembered for ages! 💪 #GlobalTrailblazers #LegacyMakers", "🔥 Harness the power of synergy and unlock your true potential with our transformative solutions. Together, we'll ignite a fire of success that will radiate across industries. Join the league of winners and conquer new frontiers! 🚀 #SynergyChampions #UnleashThePowerWithin", "💡 Innovation alert! Our visionary team has cracked the code to redefine excellence. Get ready to be blown away by our mind-boggling breakthroughs that will leave your competitors in the dust. It's time to disrupt the status quo and embrace the future! 🌟 #InnovationRevolution #ExcellenceUnleashed", "🌐 Welcome to the era of limitless possibilities! Our revolutionary platform will empower you to transcend boundaries and achieve unprecedented success. Together, let's shape a future where dreams become realities and ordinary becomes extraordinary! ✨ #LimitlessSuccess #DreamBig", "💥 Brace yourselves for a seismic shift in the industry! Our game-changing product is set to revolutionize the way you work, think, and succeed. Say goodbye to mediocrity and join the league of pioneers leading the charge towards a brighter tomorrow! 🚀 #IndustryDisruptors #PioneeringSuccess", "🚀 Attention all innovators and disruptors! It's time to break free from the chains of convention and rewrite the rulebook of success. Join us on this exhilarating journey as we create a new chapter in the annals of greatness. The sky's not the limit—it's just the beginning! 💫 #BreakingBarriers #UnleashGreatness", "🌟 Unlock the secret to unprecedented achievements with our exclusive formula for success. Our team of experts has distilled years of wisdom into a powerful elixir that will propel you to the zenith of greatness. It's time to embrace the extraordinary and become a legend in your own right! 💥 #FormulaForSuccess #RiseToGreatness", "🔑 Step into the realm of infinite possibilities and seize the keys to your success. Our groundbreaking solutions will unlock doors you never knew existed, propelling you towards a future filled with limitless growth and prosperity. Dare to dream big and let us be your catalyst for greatness! 🚀 #UnlockYourPotential #LimitlessSuccess" ] post_preds = classifier(posts) print(post_preds) ``` ## eval - detailed ``` ***** Running evaluation ***** {'accuracy': 0.8, 'based_model_id': 'intfloat/e5-small', 'tuned_model_id': 'e5-small-LinkedCringe-setfit-skl-20it-2e'} # 10-post results ['cringe', 'cringe', 'info', 'cringe', 'cringe', 'cringe', 'cringe', 'cringe', 'cringe', 'cringe'] ``` --- ## BibTeX entry and citation info > Note: this is for `setfit` and not this checkpoint. 
```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
7,599
[ [ -0.02020263671875, -0.06451416015625, 0.0148468017578125, 0.01319122314453125, -0.0096893310546875, 0.01209259033203125, -0.0028858184814453125, -0.044281005859375, 0.0206451416015625, 0.0162811279296875, -0.04998779296875, -0.043243408203125, -0.05902099609375,...
flashvenom/mpt-7b-base-lora-fix
2023-05-30T04:56:24.000Z
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "custom_code", "dataset:mc4", "dataset:c4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack", "dataset:allenai/s2orc", "arxiv:2108.12409", "arxiv:23...
text-generation
flashvenom
null
null
flashvenom/mpt-7b-base-lora-fix
0
2
transformers
2023-05-30T03:40:57
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc inference: false duplicated_from: mosaicml/mpt-7b --- ## Authors Note: This is MPT-7B with some fixes borrowed from https://huggingface.co/Birchlabs/mosaicml-mpt-7b-chat-qlora to allow LoRA fine-tuning # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. 
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. * License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) * [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) ## Model Date May 5, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`: ```python import torch config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.attn_config['attn_impl'] = 'triton' model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True ) model.to(device='cuda:0') ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.update({"max_seq_len": 4096}) model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. 
The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| | n_parameters | 6.7B | | n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 | | C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 | | The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 | | RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 | | S2ORC | 48.85 B | 0.033 | 33 B | 0.68 | | RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 | | RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile), (2) it applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces, and (3) it contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points. ### Training Configuration This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. 
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
11,674
[ [ -0.04107666015625, -0.03460693359375, 0.016510009765625, 0.027313232421875, -0.033355712890625, -0.0015077590942382812, -0.004184722900390625, -0.03192138671875, 0.006099700927734375, 0.0281524658203125, -0.05059814453125, -0.041961669921875, -0.0458984375, ...
xmj2002/gpt2_tang_poetry
2023-05-30T06:31:12.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "zh", "dataset:xmj2002/tang_poems", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
xmj2002
null
null
xmj2002/gpt2_tang_poetry
0
2
transformers
2023-05-30T05:11:49
--- license: apache-2.0 datasets: - xmj2002/tang_poems language: - zh --- The pretrained model used is [uer/gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall) ## Usage ```python from transformers import AutoModelForCausalLM from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("xmj2002/gpt2_tang_poetry") model = AutoModelForCausalLM.from_pretrained("xmj2002/gpt2_tang_poetry") text = "白居易《远方》" inputs = tokenizer(text, return_tensors="pt").input_ids outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=100, top_p=0.95) tokenizer.decode(outputs[0], skip_special_tokens=True) ```
650
[ [ -0.0037670135498046875, -0.030181884765625, -0.0027942657470703125, 0.04351806640625, -0.0439453125, -0.00864410400390625, -0.01386260986328125, -0.002429962158203125, -0.002925872802734375, 0.00139617919921875, -0.04058837890625, -0.035430908203125, -0.06280517...
fredymad/siebert_laxo_2e-5_16_2
2023-05-30T06:01:36.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/siebert_laxo_2e-5_16_2
0
2
transformers
2023-05-30T05:43:06
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: siebert_laxo_2e-5_16_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # siebert_laxo_2e-5_16_2 This model is a fine-tuned version of [fredymad/siebert_estricto_2e-5_16_2](https://huggingface.co/fredymad/siebert_estricto_2e-5_16_2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2449 - Accuracy: 0.9412 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 400 | 0.1726 | 0.9443 | | 0.191 | 2.0 | 800 | 0.2449 | 0.9412 | ### Framework versions - Transformers 4.29.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,415
[ [ -0.028900146484375, -0.042572021484375, 0.0146636962890625, 0.028350830078125, -0.027618408203125, -0.03631591796875, -0.01079559326171875, -0.028228759765625, 0.01229095458984375, 0.0236663818359375, -0.051910400390625, -0.040863037109375, -0.048370361328125, ...
fredymad/Financial_laxo_2e-5_16_2
2023-05-30T06:33:21.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/Financial_laxo_2e-5_16_2
0
2
transformers
2023-05-30T06:26:57
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: Financial_laxo_2e-5_16_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Financial_laxo_2e-5_16_2 This model is a fine-tuned version of [fredymad/Financial_estricto_2e-5_16_2](https://huggingface.co/fredymad/Financial_estricto_2e-5_16_2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3601 - Accuracy: 0.8762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 400 | 0.3033 | 0.8743 | | 0.3393 | 2.0 | 800 | 0.3601 | 0.8762 | ### Framework versions - Transformers 4.29.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,423
[ [ -0.02288818359375, -0.046112060546875, 0.0014476776123046875, 0.023162841796875, -0.02410888671875, -0.0233306884765625, -0.00849151611328125, -0.03472900390625, 0.00496673583984375, 0.0288238525390625, -0.046905517578125, -0.040771484375, -0.0428466796875, ...
casarf/comment_model_test_zucchi
2023-05-30T10:07:43.000Z
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
casarf
null
null
casarf/comment_model_test_zucchi
0
2
transformers
2023-05-30T10:04:00
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: casarf/comment_model_test_zucchi results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # casarf/comment_model_test_zucchi This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3930 - Validation Loss: 0.4734 - Train Accuracy: 0.7590 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 820, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6538 | 0.5351 | 0.7952 | 0 | | 0.5180 | 0.4653 | 0.7952 | 1 | | 0.3930 | 0.4734 | 0.7590 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,842
[ [ -0.046539306640625, -0.0511474609375, 0.0209808349609375, 0.0011644363403320312, -0.024200439453125, -0.0175628662109375, -0.014556884765625, -0.007114410400390625, 0.006443023681640625, -0.0036869049072265625, -0.049285888671875, -0.04388427734375, -0.062225341...
declare-lab/tango-full
2023-06-17T07:20:32.000Z
[ "transformers", "music", "en", "dataset:declare-lab/TangoPromptBank", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
declare-lab
null
null
declare-lab/tango-full
4
2
transformers
2023-05-30T10:27:30
--- license: cc-by-nc-sa-4.0 datasets: - declare-lab/TangoPromptBank language: - en tags: - music --- # TANGO: Text to Audio using iNstruction-Guided diffusiOn **TANGO** is a latent diffusion model for text-to-audio generation. **TANGO** can generate realistic audios including human sounds, animal sounds, natural and artificial sounds and sound effects from textual prompts. We use the frozen instruction-tuned LLM Flan-T5 as the text encoder and train a UNet based diffusion model for audio generation. We outperform current state-of-the-art models for audio generation across both objective and subjective metrics. We release our model, training, inference code and pre-trained checkpoints for the research community. 📣 We are releasing **Tango-Full** which was pre-trained on **TangoPromptBank**. ## Code Our code is released here: [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango) We uploaded several **TANGO** generated samples here: [https://tango-web.github.io/](https://tango-web.github.io/) Please follow the instructions in the repository for installation, usage and experiments. ## Quickstart Guide Download the **TANGO** model and generate audio from a text prompt: ```python import IPython import soundfile as sf from tango import Tango tango = Tango("declare-lab/tango-full-ft-audiocaps") prompt = "An audience cheering and clapping" audio = tango.generate(prompt) sf.write(f"{prompt}.wav", audio, samplerate=16000) IPython.display.Audio(data=audio, rate=16000) ``` [An audience cheering and clapping.webm](https://user-images.githubusercontent.com/13917097/233851915-e702524d-cd35-43f7-93e0-86ea579231a7.webm) The model will be automatically downloaded and saved in cache. Subsequent runs will load the model directly from cache. The `generate` function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for generating better quality audios. This comes at the cost of increased run-time. ```python prompt = "Rolling thunder with lightning strikes" audio = tango.generate(prompt, steps=200) IPython.display.Audio(data=audio, rate=16000) ``` [Rolling thunder with lightning strikes.webm](https://user-images.githubusercontent.com/13917097/233851929-90501e41-911d-453f-a00b-b215743365b4.webm) <!-- [MachineClicking](https://user-images.githubusercontent.com/25340239/233857834-bfda52b4-4fcc-48de-b47a-6a6ddcb3671b.mp4 "sample 1") --> Use the `generate_for_batch` function to generate multiple audio samples for a batch of text prompts: ```python prompts = [ "A car engine revving", "A dog barks and rustles with some clicking", "Water flowing and trickling" ] audios = tango.generate_for_batch(prompts, samples=2) ``` This will generate two samples for each of the three text prompts.
2,802
[ [ -0.017791748046875, -0.06494140625, 0.0258636474609375, 0.035003662109375, -0.014556884765625, -0.006122589111328125, -0.008331298828125, -0.0110321044921875, 0.0012798309326171875, 0.02813720703125, -0.055084228515625, -0.05340576171875, -0.028228759765625, ...
RosyB/distilbert-base-uncased-finetuned-cola
2023-05-31T09:19:35.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
RosyB
null
null
RosyB/distilbert-base-uncased-finetuned-cola
0
2
transformers
2023-05-30T11:51:52
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5471613867597194 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5251 - Matthews Correlation: 0.5472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5221 | 1.0 | 535 | 0.5371 | 0.4275 | | 0.3491 | 2.0 | 1070 | 0.5129 | 0.4946 | | 0.2382 | 3.0 | 1605 | 0.5251 | 0.5472 | | 0.1758 | 4.0 | 2140 | 0.7505 | 0.5378 | | 0.125 | 5.0 | 2675 | 0.7983 | 0.5414 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cpu - Datasets 1.18.4 - Tokenizers 0.13.2
2,040
[ [ -0.0230712890625, -0.0491943359375, 0.0108642578125, 0.018096923828125, -0.022796630859375, -0.01053619384765625, -0.0071868896484375, -0.0035533905029296875, 0.0214996337890625, 0.01087188720703125, -0.04730224609375, -0.035369873046875, -0.06109619140625, ...
fredymad/bert_Pfinal_2e-5_16_2
2023-06-02T10:50:02.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/bert_Pfinal_2e-5_16_2
0
2
transformers
2023-05-30T12:23:27
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: bert_Pfinal_2e-5_16_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_Pfinal_2e-5_16_2 This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2437 - F1: 0.7464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2444 | 1.0 | 669 | 0.1785 | 0.7321 | | 0.1729 | 2.0 | 1338 | 0.2437 | 0.7464 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,401
[ [ -0.0350341796875, -0.045654296875, 0.01222991943359375, 0.0246124267578125, -0.0321044921875, -0.0299072265625, -0.0233154296875, -0.019989013671875, 0.00518798828125, 0.018218994140625, -0.059051513671875, -0.044036865234375, -0.0460205078125, -0.0199737548...
fredymad/bert_Pfinal_2e-5_16_10
2023-06-02T11:41:58.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/bert_Pfinal_2e-5_16_10
0
2
transformers
2023-05-30T12:42:33
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: bert_Pfinal_2e-5_16_10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_Pfinal_2e-5_16_10 This model is a fine-tuned version of [fredymad/bert_Pfinal_2e-5_16_2](https://huggingface.co/fredymad/bert_Pfinal_2e-5_16_2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6295 - F1: 0.7355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.101 | 1.0 | 669 | 0.3000 | 0.7169 | | 0.1282 | 2.0 | 1338 | 0.2993 | 0.7361 | | 0.0548 | 3.0 | 2007 | 0.3924 | 0.7308 | | 0.0278 | 4.0 | 2676 | 0.4989 | 0.7221 | | 0.0229 | 5.0 | 3345 | 0.6089 | 0.6940 | | 0.0168 | 6.0 | 4014 | 0.5561 | 0.7361 | | 0.0082 | 7.0 | 4683 | 0.6112 | 0.7297 | | 0.008 | 8.0 | 5352 | 0.6101 | 0.7343 | | 0.0052 | 9.0 | 6021 | 0.6253 | 0.7400 | | 0.003 | 10.0 | 6690 | 0.6295 | 0.7355 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,866
[ [ -0.042938232421875, -0.041656494140625, 0.009857177734375, 0.01247406005859375, -0.0195159912109375, -0.0309906005859375, -0.00734710693359375, -0.01541900634765625, 0.0167236328125, 0.020111083984375, -0.05615234375, -0.039703369140625, -0.045684814453125, ...
CarnivoraCanis/berturk-cased-tr-fakenews
2023-06-05T11:06:03.000Z
[ "transformers", "pytorch", "bert", "text-classification", "Fake News", "tr", "endpoints_compatible", "region:us" ]
text-classification
CarnivoraCanis
null
null
CarnivoraCanis/berturk-cased-tr-fakenews
0
2
transformers
2023-05-30T13:28:07
--- language: - tr metrics: - accuracy - f1 pipeline_tag: text-classification tags: - Fake News --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Yakup Haydar Baba, İlkay Yağız Gür, Melih Önol, Hasan Atabey Ayhan, Deniz Bedran Yıldırım - **Language(s) (NLP):** Turkish - **Finetuned from model [optional]:** dbmdz/bert-base-turkish-uncased ## Uses - This model can be used for Turkish fake news detection ### Training Data The training dataset is taken from Mertoğlu, U., & Genç, B. (2020), "Automated fake news detection in the age of digital libraries," Information Technology and Libraries, 39(4). #### Training Hyperparameters - All training hyperparameters were left at their default values ## Model Card Authors - Yakup Haydar Baba
1,137
[ [ -0.040557861328125, -0.05999755859375, 0.0171356201171875, 0.0128936767578125, -0.046112060546875, -0.0238494873046875, 0.01381683349609375, -0.0271759033203125, 0.037628173828125, 0.033477783203125, -0.0394287109375, -0.06231689453125, -0.041595458984375, -...
YakovElm/Apache5Classic_Balance_DATA_ratio_Half
2023-05-30T15:14:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T13:55:52
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5918 - Train Accuracy: 0.6987 - Validation Loss: 0.6202 - Validation Accuracy: 0.6882 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6351 | 0.6418 | 0.6371 | 0.6426 | 0 | | 0.6209 | 0.6734 | 0.6201 | 0.6426 | 1 | | 0.5918 | 0.6987 | 0.6202 | 0.6882 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,820
[ [ -0.045867919921875, -0.04486083984375, 0.01064300537109375, 0.01218414306640625, -0.03302001953125, -0.0301666259765625, -0.007198333740234375, -0.0257568359375, 0.01447296142578125, 0.011322021484375, -0.057586669921875, -0.04119873046875, -0.050048828125, ...
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_Half
2023-05-30T17:25:41.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T14:05:02
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger5Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger5Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5204 - Train Accuracy: 0.7552 - Validation Loss: 0.5316 - Validation Accuracy: 0.7478 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6140 | 0.6549 | 0.5368 | 0.7699 | 0 | | 0.5649 | 0.7360 | 0.5733 | 0.6903 | 1 | | 0.5204 | 0.7552 | 0.5316 | 0.7478 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,830
[ [ -0.0472412109375, -0.039825439453125, 0.01319122314453125, 0.0072174072265625, -0.029022216796875, -0.0275421142578125, -0.008819580078125, -0.0245361328125, 0.01654052734375, 0.01331329345703125, -0.056549072265625, -0.04351806640625, -0.05023193359375, -0....
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_Half
2023-05-30T22:37:52.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T14:09:08
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS5Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS5Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5860 - Train Accuracy: 0.6772 - Validation Loss: 0.6540 - Validation Accuracy: 0.6667 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6459 | 0.6455 | 0.6410 | 0.6667 | 0 | | 0.6023 | 0.6825 | 0.6514 | 0.6190 | 1 | | 0.5860 | 0.6772 | 0.6540 | 0.6667 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.043975830078125, -0.036590576171875, 0.01329803466796875, 0.0023288726806640625, -0.030914306640625, -0.0249481201171875, -0.009521484375, -0.0269775390625, 0.0189208984375, 0.007061004638671875, -0.057220458984375, -0.0438232421875, -0.049774169921875, -...
fredymad/roberta_Pfinal_2e-5_16_2
2023-06-02T15:47:25.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/roberta_Pfinal_2e-5_16_2
0
2
transformers
2023-05-30T14:15:32
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: roberta_Pfinal_2e-5_16_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_Pfinal_2e-5_16_2 This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2494 - F1: 0.7330 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2608 | 1.0 | 669 | 0.2140 | 0.6623 | | 0.1754 | 2.0 | 1338 | 0.2494 | 0.7330 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,409
[ [ -0.02874755859375, -0.04840087890625, 0.01392364501953125, 0.01210784912109375, -0.0276947021484375, -0.04205322265625, -0.0147247314453125, -0.0156402587890625, 0.0030994415283203125, 0.0253753662109375, -0.055328369140625, -0.04296875, -0.046966552734375, ...
YakovElm/Jira5Classic_Balance_DATA_ratio_Half
2023-05-31T00:16:06.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira5Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T14:16:11
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira5Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira5Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4710 - Train Accuracy: 0.7959 - Validation Loss: 0.5563 - Validation Accuracy: 0.7239 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6163 | 0.6918 | 0.5537 | 0.7485 | 0 | | 0.5128 | 0.7735 | 0.5581 | 0.7485 | 1 | | 0.4710 | 0.7959 | 0.5563 | 0.7239 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,816
[ [ -0.0362548828125, -0.038482666015625, 0.01129913330078125, 0.0032958984375, -0.03253173828125, -0.0203857421875, -0.007068634033203125, -0.0239715576171875, 0.0227813720703125, 0.01007843017578125, -0.05389404296875, -0.043304443359375, -0.0489501953125, -0....
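These YakovElm checkpoints carry the `tf`/`bert` tags, so the TensorFlow model classes are the direct way to load them. A sketch under the assumption that the repo ships its tokenizer files alongside the weights:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

model_id = "YakovElm/Jira5Classic_Balance_DATA_ratio_Half"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFBertForSequenceClassification.from_pretrained(model_id)

# Hypothetical input; the card does not say what the two classes mean.
inputs = tokenizer("Example issue text.", return_tensors="tf")
logits = model(**inputs).logits
print(tf.argmax(logits, axis=-1).numpy())
```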
YakovElm/MariaDB5Classic_Balance_DATA_ratio_Half
2023-05-31T02:01:46.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T14:19:50
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB5Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB5Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5735 - Train Accuracy: 0.7211 - Validation Loss: 0.4513 - Validation Accuracy: 0.8281 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6439 | 0.6368 | 0.5531 | 0.7188 | 0 | | 0.5910 | 0.6842 | 0.5049 | 0.7188 | 1 | | 0.5735 | 0.7211 | 0.4513 | 0.8281 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,822
[ [ -0.041778564453125, -0.04156494140625, 0.01285552978515625, 0.004669189453125, -0.03143310546875, -0.02789306640625, -0.00415802001953125, -0.02337646484375, 0.02178955078125, 0.0142974853515625, -0.06158447265625, -0.04644775390625, -0.04534912109375, -0.02...
YakovElm/Qt5Classic_Balance_DATA_ratio_Half
2023-05-31T03:49:42.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T14:27:31
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt5Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt5Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5593 - Train Accuracy: 0.7382 - Validation Loss: 0.6399 - Validation Accuracy: 0.6328 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6323 | 0.6629 | 0.6276 | 0.6610 | 0 | | 0.6071 | 0.6704 | 0.6235 | 0.6102 | 1 | | 0.5593 | 0.7382 | 0.6399 | 0.6328 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,812
[ [ -0.038238525390625, -0.03045654296875, 0.016082763671875, 0.005298614501953125, -0.033538818359375, -0.0212249755859375, -0.00005054473876953125, -0.0184173583984375, 0.0084381103515625, 0.0098724365234375, -0.0552978515625, -0.04486083984375, -0.045623779296875...
rayjyate/bert-emotion
2023-05-30T14:33:36.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
rayjyate
null
null
rayjyate/bert-emotion
0
2
transformers
2023-05-30T14:27:51
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,970
[ [ -0.03363037109375, -0.045196533203125, 0.018218994140625, 0.022857666015625, -0.028564453125, -0.0169525146484375, -0.01788330078125, -0.0093231201171875, 0.01593017578125, -0.000675201416015625, -0.059661865234375, -0.053375244140625, -0.0584716796875, -0.0...
YakovElm/Hyperledger20Classic_512
2023-05-30T14:31:34.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_512
0
2
transformers
2023-05-30T14:30:55
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2642 - Train Accuracy: 0.9149 - Validation Loss: 0.2898 - Validation Accuracy: 0.8983 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3104 | 0.9035 | 0.3020 | 0.8983 | 0 | | 0.2724 | 0.9149 | 0.2950 | 0.8983 | 1 | | 0.2642 | 0.9149 | 0.2898 | 0.8983 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,792
[ [ -0.048431396484375, -0.0418701171875, 0.0222625732421875, 0.0036525726318359375, -0.0288848876953125, -0.0266571044921875, -0.0173187255859375, -0.025970458984375, 0.01296234130859375, 0.0150299072265625, -0.055084228515625, -0.049224853515625, -0.052978515625, ...
fredymad/robertuito_Pfinal
2023-06-02T12:18:55.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/robertuito_Pfinal
0
2
transformers
2023-05-30T14:32:10
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: robertuito_Pfinal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robertuito_Pfinal This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2105 - F1: 0.7639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2254 | 1.0 | 669 | 0.1746 | 0.7618 | | 0.1557 | 2.0 | 1338 | 0.2105 | 0.7639 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,389
[ [ -0.029632568359375, -0.034393310546875, 0.014312744140625, 0.016815185546875, -0.0316162109375, -0.0290069580078125, -0.0191802978515625, -0.0085906982421875, 0.0135345458984375, 0.037628173828125, -0.055023193359375, -0.050537109375, -0.04669189453125, -0.0...
kfkas/t5-large-korean-P2G
2023-06-03T11:55:24.000Z
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_keras_callback", "ko", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
kfkas
null
null
kfkas/t5-large-korean-P2G
5
2
transformers
2023-05-30T14:38:58
--- language: - ko tags: - generated_from_keras_callback model-index: - name: t5-large-korean-P2G results: [] --- # t5-large-korean-P2G This model fine-tunes lcw99/t5-large-korean-text-summary on 500,000 sentences from the 2021 National Institute of Korean Language newspaper corpus, converted with g2pK, so that it restores G2P-converted (phonetically spelled) Korean text back to its original spelling.<br> git : https://github.com/taemin6697<br> ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_dir = "kfkas/t5-large-korean-P2G" tokenizer = AutoTokenizer.from_pretrained(model_dir) model = AutoModelForSeq2SeqLM.from_pretrained(model_dir) text = "서규왕국 싸우디 태양광·풍녁 빨쩐 중심지 될 껃" inputs = tokenizer.encode(text, return_tensors="pt") output = model.generate(inputs) decoded_output = tokenizer.decode(output[0], skip_special_tokens=True) print(decoded_output)  # 석유왕국 사우디 태양광·풍력 발전 중심지 될 것 ``` ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float16 ### Training results ### Framework versions - Transformers 4.22.1 - TensorFlow 2.10.0 - Datasets 2.5.1 - Tokenizers 0.12.1
1,166
[ [ -0.0200653076171875, -0.0347900390625, 0.0157318115234375, 0.0433349609375, -0.047698974609375, -0.00238037109375, -0.01329803466796875, -0.011505126953125, 0.00870513916015625, 0.0135498046875, -0.0270538330078125, -0.043548583984375, -0.07012939453125, 0.0...
HEN10/layoutlmv2_Kb_qa04
2023-05-30T14:56:20.000Z
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "document-question-answering", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
document-question-answering
HEN10
null
null
HEN10/layoutlmv2_Kb_qa04
0
2
transformers
2023-05-30T14:45:40
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2_Kb_qa04 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2_Kb_qa04 This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.1587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9629 | 0.57 | 50 | 3.1486 | | 2.5694 | 1.14 | 100 | 4.1441 | | 2.331 | 1.7 | 150 | 3.4756 | | 1.8442 | 2.27 | 200 | 4.1663 | | 1.7225 | 2.84 | 250 | 4.1981 | | 1.5666 | 3.41 | 300 | 4.2186 | | 1.4984 | 3.98 | 350 | 4.1587 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.12.1
1,597
[ [ -0.0220947265625, -0.0293426513671875, 0.00971221923828125, 0.0188140869140625, -0.0244598388671875, -0.029541015625, 0.0034732818603515625, -0.00429534912109375, -0.0017423629760742188, 0.0284576416015625, -0.05352783203125, -0.04571533203125, -0.03591918945312...
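Since this checkpoint is tagged `document-question-answering`, the matching pipeline is the natural way to query it. A sketch only: LayoutLMv2 additionally depends on `detectron2`, and the pipeline OCRs the page with `pytesseract`, so both must be installed; `invoice.png` and the question are hypothetical:

```python
from transformers import pipeline

doc_qa = pipeline("document-question-answering", model="HEN10/layoutlmv2_Kb_qa04")
# The image can be a local path, URL, or PIL.Image; OCR runs automatically.
print(doc_qa(image="invoice.png", question="What is the invoice number?"))
```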
fredymad/distilbert_Pfinal_4CLASES_2e-5_16_2
2023-06-02T10:35:41.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
fredymad
null
null
fredymad/distilbert_Pfinal_4CLASES_2e-5_16_2
0
2
transformers
2023-05-30T15:12:56
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert_Pfinal_4CLASES_2e-5_16_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_Pfinal_4CLASES_2e-5_16_2 This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3103 - Accuracy: 0.8987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4274 | 1.0 | 669 | 0.3094 | 0.8972 | | 0.2899 | 2.0 | 1338 | 0.3103 | 0.8987 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,453
[ [ -0.027801513671875, -0.04638671875, 0.0137176513671875, 0.0254364013671875, -0.028411865234375, -0.017852783203125, -0.01178741455078125, -0.0091094970703125, 0.000988006591796875, 0.011871337890625, -0.048004150390625, -0.04986572265625, -0.05401611328125, ...
lynxvail/bert-emotion
2023-05-30T15:30:44.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
lynxvail
null
null
lynxvail/bert-emotion
0
2
transformers
2023-05-30T15:22:59
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,970
[ [ -0.03363037109375, -0.045196533203125, 0.018218994140625, 0.022857666015625, -0.028564453125, -0.0169525146484375, -0.01788330078125, -0.0093231201171875, 0.01593017578125, -0.000675201416015625, -0.059661865234375, -0.053375244140625, -0.0584716796875, -0.0...
gapvandyaisummer/bert-emotion
2023-05-30T15:41:09.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gapvandyaisummer
null
null
gapvandyaisummer/bert-emotion
0
2
transformers
2023-05-30T15:23:58
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,970
[ [ -0.03363037109375, -0.045196533203125, 0.018218994140625, 0.022857666015625, -0.028564453125, -0.0169525146484375, -0.01788330078125, -0.0093231201171875, 0.01593017578125, -0.000675201416015625, -0.059661865234375, -0.053375244140625, -0.0584716796875, -0.0...
jsilver/bert-emotion
2023-05-30T15:33:48.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
jsilver
null
null
jsilver/bert-emotion
0
2
transformers
2023-05-30T15:27:23
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,970
[ [ -0.03363037109375, -0.045196533203125, 0.018218994140625, 0.022857666015625, -0.028564453125, -0.0169525146484375, -0.01788330078125, -0.0093231201171875, 0.01593017578125, -0.000675201416015625, -0.059661865234375, -0.053375244140625, -0.0584716796875, -0.0...
YakovElm/Apache5Classic_Balance_DATA_ratio_1
2023-05-30T15:29:10.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T15:28:06
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_Balance_DATA_ratio_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_Balance_DATA_ratio_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6407 - Train Accuracy: 0.6296 - Validation Loss: 0.6324 - Validation Accuracy: 0.6382 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6980 | 0.5166 | 0.6806 | 0.5641 | 0 | | 0.6895 | 0.5470 | 0.6698 | 0.5755 | 1 | | 0.6407 | 0.6296 | 0.6324 | 0.6382 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,814
[ [ -0.04644775390625, -0.04461669921875, 0.0118255615234375, 0.0126495361328125, -0.03167724609375, -0.03265380859375, -0.009857177734375, -0.0251312255859375, 0.014068603515625, 0.0130462646484375, -0.055633544921875, -0.040069580078125, -0.0498046875, -0.0215...
FelixHonikker/bert-emotion
2023-05-30T15:34:36.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
FelixHonikker
null
null
FelixHonikker/bert-emotion
0
2
transformers
2023-05-30T15:29:08
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,970
[ [ -0.03363037109375, -0.045196533203125, 0.0182037353515625, 0.0228729248046875, -0.0285797119140625, -0.01690673828125, -0.01788330078125, -0.0093231201171875, 0.0159454345703125, -0.0006780624389648438, -0.059661865234375, -0.053375244140625, -0.0584716796875, ...
YuruiGao/bert-emotion
2023-05-30T15:36:04.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YuruiGao
null
null
YuruiGao/bert-emotion
0
2
transformers
2023-05-30T15:30:01
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval model-index: - name: bert-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,030
[ [ -0.03509521484375, -0.055328369140625, 0.0193939208984375, 0.0300750732421875, -0.037933349609375, -0.013336181640625, -0.0200347900390625, -0.012542724609375, 0.019683837890625, -0.006031036376953125, -0.061767578125, -0.043212890625, -0.053985595703125, -0...
paku90/bert-emotion
2023-05-30T15:48:07.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
paku90
null
null
paku90/bert-emotion
0
2
transformers
2023-05-30T15:32:59
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,970
[ [ -0.03363037109375, -0.045196533203125, 0.018218994140625, 0.022857666015625, -0.028564453125, -0.0169525146484375, -0.01788330078125, -0.0093231201171875, 0.01593017578125, -0.000675201416015625, -0.059661865234375, -0.053375244140625, -0.0584716796875, -0.0...
jonglet/mobile_vit
2023-05-30T17:21:27.000Z
[ "transformers", "pytorch", "tensorboard", "mobilevit", "image-classification", "generated_from_trainer", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
jonglet
null
null
jonglet/mobile_vit
0
2
transformers
2023-05-30T15:37:09
--- license: other tags: - generated_from_trainer metrics: - accuracy model-index: - name: mobile_vit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobile_vit This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1128 - Accuracy: 0.7615 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 200 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5859 | 0.78 | 1000 | 0.9741 | 0.7787 | | 0.1195 | 1.56 | 2000 | 1.1128 | 0.7615 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
1,413
[ [ -0.031951904296875, -0.037841796875, 0.01180267333984375, 0.006923675537109375, -0.033294677734375, -0.029937744140625, -0.004184722900390625, -0.00994110107421875, 0.021728515625, 0.01885986328125, -0.051849365234375, -0.040863037109375, -0.037445068359375, ...
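A minimal sketch of querying this MobileViT fine-tune with the image-classification pipeline (the image path is hypothetical; the card does not name the label set):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jonglet/mobile_vit")
print(classifier("cat.jpg"))  # top labels with scores
```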
YakovElm/Qt20Classic_512
2023-05-30T15:38:16.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_512
0
2
transformers
2023-05-30T15:37:34
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt20Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt20Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1699 - Train Accuracy: 0.9462 - Validation Loss: 0.1652 - Validation Accuracy: 0.9586 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2176 | 0.9448 | 0.1658 | 0.9586 | 0 | | 0.1966 | 0.9462 | 0.1557 | 0.9586 | 1 | | 0.1699 | 0.9462 | 0.1652 | 0.9586 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,774
[ [ -0.039947509765625, -0.03521728515625, 0.0227813720703125, 0.004642486572265625, -0.036865234375, -0.02325439453125, -0.01016998291015625, -0.0210723876953125, 0.008148193359375, 0.01149749755859375, -0.05474853515625, -0.047698974609375, -0.048370361328125, ...
platzi/platzi-distilroberta-base-mrpc-glue-andres_arboleda
2023-05-30T15:51:56.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
platzi
null
null
platzi/platzi-distilroberta-base-mrpc-glue-andres_arboleda
0
2
transformers
2023-05-30T15:43:48
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: platzi-distilroberta-base-mrpc-glue-andres_arboleda results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8308823529411765 - name: F1 type: f1 value: 0.8685714285714285 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-distilroberta-base-mrpc-glue-andres_arboleda This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6988 - Accuracy: 0.8309 - F1: 0.8686 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5239 | 1.09 | 500 | 0.4315 | 0.8186 | 0.8650 | | 0.3701 | 2.18 | 1000 | 0.6988 | 0.8309 | 0.8686 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,854
[ [ -0.02984619140625, -0.0445556640625, 0.01023101806640625, 0.0221405029296875, -0.0298614501953125, -0.025390625, -0.01042938232421875, -0.004154205322265625, 0.01067352294921875, 0.00843048095703125, -0.0496826171875, -0.04473876953125, -0.057647705078125, -...
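MRPC is a sentence-pair (paraphrase) task, so this checkpoint expects two sentences at once. With the pipeline API that is expressed as a `text`/`text_pair` dict; the sentences below are made up for illustration:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-andres_arboleda",
)
pair = {
    "text": "The company said profits rose in the quarter.",
    "text_pair": "Quarterly profits increased, the company reported.",
}
print(clf(pair))  # predicted label (paraphrase or not) with a score
```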
YakovElm/Apache5Classic_Balance_DATA_ratio_2
2023-05-30T15:50:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T15:49:55
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_Balance_DATA_ratio_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_Balance_DATA_ratio_2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5624 - Train Accuracy: 0.7304 - Validation Loss: 0.5657 - Validation Accuracy: 0.7135 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6344 | 0.6639 | 0.5897 | 0.6926 | 0 | | 0.6169 | 0.6854 | 0.5808 | 0.6964 | 1 | | 0.5624 | 0.7304 | 0.5657 | 0.7135 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,814
[ [ -0.04400634765625, -0.044158935546875, 0.01172637939453125, 0.01274871826171875, -0.032470703125, -0.031982421875, -0.00982666015625, -0.0267791748046875, 0.01153564453125, 0.01239013671875, -0.054290771484375, -0.0377197265625, -0.05023193359375, -0.0227661...
tcgyver/bert-emotion
2023-05-30T16:17:22.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
tcgyver
null
null
tcgyver/bert-emotion
0
2
transformers
2023-05-30T16:11:48
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,970
[ [ -0.03363037109375, -0.045257568359375, 0.018218994140625, 0.0228729248046875, -0.028564453125, -0.01690673828125, -0.01788330078125, -0.0093231201171875, 0.01593017578125, -0.0006737709045410156, -0.059661865234375, -0.053375244140625, -0.058502197265625, -0...
YakovElm/Apache5Classic_Balance_DATA_ratio_3
2023-05-30T16:18:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T16:17:54
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_Balance_DATA_ratio_3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_Balance_DATA_ratio_3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4840 - Train Accuracy: 0.7774 - Validation Loss: 0.5341 - Validation Accuracy: 0.7578 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5576 | 0.7532 | 0.5386 | 0.7507 | 0 | | 0.5328 | 0.7641 | 0.5329 | 0.7664 | 1 | | 0.4840 | 0.7774 | 0.5341 | 0.7578 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,814
[ [ -0.0450439453125, -0.044708251953125, 0.0140838623046875, 0.01277923583984375, -0.0323486328125, -0.033172607421875, -0.01007080078125, -0.027099609375, 0.01178741455078125, 0.0136871337890625, -0.053497314453125, -0.040863037109375, -0.049072265625, -0.0203...
YakovElm/Apache5Classic_Balance_DATA_ratio_4
2023-05-30T16:54:26.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T16:53:22
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_Balance_DATA_ratio_4 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_Balance_DATA_ratio_4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4243 - Train Accuracy: 0.8162 - Validation Loss: 0.4969 - Validation Accuracy: 0.8223 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5149 | 0.7844 | 0.4510 | 0.8200 | 0 | | 0.4849 | 0.7976 | 0.4359 | 0.8326 | 1 | | 0.4243 | 0.8162 | 0.4969 | 0.8223 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,814
[ [ -0.045440673828125, -0.04345703125, 0.0144500732421875, 0.01238250732421875, -0.031494140625, -0.031524658203125, -0.0100860595703125, -0.0266571044921875, 0.01296234130859375, 0.01387786865234375, -0.05450439453125, -0.04107666015625, -0.04840087890625, -0....
YakovElm/Apache10Classic_Balance_DATA_ratio_Half
2023-05-31T14:29:57.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache10Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T17:03:19
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache10Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache10Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5599 - Train Accuracy: 0.7195 - Validation Loss: 0.6311 - Validation Accuracy: 0.6831 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6404 | 0.6430 | 0.6082 | 0.6995 | 0 | | 0.6084 | 0.6885 | 0.6509 | 0.5902 | 1 | | 0.5599 | 0.7195 | 0.6311 | 0.6831 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,822
[ [ -0.04541015625, -0.048248291015625, 0.01026153564453125, 0.012969970703125, -0.0318603515625, -0.031219482421875, -0.00914764404296875, -0.0245819091796875, 0.0182342529296875, 0.01247406005859375, -0.0557861328125, -0.038055419921875, -0.05078125, -0.023559...
YakovElm/Apache10Classic_Balance_DATA_ratio_1
2023-05-31T14:45:02.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache10Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T17:14:25
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache10Classic_Balance_DATA_ratio_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache10Classic_Balance_DATA_ratio_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5633 - Train Accuracy: 0.7077 - Validation Loss: 0.7215 - Validation Accuracy: 0.5287 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6865 | 0.5519 | 0.6406 | 0.6352 | 0 | | 0.6470 | 0.6161 | 0.6177 | 0.6434 | 1 | | 0.5633 | 0.7077 | 0.7215 | 0.5287 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,816
[ [ -0.04608154296875, -0.047760009765625, 0.01161956787109375, 0.0132293701171875, -0.0308990478515625, -0.033416748046875, -0.01175689697265625, -0.024261474609375, 0.0168609619140625, 0.01345062255859375, -0.0545654296875, -0.037506103515625, -0.050537109375, ...
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_1
2023-05-30T17:38:56.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T17:38:20
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger5Classic_Balance_DATA_ratio_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger5Classic_Balance_DATA_ratio_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6162 - Train Accuracy: 0.6626 - Validation Loss: 0.6618 - Validation Accuracy: 0.5894 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6661 | 0.6018 | 0.6800 | 0.5960 | 0 | | 0.6524 | 0.6361 | 0.6548 | 0.6325 | 1 | | 0.6162 | 0.6626 | 0.6618 | 0.5894 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,824
[ [ -0.04803466796875, -0.03857421875, 0.01434326171875, 0.0081024169921875, -0.0276947021484375, -0.0296783447265625, -0.009857177734375, -0.0239105224609375, 0.015655517578125, 0.0144500732421875, -0.05535888671875, -0.043121337890625, -0.04949951171875, -0.01...
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_2
2023-05-30T17:58:09.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T17:57:17
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger5Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5169
- Train Accuracy: 0.7251
- Validation Loss: 0.6507
- Validation Accuracy: 0.6549
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6223     | 0.6676         | 0.5989          | 0.6460              | 0     |
| 0.5765     | 0.6713         | 0.5891          | 0.6637              | 1     |
| 0.5169     | 0.7251         | 0.6507          | 0.6549              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,824
[ [ -0.046234130859375, -0.0382080078125, 0.014862060546875, 0.0075836181640625, -0.02886962890625, -0.028778076171875, -0.0098876953125, -0.024810791015625, 0.01386260986328125, 0.013427734375, -0.05438232421875, -0.041595458984375, -0.050323486328125, -0.01994...
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_3
2023-05-30T18:22:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T18:22:19
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger5Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4065
- Train Accuracy: 0.8043
- Validation Loss: 0.5434
- Validation Accuracy: 0.7297
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5440     | 0.7441         | 0.5403          | 0.7313              | 0     |
| 0.4963     | 0.7523         | 0.5343          | 0.7396              | 1     |
| 0.4065     | 0.8043         | 0.5434          | 0.7297              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,824
[ [ -0.04705810546875, -0.039337158203125, 0.016265869140625, 0.00799560546875, -0.029052734375, -0.0304107666015625, -0.01023101806640625, -0.0255584716796875, 0.01377105712890625, 0.01515960693359375, -0.053375244140625, -0.0439453125, -0.049896240234375, -0.0...
alozanorius/bert-fine-tuned-cola
2023-05-30T19:49:40.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
alozanorius
null
null
alozanorius/bert-fine-tuned-cola
0
2
transformers
2023-05-30T18:40:48
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-fine-tuned-cola

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2971
- Validation Loss: 0.4283
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5021     | 0.4639          | 0     |
| 0.2971     | 0.4283          | 1     |

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.10.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,333
[ [ -0.037750244140625, -0.059478759765625, 0.01430511474609375, 0.01299285888671875, -0.032135009765625, -0.021209716796875, -0.0174560546875, -0.0207061767578125, 0.0139007568359375, 0.00970458984375, -0.05560302734375, -0.034423828125, -0.05169677734375, -0.0...
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_4
2023-05-30T18:54:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T18:53:00
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger5Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3930
- Train Accuracy: 0.8280
- Validation Loss: 0.4749
- Validation Accuracy: 0.7865
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4819     | 0.8015         | 0.5263          | 0.7838              | 0     |
| 0.4474     | 0.8073         | 0.4822          | 0.7812              | 1     |
| 0.3930     | 0.8280         | 0.4749          | 0.7865              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,824
[ [ -0.047088623046875, -0.038055419921875, 0.016632080078125, 0.007904052734375, -0.0283660888671875, -0.0288543701171875, -0.01110076904296875, -0.0248870849609375, 0.01378631591796875, 0.0142364501953125, -0.05389404296875, -0.044342041015625, -0.04937744140625, ...
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_Half
2023-05-30T19:03:15.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T19:02:40
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger10Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5467
- Train Accuracy: 0.7368
- Validation Loss: 0.5960
- Validation Accuracy: 0.6957
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6355     | 0.6534         | 0.5992          | 0.7065              | 0     |
| 0.6034     | 0.6824         | 0.6301          | 0.6359              | 1     |
| 0.5467     | 0.7368         | 0.5960          | 0.6957              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,832
[ [ -0.046478271484375, -0.04241943359375, 0.013427734375, 0.008087158203125, -0.02764892578125, -0.02880859375, -0.01082611083984375, -0.021942138671875, 0.0218048095703125, 0.013275146484375, -0.05450439453125, -0.038604736328125, -0.050994873046875, -0.022399...
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_1
2023-05-30T19:13:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T19:13:19
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger10Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5430
- Train Accuracy: 0.7143
- Validation Loss: 0.6359
- Validation Accuracy: 0.6041
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6801     | 0.5510         | 0.6161          | 0.6531              | 0     |
| 0.6168     | 0.6327         | 0.5900          | 0.6612              | 1     |
| 0.5430     | 0.7143         | 0.6359          | 0.6041              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,826
[ [ -0.04693603515625, -0.042388916015625, 0.0140380859375, 0.00836944580078125, -0.026397705078125, -0.030517578125, -0.0124664306640625, -0.02142333984375, 0.020721435546875, 0.0147705078125, -0.053314208984375, -0.03790283203125, -0.050323486328125, -0.021286...
t12e/instructor-base
2023-05-30T21:53:29.000Z
[ "sentence-transformers", "pytorch", "t5", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "prompt-retrieval", "text-reranking", "feature-extraction", "sentence-si...
sentence-similarity
t12e
null
null
t12e/instructor-base
0
2
sentence-transformers
2023-05-30T19:22:17
--- pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - prompt-retrieval - text-reranking - sentence-transformers - feature-extraction - sentence-similarity - transformers - t5 - English - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb language: en inference: false license: apache-2.0 model-index: - name: final_base_results results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 86.2089552238806 - type: ap value: 55.76273850794966 - type: f1 value: 81.26104211414781 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 88.35995000000001 - type: ap value: 84.18839957309655 - type: f1 value: 88.317619250081 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.64 - type: f1 value: 42.48663956478136 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 27.383000000000003 - type: map_at_10 value: 43.024 - type: map_at_100 value: 44.023 - type: map_at_1000 value: 44.025999999999996 - type: map_at_3 value: 37.684 - type: map_at_5 value: 40.884 - type: mrr_at_1 value: 28.094 - type: mrr_at_10 value: 43.315 - type: mrr_at_100 value: 44.313 - type: mrr_at_1000 value: 44.317 - type: mrr_at_3 value: 37.862 - type: mrr_at_5 value: 41.155 - type: ndcg_at_1 value: 27.383000000000003 - type: ndcg_at_10 value: 52.032000000000004 - type: ndcg_at_100 value: 56.19499999999999 - type: ndcg_at_1000 value: 56.272 - type: ndcg_at_3 value: 41.166000000000004 - type: ndcg_at_5 value: 46.92 - type: precision_at_1 value: 27.383000000000003 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.989 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 17.093 - type: precision_at_5 value: 13.044 - type: recall_at_1 value: 27.383000000000003 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 98.86200000000001 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 51.28 - type: recall_at_5 value: 65.22 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 39.68441054431849 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 29.188539728343844 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.173362687519784 - type: mrr value: 76.18860748362133 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman 
value: 82.30789953771232 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 77.03571428571428 - type: f1 value: 75.87384305045917 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 32.98041170516364 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 25.71652988451154 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.739999999999995 - type: map_at_10 value: 46.197 - type: map_at_100 value: 47.814 - type: map_at_1000 value: 47.934 - type: map_at_3 value: 43.091 - type: map_at_5 value: 44.81 - type: mrr_at_1 value: 41.059 - type: mrr_at_10 value: 52.292 - type: mrr_at_100 value: 52.978 - type: mrr_at_1000 value: 53.015 - type: mrr_at_3 value: 49.976 - type: mrr_at_5 value: 51.449999999999996 - type: ndcg_at_1 value: 41.059 - type: ndcg_at_10 value: 52.608 - type: ndcg_at_100 value: 57.965 - type: ndcg_at_1000 value: 59.775999999999996 - type: ndcg_at_3 value: 48.473 - type: ndcg_at_5 value: 50.407999999999994 - type: precision_at_1 value: 41.059 - type: precision_at_10 value: 9.943 - type: precision_at_100 value: 1.6070000000000002 - type: precision_at_1000 value: 0.20500000000000002 - type: precision_at_3 value: 23.413999999999998 - type: precision_at_5 value: 16.481 - type: recall_at_1 value: 33.739999999999995 - type: recall_at_10 value: 63.888999999999996 - type: recall_at_100 value: 85.832 - type: recall_at_1000 value: 97.475 - type: recall_at_3 value: 51.953 - type: recall_at_5 value: 57.498000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.169999999999998 - type: map_at_10 value: 41.455 - type: map_at_100 value: 42.716 - type: map_at_1000 value: 42.847 - type: map_at_3 value: 38.568999999999996 - type: map_at_5 value: 40.099000000000004 - type: mrr_at_1 value: 39.427 - type: mrr_at_10 value: 47.818 - type: mrr_at_100 value: 48.519 - type: mrr_at_1000 value: 48.558 - type: mrr_at_3 value: 45.86 - type: mrr_at_5 value: 46.936 - type: ndcg_at_1 value: 39.427 - type: ndcg_at_10 value: 47.181 - type: ndcg_at_100 value: 51.737 - type: ndcg_at_1000 value: 53.74 - type: ndcg_at_3 value: 43.261 - type: ndcg_at_5 value: 44.891 - type: precision_at_1 value: 39.427 - type: precision_at_10 value: 8.847 - type: precision_at_100 value: 1.425 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 20.785999999999998 - type: precision_at_5 value: 14.560999999999998 - type: recall_at_1 value: 31.169999999999998 - type: recall_at_10 value: 56.971000000000004 - type: recall_at_100 value: 76.31400000000001 - type: recall_at_1000 value: 88.93900000000001 - type: recall_at_3 value: 45.208 - type: recall_at_5 value: 49.923 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 39.682 - type: map_at_10 value: 52.766000000000005 - type: map_at_100 value: 
53.84100000000001 - type: map_at_1000 value: 53.898 - type: map_at_3 value: 49.291000000000004 - type: map_at_5 value: 51.365 - type: mrr_at_1 value: 45.266 - type: mrr_at_10 value: 56.093 - type: mrr_at_100 value: 56.763 - type: mrr_at_1000 value: 56.793000000000006 - type: mrr_at_3 value: 53.668000000000006 - type: mrr_at_5 value: 55.1 - type: ndcg_at_1 value: 45.266 - type: ndcg_at_10 value: 58.836 - type: ndcg_at_100 value: 62.863 - type: ndcg_at_1000 value: 63.912 - type: ndcg_at_3 value: 53.19199999999999 - type: ndcg_at_5 value: 56.125 - type: precision_at_1 value: 45.266 - type: precision_at_10 value: 9.492 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 23.762 - type: precision_at_5 value: 16.414 - type: recall_at_1 value: 39.682 - type: recall_at_10 value: 73.233 - type: recall_at_100 value: 90.335 - type: recall_at_1000 value: 97.452 - type: recall_at_3 value: 58.562000000000005 - type: recall_at_5 value: 65.569 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.743 - type: map_at_10 value: 34.016000000000005 - type: map_at_100 value: 35.028999999999996 - type: map_at_1000 value: 35.113 - type: map_at_3 value: 31.763 - type: map_at_5 value: 33.013999999999996 - type: mrr_at_1 value: 28.927000000000003 - type: mrr_at_10 value: 36.32 - type: mrr_at_100 value: 37.221 - type: mrr_at_1000 value: 37.281 - type: mrr_at_3 value: 34.105000000000004 - type: mrr_at_5 value: 35.371 - type: ndcg_at_1 value: 28.927000000000003 - type: ndcg_at_10 value: 38.474000000000004 - type: ndcg_at_100 value: 43.580000000000005 - type: ndcg_at_1000 value: 45.64 - type: ndcg_at_3 value: 34.035 - type: ndcg_at_5 value: 36.186 - type: precision_at_1 value: 28.927000000000003 - type: precision_at_10 value: 5.74 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 14.124 - type: precision_at_5 value: 9.74 - type: recall_at_1 value: 26.743 - type: recall_at_10 value: 49.955 - type: recall_at_100 value: 73.904 - type: recall_at_1000 value: 89.133 - type: recall_at_3 value: 38.072 - type: recall_at_5 value: 43.266 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.928 - type: map_at_10 value: 23.549 - type: map_at_100 value: 24.887 - type: map_at_1000 value: 25.018 - type: map_at_3 value: 21.002000000000002 - type: map_at_5 value: 22.256 - type: mrr_at_1 value: 21.02 - type: mrr_at_10 value: 27.898 - type: mrr_at_100 value: 29.018 - type: mrr_at_1000 value: 29.099999999999998 - type: mrr_at_3 value: 25.456 - type: mrr_at_5 value: 26.625 - type: ndcg_at_1 value: 21.02 - type: ndcg_at_10 value: 28.277 - type: ndcg_at_100 value: 34.54 - type: ndcg_at_1000 value: 37.719 - type: ndcg_at_3 value: 23.707 - type: ndcg_at_5 value: 25.482 - type: precision_at_1 value: 21.02 - type: precision_at_10 value: 5.361 - type: precision_at_100 value: 0.9809999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 11.401 - type: precision_at_5 value: 8.209 - type: recall_at_1 value: 16.928 - type: recall_at_10 value: 38.601 - type: recall_at_100 value: 65.759 - type: recall_at_1000 value: 88.543 - type: recall_at_3 value: 25.556 - type: recall_at_5 value: 30.447000000000003 - task: type: Retrieval dataset: type: 
BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.549000000000003 - type: map_at_10 value: 38.426 - type: map_at_100 value: 39.845000000000006 - type: map_at_1000 value: 39.956 - type: map_at_3 value: 35.372 - type: map_at_5 value: 37.204 - type: mrr_at_1 value: 35.034 - type: mrr_at_10 value: 44.041000000000004 - type: mrr_at_100 value: 44.95 - type: mrr_at_1000 value: 44.997 - type: mrr_at_3 value: 41.498000000000005 - type: mrr_at_5 value: 43.077 - type: ndcg_at_1 value: 35.034 - type: ndcg_at_10 value: 44.218 - type: ndcg_at_100 value: 49.958000000000006 - type: ndcg_at_1000 value: 52.019000000000005 - type: ndcg_at_3 value: 39.34 - type: ndcg_at_5 value: 41.892 - type: precision_at_1 value: 35.034 - type: precision_at_10 value: 7.911 - type: precision_at_100 value: 1.26 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 18.511 - type: precision_at_5 value: 13.205 - type: recall_at_1 value: 28.549000000000003 - type: recall_at_10 value: 56.035999999999994 - type: recall_at_100 value: 79.701 - type: recall_at_1000 value: 93.149 - type: recall_at_3 value: 42.275 - type: recall_at_5 value: 49.097 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.391000000000002 - type: map_at_10 value: 39.48 - type: map_at_100 value: 40.727000000000004 - type: map_at_1000 value: 40.835 - type: map_at_3 value: 36.234 - type: map_at_5 value: 37.877 - type: mrr_at_1 value: 35.959 - type: mrr_at_10 value: 44.726 - type: mrr_at_100 value: 45.531 - type: mrr_at_1000 value: 45.582 - type: mrr_at_3 value: 42.047000000000004 - type: mrr_at_5 value: 43.611 - type: ndcg_at_1 value: 35.959 - type: ndcg_at_10 value: 45.303 - type: ndcg_at_100 value: 50.683 - type: ndcg_at_1000 value: 52.818 - type: ndcg_at_3 value: 39.987 - type: ndcg_at_5 value: 42.243 - type: precision_at_1 value: 35.959 - type: precision_at_10 value: 8.241999999999999 - type: precision_at_100 value: 1.274 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 18.836 - type: precision_at_5 value: 13.196 - type: recall_at_1 value: 29.391000000000002 - type: recall_at_10 value: 57.364000000000004 - type: recall_at_100 value: 80.683 - type: recall_at_1000 value: 94.918 - type: recall_at_3 value: 42.263 - type: recall_at_5 value: 48.634 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.791749999999997 - type: map_at_10 value: 35.75541666666667 - type: map_at_100 value: 37.00791666666667 - type: map_at_1000 value: 37.12408333333333 - type: map_at_3 value: 33.02966666666667 - type: map_at_5 value: 34.56866666666667 - type: mrr_at_1 value: 31.744333333333337 - type: mrr_at_10 value: 39.9925 - type: mrr_at_100 value: 40.86458333333333 - type: mrr_at_1000 value: 40.92175000000001 - type: mrr_at_3 value: 37.68183333333334 - type: mrr_at_5 value: 39.028499999999994 - type: ndcg_at_1 value: 31.744333333333337 - type: ndcg_at_10 value: 40.95008333333334 - type: ndcg_at_100 value: 46.25966666666667 - type: ndcg_at_1000 value: 48.535333333333334 - type: ndcg_at_3 value: 36.43333333333333 - type: ndcg_at_5 value: 38.602333333333334 - type: precision_at_1 value: 31.744333333333337 - type: precision_at_10 value: 7.135166666666666 - type: precision_at_100 value: 1.1535833333333334 - type: precision_at_1000 value: 
0.15391666666666665 - type: precision_at_3 value: 16.713 - type: precision_at_5 value: 11.828416666666666 - type: recall_at_1 value: 26.791749999999997 - type: recall_at_10 value: 51.98625 - type: recall_at_100 value: 75.30358333333334 - type: recall_at_1000 value: 91.05433333333333 - type: recall_at_3 value: 39.39583333333333 - type: recall_at_5 value: 45.05925 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.219 - type: map_at_10 value: 29.162 - type: map_at_100 value: 30.049999999999997 - type: map_at_1000 value: 30.144 - type: map_at_3 value: 27.204 - type: map_at_5 value: 28.351 - type: mrr_at_1 value: 25.153 - type: mrr_at_10 value: 31.814999999999998 - type: mrr_at_100 value: 32.573 - type: mrr_at_1000 value: 32.645 - type: mrr_at_3 value: 29.934 - type: mrr_at_5 value: 30.946 - type: ndcg_at_1 value: 25.153 - type: ndcg_at_10 value: 33.099000000000004 - type: ndcg_at_100 value: 37.768 - type: ndcg_at_1000 value: 40.331 - type: ndcg_at_3 value: 29.473 - type: ndcg_at_5 value: 31.206 - type: precision_at_1 value: 25.153 - type: precision_at_10 value: 5.183999999999999 - type: precision_at_100 value: 0.8170000000000001 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 12.831999999999999 - type: precision_at_5 value: 8.895999999999999 - type: recall_at_1 value: 22.219 - type: recall_at_10 value: 42.637 - type: recall_at_100 value: 64.704 - type: recall_at_1000 value: 83.963 - type: recall_at_3 value: 32.444 - type: recall_at_5 value: 36.802 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.427999999999997 - type: map_at_10 value: 24.029 - type: map_at_100 value: 25.119999999999997 - type: map_at_1000 value: 25.257 - type: map_at_3 value: 22.016 - type: map_at_5 value: 23.143 - type: mrr_at_1 value: 21.129 - type: mrr_at_10 value: 27.750000000000004 - type: mrr_at_100 value: 28.666999999999998 - type: mrr_at_1000 value: 28.754999999999995 - type: mrr_at_3 value: 25.849 - type: mrr_at_5 value: 26.939999999999998 - type: ndcg_at_1 value: 21.129 - type: ndcg_at_10 value: 28.203 - type: ndcg_at_100 value: 33.44 - type: ndcg_at_1000 value: 36.61 - type: ndcg_at_3 value: 24.648999999999997 - type: ndcg_at_5 value: 26.316 - type: precision_at_1 value: 21.129 - type: precision_at_10 value: 5.055 - type: precision_at_100 value: 0.909 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 11.666 - type: precision_at_5 value: 8.3 - type: recall_at_1 value: 17.427999999999997 - type: recall_at_10 value: 36.923 - type: recall_at_100 value: 60.606 - type: recall_at_1000 value: 83.19 - type: recall_at_3 value: 26.845000000000002 - type: recall_at_5 value: 31.247000000000003 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.457000000000004 - type: map_at_10 value: 35.228 - type: map_at_100 value: 36.475 - type: map_at_1000 value: 36.585 - type: map_at_3 value: 32.444 - type: map_at_5 value: 34.046 - type: mrr_at_1 value: 30.784 - type: mrr_at_10 value: 39.133 - type: mrr_at_100 value: 40.11 - type: mrr_at_1000 value: 40.169 - type: mrr_at_3 value: 36.692 - type: mrr_at_5 value: 38.17 - type: ndcg_at_1 value: 30.784 - type: ndcg_at_10 value: 40.358 - type: ndcg_at_100 value: 46.119 - 
type: ndcg_at_1000 value: 48.428 - type: ndcg_at_3 value: 35.504000000000005 - type: ndcg_at_5 value: 37.864 - type: precision_at_1 value: 30.784 - type: precision_at_10 value: 6.800000000000001 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 15.920000000000002 - type: precision_at_5 value: 11.437 - type: recall_at_1 value: 26.457000000000004 - type: recall_at_10 value: 51.845 - type: recall_at_100 value: 77.046 - type: recall_at_1000 value: 92.892 - type: recall_at_3 value: 38.89 - type: recall_at_5 value: 44.688 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.378999999999998 - type: map_at_10 value: 37.373 - type: map_at_100 value: 39.107 - type: map_at_1000 value: 39.317 - type: map_at_3 value: 34.563 - type: map_at_5 value: 36.173 - type: mrr_at_1 value: 35.178 - type: mrr_at_10 value: 42.44 - type: mrr_at_100 value: 43.434 - type: mrr_at_1000 value: 43.482 - type: mrr_at_3 value: 39.987 - type: mrr_at_5 value: 41.370000000000005 - type: ndcg_at_1 value: 35.178 - type: ndcg_at_10 value: 42.82 - type: ndcg_at_100 value: 48.935 - type: ndcg_at_1000 value: 51.28 - type: ndcg_at_3 value: 38.562999999999995 - type: ndcg_at_5 value: 40.687 - type: precision_at_1 value: 35.178 - type: precision_at_10 value: 7.945 - type: precision_at_100 value: 1.524 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 17.721 - type: precision_at_5 value: 12.925 - type: recall_at_1 value: 29.378999999999998 - type: recall_at_10 value: 52.141999999999996 - type: recall_at_100 value: 79.49000000000001 - type: recall_at_1000 value: 93.782 - type: recall_at_3 value: 39.579 - type: recall_at_5 value: 45.462 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.814999999999998 - type: map_at_10 value: 27.383999999999997 - type: map_at_100 value: 28.483999999999998 - type: map_at_1000 value: 28.585 - type: map_at_3 value: 24.807000000000002 - type: map_at_5 value: 26.485999999999997 - type: mrr_at_1 value: 21.996 - type: mrr_at_10 value: 29.584 - type: mrr_at_100 value: 30.611 - type: mrr_at_1000 value: 30.684 - type: mrr_at_3 value: 27.11 - type: mrr_at_5 value: 28.746 - type: ndcg_at_1 value: 21.996 - type: ndcg_at_10 value: 32.024 - type: ndcg_at_100 value: 37.528 - type: ndcg_at_1000 value: 40.150999999999996 - type: ndcg_at_3 value: 27.016000000000002 - type: ndcg_at_5 value: 29.927999999999997 - type: precision_at_1 value: 21.996 - type: precision_at_10 value: 5.102 - type: precision_at_100 value: 0.856 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 11.583 - type: precision_at_5 value: 8.577 - type: recall_at_1 value: 19.814999999999998 - type: recall_at_10 value: 44.239 - type: recall_at_100 value: 69.269 - type: recall_at_1000 value: 89.216 - type: recall_at_3 value: 31.102999999999998 - type: recall_at_5 value: 38.078 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 11.349 - type: map_at_10 value: 19.436 - type: map_at_100 value: 21.282999999999998 - type: map_at_1000 value: 21.479 - type: map_at_3 value: 15.841 - type: map_at_5 value: 17.558 - type: mrr_at_1 value: 25.863000000000003 - type: mrr_at_10 value: 37.218 - type: mrr_at_100 value: 38.198 - 
type: mrr_at_1000 value: 38.236 - type: mrr_at_3 value: 33.409 - type: mrr_at_5 value: 35.602000000000004 - type: ndcg_at_1 value: 25.863000000000003 - type: ndcg_at_10 value: 27.953 - type: ndcg_at_100 value: 35.327 - type: ndcg_at_1000 value: 38.708999999999996 - type: ndcg_at_3 value: 21.985 - type: ndcg_at_5 value: 23.957 - type: precision_at_1 value: 25.863000000000003 - type: precision_at_10 value: 8.99 - type: precision_at_100 value: 1.6889999999999998 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 16.308 - type: precision_at_5 value: 12.912 - type: recall_at_1 value: 11.349 - type: recall_at_10 value: 34.581 - type: recall_at_100 value: 60.178 - type: recall_at_1000 value: 78.88199999999999 - type: recall_at_3 value: 20.041999999999998 - type: recall_at_5 value: 25.458 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 7.893 - type: map_at_10 value: 15.457 - type: map_at_100 value: 20.905 - type: map_at_1000 value: 22.116 - type: map_at_3 value: 11.593 - type: map_at_5 value: 13.134 - type: mrr_at_1 value: 57.49999999999999 - type: mrr_at_10 value: 65.467 - type: mrr_at_100 value: 66.022 - type: mrr_at_1000 value: 66.039 - type: mrr_at_3 value: 63.458000000000006 - type: mrr_at_5 value: 64.546 - type: ndcg_at_1 value: 45.875 - type: ndcg_at_10 value: 33.344 - type: ndcg_at_100 value: 36.849 - type: ndcg_at_1000 value: 44.03 - type: ndcg_at_3 value: 37.504 - type: ndcg_at_5 value: 34.892 - type: precision_at_1 value: 57.49999999999999 - type: precision_at_10 value: 25.95 - type: precision_at_100 value: 7.89 - type: precision_at_1000 value: 1.669 - type: precision_at_3 value: 40.333000000000006 - type: precision_at_5 value: 33.050000000000004 - type: recall_at_1 value: 7.893 - type: recall_at_10 value: 20.724999999999998 - type: recall_at_100 value: 42.516 - type: recall_at_1000 value: 65.822 - type: recall_at_3 value: 12.615000000000002 - type: recall_at_5 value: 15.482000000000001 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.760000000000005 - type: f1 value: 45.51690565701713 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 53.882 - type: map_at_10 value: 65.902 - type: map_at_100 value: 66.33 - type: map_at_1000 value: 66.348 - type: map_at_3 value: 63.75999999999999 - type: map_at_5 value: 65.181 - type: mrr_at_1 value: 58.041 - type: mrr_at_10 value: 70.133 - type: mrr_at_100 value: 70.463 - type: mrr_at_1000 value: 70.47 - type: mrr_at_3 value: 68.164 - type: mrr_at_5 value: 69.465 - type: ndcg_at_1 value: 58.041 - type: ndcg_at_10 value: 71.84700000000001 - type: ndcg_at_100 value: 73.699 - type: ndcg_at_1000 value: 74.06700000000001 - type: ndcg_at_3 value: 67.855 - type: ndcg_at_5 value: 70.203 - type: precision_at_1 value: 58.041 - type: precision_at_10 value: 9.427000000000001 - type: precision_at_100 value: 1.049 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 27.278000000000002 - type: precision_at_5 value: 17.693 - type: recall_at_1 value: 53.882 - type: recall_at_10 value: 85.99 - type: recall_at_100 value: 94.09100000000001 - type: recall_at_1000 value: 96.612 - type: recall_at_3 value: 75.25 - type: recall_at_5 value: 80.997 - task: type: Retrieval dataset: type: fiqa name: MTEB 
FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 19.165 - type: map_at_10 value: 31.845000000000002 - type: map_at_100 value: 33.678999999999995 - type: map_at_1000 value: 33.878 - type: map_at_3 value: 27.881 - type: map_at_5 value: 30.049999999999997 - type: mrr_at_1 value: 38.272 - type: mrr_at_10 value: 47.04 - type: mrr_at_100 value: 47.923 - type: mrr_at_1000 value: 47.973 - type: mrr_at_3 value: 44.985 - type: mrr_at_5 value: 46.150000000000006 - type: ndcg_at_1 value: 38.272 - type: ndcg_at_10 value: 39.177 - type: ndcg_at_100 value: 45.995000000000005 - type: ndcg_at_1000 value: 49.312 - type: ndcg_at_3 value: 36.135 - type: ndcg_at_5 value: 36.936 - type: precision_at_1 value: 38.272 - type: precision_at_10 value: 10.926 - type: precision_at_100 value: 1.809 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 24.331 - type: precision_at_5 value: 17.747 - type: recall_at_1 value: 19.165 - type: recall_at_10 value: 45.103 - type: recall_at_100 value: 70.295 - type: recall_at_1000 value: 90.592 - type: recall_at_3 value: 32.832 - type: recall_at_5 value: 37.905 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 32.397 - type: map_at_10 value: 44.83 - type: map_at_100 value: 45.716 - type: map_at_1000 value: 45.797 - type: map_at_3 value: 41.955999999999996 - type: map_at_5 value: 43.736999999999995 - type: mrr_at_1 value: 64.794 - type: mrr_at_10 value: 71.866 - type: mrr_at_100 value: 72.22 - type: mrr_at_1000 value: 72.238 - type: mrr_at_3 value: 70.416 - type: mrr_at_5 value: 71.304 - type: ndcg_at_1 value: 64.794 - type: ndcg_at_10 value: 54.186 - type: ndcg_at_100 value: 57.623000000000005 - type: ndcg_at_1000 value: 59.302 - type: ndcg_at_3 value: 49.703 - type: ndcg_at_5 value: 52.154999999999994 - type: precision_at_1 value: 64.794 - type: precision_at_10 value: 11.219 - type: precision_at_100 value: 1.394 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 30.767 - type: precision_at_5 value: 20.397000000000002 - type: recall_at_1 value: 32.397 - type: recall_at_10 value: 56.096999999999994 - type: recall_at_100 value: 69.696 - type: recall_at_1000 value: 80.88499999999999 - type: recall_at_3 value: 46.150999999999996 - type: recall_at_5 value: 50.993 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 81.1744 - type: ap value: 75.44973697032414 - type: f1 value: 81.09901117955782 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 19.519000000000002 - type: map_at_10 value: 31.025000000000002 - type: map_at_100 value: 32.275999999999996 - type: map_at_1000 value: 32.329 - type: map_at_3 value: 27.132 - type: map_at_5 value: 29.415999999999997 - type: mrr_at_1 value: 20.115 - type: mrr_at_10 value: 31.569000000000003 - type: mrr_at_100 value: 32.768 - type: mrr_at_1000 value: 32.816 - type: mrr_at_3 value: 27.748 - type: mrr_at_5 value: 29.956 - type: ndcg_at_1 value: 20.115 - type: ndcg_at_10 value: 37.756 - type: ndcg_at_100 value: 43.858000000000004 - type: ndcg_at_1000 value: 45.199 - type: ndcg_at_3 value: 29.818 - type: ndcg_at_5 value: 33.875 - type: precision_at_1 value: 20.115 - type: precision_at_10 value: 6.122 - type: precision_at_100 value: 
0.919 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 12.794 - type: precision_at_5 value: 9.731 - type: recall_at_1 value: 19.519000000000002 - type: recall_at_10 value: 58.62500000000001 - type: recall_at_100 value: 86.99 - type: recall_at_1000 value: 97.268 - type: recall_at_3 value: 37.002 - type: recall_at_5 value: 46.778 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.71865025079799 - type: f1 value: 93.38906173610519 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.2576379388965 - type: f1 value: 49.20405830249464 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.48486886348351 - type: f1 value: 64.92199176095157 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.59246805648958 - type: f1 value: 72.1222026389164 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.887642595096825 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.3764418784054 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.81544126336991 - type: mrr value: 32.82666576268031 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.185 - type: map_at_10 value: 11.158 - type: map_at_100 value: 14.041 - type: map_at_1000 value: 15.360999999999999 - type: map_at_3 value: 8.417 - type: map_at_5 value: 9.378 - type: mrr_at_1 value: 44.582 - type: mrr_at_10 value: 53.083999999999996 - type: mrr_at_100 value: 53.787 - type: mrr_at_1000 value: 53.824000000000005 - type: mrr_at_3 value: 51.187000000000005 - type: mrr_at_5 value: 52.379 - type: ndcg_at_1 value: 42.57 - type: ndcg_at_10 value: 31.593 - type: ndcg_at_100 value: 29.093999999999998 - type: ndcg_at_1000 value: 37.909 - type: ndcg_at_3 value: 37.083 - type: ndcg_at_5 value: 34.397 - type: precision_at_1 value: 43.963 - type: precision_at_10 value: 23.498 - type: precision_at_100 value: 7.6160000000000005 - type: precision_at_1000 value: 2.032 - type: precision_at_3 value: 34.572 - type: precision_at_5 value: 29.412 - type: recall_at_1 value: 5.185 - type: recall_at_10 value: 15.234 - type: recall_at_100 value: 29.49 - type: recall_at_1000 value: 62.273999999999994 - type: recall_at_3 value: 9.55 - type: recall_at_5 value: 11.103 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 23.803 - type: map_at_10 value: 38.183 - type: 
map_at_100 value: 39.421 - type: map_at_1000 value: 39.464 - type: map_at_3 value: 33.835 - type: map_at_5 value: 36.327 - type: mrr_at_1 value: 26.68 - type: mrr_at_10 value: 40.439 - type: mrr_at_100 value: 41.415 - type: mrr_at_1000 value: 41.443999999999996 - type: mrr_at_3 value: 36.612 - type: mrr_at_5 value: 38.877 - type: ndcg_at_1 value: 26.68 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 51.227999999999994 - type: ndcg_at_1000 value: 52.207 - type: ndcg_at_3 value: 37.511 - type: ndcg_at_5 value: 41.749 - type: precision_at_1 value: 26.68 - type: precision_at_10 value: 7.9750000000000005 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 17.449 - type: precision_at_5 value: 12.897 - type: recall_at_1 value: 23.803 - type: recall_at_10 value: 67.152 - type: recall_at_100 value: 90.522 - type: recall_at_1000 value: 97.743 - type: recall_at_3 value: 45.338 - type: recall_at_5 value: 55.106 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.473 - type: map_at_10 value: 84.452 - type: map_at_100 value: 85.101 - type: map_at_1000 value: 85.115 - type: map_at_3 value: 81.435 - type: map_at_5 value: 83.338 - type: mrr_at_1 value: 81.19 - type: mrr_at_10 value: 87.324 - type: mrr_at_100 value: 87.434 - type: mrr_at_1000 value: 87.435 - type: mrr_at_3 value: 86.31 - type: mrr_at_5 value: 87.002 - type: ndcg_at_1 value: 81.21000000000001 - type: ndcg_at_10 value: 88.19 - type: ndcg_at_100 value: 89.44 - type: ndcg_at_1000 value: 89.526 - type: ndcg_at_3 value: 85.237 - type: ndcg_at_5 value: 86.892 - type: precision_at_1 value: 81.21000000000001 - type: precision_at_10 value: 13.417000000000002 - type: precision_at_100 value: 1.537 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.59 - type: recall_at_1 value: 70.473 - type: recall_at_10 value: 95.367 - type: recall_at_100 value: 99.616 - type: recall_at_1000 value: 99.996 - type: recall_at_3 value: 86.936 - type: recall_at_5 value: 91.557 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 59.25776525253911 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.22135271663078 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.003 - type: map_at_10 value: 10.062999999999999 - type: map_at_100 value: 11.854000000000001 - type: map_at_1000 value: 12.145999999999999 - type: map_at_3 value: 7.242 - type: map_at_5 value: 8.652999999999999 - type: mrr_at_1 value: 19.7 - type: mrr_at_10 value: 29.721999999999998 - type: mrr_at_100 value: 30.867 - type: mrr_at_1000 value: 30.944 - type: mrr_at_3 value: 26.683 - type: mrr_at_5 value: 28.498 - type: ndcg_at_1 value: 19.7 - type: ndcg_at_10 value: 17.095 - type: ndcg_at_100 value: 24.375 - type: ndcg_at_1000 value: 29.831000000000003 - type: ndcg_at_3 value: 16.305 - type: ndcg_at_5 value: 14.291 - type: precision_at_1 value: 19.7 - type: precision_at_10 value: 8.799999999999999 - type: precision_at_100 value: 1.9349999999999998 - type: precision_at_1000 
value: 0.32399999999999995 - type: precision_at_3 value: 15.2 - type: precision_at_5 value: 12.540000000000001 - type: recall_at_1 value: 4.003 - type: recall_at_10 value: 17.877000000000002 - type: recall_at_100 value: 39.217 - type: recall_at_1000 value: 65.862 - type: recall_at_3 value: 9.242 - type: recall_at_5 value: 12.715000000000002 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 80.25888668589654 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 77.02037527837669 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 86.58432681008449 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.31697756099051 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 88.18867599667057 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 84.87853941747623 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 89.46479925383916 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 66.45272113649146 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.43357313527851 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.82761687254882 - type: mrr value: 93.46223674655047 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 44.583 - type: map_at_10 value: 52.978 - type: map_at_100 value: 53.803 - type: map_at_1000 value: 53.839999999999996 - type: map_at_3 value: 50.03300000000001 - type: map_at_5 value: 51.939 - type: mrr_at_1 value: 47.0 - type: mrr_at_10 value: 54.730000000000004 - type: mrr_at_100 value: 55.31399999999999 - type: mrr_at_1000 value: 55.346 - type: mrr_at_3 value: 52.0 - type: mrr_at_5 value: 53.783 - type: ndcg_at_1 value: 47.0 - type: ndcg_at_10 value: 57.82899999999999 - type: ndcg_at_100 value: 61.49400000000001 - type: ndcg_at_1000 value: 62.676 - type: ndcg_at_3 value: 52.373000000000005 - type: ndcg_at_5 value: 55.481 - type: precision_at_1 value: 47.0 - type: precision_at_10 value: 7.867 - type: precision_at_100 value: 0.997 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 20.556 - type: precision_at_5 value: 14.066999999999998 - type: recall_at_1 value: 44.583 - type: recall_at_10 value: 71.172 - type: 
recall_at_100 value: 87.7 - type: recall_at_1000 value: 97.333 - type: recall_at_3 value: 56.511 - type: recall_at_5 value: 64.206 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.66237623762376 - type: cos_sim_ap value: 90.35465126226322 - type: cos_sim_f1 value: 82.44575936883628 - type: cos_sim_precision value: 81.32295719844358 - type: cos_sim_recall value: 83.6 - type: dot_accuracy value: 99.66237623762376 - type: dot_ap value: 90.35464287920453 - type: dot_f1 value: 82.44575936883628 - type: dot_precision value: 81.32295719844358 - type: dot_recall value: 83.6 - type: euclidean_accuracy value: 99.66237623762376 - type: euclidean_ap value: 90.3546512622632 - type: euclidean_f1 value: 82.44575936883628 - type: euclidean_precision value: 81.32295719844358 - type: euclidean_recall value: 83.6 - type: manhattan_accuracy value: 99.65940594059406 - type: manhattan_ap value: 90.29220174849843 - type: manhattan_f1 value: 82.4987605354487 - type: manhattan_precision value: 81.80924287118977 - type: manhattan_recall value: 83.2 - type: max_accuracy value: 99.66237623762376 - type: max_ap value: 90.35465126226322 - type: max_f1 value: 82.4987605354487 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.0394225901397 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.27954189859326 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.99055979974896 - type: mrr value: 51.82745257193787 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.21655465344237 - type: cos_sim_spearman value: 29.853205339630172 - type: dot_pearson value: 30.216540628083564 - type: dot_spearman value: 29.868978894753027 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.2 - type: map_at_10 value: 1.398 - type: map_at_100 value: 7.406 - type: map_at_1000 value: 18.401 - type: map_at_3 value: 0.479 - type: map_at_5 value: 0.772 - type: mrr_at_1 value: 70.0 - type: mrr_at_10 value: 79.25999999999999 - type: mrr_at_100 value: 79.25999999999999 - type: mrr_at_1000 value: 79.25999999999999 - type: mrr_at_3 value: 77.333 - type: mrr_at_5 value: 78.133 - type: ndcg_at_1 value: 63.0 - type: ndcg_at_10 value: 58.548 - type: ndcg_at_100 value: 45.216 - type: ndcg_at_1000 value: 41.149 - type: ndcg_at_3 value: 60.641999999999996 - type: ndcg_at_5 value: 61.135 - type: precision_at_1 value: 70.0 - type: precision_at_10 value: 64.0 - type: precision_at_100 value: 46.92 - type: precision_at_1000 value: 18.642 - type: precision_at_3 value: 64.667 - type: precision_at_5 value: 66.4 - type: recall_at_1 value: 0.2 - type: recall_at_10 value: 1.6729999999999998 - type: recall_at_100 value: 
10.856 - type: recall_at_1000 value: 38.964999999999996 - type: recall_at_3 value: 0.504 - type: recall_at_5 value: 0.852 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.6629999999999998 - type: map_at_10 value: 8.601 - type: map_at_100 value: 14.354 - type: map_at_1000 value: 15.927 - type: map_at_3 value: 4.1930000000000005 - type: map_at_5 value: 5.655 - type: mrr_at_1 value: 18.367 - type: mrr_at_10 value: 34.466 - type: mrr_at_100 value: 35.235 - type: mrr_at_1000 value: 35.27 - type: mrr_at_3 value: 28.571 - type: mrr_at_5 value: 31.531 - type: ndcg_at_1 value: 14.285999999999998 - type: ndcg_at_10 value: 20.374 - type: ndcg_at_100 value: 33.532000000000004 - type: ndcg_at_1000 value: 45.561 - type: ndcg_at_3 value: 18.442 - type: ndcg_at_5 value: 18.076 - type: precision_at_1 value: 18.367 - type: precision_at_10 value: 20.204 - type: precision_at_100 value: 7.489999999999999 - type: precision_at_1000 value: 1.5630000000000002 - type: precision_at_3 value: 21.769 - type: precision_at_5 value: 20.408 - type: recall_at_1 value: 1.6629999999999998 - type: recall_at_10 value: 15.549 - type: recall_at_100 value: 47.497 - type: recall_at_1000 value: 84.524 - type: recall_at_3 value: 5.289 - type: recall_at_5 value: 8.035 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.8194 - type: ap value: 14.447702451658554 - type: f1 value: 55.13659412856185 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 63.310696095076416 - type: f1 value: 63.360434851097814 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.30677907335145 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.12386004649221 - type: cos_sim_ap value: 73.99096426215495 - type: cos_sim_f1 value: 68.18416968442834 - type: cos_sim_precision value: 66.86960933536275 - type: cos_sim_recall value: 69.55145118733509 - type: dot_accuracy value: 86.12386004649221 - type: dot_ap value: 73.99096813038672 - type: dot_f1 value: 68.18416968442834 - type: dot_precision value: 66.86960933536275 - type: dot_recall value: 69.55145118733509 - type: euclidean_accuracy value: 86.12386004649221 - type: euclidean_ap value: 73.99095984980165 - type: euclidean_f1 value: 68.18416968442834 - type: euclidean_precision value: 66.86960933536275 - type: euclidean_recall value: 69.55145118733509 - type: manhattan_accuracy value: 86.09405734040651 - type: manhattan_ap value: 73.96825745608601 - type: manhattan_f1 value: 68.13888179729383 - type: manhattan_precision value: 65.99901088031652 - type: manhattan_recall value: 70.42216358839049 - type: max_accuracy value: 86.12386004649221 - type: max_ap value: 73.99096813038672 - type: max_f1 value: 68.18416968442834 - task: type: PairClassification dataset: type: 
mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.99367407924865 - type: cos_sim_ap value: 86.19720829843081 - type: cos_sim_f1 value: 78.39889075384951 - type: cos_sim_precision value: 74.5110278818144 - type: cos_sim_recall value: 82.71481367416075 - type: dot_accuracy value: 88.99367407924865 - type: dot_ap value: 86.19718471454047 - type: dot_f1 value: 78.39889075384951 - type: dot_precision value: 74.5110278818144 - type: dot_recall value: 82.71481367416075 - type: euclidean_accuracy value: 88.99367407924865 - type: euclidean_ap value: 86.1972021422436 - type: euclidean_f1 value: 78.39889075384951 - type: euclidean_precision value: 74.5110278818144 - type: euclidean_recall value: 82.71481367416075 - type: manhattan_accuracy value: 88.95680521597392 - type: manhattan_ap value: 86.16659921351506 - type: manhattan_f1 value: 78.39125971550081 - type: manhattan_precision value: 74.82502799552073 - type: manhattan_recall value: 82.31444410224823 - type: max_accuracy value: 88.99367407924865 - type: max_ap value: 86.19720829843081 - type: max_f1 value: 78.39889075384951 ---

# hkunlp/instructor-base

We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨‍🏫 achieves state-of-the-art results on 70 diverse embedding tasks!

The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!

**************************** **Updates** ****************************

* 01/21: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-base) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-base), and [project page](https://instructor-embedding.github.io/)! Check them out!

## Quick start
<hr />

## Installation
```bash
pip install InstructorEmbedding
```

## Compute your customized embeddings
Then you can use the model like this to calculate domain-specific and task-aware embeddings:
```python
from InstructorEmbedding import INSTRUCTOR

# Load the pretrained INSTRUCTOR checkpoint
model = INSTRUCTOR('hkunlp/instructor-base')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"

# Each input is an [instruction, text] pair
embeddings = model.encode([[instruction, sentence]])
print(embeddings)
```
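If you want to sanity-check the output before moving on, a minimal sketch like the following can help. It assumes `encode` returns a NumPy array with one row per input pair, as in `sentence-transformers`; the exact embedding dimensionality depends on the checkpoint, so the printed shape is whatever the model returns rather than a guaranteed value:

```python
import numpy as np

# `embeddings` comes from the quick-start snippet above.
print(embeddings.shape)               # one row per [instruction, sentence] pair
print(np.linalg.norm(embeddings[0]))  # vector norm, relevant when mixing cosine and dot-product scoring
```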
## Use cases
<hr />

## Calculate embeddings for your customized texts
If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Represent the `domain` `text_type` for `task_objective`:

* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of the embedding, e.g., retrieve a document, classify the sentence, etc.
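To make the template concrete, here is a small sketch of composing such instructions. The helper name `build_instruction` and the example values are hypothetical illustrations, not part of the library; the library simply takes the finished instruction string:

```python
# Hypothetical helper: compose an INSTRUCTOR-style instruction from the
# template "Represent the <domain> <text_type> for <task_objective>:".
def build_instruction(text_type: str, domain: str = "", task_objective: str = "") -> str:
    parts = ["Represent the"]
    if domain:
        parts.append(domain)                   # optional, e.g., "Wikipedia"
    parts.append(text_type)                    # required, e.g., "sentence", "document"
    if task_objective:
        parts.append(f"for {task_objective}")  # optional, e.g., "retrieval"
    return " ".join(parts) + ":"

print(build_instruction("document", domain="Wikipedia", task_objective="retrieval"))
# -> "Represent the Wikipedia document for retrieval:"
```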
## Calculate Sentence similarities
You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
```python
from sklearn.metrics.pairwise import cosine_similarity

sentences_a = [['Represent the Science sentence: ', 'Parton energy loss in QCD matter'],
               ['Represent the Financial statement: ', 'The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ', 'The Chiral Phase Transition in Dissipative Dynamics'],
               ['Represent the Financial statement: ', 'The funds rose less than 0.5 per cent on Friday']]

# Encode each [instruction, text] pair, then compare group A against group B
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a, embeddings_b)
print(similarities)
```

## Information Retrieval
You can also use **customized embeddings** for information retrieval.
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

query = [['Represent the Wikipedia question for retrieving supporting documents: ', 'where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ', 'Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
          ['Represent the Wikipedia document for retrieval: ', "The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
          ['Represent the Wikipedia document for retrieval: ', 'Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]

query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings, corpus_embeddings)

# Index of the corpus document most similar to the query
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```

## Clustering
Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster

sentences = [['Represent the Medicine sentence for clustering: ', 'Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
             ['Represent the Medicine sentence for clustering: ', 'Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
             ['Represent the Medicine sentence for clustering: ', 'Fermion Bags in the Massive Gross-Neveu Model'],
             ['Represent the Medicine sentence for clustering: ', "QCD corrections to Associated t-tbar-H production at the Tevatron"],
             ['Represent the Medicine sentence for clustering: ', 'A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]

# Embed the pairs, then group them into two clusters with mini-batch k-means
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
```
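The task-aware embeddings also plug into downstream learners. The sketch below is illustrative only: the review sentences, labels, and instruction text are hypothetical, and it assumes `scikit-learn` is installed and that `model` is the INSTRUCTOR model loaded earlier. It fits a linear classifier on top of the embeddings:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled data: 0 = negative, 1 = positive sentiment
train_texts = [['Represent the Review sentence for classification: ', 'Absolutely loved this product.'],
               ['Represent the Review sentence for classification: ', 'Terrible quality, would not buy again.']]
train_labels = [1, 0]
test_texts = [['Represent the Review sentence for classification: ', 'Great value for the price.']]

# Reuse the INSTRUCTOR model defined earlier to embed the pairs
X_train = model.encode(train_texts)
X_test = model.encode(test_texts)

clf = LogisticRegression().fit(X_train, train_labels)
print(clf.predict(X_test))
```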
66,209
[ [ -0.01410675048828125, -0.07806396484375, 0.035125732421875, 0.0013952255249023438, -0.001987457275390625, -0.014556884765625, -0.01776123046875, -0.007350921630859375, 0.0192413330078125, 0.0206146240234375, -0.01806640625, -0.061309814453125, -0.034759521484375...