Dataset schema (min/max are string lengths for string columns and value ranges for int64 columns):

| Column | Type | Min | Max |
|:---------------|:--------------|-----:|------:|
| modelId | string | 4 | 111 |
| lastModified | string | 24 | 24 |
| tags | list | | |
| pipeline_tag | string | 5 | 30 |
| author | string | 2 | 34 |
| config | null | | |
| securityStatus | null | | |
| id | string | 4 | 111 |
| likes | int64 | 0 | 9.53k |
| downloads | int64 | 2 | 73.6M |
| library_name | string | 2 | 84 |
| created | timestamp[us] | | |
| card | string | 101 | 901k |
| card_len | int64 | 101 | 901k |
| embeddings | list | | |

Each record below lists these fields in this order, one value per line.
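The schema pairs each model card with a precomputed embedding vector, so a dump with this layout can be queried by semantic similarity. Below is a minimal sketch of that idea, assuming the rows are loadable with the `datasets` library; the dataset id is a placeholder (this extract does not name its source), and the indexing assumes each `embeddings` cell wraps exactly one vector, as in the truncated rows below.

```python
import numpy as np
from datasets import load_dataset

# Placeholder id -- this dump does not say which dataset it was exported from.
ds = load_dataset("your-user/model-cards-with-embeddings", split="train")

# Each `embeddings` cell is a list holding a single vector (see the rows below).
vecs = np.array([row["embeddings"][0] for row in ds], dtype=np.float32)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # normalize for cosine similarity

query = vecs[0]                # e.g. the first record, sagnikrayc/roberta-base-fever
scores = vecs @ query          # cosine similarity against every card
best = np.argsort(-scores)[:5]
for i in best:
    print(ds[int(i)]["modelId"], float(scores[i]))
```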
sagnikrayc/roberta-base-fever
2023-10-02T14:13:57.000Z
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "en", "dataset:copenlu/fever_gold_evidence", "license:afl-3.0", "endpoints_compatible", "region:us" ]
text-classification
sagnikrayc
null
null
sagnikrayc/roberta-base-fever
0
2
transformers
2023-05-26T19:47:58
--- license: afl-3.0 datasets: - copenlu/fever_gold_evidence language: - en metrics: - precision - recall - f1 --- ``` wandb: Run summary: wandb: eval/f1 0.8823 wandb: eval/loss 0.55886 wandb: eval/p 0.88088 wandb: eval/r 0.88558 ``` **Note**: 1. The input format is `[evidence_text][SEP][claim]`. 2. The model was trained/validated only on instances of length <= 512 tokens.
406
[ [ -0.01418304443359375, -0.06219482421875, 0.033966064453125, 0.0181884765625, -0.0018301010131835938, -0.01093292236328125, 0.00698089599609375, 0.00041365623474121094, 0.0281982421875, 0.03802490234375, -0.022705078125, -0.022979736328125, -0.038421630859375, ...
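The card above (and the other sagnikrayc FEVER cards that follow) pins down two usage details: inputs are formed as `[evidence_text][SEP][claim]`, and the model only saw sequences of at most 512 tokens. A minimal inference sketch under those notes; passing evidence and claim as a sentence pair is an assumption about how the separator was inserted (the card does not say), and the label names come from the checkpoint's config rather than the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sagnikrayc/roberta-base-fever")
model = AutoModelForSequenceClassification.from_pretrained("sagnikrayc/roberta-base-fever")

evidence = "The Eiffel Tower is located in Paris."
claim = "The Eiffel Tower is in France."

# Card note 1: evidence first, then claim; encoding them as a pair lets the
# tokenizer insert the separator token between them.
# Card note 2: the model was only trained/validated on inputs of <= 512 tokens.
inputs = tokenizer(evidence, claim, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label mapping comes from the checkpoint config
```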
sagnikrayc/roberta-large-fever
2023-05-26T20:12:44.000Z
[ "transformers", "pytorch", "roberta", "text-classification", "en", "dataset:copenlu/fever_gold_evidence", "license:afl-3.0", "endpoints_compatible", "region:us" ]
text-classification
sagnikrayc
null
null
sagnikrayc/roberta-large-fever
0
2
transformers
2023-05-26T19:52:33
--- license: afl-3.0 datasets: - copenlu/fever_gold_evidence language: - en metrics: - precision - recall - f1 --- ``` wandb: eval/f1 0.88556 wandb: eval/loss 0.62762 wandb: eval/p 0.88384 wandb: eval/r 0.8891 ``` **Note**: 1. The input format is `[evidence_text][SEP][claim]`. 2. The model was trained/validated only on instances of length <= 512 tokens.
386
[ [ -0.006305694580078125, -0.056915283203125, 0.033203125, 0.0226898193359375, 0.0030345916748046875, -0.0167388916015625, 0.01146697998046875, -0.004894256591796875, 0.023773193359375, 0.0290679931640625, -0.0160064697265625, -0.0198211669921875, -0.03759765625, ...
sagnikrayc/bert-large-cased-fever
2023-05-26T20:14:34.000Z
[ "transformers", "pytorch", "bert", "text-classification", "en", "dataset:copenlu/fever_gold_evidence", "license:afl-3.0", "endpoints_compatible", "region:us" ]
text-classification
sagnikrayc
null
null
sagnikrayc/bert-large-cased-fever
0
2
transformers
2023-05-26T20:01:44
--- license: afl-3.0 datasets: - copenlu/fever_gold_evidence language: - en metrics: - precision - recall - f1 --- ``` wandb: eval/f1 0.87196 wandb: eval/loss 0.73371 wandb: eval/p 0.87077 wandb: eval/r 0.8753 ``` **Note**: 1. The input format is `[evidence_text][SEP][claim]`. 2. The model was trained/validated only on instances of length <= 512 tokens.
387
[ [ -0.0088043212890625, -0.05712890625, 0.032623291015625, 0.023834228515625, 0.0017986297607421875, -0.0180206298828125, 0.01081085205078125, -0.0031890869140625, 0.0235137939453125, 0.0246429443359375, -0.015960693359375, -0.0214996337890625, -0.037628173828125, ...
sagnikrayc/bert-large-uncased-fever
2023-05-26T20:16:19.000Z
[ "transformers", "pytorch", "bert", "text-classification", "en", "dataset:copenlu/fever_gold_evidence", "license:afl-3.0", "endpoints_compatible", "region:us" ]
text-classification
sagnikrayc
null
null
sagnikrayc/bert-large-uncased-fever
0
2
transformers
2023-05-26T20:12:05
--- license: afl-3.0 datasets: - copenlu/fever_gold_evidence language: - en metrics: - precision - recall - f1 --- ``` wandb: eval/f1 0.87565 wandb: eval/loss 0.70447 wandb: eval/p 0.8745 wandb: eval/r 0.87847 ``` **Note**: 1. The input format is `[evidence_text][SEP][claim]`. 2. The model was trained/validated only on instances of length <= 512 tokens.
386
[ [ -0.006717681884765625, -0.056884765625, 0.03143310546875, 0.0249786376953125, 0.0015506744384765625, -0.0176849365234375, 0.006412506103515625, -0.0037364959716796875, 0.025146484375, 0.02264404296875, -0.01494598388671875, -0.025115966796875, -0.03704833984375,...
YakovElm/IntelDAOS15Classic_256
2023-05-26T20:40:30.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_256
0
2
transformers
2023-05-26T20:39:52
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS15Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS15Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1959 - Train Accuracy: 0.9460 - Validation Loss: 0.3646 - Validation Accuracy: 0.8859 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2340 | 0.9460 | 0.3859 | 0.8859 | 0 | | 0.2042 | 0.9460 | 0.3765 | 0.8859 | 1 | | 0.1959 | 0.9460 | 0.3646 | 0.8859 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,788
[ [ -0.044891357421875, -0.04254150390625, 0.0201416015625, 0.0013036727905273438, -0.034576416015625, -0.02886962890625, -0.0190582275390625, -0.02691650390625, 0.01446533203125, 0.0106353759765625, -0.05523681640625, -0.04827880859375, -0.051788330078125, -0.0...
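This card, like the other YakovElm cards in this dump, records the exact Keras optimizer dict and framework versions but no training code. A rough reconstruction of the setup it describes, assuming a two-label head (the card does not state the number of classes) and using only the hyperparameters listed above (Transformers 4.29, TensorFlow 2.12).

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Head size is an assumption; the card never says how many classes it predicts.
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Mirrors the card's optimizer dict: Adam, learning_rate=3e-05, beta_1=0.9,
# beta_2=0.999, epsilon=1e-08, clipnorm=1.0, no weight decay, float32 precision.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-8, clipnorm=1.0
)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=3)  # 3 epochs, per the results table
```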
Showroom/shoes_subcategory_classifier
2023-05-26T21:08:59.000Z
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "autotrain", "en", "dataset:Showroom/autotrain-data-shoes_categories", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
Showroom
null
null
Showroom/shoes_subcategory_classifier
0
2
transformers
2023-05-26T21:06:24
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain" datasets: - Showroom/autotrain-data-shoes_categories co2_eq_emissions: emissions: 0.19050292538257937 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 62075134986 - CO2 Emissions (in grams): 0.1905 ## Validation Metrics - Loss: 0.372 - Accuracy: 0.903 - Macro F1: 0.801 - Micro F1: 0.903 - Weighted F1: 0.902 - Macro Precision: 0.809 - Micro Precision: 0.903 - Weighted Precision: 0.903 - Macro Recall: 0.796 - Micro Recall: 0.903 - Weighted Recall: 0.903 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Showroom/autotrain-shoes_categories-62075134986 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Showroom/autotrain-shoes_categories-62075134986", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Showroom/autotrain-shoes_categories-62075134986", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,311
[ [ -0.03179931640625, -0.01751708984375, 0.0024261474609375, 0.01094818115234375, -0.0050201416015625, 0.00634765625, -0.0013885498046875, -0.01220703125, -0.00305938720703125, -0.000823974609375, -0.04876708984375, -0.035003662109375, -0.05059814453125, -0.005...
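The card's Python snippet stops at the raw forward pass. A short, self-contained continuation that decodes the logits into a class name; it reuses the repo id from the card's own usage examples and assumes the config carries an `id2label` mapping, which AutoTrain classifiers normally do.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repo id taken from the card's own cURL/Python examples.
repo = "Showroom/autotrain-shoes_categories-62075134986"
model = AutoModelForSequenceClassification.from_pretrained(repo, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(repo, use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumes the checkpoint config ships an id2label mapping (standard for AutoTrain).
probs = torch.softmax(logits, dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], round(float(probs[pred]), 3))
```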
YakovElm/IntelDAOS20Classic_256
2023-05-26T22:16:28.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_256
0
2
transformers
2023-05-26T22:15:52
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS20Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS20Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1503 - Train Accuracy: 0.9610 - Validation Loss: 0.3345 - Validation Accuracy: 0.9099 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2440 | 0.9300 | 0.3165 | 0.9099 | 0 | | 0.1550 | 0.9610 | 0.3098 | 0.9099 | 1 | | 0.1503 | 0.9610 | 0.3345 | 0.9099 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,788
[ [ -0.04632568359375, -0.040130615234375, 0.020904541015625, 0.0011196136474609375, -0.0321044921875, -0.0283966064453125, -0.01806640625, -0.0280609130859375, 0.0139923095703125, 0.0108184814453125, -0.055633544921875, -0.04766845703125, -0.051513671875, -0.02...
YakovElm/Apache5Classic_32
2023-05-26T22:37:57.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_32
0
2
transformers
2023-05-26T22:37:21
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2422 - Train Accuracy: 0.9181 - Validation Loss: 0.6553 - Validation Accuracy: 0.8129 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3112 | 0.9049 | 0.4947 | 0.8233 | 0 | | 0.2848 | 0.9120 | 0.4767 | 0.8233 | 1 | | 0.2422 | 0.9181 | 0.6553 | 0.8129 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,778
[ [ -0.044647216796875, -0.0430908203125, 0.0204010009765625, 0.0055389404296875, -0.03533935546875, -0.03143310546875, -0.01776123046875, -0.02777099609375, 0.00989532470703125, 0.01314544677734375, -0.05426025390625, -0.04833984375, -0.052947998046875, -0.0238...
YakovElm/Apache10Classic_32
2023-05-26T23:03:47.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache10Classic_32
0
2
transformers
2023-05-26T23:03:04
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache10Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache10Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1864 - Train Accuracy: 0.9398 - Validation Loss: 0.4344 - Validation Accuracy: 0.8631 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2450 | 0.9344 | 0.4172 | 0.8644 | 0 | | 0.2183 | 0.9383 | 0.4524 | 0.8644 | 1 | | 0.1864 | 0.9398 | 0.4344 | 0.8631 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,780
[ [ -0.044952392578125, -0.044525146484375, 0.0210418701171875, 0.006683349609375, -0.0357666015625, -0.03192138671875, -0.0183258056640625, -0.0277557373046875, 0.01119232177734375, 0.01453399658203125, -0.053802490234375, -0.046966552734375, -0.05340576171875, ...
YakovElm/Apache15Classic_32
2023-05-26T23:27:43.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache15Classic_32
0
2
transformers
2023-05-26T23:27:09
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache15Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache15Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1627 - Train Accuracy: 0.9533 - Validation Loss: 0.4178 - Validation Accuracy: 0.8924 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1997 | 0.9498 | 0.3502 | 0.8924 | 0 | | 0.1803 | 0.9542 | 0.3673 | 0.8924 | 1 | | 0.1627 | 0.9533 | 0.4178 | 0.8924 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,780
[ [ -0.0440673828125, -0.044219970703125, 0.0206146240234375, 0.006885528564453125, -0.0364990234375, -0.032135009765625, -0.0176849365234375, -0.025970458984375, 0.01166534423828125, 0.01361846923828125, -0.05413818359375, -0.047515869140625, -0.053009033203125, ...
YakovElm/Jira5Classic_256
2023-05-26T23:47:50.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira5Classic_256
0
2
transformers
2023-05-26T23:47:14
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira5Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira5Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4111 - Train Accuracy: 0.8090 - Validation Loss: 1.1085 - Validation Accuracy: 0.5237 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5551 | 0.7429 | 0.8002 | 0.4858 | 0 | | 0.4860 | 0.7712 | 0.7765 | 0.4890 | 1 | | 0.4111 | 0.8090 | 1.1085 | 0.5237 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,776
[ [ -0.041107177734375, -0.03985595703125, 0.0200958251953125, -0.0007505416870117188, -0.033843994140625, -0.026519775390625, -0.0165557861328125, -0.0258331298828125, 0.01424407958984375, 0.012420654296875, -0.052093505859375, -0.04876708984375, -0.05047607421875,...
YakovElm/Apache20Classic_32
2023-05-26T23:51:31.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache20Classic_32
0
2
transformers
2023-05-26T23:50:38
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache20Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache20Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1517 - Train Accuracy: 0.9624 - Validation Loss: 0.3060 - Validation Accuracy: 0.9055 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1711 | 0.9581 | 0.4085 | 0.9055 | 0 | | 0.1568 | 0.9624 | 0.3792 | 0.9055 | 1 | | 0.1517 | 0.9624 | 0.3060 | 0.9055 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,780
[ [ -0.044586181640625, -0.045074462890625, 0.020538330078125, 0.0070343017578125, -0.035186767578125, -0.033294677734375, -0.018524169921875, -0.0272369384765625, 0.010406494140625, 0.01363372802734375, -0.05419921875, -0.04791259765625, -0.0528564453125, -0.02...
YakovElm/Apache5Classic_64
2023-05-27T00:28:21.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_64
0
2
transformers
2023-05-27T00:27:46
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2482 - Train Accuracy: 0.9136 - Validation Loss: 0.5374 - Validation Accuracy: 0.7947 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3112 | 0.9051 | 0.5143 | 0.8233 | 0 | | 0.2845 | 0.9116 | 0.4954 | 0.8220 | 1 | | 0.2482 | 0.9136 | 0.5374 | 0.7947 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,778
[ [ -0.04473876953125, -0.043182373046875, 0.0199737548828125, 0.00540924072265625, -0.035369873046875, -0.0323486328125, -0.0180511474609375, -0.0279541015625, 0.00962066650390625, 0.01415252685546875, -0.054046630859375, -0.048828125, -0.053497314453125, -0.02...
YakovElm/Apache10Classic_64
2023-05-27T01:07:04.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache10Classic_64
0
2
transformers
2023-05-27T01:06:30
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache10Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache10Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1916 - Train Accuracy: 0.9381 - Validation Loss: 0.4588 - Validation Accuracy: 0.8644 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2421 | 0.9353 | 0.4801 | 0.8644 | 0 | | 0.2256 | 0.9383 | 0.4038 | 0.8644 | 1 | | 0.1916 | 0.9381 | 0.4588 | 0.8644 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,780
[ [ -0.04534912109375, -0.04522705078125, 0.02056884765625, 0.006397247314453125, -0.035888671875, -0.0325927734375, -0.01751708984375, -0.02777099609375, 0.01097869873046875, 0.0146636962890625, -0.053741455078125, -0.047149658203125, -0.052825927734375, -0.023...
YakovElm/Jira10Classic_256
2023-05-27T01:19:24.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira10Classic_256
0
2
transformers
2023-05-27T01:18:48
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira10Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira10Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3342 - Train Accuracy: 0.8405 - Validation Loss: 0.7061 - Validation Accuracy: 0.6088 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5118 | 0.7817 | 0.8080 | 0.4921 | 0 | | 0.4265 | 0.7849 | 0.8772 | 0.4921 | 1 | | 0.3342 | 0.8405 | 0.7061 | 0.6088 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,778
[ [ -0.04119873046875, -0.04144287109375, 0.0198822021484375, 0.0003428459167480469, -0.033721923828125, -0.0276641845703125, -0.0169219970703125, -0.0251922607421875, 0.016265869140625, 0.01241302490234375, -0.0516357421875, -0.047149658203125, -0.05096435546875, ...
YakovElm/Apache15Classic_64
2023-05-27T01:46:32.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache15Classic_64
0
2
transformers
2023-05-27T01:45:58
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache15Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache15Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1664 - Train Accuracy: 0.9542 - Validation Loss: 0.3210 - Validation Accuracy: 0.8924 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1964 | 0.9533 | 0.3529 | 0.8924 | 0 | | 0.1834 | 0.9542 | 0.3501 | 0.8924 | 1 | | 0.1664 | 0.9542 | 0.3210 | 0.8924 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,780
[ [ -0.04498291015625, -0.043975830078125, 0.0209808349609375, 0.0065155029296875, -0.03564453125, -0.032745361328125, -0.0177459716796875, -0.0265655517578125, 0.01102447509765625, 0.0137786865234375, -0.053131103515625, -0.047637939453125, -0.052734375, -0.025...
raygx/BERT-NepSA-T2
2023-07-22T11:28:22.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
raygx
null
null
raygx/BERT-NepSA-T2
0
2
transformers
2023-05-27T02:03:56
--- license: mit base_model: Shushant/nepaliBERT tags: - generated_from_keras_callback model-index: - name: BERT-NepSA-T2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # BERT-NepSA-T2 This model is a fine-tuned version of [Shushant/nepaliBERT](https://huggingface.co/Shushant/nepaliBERT) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0001} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
1,133
[ [ -0.024566650390625, -0.052215576171875, 0.013763427734375, 0.01471710205078125, -0.044921875, -0.026611328125, -0.005619049072265625, -0.031585693359375, 0.0275115966796875, 0.01113128662109375, -0.049468994140625, -0.0295257568359375, -0.06036376953125, -0....
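Unlike the YakovElm cards, this card records an `AdamWeightDecay` optimizer (learning rate 1e-06, weight decay rate 0.0001, epsilon 1e-07). A minimal sketch of recreating that optimizer with the `AdamWeightDecay` class shipped in `transformers`; the three-class head is a guess, since the card does not state the label count.

```python
import tensorflow as tf
from transformers import AdamWeightDecay, TFAutoModelForSequenceClassification

# Label count is a guess -- the card never states how many sentiment classes it predicts.
# (Add from_pt=True if the base checkpoint only ships PyTorch weights.)
model = TFAutoModelForSequenceClassification.from_pretrained("Shushant/nepaliBERT", num_labels=3)

# Mirrors the card's optimizer dict: AdamWeightDecay, learning_rate=1e-06,
# weight_decay_rate=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-07.
optimizer = AdamWeightDecay(
    learning_rate=1e-6,
    weight_decay_rate=1e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```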
YakovElm/Apache20Classic_64
2023-05-27T02:30:58.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache20Classic_64
0
2
transformers
2023-05-27T02:30:21
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache20Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache20Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1374 - Train Accuracy: 0.9624 - Validation Loss: 0.3081 - Validation Accuracy: 0.9055 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1664 | 0.9620 | 0.3171 | 0.9055 | 0 | | 0.1522 | 0.9624 | 0.2966 | 0.9055 | 1 | | 0.1374 | 0.9624 | 0.3081 | 0.9055 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,780
[ [ -0.044921875, -0.044830322265625, 0.0204315185546875, 0.0061798095703125, -0.035552978515625, -0.033294677734375, -0.01849365234375, -0.0273590087890625, 0.0106353759765625, 0.013946533203125, -0.054168701171875, -0.048004150390625, -0.05328369140625, -0.024...
YakovElm/Hyperledger5Classic_32
2023-05-27T02:47:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_32
0
2
transformers
2023-05-27T02:46:41
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger5Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger5Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3550 - Train Accuracy: 0.8578 - Validation Loss: 0.4350 - Validation Accuracy: 0.8361 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4133 | 0.8547 | 0.4381 | 0.8361 | 0 | | 0.3935 | 0.8554 | 0.4381 | 0.8361 | 1 | | 0.3550 | 0.8578 | 0.4350 | 0.8361 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,788
[ [ -0.047332763671875, -0.037139892578125, 0.0219573974609375, 0.0030460357666015625, -0.031280517578125, -0.0278167724609375, -0.0180511474609375, -0.02716064453125, 0.00814056396484375, 0.01297760009765625, -0.053253173828125, -0.0506591796875, -0.053619384765625...
YakovElm/Jira15Classic_256
2023-05-27T02:50:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira15Classic_256
0
2
transformers
2023-05-27T02:49:59
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira15Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira15Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3963 - Train Accuracy: 0.7912 - Validation Loss: 0.6595 - Validation Accuracy: 0.6593 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5200 | 0.7692 | 0.8593 | 0.5205 | 0 | | 0.4517 | 0.7922 | 0.8734 | 0.5205 | 1 | | 0.3963 | 0.7912 | 0.6595 | 0.6593 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,778
[ [ -0.04150390625, -0.04150390625, 0.02001953125, -0.00006681680679321289, -0.03363037109375, -0.02935791015625, -0.0167236328125, -0.0258636474609375, 0.0155487060546875, 0.0120849609375, -0.05181884765625, -0.048004150390625, -0.05126953125, -0.02786254882812...
YakovElm/Hyperledger10Classic_32
2023-05-27T03:02:45.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_32
0
2
transformers
2023-05-27T03:02:01
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger10Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger10Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2991 - Train Accuracy: 0.8845 - Validation Loss: 0.3973 - Validation Accuracy: 0.8548 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3675 | 0.8779 | 0.3861 | 0.8600 | 0 | | 0.3449 | 0.8838 | 0.3911 | 0.8600 | 1 | | 0.2991 | 0.8845 | 0.3973 | 0.8548 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,790
[ [ -0.046295166015625, -0.0401611328125, 0.021820068359375, 0.003009796142578125, -0.0306243896484375, -0.0284576416015625, -0.02008056640625, -0.0258636474609375, 0.01168060302734375, 0.01364898681640625, -0.051544189453125, -0.04693603515625, -0.05322265625, ...
YakovElm/Hyperledger15Classic_32
2023-05-27T03:18:38.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_32
0
2
transformers
2023-05-27T03:17:57
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger15Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger15Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2991 - Train Accuracy: 0.9028 - Validation Loss: 0.3422 - Validation Accuracy: 0.8807 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3396 | 0.8914 | 0.3557 | 0.8807 | 0 | | 0.3083 | 0.9035 | 0.3524 | 0.8807 | 1 | | 0.2991 | 0.9028 | 0.3422 | 0.8807 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,790
[ [ -0.047637939453125, -0.04132080078125, 0.0216522216796875, 0.004505157470703125, -0.0308837890625, -0.0289764404296875, -0.019927978515625, -0.0255584716796875, 0.01024627685546875, 0.01372528076171875, -0.052215576171875, -0.04901123046875, -0.052276611328125, ...
YakovElm/Hyperledger20Classic_32
2023-05-27T03:34:15.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_32
0
2
transformers
2023-05-27T03:33:41
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2565 - Train Accuracy: 0.9149 - Validation Loss: 0.3101 - Validation Accuracy: 0.8983 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3031 | 0.9059 | 0.3074 | 0.8983 | 0 | | 0.2700 | 0.9149 | 0.2988 | 0.8983 | 1 | | 0.2565 | 0.9149 | 0.3101 | 0.8983 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,790
[ [ -0.04766845703125, -0.040374755859375, 0.0222320556640625, 0.0038013458251953125, -0.03045654296875, -0.027984619140625, -0.0183258056640625, -0.0262451171875, 0.01003265380859375, 0.014678955078125, -0.0540771484375, -0.048583984375, -0.053466796875, -0.020...
openbmb/cpm-bee-5b
2023-07-03T11:34:49.000Z
[ "transformers", "pytorch", "cpmbee", "feature-extraction", "custom_code", "en", "zh", "region:us" ]
feature-extraction
openbmb
null
null
openbmb/cpm-bee-5b
6
2
transformers
2023-05-27T03:59:34
---
language:
- en
- zh
---

# CPM-Bee

**CPM-Bee** is a fully open-source, commercially usable Chinese-English bilingual base model with a capacity of ten billion parameters. It is the second milestone achieved through the training process of [**CPM-live**](https://live.openbmb.org/). Utilizing the Transformer auto-regressive architecture, CPM-Bee has been pre-trained on an extensive corpus of trillion-scale tokens, thereby possessing remarkable foundational capabilities.

## Model description

- **Open-source and Commercially Usable**: OpenBMB adheres to the spirit of open source, aiming to make large-scale models accessible to everyone. CPM-Bee, as a foundation model, is fully open-source and available for commercial use, contributing to the advancement of the field of large-scale models.
- **Excellent Performance in Chinese and English**: CPM-Bee's base model has undergone rigorous selection and balancing of pre-training data, resulting in outstanding performance in both Chinese and English. For detailed information regarding evaluation tasks and results, please refer to the assessment documentation.
- **Vast and High-quality Corpus**: CPM-Bee, as a base model, has been trained on an extensive corpus of over a trillion tokens, making it one of the models with the highest volume of training data within the open-source community. Furthermore, we have implemented stringent selection, cleaning, and post-processing procedures on the pre-training corpus to ensure its quality.
- **Support for the OpenBMB System**: The OpenBMB system provides a comprehensive ecosystem of tools and scripts for high-performance pre-training, adaptation, compression, deployment, and tool development. CPM-Bee, as a base model, is accompanied by all the necessary tool scripts, enabling developers to efficiently utilize and explore advanced functionalities.
- **Conversational and Tool Usage Capabilities**: Building upon OpenBMB's exploration in instruction-based fine-tuning and tool learning, we have performed fine-tuning on top of the CPM-Bee base model, resulting in an instance model with powerful conversational and tool-usage capabilities. The API and beta testing for this model will be made available in the near future.

## Intended uses & limitations

You can use the raw model for many NLP tasks, like text generation, or fine-tune it on a downstream task.

### How to use

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-bee-5b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("openbmb/cpm-bee-5b", trust_remote_code=True).cuda()
>>> result = model.generate({"input": "今天天气不错,", "<ans>": ""}, tokenizer)
>>> print(result)
```

If you want to use multiple GPUs for inference, you can use `accelerate` as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import dispatch_model
from accelerate.utils import get_balanced_memory, infer_auto_device_map

tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-bee-5b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("openbmb/cpm-bee-5b", trust_remote_code=True).cuda()

max_memory = get_balanced_memory(
    model,
    no_split_module_classes=["CpmBeeTransformerBlock"]
)
device_map = infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["CpmBeeTransformerBlock"])
# Make sure the data is on the same device when projecting hidden states to logits.
device_map["cpmbee.encoder.output_layernorm"] = device_map["cpmbee.input_embedding"] = 0
model = dispatch_model(model, device_map=device_map)

res = model.generate(
    [
        {"input": "今天天气是真的", "<ans>": ""},
        {"input": "NGC 6231是一个位于天蝎座的疏散星团,天球座标为赤经16时54分,赤纬-41度48分,视觉观测大小约45角分,亮度约2.6视星等,距地球5900光年。NGC 6231年龄约为三百二十万年,是一个非常年轻的星团,星团内的最亮星是5等的天蝎座 ζ1星。用双筒望远镜或小型望远镜就能看到个别的行星。NGC 6231在1654年被意大利天文学家乔瓦尼·巴蒂斯特·霍迪尔纳(Giovanni Battista Hodierna)以Luminosae的名字首次纪录在星表中,但是未见记载于夏尔·梅西耶的天体列表和威廉·赫歇尔的深空天体目录。这个天体在1678年被爱德蒙·哈雷(I.7)、1745年被夏西亚科斯(Jean-Phillippe Loys de Cheseaux)(9)、1751年被尼可拉·路易·拉卡伊(II.13)分别再次独立发现。", "question": "NGC 6231的经纬度是多少?", "<ans>": ""}
    ],
    tokenizer,
    max_new_tokens=100
)
print(res)
```

We suggest using `bmtrain` to fine-tune CPM-Bee; you can also use `accelerate` and `deepspeed`. Here is a brief example of a training loop:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from accelerate import Accelerator
from torch.utils.data import Dataset, DataLoader

accelerator = Accelerator()

trainset = Dataset()
# Make sure trainset.__getitem__() can get data with the correct format, like {"input": "...", "<ans>": ""}.
# For details, you can read https://github.com/OpenBMB/CPM-Bee/tree/main/tutorials/basic_task_finetune
train_loader = DataLoader(trainset, batch_size=1)

tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-bee-5b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("openbmb/cpm-bee-5b", trust_remote_code=True).cuda()
optimizer = torch.optim.Adam(model.parameters())

model, optimizer, train_loader = accelerator.prepare(
    model, optimizer, train_loader
)

for step, data in enumerate(train_loader):
    optimizer.zero_grad()
    # Change the data to a trainable format.
    input_encoded = tokenizer.prepare_for_finetune(data, max_length=512).to(model.device)
    outputs = model(**input_encoded)
    loss = outputs.loss
    accelerator.backward(loss)
    optimizer.step()
```

You should design your own parallelism and mixed-precision training strategy on top of it.
5,604
[ [ -0.0423583984375, -0.057098388671875, 0.0110931396484375, 0.0210723876953125, -0.0263671875, -0.00968170166015625, -0.03399658203125, -0.0214385986328125, 0.0005021095275878906, 0.0177764892578125, -0.043853759765625, -0.0428466796875, -0.044952392578125, -0...
YakovElm/Hyperledger5Classic_64
2023-05-27T04:03:05.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_64
0
2
transformers
2023-05-27T04:02:31
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger5Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger5Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3683 - Train Accuracy: 0.8561 - Validation Loss: 0.4172 - Validation Accuracy: 0.8351 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4207 | 0.8481 | 0.4357 | 0.8361 | 0 | | 0.3940 | 0.8547 | 0.4199 | 0.8361 | 1 | | 0.3683 | 0.8561 | 0.4172 | 0.8351 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,788
[ [ -0.04608154296875, -0.037567138671875, 0.022308349609375, 0.00002288818359375, -0.03173828125, -0.0286865234375, -0.0183258056640625, -0.0269317626953125, 0.00760650634765625, 0.01424407958984375, -0.052459716796875, -0.05145263671875, -0.05535888671875, -0....
YakovElm/Jira20Classic_256
2023-05-27T04:22:30.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_256
0
2
transformers
2023-05-27T04:21:54
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira20Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira20Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2410 - Train Accuracy: 0.8972 - Validation Loss: 0.2703 - Validation Accuracy: 0.9338 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3731 | 0.8562 | 0.2694 | 0.9338 | 0 | | 0.3110 | 0.8772 | 0.2464 | 0.9338 | 1 | | 0.2410 | 0.8972 | 0.2703 | 0.9338 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,778
[ [ -0.040069580078125, -0.041015625, 0.020294189453125, 0.0006303787231445312, -0.032684326171875, -0.0266571044921875, -0.01715087890625, -0.025146484375, 0.01580810546875, 0.01284027099609375, -0.0526123046875, -0.047576904296875, -0.050506591796875, -0.02758...
YakovElm/Hyperledger10Classic_64
2023-05-27T04:27:49.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_64
0
2
transformers
2023-05-27T04:27:14
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger10Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger10Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2869 - Train Accuracy: 0.8865 - Validation Loss: 0.4772 - Validation Accuracy: 0.8600 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3638 | 0.8831 | 0.3750 | 0.8600 | 0 | | 0.3325 | 0.8838 | 0.3629 | 0.8600 | 1 | | 0.2869 | 0.8865 | 0.4772 | 0.8600 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,790
[ [ -0.0455322265625, -0.04095458984375, 0.0216217041015625, 0.00147247314453125, -0.0308685302734375, -0.029937744140625, -0.0202484130859375, -0.02508544921875, 0.01152801513671875, 0.01511383056640625, -0.050933837890625, -0.04913330078125, -0.054962158203125, ...
YakovElm/Hyperledger15Classic_64
2023-05-27T04:53:01.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_64
0
2
transformers
2023-05-27T04:52:27
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger15Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2628
- Train Accuracy: 0.9045
- Validation Loss: 0.3526
- Validation Accuracy: 0.8683
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3275 | 0.8942 | 0.3392 | 0.8807 | 0 |
| 0.2991 | 0.9035 | 0.3343 | 0.8807 | 1 |
| 0.2628 | 0.9045 | 0.3526 | 0.8683 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,790
[ [ -0.04766845703125, -0.041961669921875, 0.021636962890625, 0.002964019775390625, -0.0316162109375, -0.029876708984375, -0.0194091796875, -0.0252685546875, 0.01044464111328125, 0.01549530029296875, -0.053253173828125, -0.04974365234375, -0.05377197265625, -0.0...
YakovElm/Hyperledger20Classic_64
2023-05-27T05:16:23.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_64
0
2
transformers
2023-05-27T05:15:48
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger20Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger20Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2229
- Train Accuracy: 0.9198
- Validation Loss: 0.3311
- Validation Accuracy: 0.8963
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2931 | 0.9149 | 0.3059 | 0.8983 | 0 |
| 0.2643 | 0.9142 | 0.2926 | 0.8983 | 1 |
| 0.2229 | 0.9198 | 0.3311 | 0.8963 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,790
[ [ -0.0467529296875, -0.040283203125, 0.0222015380859375, 0.002262115478515625, -0.0307769775390625, -0.029815673828125, -0.018096923828125, -0.0257110595703125, 0.0091400146484375, 0.016510009765625, -0.05377197265625, -0.049530029296875, -0.054779052734375, -...
YakovElm/IntelDAOS5Classic_32
2023-05-27T05:22:47.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_32
0
2
transformers
2023-05-27T05:22:13
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS5Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3605
- Train Accuracy: 0.8740
- Validation Loss: 0.4734
- Validation Accuracy: 0.8438
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3976 | 0.8740 | 0.4505 | 0.8438 | 0 |
| 0.3801 | 0.8740 | 0.4481 | 0.8438 | 1 |
| 0.3605 | 0.8740 | 0.4734 | 0.8438 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,784
[ [ -0.0440673828125, -0.038909912109375, 0.0207977294921875, 0.0014715194702148438, -0.0340576171875, -0.0281829833984375, -0.0189971923828125, -0.0283355712890625, 0.01122283935546875, 0.01091766357421875, -0.05377197265625, -0.04925537109375, -0.052490234375, ...
YakovElm/IntelDAOS10Classic_32
2023-05-27T05:28:48.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_32
0
2
transformers
2023-05-27T05:28:15
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2725
- Train Accuracy: 0.9200
- Validation Loss: 0.3952
- Validation Accuracy: 0.8739
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3293 | 0.8960 | 0.3865 | 0.8739 | 0 |
| 0.2838 | 0.9200 | 0.4036 | 0.8739 | 1 |
| 0.2725 | 0.9200 | 0.3952 | 0.8739 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,786
[ [ -0.0440673828125, -0.03985595703125, 0.0211944580078125, 0.0019702911376953125, -0.03448486328125, -0.028533935546875, -0.0182952880859375, -0.027618408203125, 0.01212310791015625, 0.01078033447265625, -0.052520751953125, -0.04779052734375, -0.051727294921875, ...
YakovElm/IntelDAOS15Classic_32
2023-05-27T05:34:51.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_32
0
2
transformers
2023-05-27T05:34:17
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2037
- Train Accuracy: 0.9460
- Validation Loss: 0.3651
- Validation Accuracy: 0.8859
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2431 | 0.9260 | 0.3967 | 0.8859 | 0 |
| 0.2160 | 0.9460 | 0.4047 | 0.8859 | 1 |
| 0.2037 | 0.9460 | 0.3651 | 0.8859 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,786
[ [ -0.04364013671875, -0.040374755859375, 0.0221405029296875, 0.002933502197265625, -0.033721923828125, -0.0296630859375, -0.01812744140625, -0.027008056640625, 0.01209259033203125, 0.0119171142578125, -0.05426025390625, -0.048431396484375, -0.05169677734375, -...
YakovElm/IntelDAOS20Classic_32
2023-05-27T05:40:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_32
0
2
transformers
2023-05-27T05:40:00
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1464
- Train Accuracy: 0.9610
- Validation Loss: 0.3274
- Validation Accuracy: 0.9099
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1971 | 0.9610 | 0.3072 | 0.9099 | 0 |
| 0.1570 | 0.9610 | 0.3179 | 0.9099 | 1 |
| 0.1464 | 0.9610 | 0.3274 | 0.9099 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,786
[ [ -0.0447998046875, -0.039947509765625, 0.0207061767578125, 0.002422332763671875, -0.035369873046875, -0.0280609130859375, -0.0184783935546875, -0.0271453857421875, 0.01284027099609375, 0.0105133056640625, -0.053497314453125, -0.04766845703125, -0.052001953125, ...
YakovElm/IntelDAOS5Classic_64
2023-05-27T05:49:09.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_64
0
2
transformers
2023-05-27T05:48:34
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS5Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3585
- Train Accuracy: 0.8740
- Validation Loss: 0.4320
- Validation Accuracy: 0.8438
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4007 | 0.8720 | 0.4336 | 0.8438 | 0 |
| 0.3732 | 0.8740 | 0.4284 | 0.8438 | 1 |
| 0.3585 | 0.8740 | 0.4320 | 0.8438 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,784
[ [ -0.044464111328125, -0.03900146484375, 0.021209716796875, 0.0007915496826171875, -0.034515380859375, -0.0290985107421875, -0.018951416015625, -0.0280914306640625, 0.01091766357421875, 0.01114654541015625, -0.053558349609375, -0.04931640625, -0.052490234375, ...
YakovElm/IntelDAOS10Classic_64
2023-05-27T05:57:58.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_64
0
2
transformers
2023-05-27T05:57:23
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2613
- Train Accuracy: 0.9200
- Validation Loss: 0.3848
- Validation Accuracy: 0.8739
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3128 | 0.8920 | 0.3859 | 0.8739 | 0 |
| 0.2678 | 0.9200 | 0.4156 | 0.8739 | 1 |
| 0.2613 | 0.9200 | 0.3848 | 0.8739 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,786
[ [ -0.04449462890625, -0.039886474609375, 0.0208892822265625, 0.0006036758422851562, -0.033935546875, -0.029144287109375, -0.0188446044921875, -0.0278167724609375, 0.0128173828125, 0.0118865966796875, -0.052764892578125, -0.04833984375, -0.05206298828125, -0.02...
YakovElm/IntelDAOS15Classic_64
2023-05-27T06:07:23.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_64
0
2
transformers
2023-05-27T06:06:48
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1817
- Train Accuracy: 0.9460
- Validation Loss: 0.3953
- Validation Accuracy: 0.8859
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2502 | 0.9450 | 0.3577 | 0.8859 | 0 |
| 0.2086 | 0.9460 | 0.3578 | 0.8859 | 1 |
| 0.1817 | 0.9460 | 0.3953 | 0.8859 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,786
[ [ -0.044342041015625, -0.0413818359375, 0.0214385986328125, 0.0018548965454101562, -0.03375244140625, -0.029571533203125, -0.0186309814453125, -0.026702880859375, 0.01273345947265625, 0.0121612548828125, -0.05389404296875, -0.048797607421875, -0.052001953125, ...
YakovElm/MariaDB5Classic_256
2023-05-27T06:15:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_256
0
2
transformers
2023-05-27T06:14:59
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_256
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_256

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2607
- Train Accuracy: 0.9088
- Validation Loss: 0.2602
- Validation Accuracy: 0.9322
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3286 | 0.8862 | 0.2445 | 0.9322 | 0 |
| 0.2829 | 0.8954 | 0.2511 | 0.9322 | 1 |
| 0.2607 | 0.9088 | 0.2602 | 0.9322 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,782
[ [ -0.044342041015625, -0.041778564453125, 0.0210723876953125, 0.0025691986083984375, -0.0333251953125, -0.030853271484375, -0.0153961181640625, -0.026397705078125, 0.01509857177734375, 0.0147705078125, -0.056060791015625, -0.050628662109375, -0.05126953125, -0...
YakovElm/IntelDAOS20Classic_64
2023-05-27T06:15:50.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_64
0
2
transformers
2023-05-27T06:15:16
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1354
- Train Accuracy: 0.9610
- Validation Loss: 0.3272
- Validation Accuracy: 0.9099
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2413 | 0.9400 | 0.3377 | 0.9099 | 0 |
| 0.1555 | 0.9610 | 0.3160 | 0.9099 | 1 |
| 0.1354 | 0.9610 | 0.3272 | 0.9099 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,786
[ [ -0.044921875, -0.040130615234375, 0.0212249755859375, 0.0014820098876953125, -0.033172607421875, -0.0290374755859375, -0.0186920166015625, -0.0278778076171875, 0.01287078857421875, 0.0114593505859375, -0.054290771484375, -0.04840087890625, -0.052215576171875, ...
YakovElm/Jira5Classic_32
2023-05-27T06:21:26.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira5Classic_32
0
2
transformers
2023-05-27T06:20:51
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira5Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3030
- Train Accuracy: 0.8867
- Validation Loss: 1.0513
- Validation Accuracy: 0.6151
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5139 | 0.7555 | 0.7646 | 0.5047 | 0 |
| 0.4087 | 0.8038 | 0.8291 | 0.5552 | 1 |
| 0.3030 | 0.8867 | 1.0513 | 0.6151 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,774
[ [ -0.04022216796875, -0.039794921875, 0.020721435546875, 0.0003685951232910156, -0.033905029296875, -0.0284576416015625, -0.017242431640625, -0.0263214111328125, 0.012725830078125, 0.0120391845703125, -0.052459716796875, -0.049407958984375, -0.05157470703125, ...
YakovElm/Jira10Classic_32
2023-05-27T06:26:44.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira10Classic_32
0
2
transformers
2023-05-27T06:26:11
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira10Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2559
- Train Accuracy: 0.8972
- Validation Loss: 1.0026
- Validation Accuracy: 0.5994
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5162 | 0.7629 | 0.8404 | 0.4890 | 0 |
| 0.4017 | 0.8122 | 0.8047 | 0.6151 | 1 |
| 0.2559 | 0.8972 | 1.0026 | 0.5994 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,776
[ [ -0.040313720703125, -0.04168701171875, 0.0200042724609375, 0.0009250640869140625, -0.03369140625, -0.0290374755859375, -0.0174713134765625, -0.0263519287109375, 0.01494598388671875, 0.01290130615234375, -0.0518798828125, -0.047271728515625, -0.051544189453125, ...
YakovElm/Jira15Classic_32
2023-05-27T06:32:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira15Classic_32
0
2
transformers
2023-05-27T06:31:38
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira15Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2421
- Train Accuracy: 0.9024
- Validation Loss: 1.1123
- Validation Accuracy: 0.6372
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4984 | 0.7702 | 0.8224 | 0.5205 | 0 |
| 0.3898 | 0.8216 | 0.7801 | 0.6215 | 1 |
| 0.2421 | 0.9024 | 1.1123 | 0.6372 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,776
[ [ -0.0416259765625, -0.04180908203125, 0.0208892822265625, 0.0025615692138671875, -0.033782958984375, -0.0308990478515625, -0.0173797607421875, -0.0254058837890625, 0.01398468017578125, 0.013153076171875, -0.0516357421875, -0.048736572265625, -0.051055908203125, ...
YakovElm/Jira20Classic_32
2023-05-27T06:38:00.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_32
0
2
transformers
2023-05-27T06:37:22
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1936
- Train Accuracy: 0.9224
- Validation Loss: 0.3181
- Validation Accuracy: 0.9148
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3608 | 0.8741 | 0.3038 | 0.9338 | 0 |
| 0.2758 | 0.8741 | 0.3191 | 0.9306 | 1 |
| 0.1936 | 0.9224 | 0.3181 | 0.9148 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,776
[ [ -0.04022216796875, -0.04144287109375, 0.019378662109375, 0.00278472900390625, -0.03594970703125, -0.0276336669921875, -0.017791748046875, -0.0252532958984375, 0.014556884765625, 0.01239776611328125, -0.05194091796875, -0.04815673828125, -0.0513916015625, -0....
YakovElm/Jira5Classic_64
2023-05-27T06:46:28.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira5Classic_64
0
2
transformers
2023-05-27T06:45:33
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira5Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3482
- Train Accuracy: 0.8562
- Validation Loss: 1.2752
- Validation Accuracy: 0.5457
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5168 | 0.7639 | 0.8426 | 0.4858 | 0 |
| 0.4535 | 0.7922 | 0.9725 | 0.4858 | 1 |
| 0.3482 | 0.8562 | 1.2752 | 0.5457 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,774
[ [ -0.040069580078125, -0.03948974609375, 0.0200347900390625, -0.00020301342010498047, -0.034393310546875, -0.027801513671875, -0.0174713134765625, -0.0259857177734375, 0.01271820068359375, 0.0124053955078125, -0.052093505859375, -0.048797607421875, -0.052124023437...
YakovElm/Jira10Classic_64
2023-05-27T06:54:53.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira10Classic_64
0
2
transformers
2023-05-27T06:54:21
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira10Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3304
- Train Accuracy: 0.8426
- Validation Loss: 0.6563
- Validation Accuracy: 0.6814
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5094 | 0.7807 | 0.8002 | 0.4921 | 0 |
| 0.4211 | 0.7901 | 0.7682 | 0.5205 | 1 |
| 0.3304 | 0.8426 | 0.6563 | 0.6814 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,776
[ [ -0.040374755859375, -0.0413818359375, 0.0200347900390625, 0.000021398067474365234, -0.034393310546875, -0.02911376953125, -0.0177764892578125, -0.025604248046875, 0.01535797119140625, 0.01305389404296875, -0.050018310546875, -0.048095703125, -0.051666259765625, ...
YakovElm/Jira15Classic_64
2023-05-27T07:03:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira15Classic_64
0
2
transformers
2023-05-27T07:02:29
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira15Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3055
- Train Accuracy: 0.8678
- Validation Loss: 0.8529
- Validation Accuracy: 0.6530
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4973 | 0.7922 | 0.8065 | 0.5205 | 0 |
| 0.4266 | 0.7849 | 0.8817 | 0.5174 | 1 |
| 0.3055 | 0.8678 | 0.8529 | 0.6530 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,776
[ [ -0.04052734375, -0.04217529296875, 0.0202178955078125, 0.00043845176696777344, -0.034393310546875, -0.0293121337890625, -0.0174407958984375, -0.0255126953125, 0.0146636962890625, 0.01287841796875, -0.05157470703125, -0.0489501953125, -0.05157470703125, -0.02...
YakovElm/Jira20Classic_64
2023-05-27T07:11:37.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_64
0
2
transformers
2023-05-27T07:11:04
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2121
- Train Accuracy: 0.9224
- Validation Loss: 0.3072
- Validation Accuracy: 0.9085
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3687 | 0.8678 | 0.2697 | 0.9338 | 0 |
| 0.2722 | 0.8909 | 0.2871 | 0.9306 | 1 |
| 0.2121 | 0.9224 | 0.3072 | 0.9085 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,776
[ [ -0.040252685546875, -0.0408935546875, 0.0200653076171875, 0.000980377197265625, -0.034454345703125, -0.0281982421875, -0.0176849365234375, -0.0248260498046875, 0.014556884765625, 0.01378631591796875, -0.0518798828125, -0.04840087890625, -0.05084228515625, -0...
YakovElm/MariaDB5Classic_32
2023-05-27T07:18:20.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_32
0
2
transformers
2023-05-27T07:17:45
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2727
- Train Accuracy: 0.8929
- Validation Loss: 0.2534
- Validation Accuracy: 0.9322
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3469 | 0.8787 | 0.2551 | 0.9322 | 0 |
| 0.2924 | 0.8946 | 0.2727 | 0.9322 | 1 |
| 0.2727 | 0.8929 | 0.2534 | 0.9322 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,780
[ [ -0.04302978515625, -0.04193115234375, 0.02130126953125, 0.003353118896484375, -0.0341796875, -0.03094482421875, -0.0166778564453125, -0.0261688232421875, 0.01358795166015625, 0.0146484375, -0.0546875, -0.050079345703125, -0.052032470703125, -0.0264892578125,...
YakovElm/MariaDB10Classic_32
2023-05-27T07:25:01.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_32
0
2
transformers
2023-05-27T07:24:28
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1860
- Train Accuracy: 0.9356
- Validation Loss: 0.2225
- Validation Accuracy: 0.9497
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2943 | 0.9121 | 0.2078 | 0.9523 | 0 |
| 0.2359 | 0.9213 | 0.2011 | 0.9497 | 1 |
| 0.1860 | 0.9356 | 0.2225 | 0.9497 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,782
[ [ -0.04327392578125, -0.042144775390625, 0.0215301513671875, 0.003757476806640625, -0.034027099609375, -0.03131103515625, -0.0160064697265625, -0.0264434814453125, 0.01418304443359375, 0.01416015625, -0.054229736328125, -0.049102783203125, -0.0523681640625, -0...
YakovElm/MariaDB15Classic_32
2023-05-27T07:31:30.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_32
0
2
transformers
2023-05-27T07:30:52
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1739
- Train Accuracy: 0.9347
- Validation Loss: 0.1727
- Validation Accuracy: 0.9598
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2658 | 0.9264 | 0.1676 | 0.9598 | 0 |
| 0.2067 | 0.9314 | 0.1605 | 0.9573 | 1 |
| 0.1739 | 0.9347 | 0.1727 | 0.9598 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,782
[ [ -0.04345703125, -0.04193115234375, 0.021148681640625, 0.00421142578125, -0.034820556640625, -0.0303192138671875, -0.01666259765625, -0.0261993408203125, 0.01454925537109375, 0.01409912109375, -0.05462646484375, -0.048095703125, -0.052764892578125, -0.0258026...
YakovElm/MariaDB20Classic_32
2023-05-27T07:38:04.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_32
0
2
transformers
2023-05-27T07:37:18
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_32
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB20Classic_32

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2150
- Train Accuracy: 0.9356
- Validation Loss: 0.1324
- Validation Accuracy: 0.9698
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2765 | 0.9305 | 0.1945 | 0.9698 | 0 |
| 0.2427 | 0.9356 | 0.1311 | 0.9698 | 1 |
| 0.2150 | 0.9356 | 0.1324 | 0.9698 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,782
[ [ -0.043182373046875, -0.042633056640625, 0.02178955078125, 0.003753662109375, -0.03448486328125, -0.031707763671875, -0.0164642333984375, -0.026458740234375, 0.0148468017578125, 0.0140838623046875, -0.05474853515625, -0.049774169921875, -0.051788330078125, -0...
YakovElm/MariaDB5Classic_64
2023-05-27T07:47:05.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_64
0
2
transformers
2023-05-27T07:46:25
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2438
- Train Accuracy: 0.9004
- Validation Loss: 0.2560
- Validation Accuracy: 0.9271
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3297 | 0.8921 | 0.2584 | 0.9322 | 0 |
| 0.2592 | 0.9079 | 0.2489 | 0.9271 | 1 |
| 0.2438 | 0.9004 | 0.2560 | 0.9271 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,780
[ [ -0.043792724609375, -0.04180908203125, 0.021209716796875, 0.002925872802734375, -0.033843994140625, -0.031524658203125, -0.015777587890625, -0.0270843505859375, 0.014068603515625, 0.01491546630859375, -0.0550537109375, -0.050048828125, -0.05255126953125, -0....
YakovElm/MariaDB10Classic_64
2023-05-27T07:55:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_64
0
2
transformers
2023-05-27T07:55:18
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1941
- Train Accuracy: 0.9339
- Validation Loss: 0.1951
- Validation Accuracy: 0.9472
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2920 | 0.9004 | 0.1959 | 0.9523 | 0 |
| 0.2384 | 0.9155 | 0.1959 | 0.9472 | 1 |
| 0.1941 | 0.9339 | 0.1951 | 0.9472 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,782
[ [ -0.042877197265625, -0.04388427734375, 0.021209716796875, 0.004474639892578125, -0.037017822265625, -0.0305023193359375, -0.0158538818359375, -0.024810791015625, 0.0164337158203125, 0.01447296142578125, -0.053924560546875, -0.047821044921875, -0.052154541015625,...
YakovElm/MariaDB15Classic_64
2023-05-27T08:06:04.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_64
0
2
transformers
2023-05-27T08:05:30
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_64
  results: []
---

<!-- This model card has been generated automatically according to the information Keras
had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_64

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1915
- Train Accuracy: 0.9381
- Validation Loss: 0.1826
- Validation Accuracy: 0.9372
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2834 | 0.9096 | 0.1811 | 0.9598 | 0 |
| 0.2120 | 0.9238 | 0.1664 | 0.9598 | 1 |
| 0.1915 | 0.9381 | 0.1826 | 0.9372 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,782
[ [ -0.043212890625, -0.042999267578125, 0.020904541015625, 0.0045013427734375, -0.035430908203125, -0.0308685302734375, -0.0159759521484375, -0.0253448486328125, 0.01560211181640625, 0.01476287841796875, -0.0535888671875, -0.048736572265625, -0.0518798828125, -...
YakovElm/MariaDB10Classic_256
2023-05-27T08:09:44.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_256
0
2
transformers
2023-05-27T08:09:08
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB10Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB10Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2336 - Train Accuracy: 0.9188 - Validation Loss: 0.1912 - Validation Accuracy: 0.9523 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3205 | 0.8996 | 0.1897 | 0.9523 | 0 | | 0.2709 | 0.9163 | 0.1853 | 0.9523 | 1 | | 0.2336 | 0.9188 | 0.1912 | 0.9523 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,784
[ [ -0.042816162109375, -0.042633056640625, 0.0207366943359375, 0.0038814544677734375, -0.035980224609375, -0.030120849609375, -0.01476287841796875, -0.0248260498046875, 0.016815185546875, 0.01436614990234375, -0.05548095703125, -0.04864501953125, -0.052276611328125...
YakovElm/MariaDB20Classic_64
2023-05-27T08:17:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_64
0
2
transformers
2023-05-27T08:16:24
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB20Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB20Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2044 - Train Accuracy: 0.9364 - Validation Loss: 0.1367 - Validation Accuracy: 0.9698 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3003 | 0.9121 | 0.1490 | 0.9698 | 0 | | 0.2201 | 0.9356 | 0.1322 | 0.9698 | 1 | | 0.2044 | 0.9364 | 0.1367 | 0.9698 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,782
[ [ -0.043304443359375, -0.04327392578125, 0.0214080810546875, 0.0035552978515625, -0.03411865234375, -0.032196044921875, -0.016754150390625, -0.0264739990234375, 0.0151824951171875, 0.0150604248046875, -0.0555419921875, -0.049835205078125, -0.05230712890625, -0...
YakovElm/Qt5Classic_32
2023-05-27T08:34:58.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_32
0
2
transformers
2023-05-27T08:34:21
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt5Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt5Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2765 - Train Accuracy: 0.8953 - Validation Loss: 0.2633 - Validation Accuracy: 0.9294 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3389 | 0.8937 | 0.2566 | 0.9294 | 0 | | 0.3223 | 0.8943 | 0.2479 | 0.9294 | 1 | | 0.2765 | 0.8953 | 0.2633 | 0.9294 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,770
[ [ -0.041168212890625, -0.035400390625, 0.022308349609375, 0.002105712890625, -0.03558349609375, -0.0265350341796875, -0.01236724853515625, -0.0239105224609375, 0.00669097900390625, 0.01172637939453125, -0.053314208984375, -0.049407958984375, -0.0504150390625, ...
YakovElm/Qt10Classic_32
2023-05-27T08:52:28.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_32
0
2
transformers
2023-05-27T08:51:48
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt10Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt10Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2267 - Train Accuracy: 0.9208 - Validation Loss: 0.2144 - Validation Accuracy: 0.9416 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2754 | 0.9202 | 0.2156 | 0.9416 | 0 | | 0.2484 | 0.9210 | 0.2215 | 0.9416 | 1 | | 0.2267 | 0.9208 | 0.2144 | 0.9416 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,772
[ [ -0.040863037109375, -0.036285400390625, 0.0227203369140625, 0.003582000732421875, -0.034393310546875, -0.027008056640625, -0.0131072998046875, -0.0225372314453125, 0.00908660888671875, 0.012542724609375, -0.052978515625, -0.04876708984375, -0.050628662109375, ...
YakovElm/Qt15Classic_32
2023-05-27T09:10:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_32
0
2
transformers
2023-05-27T09:10:01
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt15Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt15Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2029 - Train Accuracy: 0.9370 - Validation Loss: 0.2012 - Validation Accuracy: 0.9505 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2402 | 0.9354 | 0.1920 | 0.9505 | 0 | | 0.2261 | 0.9367 | 0.1922 | 0.9505 | 1 | | 0.2029 | 0.9370 | 0.2012 | 0.9505 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,772
[ [ -0.041107177734375, -0.03924560546875, 0.0211334228515625, 0.00501251220703125, -0.03704833984375, -0.0283355712890625, -0.0138092041015625, -0.0231475830078125, 0.01074981689453125, 0.013092041015625, -0.05377197265625, -0.048065185546875, -0.051544189453125, ...
YakovElm/Apache20Classic_512
2023-05-27T09:16:59.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache20Classic_512
0
2
transformers
2023-05-27T09:16:23
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache20Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache20Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1347 - Train Accuracy: 0.9624 - Validation Loss: 0.3465 - Validation Accuracy: 0.9042 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1676 | 0.9622 | 0.3358 | 0.9055 | 0 | | 0.1498 | 0.9624 | 0.3097 | 0.9055 | 1 | | 0.1347 | 0.9624 | 0.3465 | 0.9042 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,782
[ [ -0.045013427734375, -0.044830322265625, 0.0204925537109375, 0.0063018798828125, -0.03436279296875, -0.03228759765625, -0.018341064453125, -0.0276031494140625, 0.011444091796875, 0.01355743408203125, -0.05419921875, -0.04779052734375, -0.052978515625, -0.0243...
YakovElm/Qt20Classic_32
2023-05-27T09:29:05.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_32
0
2
transformers
2023-05-27T09:28:31
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt20Classic_32 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt20Classic_32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1986 - Train Accuracy: 0.9462 - Validation Loss: 0.1705 - Validation Accuracy: 0.9586 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2402 | 0.9365 | 0.1717 | 0.9586 | 0 | | 0.2074 | 0.9462 | 0.1663 | 0.9586 | 1 | | 0.1986 | 0.9462 | 0.1705 | 0.9586 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,772
[ [ -0.040069580078125, -0.036376953125, 0.022430419921875, 0.0050201416015625, -0.037750244140625, -0.02532958984375, -0.0119171142578125, -0.0220947265625, 0.0083465576171875, 0.012451171875, -0.054931640625, -0.04840087890625, -0.04974365234375, -0.0271453857...
YakovElm/Qt5Classic_64
2023-05-27T10:01:26.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_64
0
2
transformers
2023-05-27T10:00:49
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt5Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt5Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2960 - Train Accuracy: 0.8918 - Validation Loss: 0.2467 - Validation Accuracy: 0.9303 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3437 | 0.8889 | 0.2451 | 0.9294 | 0 | | 0.3214 | 0.8943 | 0.2529 | 0.9294 | 1 | | 0.2960 | 0.8918 | 0.2467 | 0.9303 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,770
[ [ -0.04132080078125, -0.034820556640625, 0.0228424072265625, 0.0015993118286132812, -0.035400390625, -0.0272216796875, -0.011962890625, -0.023712158203125, 0.005939483642578125, 0.01322174072265625, -0.053436279296875, -0.050140380859375, -0.0498046875, -0.025...
YakovElm/MariaDB15Classic_256
2023-05-27T10:03:34.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_256
0
2
transformers
2023-05-27T10:02:58
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB15Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB15Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1888 - Train Accuracy: 0.9364 - Validation Loss: 0.1602 - Validation Accuracy: 0.9598 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2885 | 0.9163 | 0.1635 | 0.9598 | 0 | | 0.2145 | 0.9297 | 0.1719 | 0.9598 | 1 | | 0.1888 | 0.9364 | 0.1602 | 0.9598 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,784
[ [ -0.04327392578125, -0.042999267578125, 0.0209808349609375, 0.0037841796875, -0.03509521484375, -0.0305633544921875, -0.01555633544921875, -0.0256500244140625, 0.015777587890625, 0.01373291015625, -0.0550537109375, -0.04876708984375, -0.05169677734375, -0.025...
YakovElm/Qt10Classic_64
2023-05-27T10:31:07.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_64
0
2
transformers
2023-05-27T10:30:34
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt10Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt10Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2489 - Train Accuracy: 0.9210 - Validation Loss: 0.2184 - Validation Accuracy: 0.9416 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2887 | 0.9191 | 0.2186 | 0.9416 | 0 | | 0.2710 | 0.9210 | 0.2124 | 0.9416 | 1 | | 0.2489 | 0.9210 | 0.2184 | 0.9416 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,772
[ [ -0.040435791015625, -0.035675048828125, 0.0223388671875, 0.002361297607421875, -0.034576416015625, -0.02655029296875, -0.01242828369140625, -0.021942138671875, 0.008270263671875, 0.01293182373046875, -0.05267333984375, -0.048248291015625, -0.050018310546875, ...
YakovElm/Qt15Classic_64
2023-05-27T11:01:23.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_64
0
2
transformers
2023-05-27T11:00:50
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt15Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt15Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2040 - Train Accuracy: 0.9367 - Validation Loss: 0.1941 - Validation Accuracy: 0.9505 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2461 | 0.9332 | 0.1899 | 0.9505 | 0 | | 0.2245 | 0.9367 | 0.1855 | 0.9505 | 1 | | 0.2040 | 0.9367 | 0.1941 | 0.9505 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,772
[ [ -0.04132080078125, -0.0396728515625, 0.021453857421875, 0.004673004150390625, -0.036834716796875, -0.028167724609375, -0.01317596435546875, -0.022674560546875, 0.01013946533203125, 0.01348114013671875, -0.053680419921875, -0.0479736328125, -0.051544189453125, ...
YakovElm/Qt20Classic_64
2023-05-27T11:30:50.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_64
0
2
transformers
2023-05-27T11:29:55
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt20Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt20Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1783 - Train Accuracy: 0.9465 - Validation Loss: 0.1657 - Validation Accuracy: 0.9586 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2235 | 0.9440 | 0.1706 | 0.9586 | 0 | | 0.2009 | 0.9462 | 0.1646 | 0.9586 | 1 | | 0.1783 | 0.9465 | 0.1657 | 0.9586 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,772
[ [ -0.0396728515625, -0.036865234375, 0.02166748046875, 0.0038089752197265625, -0.03802490234375, -0.0257720947265625, -0.01230621337890625, -0.0218505859375, 0.007556915283203125, 0.012237548828125, -0.054718017578125, -0.049041748046875, -0.049713134765625, -...
YakovElm/MariaDB20Classic_256
2023-05-27T11:57:20.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_256
0
2
transformers
2023-05-27T11:56:44
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB20Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB20Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1850 - Train Accuracy: 0.9322 - Validation Loss: 0.1319 - Validation Accuracy: 0.9698 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2456 | 0.9356 | 0.1422 | 0.9698 | 0 | | 0.2090 | 0.9356 | 0.1346 | 0.9698 | 1 | | 0.1850 | 0.9322 | 0.1319 | 0.9698 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,784
[ [ -0.04400634765625, -0.0428466796875, 0.0221099853515625, 0.0026416778564453125, -0.03350830078125, -0.031341552734375, -0.015625, -0.026397705078125, 0.015777587890625, 0.0154266357421875, -0.056060791015625, -0.050048828125, -0.051177978515625, -0.026428222...
myasa/distilbert-base-uncased-finetuned-emotion
2023-05-27T13:45:46.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
myasa
null
null
myasa/distilbert-base-uncased-finetuned-emotion
0
2
transformers
2023-05-27T12:58:31
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.926001422605883 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2124 - Accuracy: 0.926 - F1: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8123 | 1.0 | 250 | 0.2922 | 0.909 | 0.9067 | | 0.2351 | 2.0 | 500 | 0.2124 | 0.926 | 0.9260 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.8.0 - Tokenizers 0.13.3
1,844
[ [ -0.037994384765625, -0.0416259765625, 0.015380859375, 0.0217742919921875, -0.0258941650390625, -0.019073486328125, -0.0129852294921875, -0.008514404296875, 0.01023101806640625, 0.00830841064453125, -0.05584716796875, -0.05120849609375, -0.059478759765625, -0...
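Since the card lists both the checkpoint id and the `emotion` dataset, a minimal inference sketch with the transformers pipeline API follows (the example sentence is made up; labels come from the emotion dataset the card references):

```python
from transformers import pipeline

# Sketch: load the fine-tuned checkpoint from the Hub and classify a sentence.
classifier = pipeline(
    "text-classification",
    model="myasa/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled with these results!"))
```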
Inhaexpress/DialoGPT-medium-harrypotter
2023-05-27T14:21:57.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "endpoints_compatible", "text-generation-inference", "region:us" ]
conversational
Inhaexpress
null
null
Inhaexpress/DialoGPT-medium-harrypotter
1
2
transformers
2023-05-27T14:15:02
--- tags: - conversational --- # Harry Potter DialoGPT Model # He doesn't want to be Harry for some reason
107
[ [ -0.029754638671875, -0.0390625, 0.013671875, -0.0038738250732421875, -0.030029296875, 0.00272369384765625, 0.01033782958984375, 0.00878143310546875, 0.0321044921875, 0.034088134765625, -0.049041748046875, 0.0027332305908203125, -0.0192413330078125, 0.0344848...
ShayDuane/distilbert-base-uncased-finetuned-imdb
2023-05-27T15:10:54.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "feature-extraction", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
ShayDuane
null
null
ShayDuane/distilbert-base-uncased-finetuned-imdb
1
2
transformers
2023-05-27T14:57:08
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5803 | 1.0 | 1250 | 2.5043 | | 2.4534 | 2.0 | 2500 | 2.4634 | | 2.4564 | 3.0 | 3750 | 2.4336 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,466
[ [ -0.04205322265625, -0.0445556640625, 0.007633209228515625, 0.006694793701171875, -0.0308990478515625, -0.01025390625, -0.002666473388671875, -0.00024235248565673828, 0.0147857666015625, 0.027374267578125, -0.0570068359375, -0.039276123046875, -0.06304931640625, ...
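The card above reports only an evaluation loss. Assuming this is the standard masked-language-modeling cross-entropy produced by the Trainer, perplexity follows as exp(loss):

```python
import math

# Sketch: convert the reported MLM cross-entropy loss to perplexity.
# This assumes the 2.4336 figure is the usual per-token cross-entropy.
eval_loss = 2.4336
print(f"perplexity ~= {math.exp(eval_loss):.2f}")  # ~= 11.40
```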
jonastokoliu/text_cls_bert-base-uncased_imdb_finetune
2023-06-09T10:12:51.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
jonastokoliu
null
null
jonastokoliu/text_cls_bert-base-uncased_imdb_finetune
0
2
transformers
2023-05-27T16:11:59
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: text_cls_bert-base-uncased_imdb_finetune results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93672 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_cls_bert-base-uncased_imdb_finetune This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1784 - Accuracy: 0.9367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 391 | 0.1758 | 0.9346 | | 0.2319 | 2.0 | 782 | 0.1784 | 0.9367 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
1,711
[ [ -0.0404052734375, -0.042449951171875, 0.007663726806640625, 0.006938934326171875, -0.036102294921875, -0.0273590087890625, -0.01502227783203125, -0.0158538818359375, 0.0140380859375, 0.0277099609375, -0.058929443359375, -0.0347900390625, -0.051361083984375, ...
HasinMDG/distil_roberta_SD_country
2023-05-27T16:36:46.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/distil_roberta_SD_country
0
2
sentence-transformers
2023-05-27T16:36:34
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/distil_roberta_SD_country This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/distil_roberta_SD_country") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,557
[ [ -0.006183624267578125, -0.059539794921875, 0.033721923828125, -0.004070281982421875, -0.01482391357421875, -0.0162353515625, -0.023284912109375, -0.0019435882568359375, -0.001102447509765625, 0.039306640625, -0.040191650390625, -0.027313232421875, -0.04760742187...
HasinMDG/distil_roberta_SD_government
2023-05-27T16:47:53.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/distil_roberta_SD_government
0
2
sentence-transformers
2023-05-27T16:47:41
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/distil_roberta_SD_government This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/distil_roberta_SD_government") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,563
[ [ -0.006591796875, -0.0604248046875, 0.0369873046875, -0.01044464111328125, -0.01258087158203125, -0.01421356201171875, -0.0238037109375, 0.0038127899169921875, -0.0036678314208984375, 0.0421142578125, -0.037109375, -0.0258941650390625, -0.048980712890625, 0.0...
HasinMDG/distil_roberta_SD_company
2023-05-27T16:58:27.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/distil_roberta_SD_company
0
2
sentence-transformers
2023-05-27T16:58:15
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/distil_roberta_SD_company This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/distil_roberta_SD_company") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,557
[ [ -0.00217437744140625, -0.05889892578125, 0.03082275390625, -0.004177093505859375, -0.0137481689453125, -0.016021728515625, -0.0176849365234375, -0.00897979736328125, -0.0017948150634765625, 0.03851318359375, -0.043548583984375, -0.02325439453125, -0.043121337890...
YakovElm/Qt5Classic_256
2023-05-27T17:37:56.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_256
0
2
transformers
2023-05-27T17:37:19
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt5Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt5Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2872 - Train Accuracy: 0.8943 - Validation Loss: 0.2604 - Validation Accuracy: 0.9278 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3403 | 0.8943 | 0.2470 | 0.9294 | 0 | | 0.3156 | 0.8943 | 0.2504 | 0.9294 | 1 | | 0.2872 | 0.8943 | 0.2604 | 0.9278 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,772
[ [ -0.041290283203125, -0.034698486328125, 0.023345947265625, 0.0013513565063476562, -0.035308837890625, -0.0251922607421875, -0.01041412353515625, -0.0221099853515625, 0.006374359130859375, 0.01197052001953125, -0.053985595703125, -0.050079345703125, -0.0487976074...
adityavelusamy/Questy-v1
2023-05-27T18:56:44.000Z
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain", "summarization", "unk", "dataset:adityavelusamy/autotrain-data-f", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
adityavelusamy
null
null
adityavelusamy/Questy-v1
0
2
transformers
2023-05-27T18:48:37
--- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain" datasets: - adityavelusamy/autotrain-data-f co2_eq_emissions: emissions: 0.5793683469903973 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 62230135023 - CO2 Emissions (in grams): 0.5794 ## Validation Metrics - Loss: 0.883 - Rouge1: 52.493 - Rouge2: 33.950 - RougeL: 47.184 - RougeLsum: 47.225 - Gen Len: 15.493 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/adityavelusamy/autotrain-f-62230135023 ```
710
[ [ -0.033416748046875, -0.031585693359375, 0.0271759033203125, 0.01486968994140625, -0.0025177001953125, 0.003421783447265625, 0.01500701904296875, -0.01195526123046875, 0.0220794677734375, 0.0191497802734375, -0.06201171875, -0.0323486328125, -0.05816650390625, ...
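The same Inference API call can be made from Python. This sketch mirrors the cURL example in the card, with the endpoint URL copied from the card verbatim and `YOUR_HUGGINGFACE_API_KEY` left as a placeholder:

```python
import requests

# Sketch: Python equivalent of the card's cURL example.
API_URL = "https://api-inference.huggingface.co/adityavelusamy/autotrain-f-62230135023"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```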
HasinMDG/masked_distil_roberta_SD_country
2023-05-27T18:55:54.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/masked_distil_roberta_SD_country
0
2
sentence-transformers
2023-05-27T18:55:42
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/masked_distil_roberta_SD_country This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/masked_distil_roberta_SD_country") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,571
[ [ -0.01012420654296875, -0.060150146484375, 0.03167724609375, -0.0024662017822265625, -0.018524169921875, -0.007694244384765625, -0.0226898193359375, -0.00539398193359375, 0.004291534423828125, 0.0438232421875, -0.042236328125, -0.031494140625, -0.052459716796875,...
HasinMDG/masked_distil_roberta_SD_government
2023-05-27T19:07:40.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/masked_distil_roberta_SD_government
0
2
sentence-transformers
2023-05-27T19:07:28
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/masked_distil_roberta_SD_government This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/masked_distil_roberta_SD_government") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,577
[ [ -0.0102081298828125, -0.060821533203125, 0.0338134765625, -0.00856781005859375, -0.0166015625, -0.00567626953125, -0.023040771484375, -0.0008764266967773438, 0.002460479736328125, 0.0458984375, -0.039520263671875, -0.030059814453125, -0.053802490234375, 0.01...
HasinMDG/masked_distil_roberta_SD_company
2023-05-27T19:18:44.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/masked_distil_roberta_SD_company
0
2
sentence-transformers
2023-05-27T19:18:32
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/masked_distil_roberta_SD_company This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/masked_distil_roberta_SD_company") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,571
[ [ -0.005992889404296875, -0.059478759765625, 0.029083251953125, -0.0028896331787109375, -0.0171661376953125, -0.0079193115234375, -0.017425537109375, -0.011810302734375, 0.0028896331787109375, 0.0426025390625, -0.0455322265625, -0.027374267578125, -0.0484313964843...
YakovElm/Hyperledger5Classic_512
2023-05-27T19:38:29.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_512
0
2
transformers
2023-05-27T19:37:53
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger5Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger5Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3034 - Train Accuracy: 0.8744 - Validation Loss: 0.4265 - Validation Accuracy: 0.8185 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4068 | 0.8537 | 0.4270 | 0.8361 | 0 | | 0.3760 | 0.8537 | 0.4053 | 0.8361 | 1 | | 0.3034 | 0.8744 | 0.4265 | 0.8185 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,790
[ [ -0.048614501953125, -0.03778076171875, 0.022003173828125, 0.0026836395263671875, -0.029693603515625, -0.02618408203125, -0.0169525146484375, -0.0262298583984375, 0.01129913330078125, 0.0136871337890625, -0.0540771484375, -0.05059814453125, -0.051971435546875, ...
m33393/llama-65b-gptq-cuda-4bit-32g-safetensors
2023-05-30T04:17:26.000Z
[ "transformers", "llama", "text-generation", "safetensors", "license:other", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
m33393
null
null
m33393/llama-65b-gptq-cuda-4bit-32g-safetensors
2
2
transformers
2023-05-27T21:30:36
--- license: other library_name: transformers tags: - safetensors - llama --- Converted to HF with `transformers 4.30.0.dev0`, then quantized to 4-bit with GPTQ (group size `32`): `python llama.py ../llama-65b-hf c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors 4bit-32g.safetensors` PPL should be marginally better than with group size 128, at the cost of more VRAM. An A6000 should still be able to fit it all at full 2048 context. --- Note that this model was quantized under GPTQ's `cuda` branch, which means it should work with 0cc4m's KoboldAI fork: https://github.com/0cc4m/KoboldAI
614
[ [ -0.036041259765625, -0.028594970703125, 0.037445068359375, 0.021240234375, -0.0240020751953125, -0.0222625732421875, 0.01812744140625, -0.0246124267578125, -0.0029430389404296875, 0.0408935546875, -0.029083251953125, -0.00800323486328125, -0.0243072509765625, ...
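The card's claim that an A6000 can hold the full model checks out on a back-of-the-envelope estimate (weights only; group-size scale/zero metadata, the KV cache, and activations add overhead on top of this):

```python
# Sketch: VRAM needed for the 4-bit weights of a 65B-parameter model.
params = 65e9
bits_per_weight = 4
weight_bytes = params * bits_per_weight / 8
print(f"{weight_bytes / 2**30:.1f} GiB")  # ~= 30.3 GiB, within a 48 GiB A6000
```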
YakovElm/Qt10Classic_256
2023-05-27T23:26:01.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_256
0
2
transformers
2023-05-27T23:25:24
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt10Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt10Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2176 - Train Accuracy: 0.9205 - Validation Loss: 0.2088 - Validation Accuracy: 0.9416 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2773 | 0.9208 | 0.2342 | 0.9416 | 0 | | 0.2556 | 0.9210 | 0.2074 | 0.9416 | 1 | | 0.2176 | 0.9205 | 0.2088 | 0.9416 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,774
[ [ -0.041351318359375, -0.035186767578125, 0.0236968994140625, 0.0018720626831054688, -0.03399658203125, -0.025177001953125, -0.010986328125, -0.0213775634765625, 0.00909423828125, 0.01213836669921875, -0.05438232421875, -0.0477294921875, -0.049468994140625, -0...
indigorange/dqn-SpaceInvadersNoFrameskip-v4
2023-05-28T04:35:59.000Z
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
indigorange
null
null
indigorange/dqn-SpaceInvadersNoFrameskip-v4
0
2
stable-baselines3
2023-05-28T04:35:23
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 645.50 +/- 137.41 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga indigorange -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga indigorange -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga indigorange ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
2,700
[ [ -0.0419921875, -0.03375244140625, 0.019256591796875, 0.0254669189453125, -0.00911712646484375, -0.0204010009765625, 0.01168060302734375, -0.0160980224609375, 0.01328277587890625, 0.0220947265625, -0.07147216796875, -0.03302001953125, -0.0263214111328125, -0....
YakovElm/Qt15Classic_256
2023-05-28T05:13:17.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_256
0
2
transformers
2023-05-28T05:12:41
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt15Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt15Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2023 - Train Accuracy: 0.9373 - Validation Loss: 0.2062 - Validation Accuracy: 0.9465 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2403 | 0.9367 | 0.2023 | 0.9505 | 0 | | 0.2233 | 0.9367 | 0.1936 | 0.9505 | 1 | | 0.2023 | 0.9373 | 0.2062 | 0.9465 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,774
[ [ -0.041473388671875, -0.038818359375, 0.0221099853515625, 0.004314422607421875, -0.036102294921875, -0.0274505615234375, -0.01317596435546875, -0.0219573974609375, 0.0101165771484375, 0.01233673095703125, -0.05450439453125, -0.048553466796875, -0.050567626953125,...
HasinMDG/MLM_distilroberta_SD_government
2023-05-28T05:21:27.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/MLM_distilroberta_SD_government
0
2
sentence-transformers
2023-05-28T05:21:15
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/MLM_distilroberta_SD_government This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/MLM_distilroberta_SD_government") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,569
[ [ -0.00276947021484375, -0.060211181640625, 0.031402587890625, -0.0038700103759765625, -0.00795745849609375, -0.01348114013671875, -0.0215301513671875, 0.0083160400390625, -0.005535125732421875, 0.0447998046875, -0.041534423828125, -0.0292816162109375, -0.05010986...
HasinMDG/MLM_distilroberta_SD_company
2023-05-28T05:30:18.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/MLM_distilroberta_SD_company
0
2
sentence-transformers
2023-05-28T05:30:06
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/MLM_distilroberta_SD_company This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/MLM_distilroberta_SD_company") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,563
[ [ -0.0005254745483398438, -0.057830810546875, 0.0265045166015625, 0.0029621124267578125, -0.00959014892578125, -0.0140380859375, -0.016845703125, -0.0036029815673828125, -0.004772186279296875, 0.04046630859375, -0.048065185546875, -0.0267181396484375, -0.045227050...
YakovElm/Hyperledger10Classic_512
2023-05-28T05:45:24.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_512
0
2
transformers
2023-05-28T05:44:46
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger10Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger10Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2833 - Train Accuracy: 0.8900 - Validation Loss: 0.3935 - Validation Accuracy: 0.8610 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3645 | 0.8731 | 0.3704 | 0.8600 | 0 | | 0.3302 | 0.8838 | 0.3660 | 0.8600 | 1 | | 0.2833 | 0.8900 | 0.3935 | 0.8610 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,792
[ [ -0.047271728515625, -0.0418701171875, 0.0213165283203125, 0.002735137939453125, -0.0289459228515625, -0.027313232421875, -0.01904296875, -0.025115966796875, 0.0144500732421875, 0.01389312744140625, -0.052520751953125, -0.047149658203125, -0.052276611328125, ...
HasinMDG/distilroberta_SD_country_v2
2023-05-28T06:11:40.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/distilroberta_SD_country_v2
0
2
sentence-transformers
2023-05-28T06:11:27
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/distilroberta_SD_country_v2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/distilroberta_SD_country_v2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,561
[ [ -0.006244659423828125, -0.05657958984375, 0.0309600830078125, -0.00029540061950683594, -0.0170135498046875, -0.01204681396484375, -0.0213470458984375, -0.002063751220703125, -0.00003713369369506836, 0.0384521484375, -0.03924560546875, -0.0272979736328125, -0.047...
HasinMDG/distilroberta_SD_government_v2
2023-05-28T06:23:03.000Z
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
HasinMDG
null
null
HasinMDG/distilroberta_SD_government_v2
0
2
sentence-transformers
2023-05-28T06:22:52
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/distilroberta_SD_government_v2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/distilroberta_SD_government_v2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
1,567
[ [ -0.006195068359375, -0.0589599609375, 0.034698486328125, -0.0083160400390625, -0.013824462890625, -0.01250457763671875, -0.0198974609375, 0.00267791748046875, -0.0025997161865234375, 0.0404052734375, -0.037445068359375, -0.0240936279296875, -0.04864501953125, ...
BrainRoster/ppo-LunarLander-v2
2023-06-09T19:59:15.000Z
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
BrainRoster
null
null
BrainRoster/ppo-LunarLander-v2
0
2
stable-baselines3
2023-05-28T06:49:41
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 279.02 +/- 16.41 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename follows the standard SB3 Hub naming and is an assumption:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the .zip filename is assumed.
checkpoint = load_from_hub("BrainRoster/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
784
[ [ -0.00021219253540039062, -0.027099609375, 0.0170745849609375, 0.023345947265625, -0.0060577392578125, 0.0027484893798828125, 0.034423828125, -0.01212310791015625, 0.0198822021484375, 0.06500244140625, -0.04315185546875, -0.035247802734375, -0.0343017578125, ...
kyo-takano/open-calm-7b-8bit
2023-05-28T11:41:05.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "japanese", "causal-lm", "quantized", "ja", "license:cc-by-sa-4.0", "has_space", "text-generation-inference", "region:us" ]
text-generation
kyo-takano
null
null
kyo-takano/open-calm-7b-8bit
10
2
transformers
2023-05-28T10:22:16
--- license: cc-by-sa-4.0 language: - ja tags: - japanese - causal-lm - quantized inference: false --- # OpenCALM-7B - 8bit [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/kyo-takano/0c7bf0479158aa137e0ba935dec70461/opencalm-7b-8bit.ipynb) 8-bit quantized version of [OpenCALM-7B by CyberAgent (under CC BY-SA 4.0)](https://huggingface.co/cyberagent/open-calm-7b) When using this quantized model, please be sure to give credit to the original. ## Setup ```sh pip install -q -U bitsandbytes pip install -q -U git+https://github.com/huggingface/transformers.git pip install -q -U git+https://github.com/huggingface/accelerate.git ``` ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM MODEL_ID = "kyo-takano/open-calm-7b-8bit" model = AutoModelForCausalLM.from_pretrained(MODEL_ID) tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device) with torch.no_grad(): tokens = model.generate( **inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9, repetition_penalty=1.05, pad_token_id=tokenizer.pad_token_id, ) output = tokenizer.decode(tokens[0], skip_special_tokens=True) print(output) ``` ## Model Details - Developed by: CyberAgent, Inc. - Quantized by: Kyo Takano - Model type: Transformer-based Language Model - Language: Japanese - Library: GPT-NeoX - License: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). When using this model, please provide appropriate credit to **CyberAgent, Inc.**
1,739
[ [ -0.02105712890625, -0.042083740234375, 0.0196075439453125, 0.0271148681640625, -0.0286407470703125, -0.0101470947265625, -0.0076446533203125, -0.020782470703125, 0.004974365234375, 0.023101806640625, -0.0181732177734375, -0.0413818359375, -0.04315185546875, ...
YakovElm/Qt20Classic_256
2023-05-28T11:01:53.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_256
0
2
transformers
2023-05-28T11:01:15
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt20Classic_256 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt20Classic_256 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1866 - Train Accuracy: 0.9454 - Validation Loss: 0.1784 - Validation Accuracy: 0.9586 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2203 | 0.9383 | 0.1651 | 0.9586 | 0 | | 0.2026 | 0.9462 | 0.1571 | 0.9586 | 1 | | 0.1866 | 0.9454 | 0.1784 | 0.9586 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,774
[ [ -0.041015625, -0.0350341796875, 0.0237579345703125, 0.00315093994140625, -0.034637451171875, -0.0236053466796875, -0.009490966796875, -0.021881103515625, 0.006755828857421875, 0.01314544677734375, -0.055572509765625, -0.04779052734375, -0.0482177734375, -0.0...
YakovElm/Hyperledger15Classic_512
2023-05-28T15:51:00.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_512
0
2
transformers
2023-05-28T15:50:25
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger15Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger15Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2806 - Train Accuracy: 0.9035 - Validation Loss: 0.3198 - Validation Accuracy: 0.8807 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3217 | 0.8952 | 0.3253 | 0.8807 | 0 | | 0.2967 | 0.9035 | 0.3233 | 0.8807 | 1 | | 0.2806 | 0.9035 | 0.3198 | 0.8807 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,792
[ [ -0.048736572265625, -0.0421142578125, 0.022216796875, 0.004581451416015625, -0.0294189453125, -0.0281829833984375, -0.018463134765625, -0.0250091552734375, 0.01259613037109375, 0.0141143798828125, -0.053863525390625, -0.049163818359375, -0.051483154296875, -...
Abhilashvj/CIRCL_website_classifier_test
2023-05-28T16:44:04.000Z
[ "transformers", "pytorch", "resnet", "image-classification", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
Abhilashvj
null
null
Abhilashvj/CIRCL_website_classifier_test
0
2
transformers
2023-05-28T16:16:41
--- license: apache-2.0 pipeline_tag: image-classification metrics: - accuracy - f1 --- # Model Card for Model ID <!-- This model can be used to classify website screenshots. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
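Since the template's "How to Get Started" section is empty, a hypothetical inference sketch might look like this; only the repo id comes from the card, and the screenshot path is a placeholder:

```python
from transformers import pipeline

# Hypothetical usage; the card's "How to Get Started" section is empty.
classifier = pipeline("image-classification", model="Abhilashvj/CIRCL_website_classifier_test")
print(classifier("screenshot.png"))  # placeholder path to a website screenshot
```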
5,253
[ [ -0.047515869140625, -0.04296875, 0.03045654296875, 0.0038890838623046875, -0.0235595703125, -0.0227203369140625, 0.0085906982421875, -0.043609619140625, 0.01181793212890625, 0.052032470703125, -0.04998779296875, -0.0501708984375, -0.04248046875, -0.004093170...
Peraboom/SBertV1
2023-05-28T16:36:37.000Z
[ "transformers", "pytorch", "bert", "text-classification", "license:other", "endpoints_compatible", "region:us" ]
text-classification
Peraboom
null
null
Peraboom/SBertV1
1
2
transformers
2023-05-28T16:25:24
--- license: other --- This is a distilled model of BERT base uncased. It has 6 layers, 6 attention heads, and a hidden size of 384, for a total of 29.8M parameters. Performance-wise, it has the potential to retain about 87% of the performance of BERT base, which has 12 layers and 12 heads with 110M parameters.
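A minimal usage sketch, assuming the checkpoint ships a ready sequence-classification head (the card documents the architecture but not the task or labels):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Peraboom/SBertV1")
print(clf("Example input sentence"))  # label meanings are not documented by the card
```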
265
[ [ -0.04718017578125, -0.037017822265625, 0.033355712890625, 0.032135009765625, -0.0384521484375, 0.00970458984375, 0.0007414817810058594, -0.0164031982421875, 0.0096588134765625, 0.050933837890625, -0.027435302734375, -0.00255584716796875, -0.046783447265625, ...
tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets
2023-05-28T18:36:49.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
tonirodriguez
null
null
tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets
0
2
transformers
2023-05-28T16:45:52
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-toxicity-tweets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-toxicity-tweets This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1345 - Accuracy: 0.9604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.18 | 1.0 | 229 | 0.1270 | 0.9559 | | 0.0508 | 2.0 | 458 | 0.1345 | 0.9604 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
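A hedged inference sketch follows: the base model (roberta-base-bne) is Spanish, so a Spanish input is assumed, and the label names depend on the undocumented fine-tuning data:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets")
print(clf("Qué gran día para salir a pasear"))  # placeholder tweet; labels undocumented
```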
1,454
[ [ -0.0232696533203125, -0.046600341796875, 0.0156707763671875, 0.00821685791015625, -0.0245513916015625, -0.038909912109375, -0.00960540771484375, -0.01274871826171875, 0.00946044921875, 0.032318115234375, -0.04315185546875, -0.05816650390625, -0.05108642578125, ...
theSOL1/kogrammar-base
2023-06-08T11:52:16.000Z
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "grammar", "ko", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
theSOL1
null
null
theSOL1/kogrammar-base
1
2
transformers
2023-05-28T17:20:47
--- language: ko license: mit tags: - bart - grammar --- # kogrammar-base Dataset: 국립국어원 맞춤법 교정 말뭉치 (the National Institute of Korean Language spelling-correction corpus) <br> <br> Backbone Model: [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2/blob/main/README.md) <br> GitHub Repo: [SOL1archive/KoGrammar](https://github.com/SOL1archive/KoGrammar) ## Train Method Trained using roughly 45% of the full dataset as training data. ## Metric |BLEU-2|ROUGE-2 F1| |-|-| |77.8 %|55.0 %|
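A minimal sketch, assuming the model maps raw Korean text to a corrected version via standard text2text generation (the input sentence is a placeholder):

```python
from transformers import pipeline

corrector = pipeline("text2text-generation", model="theSOL1/kogrammar-base")
print(corrector("아버지가방에들어가신다")[0]["generated_text"])  # placeholder ungrammatical input
```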
404
[ [ -0.0212554931640625, -0.0131072998046875, 0.0150909423828125, 0.038604736328125, -0.04766845703125, -0.00011593103408813477, 0.0034542083740234375, 0.00319671630859375, 0.014373779296875, 0.039520263671875, -0.025238037109375, -0.05999755859375, -0.0452575683593...
JoseVerutti/uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez
2023-05-28T17:26:42.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
JoseVerutti
null
null
JoseVerutti/uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez
0
2
transformers
2023-05-28T17:23:23
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez results: - task: name: Text Classification type: text-classification dataset: name: datasetX type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.821078431372549 - name: F1 type: f1 value: 0.8717047451669596 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.5776 - Accuracy: 0.8211 - F1: 0.8717 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5197 | 1.09 | 500 | 0.5776 | 0.8211 | 0.8717 | | 0.35 | 2.18 | 1000 | 0.5931 | 0.8309 | 0.8752 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
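MRPC is a sentence-pair paraphrase task, so inference takes two sentences at once; a sketch follows, in which the id-to-label mapping is an assumption (the card does not state it):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "JoseVerutti/uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

# Sentence pairs are encoded together for MRPC-style models.
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print("paraphrase" if pred == 1 else "not paraphrase")  # label order assumed (MRPC convention)
```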
1,891
[ [ -0.0292205810546875, -0.04046630859375, 0.00881195068359375, 0.019989013671875, -0.0263671875, -0.022735595703125, -0.006439208984375, -0.00885009765625, 0.0005192756652832031, 0.0178985595703125, -0.04412841796875, -0.041748046875, -0.054046630859375, 0.000...
JoseVerutti/uao-bert-base-uncased-mrpc-glue-verutti-benjumea-lopez
2023-05-30T22:37:38.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
JoseVerutti
null
null
JoseVerutti/uao-bert-base-uncased-mrpc-glue-verutti-benjumea-lopez
0
2
transformers
2023-05-28T17:31:25
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: uao-bert-base-uncased-mrpc-glue-verutti-benjumea-lopez results: - task: name: Text Classification type: text-classification dataset: name: datasetX type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8406862745098039 - name: F1 type: f1 value: 0.8853615520282188 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # uao-bert-base-uncased-mrpc-glue-verutti-benjumea-lopez This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.6569 - Accuracy: 0.8407 - F1: 0.8854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5352 | 1.09 | 500 | 0.5610 | 0.8064 | 0.8587 | | 0.3137 | 2.18 | 1000 | 0.6569 | 0.8407 | 0.8854 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,888
[ [ -0.033233642578125, -0.036041259765625, 0.01114654541015625, 0.01451873779296875, -0.0274658203125, -0.0287017822265625, -0.0193023681640625, -0.018218994140625, 0.00543212890625, 0.02740478515625, -0.05181884765625, -0.044342041015625, -0.046783447265625, -...
YakovElm/IntelDAOS5Classic_512
2023-05-28T19:12:59.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_512
0
2
transformers
2023-05-28T19:12:26
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS5Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS5Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3745 - Train Accuracy: 0.8740 - Validation Loss: 0.4273 - Validation Accuracy: 0.8438 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3984 | 0.8710 | 0.4399 | 0.8438 | 0 | | 0.3811 | 0.8740 | 0.4332 | 0.8438 | 1 | | 0.3745 | 0.8740 | 0.4273 | 0.8438 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,786
[ [ -0.04473876953125, -0.039215087890625, 0.02130126953125, 0.0008177757263183594, -0.03350830078125, -0.027740478515625, -0.0180206298828125, -0.0283660888671875, 0.0124359130859375, 0.0110015869140625, -0.053741455078125, -0.048675537109375, -0.051910400390625, ...
YakovElm/MariaDB5Classic_512
2023-05-28T19:20:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_512
0
2
transformers
2023-05-28T19:20:21
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB5Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB5Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2565 - Train Accuracy: 0.9113 - Validation Loss: 0.2527 - Validation Accuracy: 0.9322 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3291 | 0.8795 | 0.2461 | 0.9322 | 0 | | 0.2694 | 0.9063 | 0.2598 | 0.9296 | 1 | | 0.2565 | 0.9113 | 0.2527 | 0.9322 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,782
[ [ -0.044525146484375, -0.04193115234375, 0.021453857421875, 0.0027103424072265625, -0.0333251953125, -0.030029296875, -0.0152740478515625, -0.026824951171875, 0.01476287841796875, 0.01439666748046875, -0.0552978515625, -0.04998779296875, -0.051544189453125, -0...