Dataset schema (one row per model; for string columns, Min/Max are string lengths; for numeric columns, Min/Max are values):

| Column         | Dtype         | Min | Max   |
|----------------|---------------|-----|-------|
| modelId        | string        | 4   | 111   |
| lastModified   | string        | 24  | 24    |
| tags           | list          |     |       |
| pipeline_tag   | string        | 5   | 30    |
| author         | string        | 2   | 34    |
| config         | null          |     |       |
| securityStatus | null          |     |       |
| id             | string        | 4   | 111   |
| likes          | int64         | 0   | 9.53k |
| downloads      | int64         | 2   | 73.6M |
| library_name   | string        | 2   | 84    |
| created        | timestamp[us] |     |       |
| card           | string        | 101 | 901k  |
| card_len       | int64         | 101 | 901k  |
| embeddings     | list          |     |       |
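The records below follow this schema. As a quick way to load and inspect a dump with this layout, a sketch along these lines should work with the `datasets` library (the dataset ID `your-org/model-cards-with-embeddings` is a placeholder; substitute the actual repository name):

```python
from datasets import load_dataset

# Placeholder dataset ID -- substitute the actual repository name.
ds = load_dataset("your-org/model-cards-with-embeddings", split="train")

print(ds.features)  # column names and dtypes, matching the schema above

row = ds[0]
print(row["modelId"], row["pipeline_tag"], row["downloads"])
print(row["card"][:200])          # first 200 characters of the model card
print(len(row["embeddings"][0]))  # embedding dimensionality
```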
tum-nlp/IDMGSP-Galactica-TRAIN-CG
2023-07-31T15:21:47.000Z
[ "transformers", "pytorch", "opt", "text-classification", "scientific paper", "fake papers", "science", "scientific text", "en", "dataset:tum-nlp/IDMGSP", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
tum-nlp
null
null
tum-nlp/IDMGSP-Galactica-TRAIN-CG
0
2
transformers
2023-05-30T19:26:02
--- datasets: - tum-nlp/IDMGSP language: - en tags: - scientific paper - fake papers - science - scientific text widget: - text: | Abstract: The Hartree-Fock (HF) method is a widely used method for approximating the electronic structure of many-electron systems. In this work, we study the properties of HF solutions of the three-dimensional electron gas (3DEG), a model system consisting of a uniform, non-interacting electron gas in three dimensions. We find that the HF solutions accurately reproduce the known analytic results for the ground state energy and the static structure factor of the 3DEG. However, we also find that the HF solutions fail to accurately describe the excitation spectrum of the 3DEG, particularly at high energies. Introduction: The HF method is a self-consistent method for approximating the electronic structure of many-electron systems. It is based on the assumption that the electrons in a system can be described as non-interacting quasiparticles, each with its own effective potential. The HF method is commonly used to study the ground state properties of systems, such as the energy and the density distribution, but it can also be used to study excited states. The 3DEG is a model system that has been widely studied as a test case for electronic structure methods. It consists of a uniform, non-interacting electron gas in three dimensions, with a finite density and a periodic boundary condition. The 3DEG has a number of known analytic results for its ground state properties, such as the ground state energy and the static structure factor, which can be used to test the accuracy of approximate methods. Conclusion: In this work, we have studied the properties of HF solutions of the 3DEG. We find that the HF solutions accurately reproduce the known analytic results for the ground state energy and the static structure factor of the 3DEG. However, we also find that the HF solutions fail to accurately describe the excitation spectrum of the 3DEG, particularly at high energies. This suggests that the HF method may not be suitable for accurately describing the excited states of the 3DEG. Further work is needed to understand the limitations of the HF method and to develop improved methods for studying the electronic structure of many-electron systems. example_title: "Example ChatGPT fake" - text: | Abstract: Recent calculations have pointed to a 2.8 $\sigma$ tension between data on $\epsilon^{\prime}_K / \epsilon_K$ and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of $\mathcal{B}(K\to \pi \nu \overline{\nu})$ from the SM predictions, which can be probed precisely in the near future by NA62 and KOTO experiments. We present correlations between $\epsilon^{\prime}_K / \epsilon_K$ and $\mathcal{B}(K\to \pi \nu \overline{\nu})$ in two types of NP scenarios: a box dominated scenario and a $Z$-penguin dominated one. It is shown that different correlations are predicted and the future precision measurements of $K \to \pi \nu \overline{\nu}$ can distinguish both scenarios. Introduction: CP violating flavor-changing neutral current decays of K mesons are extremely sensitive to new physics (NP) and can probe virtual effects of particles with masses far above the reach of the Large Hadron Collider. Prime examples of such observables are ϵ′ K measuring direct CP violation in K → ππ decays and B(KL → π0νν). 
Until recently, large theoretical uncertainties precluded reliable predictions for ϵ′ K. Although standard-model (SM) predictions of ϵ′ K using chiral perturbation theory are consistent with the experimental value, their theoretical uncertainties are large. In contrast, calculation by the dual QCD approach 1 finds the SM value much below the experimental one. A major breakthrough has been the recent lattice-QCD calculation of the hadronic matrix elements by RBC-UKQCD collaboration 2, which gives support to the latter result. The SM value at the next-to-leading order divided by the indirect CP violating measure ϵK is 3 which is consistent with (ϵ′ K/ϵK)SM = (1.9±4.5)×10−4 given by Buras et al 4.a Both results are based on the lattice numbers, and further use CP-conserving K → ππ data to constrain some of the hadronic matrix elements involved. Compared to the world average of the experimental results 6, Re (ϵ′ K/ϵK)exp = (16.6 ± 2.3) × 10−4, (2) the SM prediction lies below the experimental value by 2.8 σ. Several NP models including supersymmetry (SUSY) can explain this discrepancy. It is known that such NP models are likely to predict deviations of the kaon rare decay branching ratios from the SM predictions, especially B(K → πνν) which can be probed precisely in the near future by NA62 and KOTO experiments.b In this contribution, we present correlations between ϵ′ K/ϵK and B(K → πνν) in two types of NP scenarios: a box dominated scenario and a Z-penguin dominated one. Presented at the 52th Rencontres de Moriond electroweak interactions and unified theories, La Thuile, Italy, 18-25 March, 2017. aOther estimations of the SM value are listed in Kitahara et al 5. b The correlations between ϵ′ K/ϵK, B(K → πνν) and ϵK through the CKM components in the SM are discussed in Ref. 7. Conclusion: We have presented the correlations between ϵ′ K/ϵK, B(KL → π0νν), and B(K+ → π+νν) in the box dominated scenario and the Z-penguin dominated one. It is shown that the constraint from ϵK produces different correlations between two NP scenarios. In the future, measurements of B(K → πνν) will be significantly improved. The NA62 experiment at CERN measuring B(K+ → π+νν) is aiming to reach a precision of 10 % compared to the SM value already in 2018. In order to achieve 5% accuracy more time is needed. Concerning KL → π0νν, the KOTO experiment at J-PARC aims in a first step at measuring B(KL → π0νν) around the SM sensitivity. Furthermore, the KOTO-step2 experiment will aim at 100 events for the SM branching ratio, implying a precision of 10 % of this measurement. Therefore, we conclude that when the ϵ′ K/ϵK discrepancy is explained by the NP contribution, NA62 experiment could probe whether a modified Z-coupling scenario is realized or not, and KOTO-step2 experiment can distinguish the box dominated scenario and the simplified modified Z-coupling scenario. example_title: "Example real" --- # Model Card for IDMGSP-Galactica-TRAIN-CG A fine-tuned Galactica model to detect machine-generated scientific papers based on their abstract, introduction, and conclusion. This model is trained on the `train-cg` dataset found in https://huggingface.co/datasets/tum-nlp/IDMGSP. # this model card is WIP, please check the repository, the dataset card and the paper for more details. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. 
--> - **Developed by:** Technical University of Munich (TUM) - **Model type:** [More Information Needed] - **Language(s) (NLP):** English - **License:** [More Information Needed] - **Finetuned from model [optional]:** Galactica ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/qwenzo/-IDMGSP - **Paper:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use
```python
from transformers import AutoTokenizer, OPTForSequenceClassification, pipeline

model = OPTForSequenceClassification.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
tokenizer = AutoTokenizer.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
reader = pipeline("text-classification", model=model, tokenizer=tokenizer)

reader(
    '''
    Abstract: ....
    Introduction: ....
    Conclusion: ...'''
)
```
### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Training Details ### Training Data The training dataset comprises scientific papers generated by the Galactica, GPT-2, and SCIgen models, as well as papers extracted from the arXiv database. The table below displays the sample counts from each source used in constructing the training dataset. The dataset can be found at https://huggingface.co/datasets/tum-nlp/IDMGSP. | Dataset | arXiv (real) | ChatGPT (fake) | GPT-2 (fake) | SCIgen (fake) | Galactica (fake) | GPT-3 (fake) | |------------------------------|--------------|----------------|--------------|----------------|------------------|--------------| | TRAIN without ChatGPT (TRAIN-CG) | 8k | - | 2k | 2k | 2k | - | ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters [More Information Needed] #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here.
Edit the suggested text below accordingly --> - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
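The card's Direct Use snippet runs on placeholder text. A minimal sketch for classifying an actual paper from the dataset the card references follows; the `train-cg` configuration name and the `abstract`/`introduction`/`conclusion` column names are assumptions inferred from the card, so verify them against https://huggingface.co/datasets/tum-nlp/IDMGSP before relying on them:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, OPTForSequenceClassification, pipeline

model = OPTForSequenceClassification.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
tokenizer = AutoTokenizer.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
reader = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Configuration and column names are assumptions; check the dataset card.
sample = load_dataset("tum-nlp/IDMGSP", "train-cg", split="train")[0]
text = "Abstract: {}\nIntroduction: {}\nConclusion: {}".format(
    sample["abstract"], sample["introduction"], sample["conclusion"]
)
print(reader(text, truncation=True))  # e.g. [{"label": ..., "score": ...}]
```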
11,586
[ [ -0.048614501953125, -0.03900146484375, 0.0229644775390625, 0.00699615478515625, -0.02960205078125, -0.00579833984375, -0.005626678466796875, -0.0302886962890625, 0.016143798828125, 0.024322509765625, -0.0460205078125, -0.0438232421875, -0.050323486328125, 0....
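Each record also carries a card embedding (truncated above). Assuming each `embeddings` entry is a list wrapping a single vector per card, nearest-neighbor lookups over cards reduce to cosine similarity; a minimal sketch, reusing `ds` from the loading snippet after the schema:

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product over the product of norms for two 1-D vectors.
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each "embeddings" field is assumed to be a list containing one vector.
sim = cosine_similarity(ds[0]["embeddings"][0], ds[1]["embeddings"][0])
print(f"card similarity: {sim:.3f}")
```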
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_2
2023-05-30T19:29:13.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T19:28:35
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger10Classic_Balance_DATA_ratio_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger10Classic_Balance_DATA_ratio_2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5273 - Train Accuracy: 0.7262 - Validation Loss: 0.6784 - Validation Accuracy: 0.6576 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6215 | 0.6646 | 0.6233 | 0.6685 | 0 | | 0.5866 | 0.6782 | 0.5921 | 0.6522 | 1 | | 0.5273 | 0.7262 | 0.6784 | 0.6576 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
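This card, like the near-identical YakovElm cards that follow, documents a TensorFlow BERT classifier fine-tuned from bert-base-uncased. A minimal loading sketch, assuming the checkpoint ships its own tokenizer files (the input sentence is a placeholder):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

repo = "YakovElm/Hyperledger10Classic_Balance_DATA_ratio_2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFBertForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example issue text to classify", return_tensors="tf")
logits = model(**inputs).logits
print(tf.nn.softmax(logits, axis=-1).numpy())  # per-class probabilities
```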
1,826
[ [ -0.045654296875, -0.04180908203125, 0.0142974853515625, 0.00920867919921875, -0.0271453857421875, -0.0306243896484375, -0.011962890625, -0.023040771484375, 0.018768310546875, 0.01380157470703125, -0.05267333984375, -0.03704833984375, -0.051055908203125, -0.0...
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_3
2023-05-30T19:49:04.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T19:48:29
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger10Classic_Balance_DATA_ratio_3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger10Classic_Balance_DATA_ratio_3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4784 - Train Accuracy: 0.7641 - Validation Loss: 0.4682 - Validation Accuracy: 0.7755 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5434 | 0.7396 | 0.4923 | 0.7531 | 0 | | 0.5131 | 0.7498 | 0.4739 | 0.7551 | 1 | | 0.4784 | 0.7641 | 0.4682 | 0.7755 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.046875, -0.042449951171875, 0.0161590576171875, 0.0088043212890625, -0.028076171875, -0.0307159423828125, -0.01221466064453125, -0.0239410400390625, 0.0187530517578125, 0.01548004150390625, -0.052032470703125, -0.0380859375, -0.05072021484375, -0.01995849...
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_4
2023-05-30T20:13:47.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T20:13:12
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger10Classic_Balance_DATA_ratio_4 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger10Classic_Balance_DATA_ratio_4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4452 - Train Accuracy: 0.8037 - Validation Loss: 0.4715 - Validation Accuracy: 0.7993 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4893 | 0.7945 | 0.4696 | 0.7993 | 0 | | 0.4671 | 0.8032 | 0.4655 | 0.7993 | 1 | | 0.4452 | 0.8037 | 0.4715 | 0.7993 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.046661376953125, -0.04168701171875, 0.0164794921875, 0.00867462158203125, -0.0271453857421875, -0.0296783447265625, -0.0125885009765625, -0.0228729248046875, 0.01849365234375, 0.01519775390625, -0.05279541015625, -0.038665771484375, -0.049713134765625, -0...
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_Half
2023-05-30T20:21:26.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T20:20:50
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger15Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger15Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5381 - Train Accuracy: 0.7134 - Validation Loss: 0.5600 - Validation Accuracy: 0.7548 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6293 | 0.6724 | 0.6315 | 0.6452 | 0 | | 0.6051 | 0.6746 | 0.5948 | 0.7097 | 1 | | 0.5381 | 0.7134 | 0.5600 | 0.7548 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,832
[ [ -0.048248291015625, -0.0438232421875, 0.0125579833984375, 0.00980377197265625, -0.0287628173828125, -0.0283203125, -0.01043701171875, -0.0223236083984375, 0.0191497802734375, 0.013916015625, -0.056396484375, -0.040191650390625, -0.049224853515625, -0.0230255...
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_1
2023-05-30T20:30:37.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T20:29:58
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger15Classic_Balance_DATA_ratio_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger15Classic_Balance_DATA_ratio_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5761 - Train Accuracy: 0.6688 - Validation Loss: 0.6144 - Validation Accuracy: 0.6359 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6766 | 0.5929 | 0.6081 | 0.7087 | 0 | | 0.6340 | 0.6187 | 0.5659 | 0.7184 | 1 | | 0.5761 | 0.6688 | 0.6144 | 0.6359 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.0482177734375, -0.04302978515625, 0.01432037353515625, 0.010162353515625, -0.0279083251953125, -0.029815673828125, -0.011383056640625, -0.0213470458984375, 0.0189056396484375, 0.01485443115234375, -0.055755615234375, -0.039947509765625, -0.048980712890625, ...
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_2
2023-05-30T20:44:46.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T20:43:48
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger15Classic_Balance_DATA_ratio_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger15Classic_Balance_DATA_ratio_2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4921 - Train Accuracy: 0.7403 - Validation Loss: 0.6094 - Validation Accuracy: 0.6968 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6037 | 0.6649 | 0.5731 | 0.6806 | 0 | | 0.5526 | 0.6875 | 0.5595 | 0.7032 | 1 | | 0.4921 | 0.7403 | 0.6094 | 0.6968 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.047149658203125, -0.042999267578125, 0.01374053955078125, 0.0098876953125, -0.0284881591796875, -0.0300140380859375, -0.01194000244140625, -0.0224609375, 0.0169525146484375, 0.0146484375, -0.054656982421875, -0.038421630859375, -0.049560546875, -0.0232696...
mikemosc/distilbert-base-uncased-finetuned-mnli
2023-05-30T23:33:58.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
mikemosc
null
null
mikemosc/distilbert-base-uncased-finetuned-mnli
0
2
transformers
2023-05-30T20:47:05
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-mnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mnli split: validation_matched args: mnli metrics: - name: Accuracy type: accuracy value: 0.8188487009679063 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5546 - Accuracy: 0.8188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.5153 | 1.0 | 24544 | 0.4999 | 0.8029 | | 0.4194 | 2.0 | 49088 | 0.4788 | 0.8138 | | 0.3065 | 3.0 | 73632 | 0.5546 | 0.8188 | | 0.2172 | 4.0 | 98176 | 0.7237 | 0.8142 | | 0.1784 | 5.0 | 122720 | 0.8463 | 0.8165 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
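Since this card reports accuracy on the MNLI `validation_matched` split, a spot check against that split can be sketched as below. The `text`/`text_pair` dict input for sentence pairs is assumed to be supported by the installed transformers version, and the returned label names should be interpreted via the model config rather than assumed:

```python
from datasets import load_dataset
from transformers import pipeline

clf = pipeline("text-classification", model="mikemosc/distilbert-base-uncased-finetuned-mnli")

sample = load_dataset("glue", "mnli", split="validation_matched")[0]
# Check clf.model.config.id2label to map label names to
# entailment / neutral / contradiction.
print(clf({"text": sample["premise"], "text_pair": sample["hypothesis"]}))
```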
1,932
[ [ -0.02838134765625, -0.047149658203125, 0.01312255859375, 0.0137481689453125, -0.023468017578125, -0.0201263427734375, -0.0079345703125, -0.0032978057861328125, 0.01251220703125, 0.01515960693359375, -0.04986572265625, -0.044189453125, -0.05712890625, -0.0129...
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_3
2023-05-30T21:05:05.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T21:01:17
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger15Classic_Balance_DATA_ratio_3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger15Classic_Balance_DATA_ratio_3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4230 - Train Accuracy: 0.7633 - Validation Loss: 0.5957 - Validation Accuracy: 0.7361 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5250 | 0.7447 | 0.5194 | 0.7579 | 0 | | 0.4694 | 0.7625 | 0.5376 | 0.7603 | 1 | | 0.4230 | 0.7633 | 0.5957 | 0.7361 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.0474853515625, -0.04327392578125, 0.015899658203125, 0.00934600830078125, -0.028411865234375, -0.030609130859375, -0.01181793212890625, -0.02337646484375, 0.016510009765625, 0.0155181884765625, -0.05352783203125, -0.04083251953125, -0.04876708984375, -0.0...
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_4
2023-05-30T21:27:51.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T21:25:40
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger15Classic_Balance_DATA_ratio_4 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger15Classic_Balance_DATA_ratio_4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3612 - Train Accuracy: 0.8379 - Validation Loss: 0.5238 - Validation Accuracy: 0.7442 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4905 | 0.7933 | 0.4737 | 0.7674 | 0 | | 0.4290 | 0.8127 | 0.4847 | 0.75 | 1 | | 0.3612 | 0.8379 | 0.5238 | 0.7442 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.04736328125, -0.04241943359375, 0.01561737060546875, 0.00933074951171875, -0.0281524658203125, -0.029052734375, -0.0126190185546875, -0.0227203369140625, 0.017120361328125, 0.0146636962890625, -0.05438232421875, -0.04107666015625, -0.048675537109375, -0.0...
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_Half
2023-05-30T21:36:46.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T21:34:24
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_Balance_DATA_ratio_Half results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_Balance_DATA_ratio_Half This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4412 - Train Accuracy: 0.7976 - Validation Loss: 0.6211 - Validation Accuracy: 0.6569 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5896 | 0.6976 | 0.6079 | 0.7080 | 0 | | 0.5253 | 0.7659 | 0.6139 | 0.7153 | 1 | | 0.4412 | 0.7976 | 0.6211 | 0.6569 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,832
[ [ -0.047271728515625, -0.044036865234375, 0.01366424560546875, 0.007781982421875, -0.0285186767578125, -0.0284423828125, -0.00952911376953125, -0.023162841796875, 0.0191192626953125, 0.01526641845703125, -0.05670166015625, -0.0404052734375, -0.051025390625, -0...
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_1
2023-05-30T21:47:29.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T21:44:29
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_Balance_DATA_ratio_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_Balance_DATA_ratio_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5573 - Train Accuracy: 0.7130 - Validation Loss: 0.6381 - Validation Accuracy: 0.6154 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6848 | 0.5539 | 0.6371 | 0.6648 | 0 | | 0.6290 | 0.6362 | 0.6070 | 0.6648 | 1 | | 0.5573 | 0.7130 | 0.6381 | 0.6154 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.0474853515625, -0.04278564453125, 0.01477813720703125, 0.00789642333984375, -0.0277099609375, -0.029693603515625, -0.01113128662109375, -0.0224456787109375, 0.0184326171875, 0.0160675048828125, -0.055938720703125, -0.039886474609375, -0.05108642578125, -0...
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_2
2023-05-30T21:59:17.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T21:58:38
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_Balance_DATA_ratio_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_Balance_DATA_ratio_2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5201 - Train Accuracy: 0.6976 - Validation Loss: 0.6229 - Validation Accuracy: 0.6788 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5906 | 0.6427 | 0.5674 | 0.7263 | 0 | | 0.5513 | 0.6549 | 0.5635 | 0.6825 | 1 | | 0.5201 | 0.6976 | 0.6229 | 0.6788 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.045867919921875, -0.04296875, 0.0148773193359375, 0.00870513916015625, -0.02850341796875, -0.0301666259765625, -0.0111541748046875, -0.023529052734375, 0.0171966552734375, 0.015655517578125, -0.054931640625, -0.03887939453125, -0.051666259765625, -0.02334...
razerblade072611/EleutherAI
2023-05-31T14:02:03.000Z
[ "transformers", "pytorch", "jax", "rust", "gpt_neo", "text-generation", "doi:10.57967/hf/0703", "endpoints_compatible", "region:us" ]
text-generation
razerblade072611
null
null
razerblade072611/EleutherAI
0
2
transformers
2023-05-30T22:02:54
```python
import atexit
import json
import os
import sys

import nltk
import numpy as np
import pyttsx3
import spacy
import speech_recognition as sr
import torch
import transformers
from dotenv import load_dotenv
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import AutoTokenizer, GPTNeoForCausalLM, pipeline
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

model_path = spacy.util.get_package_path('en_core_web_sm')
print(model_path)
print("transformers version:", transformers.__version__)
print("spacy version:", spacy.__version__)
print("nltk version:", nltk.__version__)

# Load the spaCy pipeline once. The original script did "import spacy as nlp",
# which aliased the module itself and broke the nlp(text) calls below.
nlp = spacy.load('en_core_web_sm')

sys.path.append(r"C:\Users\withe\PycharmProjects\no hope2\Gpt-Neo1")

# Download necessary NLTK resources
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('omw-1.4')

# Load the API key from the environment file. The original had a bare
# "(dotenv_path)" expression here; load_dotenv (python-dotenv) is the likely intent.
dotenv_path = './API_KEY.env'
load_dotenv(dotenv_path)

# Check if GPU is available and set the device accordingly
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Initialize the speech engine and list the available voices
speech_engine = pyttsx3.init()
voices = speech_engine.getProperty('voices')
for voice in voices:
    print(voice.id, voice.name)

# Find the voice ID based on the desired voice name
desired_voice = "Microsoft Hazel Desktop - English (Great Britain)"
voice_id = None
for voice in voices:
    if desired_voice in voice.name:
        voice_id = voice.id
        break

if voice_id:
    speech_engine.setProperty('voice', voice_id)
    print("Desired voice set successfully.")
else:
    print("Desired voice not found.")


class CommonModule:
    def __init__(self, model, name, param1, param2):
        # Initialize the instance variables using the provided arguments
        self.model = model
        self.name = name
        self.param1 = param1
        self.param2 = param2
        self.tokenizer = AutoTokenizer.from_pretrained(model)  # Load the tokenizer
        self.tokenizer.add_special_tokens({'pad_token': '[PAD]'})
        self.gpt3_model = GPTNeoForCausalLM.from_pretrained('EleutherAI/gpt-neo-1.3B')
        self.gpt3_model.to(device)  # Move model to the device (GPU or CPU)
        self.memory_module = MemoryModule()
        self.sentiment_module = SentimentAnalysisModule()
        self.speech_engine = speech_engine  # Assign the initialized speech engine
        self.max_sequence_length = 10  # Decrease the value for faster response
        self.num_beams = 4  # Reduce the value for faster response
        self.no_repeat_ngram_size = 2
        self.temperature = 0.3
        self.response_cache = {}  # Cache for storing frequently occurring responses

    def reset_conversation(self):
        self.memory_module.reset_memory()

    def retrieve_cached_response(self, input_text):
        named_entities = self.memory_module.get_named_entities()
        for entity in named_entities:
            if entity.lower() in input_text.lower():
                return self.response_cache.get(entity)
        return None

    def generate_gpt3_response(self, input_text, conversation_history, temperature=0.3):
        prompt = '\n'.join(conversation_history) + '\n' + input_text + '\n'
        generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')
        output = generator(
            prompt,
            do_sample=True,
            min_length=10,
            max_length=300,
            num_return_sequences=1,
            temperature=temperature  # the original hard-coded 0.3, ignoring the parameter
        )
        if output:
            return output[0]['generated_text'].strip()
        return ""

    def process_input(self, input_text, conversation_history):
        named_entities = list(self.memory_module.get_named_entities())
        for entity in named_entities:
            if entity in input_text:
                response = "Nice to meet you again, {}!".format(entity)
                self.memory_module.add_to_memory(response)
                return response

        # Check if the input contains a question
        if '?' in input_text:
            return "You're making me angry, you wouldn't like me when I'm angry."

        # Check if the input contains a keyword for memory search
        if 'search' in input_text.lower():
            keyword = input_text.lower().split('search ')[-1]
            matches = self.memory_module.search_memory(keyword)
            if matches:
                return "I found some related information in the memory:\n" + '\n'.join(matches)
            return "Sorry, I couldn't find any relevant information in the memory."

        # Retrieve the cached response, generating a new one on a miss
        response = self.retrieve_cached_response(input_text)
        if response is None:
            response = self.generate_gpt3_response(input_text, conversation_history)
            self.cache_response(input_text, response)

        # Sets are unindexable, so convert to a list before taking the first entity
        named_entities = list(self.memory_module.get_named_entities())
        if named_entities and any(entity in input_text for entity in named_entities):
            response = "Nice to meet you, {}! I'm still {}".format(named_entities[0], self.name)

        self.memory_module.add_to_memory(response)
        return response

    def cache_response(self, input_text, response):
        self.response_cache[input_text] = response

    def speak(self, text, conversation_history=None):
        if conversation_history is None:
            conversation_history = []
        conversation_history.append(text)
        print(text)
        self.speech_engine.say(text)
        self.speech_engine.runAndWait()

    def listen(self):
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            print("Listening...")
            audio = recognizer.listen(source)
        try:
            user_input = recognizer.recognize_google(audio)
            print("You said:", user_input)
            return user_input
        except sr.UnknownValueError:
            print("Sorry, I could not understand your speech.")
        except sr.RequestError:
            print("Sorry, an error occurred while processing your request. Please try again.")
        return ""

    def converse(self):
        self.reset_conversation()
        self.speak("Hey, what's up bro? I'm {}".format(self.name))
        conversation_history = []
        while True:
            user_input = self.listen()
            if user_input:
                response = self.process_input(user_input, conversation_history)
                self.speak(response, conversation_history)

                # Check if the user input contains a named entity (name)
                named_entities = list(self.memory_module.get_named_entities())
                if named_entities and any(entity in user_input for entity in named_entities):
                    self.speak("Nice to meet you, {}! I'm still {}".format(named_entities[0], self.name),
                               conversation_history)

                conversation_history.append(user_input)

                # Check if the conversation is over (you can define your own condition here)
                if user_input == "bye":
                    self.save_memory('C:\\Users\\withe\\PycharmProjects\\no hope\\Chat_Bot_Main\\save_memory.json')
                    break

    def save_memory(self, file_path):
        data = {
            'memory': self.memory_module.memory,
            'named_entities': list(self.memory_module.named_entities)  # Convert set to list
        }
        with open(file_path, 'w') as file:
            json.dump(data, file)

    def load_memory_data(self, memory_data):
        self.memory_module.memory = memory_data['memory']
        self.memory_module.named_entities = set(memory_data['named_entities'])


class MemoryModule:
    def __init__(self):
        self.memory = []
        self.vectorizer = TfidfVectorizer(stop_words=stopwords.words('english'))
        self.lemmatizer = WordNetLemmatizer()
        self.tokenizer = nltk.tokenize.word_tokenize
        self.named_entities = set()  # Set to store named entities like names

    def get_named_entities(self):
        return self.named_entities

    def preprocess_text(self, text):
        tokens = self.tokenizer(text.lower())
        tokens = [self.lemmatizer.lemmatize(token) for token in tokens if token.isalnum()]
        return ' '.join(tokens)

    def add_to_memory(self, text):
        preprocessed_text = self.preprocess_text(text)
        self.memory.append(preprocessed_text)

        # Update named entities if any
        named_entity = self.extract_named_entity(text)
        if named_entity:
            self.named_entities.add(named_entity)

    def extract_named_entity(self, text):
        doc = nlp(text)
        for entity in doc.ents:
            if entity.label_ in ['PERSON', 'ORG', 'GPE']:
                return entity.text
        return None

    def search_memory(self, keyword):
        preprocessed_keyword = self.preprocess_text(keyword)
        # Fit on the stored memory before transforming the keyword; the original
        # called transform() on an unfitted vectorizer, which raises an error.
        vectorized_memory = self.vectorizer.fit_transform(self.memory)
        vectorized_keyword = self.vectorizer.transform([preprocessed_keyword])
        similarity_scores = np.dot(vectorized_memory, vectorized_keyword.T).toarray().flatten()
        sorted_indices = np.argsort(similarity_scores)[::-1]
        return [self.memory[i] for i in sorted_indices if similarity_scores[i] > 0.5]

    def reset_memory(self):
        self.memory = []
        self.named_entities = set()


class SentimentAnalysisModule:
    def __init__(self):
        self.analyzer = SentimentIntensityAnalyzer()

    def analyze_sentiment(self, text):
        return self.analyzer.polarity_scores(text)

    def get_sentiment_label(self, sentiment_scores):
        compound_score = sentiment_scores['compound']
        if compound_score >= 0.05:
            return 'positive'
        elif compound_score <= -0.05:
            return 'negative'
        return 'neutral'


# Define an exit handler function
def exit_handler(common_module):
    memory_data = {
        'memory': common_module.memory_module.memory,
        'named_entities': list(common_module.memory_module.named_entities)
    }
    common_module.save_memory('C:\\Users\\withe\\PycharmProjects\\no hope2\\Chat_Bot1\\save_memory.json')
    print("Memory data saved successfully.")
    return memory_data


# Check whether a saved-memory file exists
def check_memory_file(file_path):
    return os.path.isfile(file_path)


# Load memory data at startup if the file exists
if __name__ == "__main__":
    model = 'gpt2'
    name = "Chat bot1"
    param1 = 'value1'
    param2 = 'value2'
    common_module = CommonModule(model, name, param1, param2)

    memory_file_path = 'C:\\Users\\withe\\PycharmProjects\\no hope2\\Chat_Bot1\\load_memory1.json'
    if check_memory_file(memory_file_path):
        with open(memory_file_path, 'r') as file:
            memory_data = json.load(file)
        common_module.load_memory_data(memory_data)

    # Register the exit handler so memory is saved on interpreter shutdown
    atexit.register(exit_handler, common_module)

    common_module.converse()
    common_module.save_memory(memory_file_path)
```
11,960
[ [ -0.0200958251953125, -0.06439208984375, 0.026947021484375, 0.0266571044921875, -0.00296783447265625, -0.00910186767578125, -0.0292816162109375, -0.0186767578125, 0.0113372802734375, 0.021087646484375, -0.04241943359375, -0.0396728515625, -0.025543212890625, ...
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_3
2023-05-30T22:14:31.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T22:13:44
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_Balance_DATA_ratio_3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_Balance_DATA_ratio_3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4850 - Train Accuracy: 0.7678 - Validation Loss: 0.5013 - Validation Accuracy: 0.7068 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5443 | 0.7477 | 0.5624 | 0.7205 | 0 | | 0.4953 | 0.7660 | 0.4982 | 0.7096 | 1 | | 0.4850 | 0.7678 | 0.5013 | 0.7068 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.047271728515625, -0.042755126953125, 0.016754150390625, 0.00904083251953125, -0.0279541015625, -0.03082275390625, -0.01070404052734375, -0.0241546630859375, 0.0167388916015625, 0.017303466796875, -0.05438232421875, -0.0406494140625, -0.050445556640625, -0...
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_4
2023-05-30T22:33:42.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T22:32:52
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_Balance_DATA_ratio_4 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_Balance_DATA_ratio_4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3697 - Train Accuracy: 0.8194 - Validation Loss: 0.5944 - Validation Accuracy: 0.8026 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4770 | 0.8085 | 0.4682 | 0.7807 | 0 | | 0.4307 | 0.8114 | 0.4549 | 0.7763 | 1 | | 0.3697 | 0.8194 | 0.5944 | 0.8026 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,826
[ [ -0.046783447265625, -0.042694091796875, 0.016357421875, 0.00823211669921875, -0.0275421142578125, -0.0289306640625, -0.0121307373046875, -0.0232391357421875, 0.0174560546875, 0.0159759521484375, -0.05511474609375, -0.040557861328125, -0.05047607421875, -0.01...
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_1
2023-05-30T22:42:19.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T22:41:46
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS5Classic_Balance_DATA_ratio_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS5Classic_Balance_DATA_ratio_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6248 - Train Accuracy: 0.6548 - Validation Loss: 0.6878 - Validation Accuracy: 0.5714 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6815 | 0.5714 | 0.6853 | 0.5357 | 0 | | 0.6431 | 0.6270 | 0.7439 | 0.4762 | 1 | | 0.6248 | 0.6548 | 0.6878 | 0.5714 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,820
[ [ -0.045257568359375, -0.035858154296875, 0.0139007568359375, 0.00283050537109375, -0.03057861328125, -0.026458740234375, -0.011962890625, -0.02685546875, 0.0183868408203125, 0.00807952880859375, -0.055511474609375, -0.0428466796875, -0.049713134765625, -0.025...
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_2
2023-05-30T22:48:24.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T22:47:50
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS5Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6102
- Train Accuracy: 0.6772
- Validation Loss: 0.6331
- Validation Accuracy: 0.6349
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6426 | 0.6693 | 0.6235 | 0.6825 | 0 |
| 0.6319 | 0.6614 | 0.6161 | 0.6825 | 1 |
| 0.6102 | 0.6772 | 0.6331 | 0.6349 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,820
[ [ -0.043243408203125, -0.035858154296875, 0.0147705078125, 0.0030193328857421875, -0.030731201171875, -0.0263824462890625, -0.01153564453125, -0.027191162109375, 0.0167694091796875, 0.00849151611328125, -0.05499267578125, -0.041656494140625, -0.050262451171875, ...
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_3
2023-05-30T22:56:10.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T22:55:34
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS5Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5535
- Train Accuracy: 0.7460
- Validation Loss: 0.4840
- Validation Accuracy: 0.7857
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5752 | 0.7401 | 0.5166 | 0.7857 | 0 |
| 0.5614 | 0.7460 | 0.5229 | 0.7857 | 1 |
| 0.5535 | 0.7460 | 0.4840 | 0.7857 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,820
[ [ -0.043853759765625, -0.036895751953125, 0.0162353515625, 0.003082275390625, -0.0313720703125, -0.0274505615234375, -0.012298583984375, -0.02734375, 0.0160980224609375, 0.0094451904296875, -0.0535888671875, -0.04327392578125, -0.049102783203125, -0.0243225097...
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_4
2023-05-30T23:05:31.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T23:04:53
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS5Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4740
- Train Accuracy: 0.8111
- Validation Loss: 0.4501
- Validation Accuracy: 0.8238
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5092 | 0.7889 | 0.4630 | 0.8238 | 0 |
| 0.4821 | 0.8111 | 0.4603 | 0.8238 | 1 |
| 0.4740 | 0.8111 | 0.4501 | 0.8238 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,820
[ [ -0.0452880859375, -0.036163330078125, 0.015899658203125, 0.0031070709228515625, -0.0305938720703125, -0.0260772705078125, -0.0125579833984375, -0.027557373046875, 0.01690673828125, 0.009185791015625, -0.05474853515625, -0.04315185546875, -0.048828125, -0.023...
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_Half
2023-05-30T23:08:49.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T23:08:14
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6711
- Train Accuracy: 0.5630
- Validation Loss: 0.6026
- Validation Accuracy: 0.6889
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6811 | 0.6000 | 0.6098 | 0.6889 | 0 |
| 0.6291 | 0.6815 | 0.6062 | 0.6444 | 1 |
| 0.6711 | 0.5630 | 0.6026 | 0.6889 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,828
[ [ -0.044219970703125, -0.038970947265625, 0.012939453125, 0.0030670166015625, -0.0303802490234375, -0.0252685546875, -0.01067352294921875, -0.025848388671875, 0.0225677490234375, 0.008026123046875, -0.056427001953125, -0.041107177734375, -0.050079345703125, -0...
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_1
2023-05-30T23:12:21.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T23:11:44
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6684
- Train Accuracy: 0.5801
- Validation Loss: 0.6749
- Validation Accuracy: 0.6000
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7113 | 0.4365 | 0.6936 | 0.5 | 0 |
| 0.6836 | 0.5635 | 0.6832 | 0.5167 | 1 |
| 0.6684 | 0.5801 | 0.6749 | 0.6000 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.04498291015625, -0.038604736328125, 0.01442718505859375, 0.0037479400634765625, -0.02978515625, -0.0273895263671875, -0.01239776611328125, -0.02532958984375, 0.0207061767578125, 0.0093536376953125, -0.05438232421875, -0.039642333984375, -0.050048828125, -...
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_2
2023-05-30T23:17:11.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T23:16:36
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6093
- Train Accuracy: 0.6605
- Validation Loss: 0.6365
- Validation Accuracy: 0.6923
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6664 | 0.6162 | 0.6270 | 0.6484 | 0 |
| 0.6266 | 0.6458 | 0.6179 | 0.6703 | 1 |
| 0.6093 | 0.6605 | 0.6365 | 0.6923 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.043365478515625, -0.038482666015625, 0.014190673828125, 0.00334930419921875, -0.030548095703125, -0.0268402099609375, -0.01302337646484375, -0.026153564453125, 0.020111083984375, 0.00896453857421875, -0.054656982421875, -0.038848876953125, -0.050689697265625,...
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_3
2023-05-30T23:23:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T23:22:27
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5602
- Train Accuracy: 0.7541
- Validation Loss: 0.5507
- Validation Accuracy: 0.7603
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6032 | 0.7293 | 0.5528 | 0.7603 | 0 |
| 0.5601 | 0.7486 | 0.5334 | 0.7603 | 1 |
| 0.5602 | 0.7541 | 0.5507 | 0.7603 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.045013427734375, -0.038726806640625, 0.0163726806640625, 0.004055023193359375, -0.0303802490234375, -0.028350830078125, -0.01285552978515625, -0.0261383056640625, 0.0184783935546875, 0.01047515869140625, -0.052459716796875, -0.040924072265625, -0.049285888671...
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_4
2023-05-30T23:30:10.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T23:29:34
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5157
- Train Accuracy: 0.7837
- Validation Loss: 0.4183
- Validation Accuracy: 0.8543
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5389 | 0.7837 | 0.4137 | 0.8543 | 0 |
| 0.5116 | 0.7837 | 0.4319 | 0.8543 | 1 |
| 0.5157 | 0.7837 | 0.4183 | 0.8543 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.0443115234375, -0.03790283203125, 0.0163421630859375, 0.003589630126953125, -0.030487060546875, -0.0263519287109375, -0.0135955810546875, -0.0267486572265625, 0.01971435546875, 0.0100860595703125, -0.054046630859375, -0.04132080078125, -0.04937744140625, ...
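A hedged usage sketch for checkpoints like the one in this record: the repo id is copied from the record above, and it is an assumption that the repository is public and loadable; `framework="tf"` matches the `tf` tag these records carry.

```python
from transformers import pipeline

# Text-classification inference with one of the listed checkpoints.
classifier = pipeline(
    "text-classification",
    model="YakovElm/IntelDAOS10Classic_Balance_DATA_ratio_4",  # from this record
    framework="tf",
)
print(classifier("The nightly build fails after the storage upgrade."))
```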
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_Half
2023-05-30T23:33:01.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T23:32:20
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5689
- Train Accuracy: 0.7404
- Validation Loss: 0.6286
- Validation Accuracy: 0.6571
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6473 | 0.6442 | 0.5794 | 0.7429 | 0 |
| 0.6371 | 0.6442 | 0.5616 | 0.7714 | 1 |
| 0.5689 | 0.7404 | 0.6286 | 0.6571 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,828
[ [ -0.04425048828125, -0.0401611328125, 0.01294708251953125, 0.0036525726318359375, -0.0305633544921875, -0.0255584716796875, -0.009979248046875, -0.02496337890625, 0.0206451416015625, 0.00846099853515625, -0.056793212890625, -0.041900634765625, -0.050323486328125,...
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_1
2023-05-30T23:36:05.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T23:35:31
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6359
- Train Accuracy: 0.7050
- Validation Loss: 0.6330
- Validation Accuracy: 0.6522
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7256 | 0.5036 | 0.6663 | 0.6522 | 0 |
| 0.6591 | 0.6619 | 0.6598 | 0.6087 | 1 |
| 0.6359 | 0.7050 | 0.6330 | 0.6522 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.04522705078125, -0.03936767578125, 0.01415252685546875, 0.003627777099609375, -0.0291748046875, -0.027374267578125, -0.0120391845703125, -0.0254669189453125, 0.019561767578125, 0.00925445556640625, -0.05517578125, -0.042144775390625, -0.048980712890625, -...
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_2
2023-05-30T23:40:07.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T23:39:25
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5300
- Train Accuracy: 0.7548
- Validation Loss: 0.5816
- Validation Accuracy: 0.6143
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6203 | 0.6779 | 0.5805 | 0.7000 | 0 |
| 0.5601 | 0.7308 | 0.6075 | 0.6429 | 1 |
| 0.5300 | 0.7548 | 0.5816 | 0.6143 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.0435791015625, -0.0396728515625, 0.01377105712890625, 0.004962921142578125, -0.029998779296875, -0.0283355712890625, -0.0129852294921875, -0.0259246826171875, 0.0181121826171875, 0.00908660888671875, -0.05419921875, -0.039642333984375, -0.049591064453125, ...
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_3
2023-05-30T23:44:57.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-30T23:44:23
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5163
- Train Accuracy: 0.7734
- Validation Loss: 0.4785
- Validation Accuracy: 0.7957
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5689 | 0.7338 | 0.5002 | 0.7957 | 0 |
| 0.5411 | 0.7590 | 0.4894 | 0.7957 | 1 |
| 0.5163 | 0.7734 | 0.4785 | 0.7957 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.044281005859375, -0.040130615234375, 0.0159454345703125, 0.00439453125, -0.030975341796875, -0.0279083251953125, -0.01248931884765625, -0.02618408203125, 0.018218994140625, 0.01081085205078125, -0.053497314453125, -0.0413818359375, -0.049041748046875, -0....
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_4
2023-05-30T23:51:00.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-30T23:50:25
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4989
- Train Accuracy: 0.8017
- Validation Loss: 0.4107
- Validation Accuracy: 0.8621
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5433 | 0.7759 | 0.4230 | 0.8621 | 0 |
| 0.5032 | 0.8017 | 0.4035 | 0.8621 | 1 |
| 0.4989 | 0.8017 | 0.4107 | 0.8621 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.044708251953125, -0.038787841796875, 0.0158233642578125, 0.005126953125, -0.031341552734375, -0.02716064453125, -0.01241302490234375, -0.02545166015625, 0.0183258056640625, 0.010009765625, -0.054901123046875, -0.041717529296875, -0.048309326171875, -0.024...
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_Half
2023-05-30T23:53:29.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-30T23:52:55
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6178
- Train Accuracy: 0.6543
- Validation Loss: 0.5301
- Validation Accuracy: 0.7778
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6696 | 0.5926 | 0.5580 | 0.7778 | 0 |
| 0.6485 | 0.6543 | 0.5306 | 0.7778 | 1 |
| 0.6178 | 0.6543 | 0.5301 | 0.7778 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,828
[ [ -0.04412841796875, -0.038848876953125, 0.01367950439453125, 0.0039520263671875, -0.030731201171875, -0.025543212890625, -0.009429931640625, -0.0256500244140625, 0.0216217041015625, 0.009002685546875, -0.057891845703125, -0.04144287109375, -0.050048828125, -0...
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_1
2023-05-30T23:56:07.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-30T23:55:31
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5707
- Train Accuracy: 0.6972
- Validation Loss: 0.5805
- Validation Accuracy: 0.75
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6702 | 0.5229 | 0.6593 | 0.5556 | 0 |
| 0.6356 | 0.6147 | 0.6616 | 0.5833 | 1 |
| 0.5707 | 0.6972 | 0.5805 | 0.75 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,820
[ [ -0.044769287109375, -0.0384521484375, 0.01448822021484375, 0.0037078857421875, -0.030120849609375, -0.02740478515625, -0.01215362548828125, -0.0256195068359375, 0.02032470703125, 0.00955963134765625, -0.0557861328125, -0.04107666015625, -0.049468994140625, -...
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_2
2023-05-30T23:59:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-30T23:58:59
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5250
- Train Accuracy: 0.7730
- Validation Loss: 0.5228
- Validation Accuracy: 0.7636
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6488 | 0.6258 | 0.5223 | 0.7455 | 0 |
| 0.5682 | 0.6871 | 0.5957 | 0.6909 | 1 |
| 0.5250 | 0.7730 | 0.5228 | 0.7636 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.043792724609375, -0.037933349609375, 0.01424407958984375, 0.004360198974609375, -0.0308990478515625, -0.027191162109375, -0.0119171142578125, -0.0261993408203125, 0.0189056396484375, 0.00920867919921875, -0.0550537109375, -0.038299560546875, -0.04995727539062...
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_3
2023-05-31T00:03:40.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T00:03:04
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4854
- Train Accuracy: 0.7661
- Validation Loss: 0.5602
- Validation Accuracy: 0.7123
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5905 | 0.7615 | 0.5971 | 0.6986 | 0 |
| 0.5308 | 0.7615 | 0.6242 | 0.6986 | 1 |
| 0.4854 | 0.7661 | 0.5602 | 0.7123 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.043701171875, -0.038909912109375, 0.0168304443359375, 0.00457000732421875, -0.0302581787109375, -0.0276336669921875, -0.01190185546875, -0.026397705078125, 0.018402099609375, 0.01065826416015625, -0.05438232421875, -0.041259765625, -0.049072265625, -0.025...
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_4
2023-05-31T00:08:31.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T00:07:55
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4344
- Train Accuracy: 0.8095
- Validation Loss: 0.5350
- Validation Accuracy: 0.7692
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5505 | 0.7399 | 0.5835 | 0.7473 | 0 |
| 0.4818 | 0.8059 | 0.5470 | 0.7473 | 1 |
| 0.4344 | 0.8095 | 0.5350 | 0.7692 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,822
[ [ -0.044677734375, -0.037872314453125, 0.0166015625, 0.004062652587890625, -0.0303802490234375, -0.026580810546875, -0.01239013671875, -0.0263519287109375, 0.01910400390625, 0.0103607177734375, -0.05560302734375, -0.041595458984375, -0.0482177734375, -0.024353...
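For callers who need raw scores rather than pipeline output, a lower-level sketch: tokenize, run the TF model, and take the arg-max label id from the logits. That this repo ships its own tokenizer is an assumption; `bert-base-uncased` would be the natural fallback given the base model these cards name.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

repo = "YakovElm/IntelDAOS20Classic_Balance_DATA_ratio_4"  # from this record
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumption: a tokenizer was pushed with the model
model = TFBertForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Issue resolution is taking longer than expected.", return_tensors="tf")
logits = model(**inputs).logits               # shape (1, num_labels)
label_id = int(tf.argmax(logits, axis=-1)[0])  # arg-max class index
print(label_id, tf.nn.softmax(logits).numpy())
```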
YakovElm/Jira5Classic_Balance_DATA_ratio_1
2023-05-31T00:26:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira5Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T00:25:27
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira5Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5531
- Train Accuracy: 0.7171
- Validation Loss: 0.5869
- Validation Accuracy: 0.6697
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6711 | 0.5780 | 0.6148 | 0.6560 | 0 |
| 0.5881 | 0.6713 | 0.5785 | 0.6789 | 1 |
| 0.5531 | 0.7171 | 0.5869 | 0.6697 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,810
[ [ -0.03729248046875, -0.03759765625, 0.01248931884765625, 0.00412750244140625, -0.03131103515625, -0.0228424072265625, -0.0085296630859375, -0.0229949951171875, 0.021453857421875, 0.0102081298828125, -0.052825927734375, -0.042236328125, -0.048187255859375, -0....
YakovElm/Jira10Classic_Balance_DATA_ratio_Half
2023-05-31T00:33:54.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira10Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T00:33:17
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira10Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4887
- Train Accuracy: 0.7915
- Validation Loss: 0.4790
- Validation Accuracy: 0.8153
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6094 | 0.7085 | 0.5579 | 0.7962 | 0 |
| 0.5428 | 0.7532 | 0.5758 | 0.6879 | 1 |
| 0.4887 | 0.7915 | 0.4790 | 0.8153 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.035919189453125, -0.04180908203125, 0.01104736328125, 0.0036411285400390625, -0.0312347412109375, -0.0216217041015625, -0.0087890625, -0.0219573974609375, 0.0262298583984375, 0.009979248046875, -0.052764892578125, -0.038818359375, -0.050018310546875, -0.0...
YakovElm/Jira10Classic_Balance_DATA_ratio_1
2023-05-31T00:43:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira10Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T00:42:58
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira10Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4678
- Train Accuracy: 0.7863
- Validation Loss: 0.6591
- Validation Accuracy: 0.7081
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6739 | 0.5726 | 0.6130 | 0.6555 | 0 |
| 0.5723 | 0.7033 | 0.5917 | 0.6746 | 1 |
| 0.4678 | 0.7863 | 0.6591 | 0.7081 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.03765869140625, -0.04046630859375, 0.01229095458984375, 0.0038852691650390625, -0.0308685302734375, -0.02423095703125, -0.01012420654296875, -0.0219879150390625, 0.0248870849609375, 0.01097869873046875, -0.05181884765625, -0.038848876953125, -0.04937744140625...
YakovElm/Jira10Classic_Balance_DATA_ratio_2
2023-05-31T00:57:19.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira10Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T00:56:44
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira10Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4670
- Train Accuracy: 0.7906
- Validation Loss: 0.5642
- Validation Accuracy: 0.7484
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6305 | 0.6429 | 0.5566 | 0.7197 | 0 |
| 0.5610 | 0.6950 | 0.4968 | 0.7293 | 1 |
| 0.4670 | 0.7906 | 0.5642 | 0.7484 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.035797119140625, -0.042205810546875, 0.01209259033203125, 0.0052337646484375, -0.03106689453125, -0.0243072509765625, -0.0112152099609375, -0.022430419921875, 0.02362060546875, 0.011016845703125, -0.0506591796875, -0.03692626953125, -0.04962158203125, -0....
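The cards in this block all describe the same recipe: fine-tune `bert-base-uncased` for 3 epochs in float32 with the Adam configuration shown earlier. A minimal reconstruction sketch under those assumptions; the toy dataset below only stands in for the undisclosed training data the cards call an "unknown dataset".

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy stand-in for the undisclosed dataset: two labelled examples.
texts, labels = ["first example issue", "second example issue"], [0, 1]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")
train_ds = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-05, clipnorm=1.0)
model.compile(optimizer=optimizer)  # the model supplies its own classification loss
model.fit(train_ds, epochs=3)       # the cards log epochs 0 through 2
```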
YakovElm/Jira15Classic_Balance_DATA_ratio_Half
2023-05-31T01:04:51.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira15Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T01:04:14
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira15Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4481
- Train Accuracy: 0.7911
- Validation Loss: 0.5020
- Validation Accuracy: 0.8133
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6042 | 0.6689 | 0.5017 | 0.7733 | 0 |
| 0.5242 | 0.7733 | 0.4695 | 0.8133 | 1 |
| 0.4481 | 0.7911 | 0.5020 | 0.8133 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.037628173828125, -0.043304443359375, 0.01012420654296875, 0.005512237548828125, -0.032012939453125, -0.0220947265625, -0.00827789306640625, -0.0220184326171875, 0.0245819091796875, 0.01071929931640625, -0.054656982421875, -0.040313720703125, -0.04898071289062...
YakovElm/Jira15Classic_Balance_DATA_ratio_1
2023-05-31T01:14:01.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira15Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T01:13:26
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira15Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4506
- Train Accuracy: 0.7967
- Validation Loss: 0.6458
- Validation Accuracy: 0.7250
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6588 | 0.6050 | 0.5779 | 0.7050 | 0 |
| 0.5464 | 0.7267 | 0.5502 | 0.75 | 1 |
| 0.4506 | 0.7967 | 0.6458 | 0.7250 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.0382080078125, -0.042999267578125, 0.01190185546875, 0.005352020263671875, -0.0302886962890625, -0.0255126953125, -0.010528564453125, -0.021820068359375, 0.02301025390625, 0.01081085205078125, -0.0531005859375, -0.040435791015625, -0.048553466796875, -0.0...
YakovElm/Jira15Classic_Balance_DATA_ratio_2
2023-05-31T01:27:02.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira15Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T01:26:27
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira15Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3906
- Train Accuracy: 0.8289
- Validation Loss: 0.4315
- Validation Accuracy: 0.8033
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5956 | 0.6822 | 0.5581 | 0.6800 | 0 |
| 0.5190 | 0.7433 | 0.4423 | 0.7900 | 1 |
| 0.3906 | 0.8289 | 0.4315 | 0.8033 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.037261962890625, -0.04266357421875, 0.011474609375, 0.00566864013671875, -0.031463623046875, -0.0245361328125, -0.01039886474609375, -0.0223388671875, 0.021270751953125, 0.011016845703125, -0.052490234375, -0.037384033203125, -0.04888916015625, -0.0310974...
YakovElm/Jira20Classic_Balance_DATA_ratio_Half
2023-05-31T01:30:57.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T01:30:21
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5100
- Train Accuracy: 0.7531
- Validation Loss: 0.6473
- Validation Accuracy: 0.6481
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6115 | 0.6914 | 0.7346 | 0.5556 | 0 |
| 0.5589 | 0.7160 | 0.6487 | 0.5741 | 1 |
| 0.5100 | 0.7531 | 0.6473 | 0.6481 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.035980224609375, -0.04266357421875, 0.0110321044921875, 0.004558563232421875, -0.0308380126953125, -0.022064208984375, -0.00839996337890625, -0.02154541015625, 0.02569580078125, 0.01043701171875, -0.0546875, -0.04046630859375, -0.0498046875, -0.0312805175...
YakovElm/Jira20Classic_Balance_DATA_ratio_1
2023-05-31T01:35:10.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T01:34:32
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6347
- Train Accuracy: 0.6713
- Validation Loss: 0.6894
- Validation Accuracy: 0.5139
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7128 | 0.4907 | 0.6885 | 0.5139 | 0 |
| 0.6632 | 0.5972 | 0.6791 | 0.5694 | 1 |
| 0.6347 | 0.6713 | 0.6894 | 0.5139 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.03668212890625, -0.041778564453125, 0.012054443359375, 0.004680633544921875, -0.0307769775390625, -0.02294921875, -0.0103759765625, -0.0215911865234375, 0.0249481201171875, 0.01120758056640625, -0.053680419921875, -0.039794921875, -0.049102783203125, -0.0...
YakovElm/Jira20Classic_Balance_DATA_ratio_2
2023-05-31T01:40:46.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T01:40:07
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6133
- Train Accuracy: 0.6852
- Validation Loss: 0.6297
- Validation Accuracy: 0.6296
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6461 | 0.6481 | 0.6620 | 0.6296 | 0 |
| 0.6100 | 0.6883 | 0.6349 | 0.6296 | 1 |
| 0.6133 | 0.6852 | 0.6297 | 0.6296 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.035430908203125, -0.040802001953125, 0.01226043701171875, 0.006328582763671875, -0.030975341796875, -0.0240020751953125, -0.0093841552734375, -0.0228424072265625, 0.0228729248046875, 0.0114898681640625, -0.05364990234375, -0.038055419921875, -0.04876708984375...
YakovElm/Jira20Classic_Balance_DATA_ratio_3
2023-05-31T01:48:31.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T01:47:57
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3788
- Train Accuracy: 0.8472
- Validation Loss: 0.4907
- Validation Accuracy: 0.7708
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5380 | 0.7569 | 0.5530 | 0.7153 | 0 |
| 0.4505 | 0.8218 | 0.5012 | 0.7708 | 1 |
| 0.3788 | 0.8472 | 0.4907 | 0.7708 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.036773681640625, -0.0421142578125, 0.01430511474609375, 0.005504608154296875, -0.0312042236328125, -0.025390625, -0.0106353759765625, -0.0225677490234375, 0.022216796875, 0.0124359130859375, -0.051422119140625, -0.04046630859375, -0.04815673828125, -0.029...
YakovElm/Jira20Classic_Balance_DATA_ratio_4
2023-05-31T01:56:58.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T01:56:19
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4387
- Train Accuracy: 0.7981
- Validation Loss: 0.3830
- Validation Accuracy: 0.8278
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5397 | 0.7537 | 0.4603 | 0.8278 | 0 |
| 0.4846 | 0.7981 | 0.4194 | 0.8278 | 1 |
| 0.4387 | 0.7981 | 0.3830 | 0.8278 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.03704833984375, -0.041107177734375, 0.0148468017578125, 0.005390167236328125, -0.0305938720703125, -0.023468017578125, -0.01068878173828125, -0.0230712890625, 0.0228271484375, 0.0122833251953125, -0.05255126953125, -0.040496826171875, -0.04754638671875, -...
YakovElm/MariaDB5Classic_Balance_DATA_ratio_1
2023-05-31T02:06:39.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T02:05:49
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6207
- Train Accuracy: 0.6535
- Validation Loss: 0.5631
- Validation Accuracy: 0.7412
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6867 | 0.5157 | 0.6622 | 0.6471 | 0 |
| 0.6667 | 0.5787 | 0.6071 | 0.7176 | 1 |
| 0.6207 | 0.6535 | 0.5631 | 0.7412 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,816
[ [ -0.04266357421875, -0.040191650390625, 0.01427459716796875, 0.004344940185546875, -0.03045654296875, -0.0298919677734375, -0.0062408447265625, -0.0228271484375, 0.0209197998046875, 0.01444244384765625, -0.059417724609375, -0.046234130859375, -0.0450439453125, ...
YakovElm/MariaDB5Classic_Balance_DATA_ratio_2
2023-05-31T02:13:01.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T02:12:26
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5732
- Train Accuracy: 0.6903
- Validation Loss: 0.6107
- Validation Accuracy: 0.6693
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6708 | 0.6037 | 0.6436 | 0.6378 | 0 |
| 0.6054 | 0.6824 | 0.6241 | 0.6142 | 1 |
| 0.5732 | 0.6903 | 0.6107 | 0.6693 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,816
[ [ -0.041351318359375, -0.040618896484375, 0.01349639892578125, 0.005214691162109375, -0.0313720703125, -0.029937744140625, -0.006824493408203125, -0.0238800048828125, 0.0200653076171875, 0.0148773193359375, -0.05950927734375, -0.044921875, -0.04571533203125, -...
YakovElm/MariaDB5Classic_Balance_DATA_ratio_3
2023-05-31T02:21:04.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T02:20:30
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4818
- Train Accuracy: 0.7657
- Validation Loss: 0.6199
- Validation Accuracy: 0.7118
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5720 | 0.7382 | 0.5931 | 0.7118 | 0 |
| 0.5061 | 0.7657 | 0.5737 | 0.7118 | 1 |
| 0.4818 | 0.7657 | 0.6199 | 0.7118 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,816
[ [ -0.0423583984375, -0.041046142578125, 0.01546478271484375, 0.004486083984375, -0.03192138671875, -0.03021240234375, -0.00714874267578125, -0.0245513916015625, 0.01983642578125, 0.01593017578125, -0.05816650390625, -0.04632568359375, -0.04583740234375, -0.024...
YakovElm/MariaDB5Classic_Balance_DATA_ratio_4
2023-05-31T02:31:17.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T02:30:05
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4175
- Train Accuracy: 0.8302
- Validation Loss: 0.4497
- Validation Accuracy: 0.7877
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4696 | 0.8239 | 0.5012 | 0.7877 | 0 |
| 0.4243 | 0.8302 | 0.5060 | 0.7877 | 1 |
| 0.4175 | 0.8302 | 0.4497 | 0.7877 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,816
[ [ -0.042327880859375, -0.0406494140625, 0.015777587890625, 0.005199432373046875, -0.031585693359375, -0.0298614501953125, -0.007175445556640625, -0.0242156982421875, 0.01983642578125, 0.0152587890625, -0.058807373046875, -0.04736328125, -0.045379638671875, -0....
YakovElm/MariaDB10Classic_Balance_DATA_ratio_Half
2023-05-31T02:34:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T02:34:21
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5948
- Train Accuracy: 0.6828
- Validation Loss: 0.5626
- Validation Accuracy: 0.6735
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7471 | 0.5310 | 0.6682 | 0.5918 | 0 |
| 0.6110 | 0.6690 | 0.6302 | 0.6122 | 1 |
| 0.5948 | 0.6828 | 0.5626 | 0.6735 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,824
[ [ -0.0413818359375, -0.0423583984375, 0.012908935546875, 0.00440216064453125, -0.031463623046875, -0.02838134765625, -0.004970550537109375, -0.0222015380859375, 0.024505615234375, 0.014190673828125, -0.06048583984375, -0.0445556640625, -0.046142578125, -0.0278...
Ibrahim-Alam/finetuning-distilbert-base-uncased-on-sst2
2023-05-31T02:41:31.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:sst2", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
Ibrahim-Alam
null
null
Ibrahim-Alam/finetuning-distilbert-base-uncased-on-sst2
0
2
transformers
2023-05-31T02:35:41
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-on-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: sst2
      type: sst2
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9013761467889908
    - name: F1
      type: f1
      value: 0.9040178571428571
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-distilbert-base-uncased-on-sst2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2883
- Accuracy: 0.9014
- F1: 0.9040

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,570
[ [ -0.016693115234375, -0.047515869140625, 0.0171356201171875, 0.01448822021484375, -0.045745849609375, -0.0117034912109375, -0.01654052734375, -0.00327301025390625, -0.00002092123031616211, 0.01727294921875, -0.041748046875, -0.0357666015625, -0.0631103515625, ...
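Unlike the Keras cards, this Trainer card names its dataset and metrics but gives no usage snippet; the `text-classification` pipeline is the shortest way to exercise such a checkpoint. A minimal sketch; note the card does not show an `id2label` mapping, so the pipeline may return generic `LABEL_0`/`LABEL_1` names rather than negative/positive:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Ibrahim-Alam/finetuning-distilbert-base-uncased-on-sst2",
)

# SST-2 is binary sentiment; labels may surface as LABEL_0 / LABEL_1.
print(clf("A gripping, beautifully shot film."))
print(clf("Two hours I will never get back."))
```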
YakovElm/MariaDB10Classic_Balance_DATA_ratio_1
2023-05-31T02:38:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T02:38:20
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5981
- Train Accuracy: 0.7010
- Validation Loss: 0.6804
- Validation Accuracy: 0.5692
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6864 | 0.5464 | 0.6699 | 0.5846 | 0 |
| 0.6274 | 0.7010 | 0.6354 | 0.6923 | 1 |
| 0.5981 | 0.7010 | 0.6804 | 0.5692 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.0423583984375, -0.042083740234375, 0.01399993896484375, 0.004550933837890625, -0.030181884765625, -0.0301055908203125, -0.00745391845703125, -0.021636962890625, 0.02288818359375, 0.0149078369140625, -0.0592041015625, -0.043701171875, -0.04638671875, -0.02...
TokyoNights/dqn-SpaceInvadersNoFrameskip-v4
2023-05-31T02:42:32.000Z
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
TokyoNights
null
null
TokyoNights/dqn-SpaceInvadersNoFrameskip-v4
0
2
stable-baselines3
2023-05-31T02:41:58
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 582.00 +/- 131.63
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TokyoNights -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TokyoNights -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga TokyoNights
```

## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
2,768
[ [ -0.042327880859375, -0.039886474609375, 0.021514892578125, 0.024200439453125, -0.0118865966796875, -0.01494598388671875, 0.0087127685546875, -0.01371002197265625, 0.01488494873046875, 0.0217742919921875, -0.07366943359375, -0.035430908203125, -0.0252227783203125...
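The RL Zoo CLI commands in the card cover the common path; for programmatic use, the same checkpoint can be pulled with `huggingface_sb3` and run under the wrappers the hyperparameters specify (AtariWrapper plus a 4-frame stack). A sketch under two assumptions: the zip inside the repo follows the usual RL Zoo `<algo>-<env>.zip` naming, and `buffer_size=1` is a load-time override only to avoid allocating the 100000-transition replay buffer for inference:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed filename; the card does not list the files in the repo.
checkpoint = load_from_hub(
    repo_id="TokyoNights/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Mirror the training setup: AtariWrapper via make_atari_env, then frame_stack=4.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

# buffer_size=1 overrides the saved 100000 so no replay memory is allocated.
model = DQN.load(checkpoint, env=env, buffer_size=1)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```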
YakovElm/MariaDB10Classic_Balance_DATA_ratio_2
2023-05-31T02:43:58.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T02:43:21
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5036
- Train Accuracy: 0.7938
- Validation Loss: 0.6325
- Validation Accuracy: 0.6701
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6291 | 0.6735 | 0.6125 | 0.6804 | 0 |
| 0.5694 | 0.7182 | 0.6580 | 0.6598 | 1 |
| 0.5036 | 0.7938 | 0.6325 | 0.6701 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.0411376953125, -0.04180908203125, 0.0138397216796875, 0.004779815673828125, -0.03094482421875, -0.0298614501953125, -0.007904052734375, -0.022796630859375, 0.0216217041015625, 0.01470947265625, -0.057708740234375, -0.04315185546875, -0.04705810546875, -0....
YakovElm/MariaDB10Classic_Balance_DATA_ratio_3
2023-05-31T02:50:25.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T02:49:45
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4529
- Train Accuracy: 0.7861
- Validation Loss: 0.4950
- Validation Accuracy: 0.7615
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5801 | 0.7268 | 0.5206 | 0.7615 | 0 |
| 0.5010 | 0.7809 | 0.5068 | 0.7615 | 1 |
| 0.4529 | 0.7861 | 0.4950 | 0.7615 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.0418701171875, -0.042083740234375, 0.0154266357421875, 0.005695343017578125, -0.030609130859375, -0.031494140625, -0.00830841064453125, -0.0231781005859375, 0.021270751953125, 0.01531219482421875, -0.05682373046875, -0.04498291015625, -0.046173095703125, ...
YakovElm/MariaDB10Classic_Balance_DATA_ratio_4
2023-05-31T02:58:07.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T02:57:33
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4451
- Train Accuracy: 0.8004
- Validation Loss: 0.5120
- Validation Accuracy: 0.7531
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5466 | 0.7490 | 0.5117 | 0.7963 | 0 |
| 0.4802 | 0.7901 | 0.4927 | 0.8210 | 1 |
| 0.4451 | 0.8004 | 0.5120 | 0.7531 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.042724609375, -0.041717529296875, 0.0157012939453125, 0.00482177734375, -0.0306243896484375, -0.029937744140625, -0.0081024169921875, -0.0234222412109375, 0.0214385986328125, 0.0159149169921875, -0.058135986328125, -0.044525146484375, -0.045867919921875, ...
YakovElm/MariaDB15Classic_Balance_DATA_ratio_Half
2023-05-31T03:01:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T03:00:39
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5834
- Train Accuracy: 0.7143
- Validation Loss: 0.5577
- Validation Accuracy: 0.8000
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6561 | 0.5966 | 0.6086 | 0.7000 | 0 |
| 0.5959 | 0.7059 | 0.5717 | 0.75 | 1 |
| 0.5834 | 0.7143 | 0.5577 | 0.8000 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,824
[ [ -0.042510986328125, -0.04278564453125, 0.0124359130859375, 0.006114959716796875, -0.0312042236328125, -0.0287628173828125, -0.0064697265625, -0.0226898193359375, 0.0211029052734375, 0.01375579833984375, -0.0592041015625, -0.044647216796875, -0.047149658203125, ...
YakovElm/MariaDB15Classic_Balance_DATA_ratio_1
2023-05-31T03:04:38.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T03:04:03
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6360
- Train Accuracy: 0.6352
- Validation Loss: 0.6069
- Validation Accuracy: 0.6792
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6965 | 0.4843 | 0.6664 | 0.6981 | 0 |
| 0.6670 | 0.6101 | 0.6446 | 0.6415 | 1 |
| 0.6360 | 0.6352 | 0.6069 | 0.6792 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.04266357421875, -0.043121337890625, 0.0136566162109375, 0.005340576171875, -0.0303497314453125, -0.029693603515625, -0.0087890625, -0.022003173828125, 0.0206756591796875, 0.0140228271484375, -0.05853271484375, -0.04345703125, -0.047393798828125, -0.026916...
YakovElm/MariaDB15Classic_Balance_DATA_ratio_2
2023-05-31T03:08:59.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T03:08:23
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5283
- Train Accuracy: 0.7448
- Validation Loss: 0.4299
- Validation Accuracy: 0.8125
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6389 | 0.6569 | 0.5386 | 0.7375 | 0 |
| 0.5884 | 0.6569 | 0.4752 | 0.7875 | 1 |
| 0.5283 | 0.7448 | 0.4299 | 0.8125 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.04241943359375, -0.0416259765625, 0.01348876953125, 0.00597381591796875, -0.031097412109375, -0.02996826171875, -0.00875091552734375, -0.0232696533203125, 0.0194091796875, 0.0138702392578125, -0.057098388671875, -0.042327880859375, -0.0472412109375, -0.02...
markroot/my-test-model
2023-05-31T03:12:23.000Z
[ "transformers", "pytorch", "tf", "bert", "text-classification", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
text-classification
markroot
null
null
markroot/my-test-model
0
2
transformers
2023-05-31T03:11:22
---
tags:
- generated_from_keras_callback
model-index:
- name: my-test-model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# my-test-model

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
862
[ [ -0.04058837890625, -0.0416259765625, 0.0302886962890625, -0.00196075439453125, -0.039306640625, -0.03497314453125, 0.0004494190216064453, -0.0213470458984375, 0.0031528472900390625, 0.0333251953125, -0.046051025390625, -0.036102294921875, -0.062286376953125, ...
YakovElm/MariaDB15Classic_Balance_DATA_ratio_3
2023-05-31T03:14:27.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T03:13:48
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4251
- Train Accuracy: 0.7868
- Validation Loss: 0.6687
- Validation Accuracy: 0.6792
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5416 | 0.7618 | 0.6025 | 0.6792 | 0 |
| 0.4727 | 0.7806 | 0.6428 | 0.6792 | 1 |
| 0.4251 | 0.7868 | 0.6687 | 0.6792 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.0423583984375, -0.043212890625, 0.015472412109375, 0.005863189697265625, -0.03106689453125, -0.031158447265625, -0.00919342041015625, -0.0236358642578125, 0.0188751220703125, 0.01500701904296875, -0.05621337890625, -0.044677734375, -0.04742431640625, -0.0...
YakovElm/MariaDB15Classic_Balance_DATA_ratio_4
2023-05-31T03:20:58.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T03:20:24
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4238
- Train Accuracy: 0.7995
- Validation Loss: 0.3912
- Validation Accuracy: 0.8346
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5240 | 0.7845 | 0.4230 | 0.8346 | 0 |
| 0.4713 | 0.7945 | 0.4084 | 0.8346 | 1 |
| 0.4238 | 0.7995 | 0.3912 | 0.8346 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.04296875, -0.041229248046875, 0.01507568359375, 0.00592803955078125, -0.0310516357421875, -0.0294342041015625, -0.00977325439453125, -0.0232391357421875, 0.019195556640625, 0.0146331787109375, -0.057098388671875, -0.04473876953125, -0.0469970703125, -0.02...
YakovElm/MariaDB20Classic_Balance_DATA_ratio_Half
2023-05-31T03:23:53.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T03:23:16
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB20Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6302
- Train Accuracy: 0.6762
- Validation Loss: 0.5937
- Validation Accuracy: 0.7429
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6763 | 0.5524 | 0.6207 | 0.7429 | 0 |
| 0.6582 | 0.6190 | 0.6358 | 0.7429 | 1 |
| 0.6302 | 0.6762 | 0.5937 | 0.7429 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,824
[ [ -0.041412353515625, -0.0428466796875, 0.01422882080078125, 0.00437164306640625, -0.032073974609375, -0.0291290283203125, -0.00439453125, -0.021942138671875, 0.0241546630859375, 0.01543426513671875, -0.061676025390625, -0.046844482421875, -0.04595947265625, -...
YakovElm/MariaDB20Classic_Balance_DATA_ratio_1
2023-05-31T03:26:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T03:26:19
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB20Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5688
- Train Accuracy: 0.7071
- Validation Loss: 0.5979
- Validation Accuracy: 0.6809
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6930 | 0.5214 | 0.6401 | 0.6596 | 0 |
| 0.6195 | 0.7000 | 0.6279 | 0.5957 | 1 |
| 0.5688 | 0.7071 | 0.5979 | 0.6809 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.0419921875, -0.04302978515625, 0.01493072509765625, 0.0040130615234375, -0.03082275390625, -0.03094482421875, -0.0066070556640625, -0.022308349609375, 0.0237579345703125, 0.015869140625, -0.05999755859375, -0.045654296875, -0.046295166015625, -0.026870727...
YakovElm/MariaDB20Classic_Balance_DATA_ratio_2
2023-05-31T03:30:49.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T03:30:15
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB20Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5870
- Train Accuracy: 0.6714
- Validation Loss: 0.5487
- Validation Accuracy: 0.7143
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6712 | 0.6143 | 0.5982 | 0.7143 | 0 |
| 0.6334 | 0.6476 | 0.5734 | 0.7143 | 1 |
| 0.5870 | 0.6714 | 0.5487 | 0.7143 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.040863037109375, -0.042327880859375, 0.01477813720703125, 0.00493621826171875, -0.032257080078125, -0.029632568359375, -0.00714111328125, -0.0227813720703125, 0.0223846435546875, 0.0156707763671875, -0.059844970703125, -0.043853759765625, -0.046417236328125, ...
YakovElm/MariaDB20Classic_Balance_DATA_ratio_3
2023-05-31T03:35:37.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T03:35:00
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# MariaDB20Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5028
- Train Accuracy: 0.75
- Validation Loss: 0.5287
- Validation Accuracy: 0.7447
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5976 | 0.7071 | 0.5633 | 0.7447 | 0 |
| 0.5556 | 0.75 | 0.5370 | 0.7447 | 1 |
| 0.5028 | 0.75 | 0.5287 | 0.7447 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,816
[ [ -0.04193115234375, -0.04296875, 0.016204833984375, 0.005126953125, -0.03240966796875, -0.03106689453125, -0.00762939453125, -0.0233306884765625, 0.021881103515625, 0.016326904296875, -0.058837890625, -0.046356201171875, -0.045654296875, -0.027008056640625, ...
YakovElm/MariaDB20Classic_Balance_DATA_ratio_4
2023-05-31T03:41:24.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T03:40:50
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# MariaDB20Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3891
- Train Accuracy: 0.8205
- Validation Loss: 0.4660
- Validation Accuracy: 0.7778
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5057 | 0.7949 | 0.4990 | 0.7778 | 0 |
| 0.4474 | 0.8148 | 0.4729 | 0.7778 | 1 |
| 0.3891 | 0.8205 | 0.4660 | 0.7778 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,818
[ [ -0.042083740234375, -0.042083740234375, 0.0167694091796875, 0.005268096923828125, -0.03143310546875, -0.03021240234375, -0.007419586181640625, -0.02325439453125, 0.0217132568359375, 0.0163421630859375, -0.059326171875, -0.04620361328125, -0.04583740234375, -...
YakovElm/Qt5Classic_Balance_DATA_ratio_1
2023-05-31T03:59:58.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T03:59:23
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt5Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt5Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5569
- Train Accuracy: 0.7207
- Validation Loss: 0.6633
- Validation Accuracy: 0.6314
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6783 | 0.5628 | 0.6909 | 0.6017 | 0 |
| 0.6449 | 0.6361 | 0.6829 | 0.6102 | 1 |
| 0.5569 | 0.7207 | 0.6633 | 0.6314 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,806
[ [ -0.03948974609375, -0.029205322265625, 0.0167236328125, 0.00543212890625, -0.03350830078125, -0.023162841796875, -0.001922607421875, -0.0179595947265625, 0.0082550048828125, 0.01084136962890625, -0.054595947265625, -0.04339599609375, -0.04522705078125, -0.02...
Ibrahim-Alam/finetuning-bert-base-uncased-on-sst2
2023-05-31T04:22:35.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:sst2", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
Ibrahim-Alam
null
null
Ibrahim-Alam/finetuning-bert-base-uncased-on-sst2
0
2
transformers
2023-05-31T04:12:55
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
metrics:
- accuracy
- f1
model-index:
- name: finetuning-bert-base-uncased-on-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: sst2
      type: sst2
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.926605504587156
    - name: F1
      type: f1
      value: 0.9285714285714286
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# finetuning-bert-base-uncased-on-sst2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2424
- Accuracy: 0.9266
- F1: 0.9286

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
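A minimal inference sketch for this checkpoint (assumption: the default `LABEL_0`/`LABEL_1` id-to-label mapping of sst2 fine-tunes was not customized; the example sentence and output are illustrative only):

```python
from transformers import pipeline

# Load the fine-tuned sst2 classifier from the Hub.
clf = pipeline("text-classification",
               model="Ibrahim-Alam/finetuning-bert-base-uncased-on-sst2")

print(clf("A touching and beautifully shot film."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]  (LABEL_1 = positive, assumed)
```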
1,545
[ [ -0.023590087890625, -0.043701171875, 0.01328277587890625, 0.01203155517578125, -0.049072265625, -0.0247344970703125, -0.028472900390625, -0.0145263671875, 0.004169464111328125, 0.0253448486328125, -0.049530029296875, -0.03131103515625, -0.051025390625, -0.02...
YakovElm/Qt5Classic_Balance_DATA_ratio_2
2023-05-31T04:14:15.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T04:13:39
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt5Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt5Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5704
- Train Accuracy: 0.6990
- Validation Loss: 0.6007
- Validation Accuracy: 0.6366
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6162 | 0.6736 | 0.6984 | 0.6254 | 0 |
| 0.5881 | 0.6980 | 0.6512 | 0.6366 | 1 |
| 0.5704 | 0.6990 | 0.6007 | 0.6366 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,806
[ [ -0.038238525390625, -0.0302276611328125, 0.0171051025390625, 0.005718231201171875, -0.033660888671875, -0.022979736328125, -0.0024585723876953125, -0.0188446044921875, 0.00681304931640625, 0.01052093505859375, -0.05389404296875, -0.042205810546875, -0.0457153320...
YakovElm/Qt5Classic_Balance_DATA_ratio_3
2023-05-31T04:32:47.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T04:32:12
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt5Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt5Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4434
- Train Accuracy: 0.7877
- Validation Loss: 0.5431
- Validation Accuracy: 0.7125
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5376 | 0.7602 | 0.5287 | 0.7548 | 0 |
| 0.5125 | 0.7616 | 0.5183 | 0.7548 | 1 |
| 0.4434 | 0.7877 | 0.5431 | 0.7125 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,806
[ [ -0.038787841796875, -0.031280517578125, 0.018310546875, 0.005641937255859375, -0.03375244140625, -0.0238037109375, -0.0033702850341796875, -0.0199737548828125, 0.0079498291015625, 0.01149749755859375, -0.052886962890625, -0.04412841796875, -0.045654296875, -...
YakovElm/Qt5Classic_Balance_DATA_ratio_4
2023-05-31T04:55:44.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T04:55:08
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt5Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt5Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4133
- Train Accuracy: 0.8297
- Validation Loss: 0.5084
- Validation Accuracy: 0.8105
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4883 | 0.8009 | 0.4751 | 0.8020 | 0 |
| 0.4483 | 0.8116 | 0.4644 | 0.8020 | 1 |
| 0.4133 | 0.8297 | 0.5084 | 0.8105 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,806
[ [ -0.03985595703125, -0.031524658203125, 0.017852783203125, 0.00586700439453125, -0.0328369140625, -0.0231475830078125, -0.003688812255859375, -0.0193939208984375, 0.0083160400390625, 0.01204681396484375, -0.0543212890625, -0.0438232421875, -0.04534912109375, ...
sadananda/distilbert-base-uncased-finetuned-clinc
2023-05-31T07:12:03.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
sadananda
null
null
sadananda/distilbert-base-uncased-finetuned-clinc
0
2
transformers
2023-05-31T04:56:50
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: validation
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9180645161290323
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
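A minimal sketch of checking this checkpoint against the clinc_oos "plus" validation split named in the card (assumption: the checkpoint's label ids follow the dataset's intent ids, as is usual for Trainer fine-tunes):

```python
from datasets import load_dataset
from transformers import pipeline

# Same dataset config and split as in the model-index above.
ds = load_dataset("clinc_oos", "plus", split="validation")

clf = pipeline("text-classification",
               model="sadananda/distilbert-base-uncased-finetuned-clinc")

# Spot-check a few utterances against their predicted intents.
preds = clf(ds["text"][:8])
for text, pred in zip(ds["text"][:8], preds):
    print(f"{pred['label']:>10}  {text}")
```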
1,932
[ [ -0.03399658203125, -0.041656494140625, 0.01201629638671875, 0.007144927978515625, -0.0271148681640625, -0.02545166015625, -0.012969970703125, -0.00949859619140625, 0.0020160675048828125, 0.021759033203125, -0.04620361328125, -0.0477294921875, -0.058319091796875,...
smarquie/dqn-SpaceInvadersNoFrameskip-v4
2023-05-31T05:01:02.000Z
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
smarquie
null
null
smarquie/dqn-SpaceInvadersNoFrameskip-v4
0
2
stable-baselines3
2023-05-31T05:00:26
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 582.00 +/- 249.58
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga smarquie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga smarquie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga smarquie
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
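As a Python-side alternative to the CLI above, a minimal loading sketch (assumption: the checkpoint file in this repo follows the RL Zoo naming convention `dqn-<env>.zip`):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the zipped checkpoint from the Hub.
checkpoint = load_from_hub(
    repo_id="smarquie/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
model = DQN.load(checkpoint)
```

Note that actually running the agent still requires the same wrapped environment it was trained in (AtariWrapper plus a 4-frame stack, per the hyperparameters above), which `rl_zoo3.enjoy` sets up for you.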
2,759
[ [ -0.043548583984375, -0.03900146484375, 0.01934814453125, 0.0250396728515625, -0.01061248779296875, -0.0181884765625, 0.01038360595703125, -0.01251983642578125, 0.01268768310546875, 0.02227783203125, -0.0718994140625, -0.034515380859375, -0.024993896484375, -...
YakovElm/Qt10Classic_Balance_DATA_ratio_Half
2023-05-31T05:02:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T05:01:39
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt10Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5781
- Train Accuracy: 0.7063
- Validation Loss: 0.5501
- Validation Accuracy: 0.7222
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6592 | 0.5741 | 0.6015 | 0.6746 | 0 |
| 0.6047 | 0.6825 | 0.5738 | 0.7540 | 1 |
| 0.5781 | 0.7063 | 0.5501 | 0.7222 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,814
[ [ -0.03826904296875, -0.03466796875, 0.01513671875, 0.005977630615234375, -0.03265380859375, -0.022186279296875, -0.0025272369384765625, -0.017425537109375, 0.0129547119140625, 0.01021575927734375, -0.05419921875, -0.040557861328125, -0.046539306640625, -0.030...
YakovElm/Qt10Classic_Balance_DATA_ratio_1
2023-05-31T05:09:45.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T05:09:10
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt10Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5377
- Train Accuracy: 0.7599
- Validation Loss: 0.7260
- Validation Accuracy: 0.5833
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6707 | 0.5853 | 0.6418 | 0.6369 | 0 |
| 0.6033 | 0.6687 | 0.5982 | 0.6905 | 1 |
| 0.5377 | 0.7599 | 0.7260 | 0.5833 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.038665771484375, -0.033782958984375, 0.0161590576171875, 0.005931854248046875, -0.032501220703125, -0.0233154296875, -0.0045623779296875, -0.0174407958984375, 0.01314544677734375, 0.01041412353515625, -0.05364990234375, -0.039703369140625, -0.046844482421875,...
p208p2002/gpt2-babi
2023-05-31T08:06:12.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "dataset:facebook/babi_qa", "arxiv:1502.05698", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
p208p2002
null
null
p208p2002/gpt2-babi
0
2
transformers
2023-05-31T05:15:41
---
datasets:
- facebook/babi_qa
---

Fine-tune and evaluate a transformer model on Facebook's bAbI tasks.

> [Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks](https://arxiv.org/abs/1502.05698)

Training Code: [p208p2002/bAbi-tasks-with-transformer-model](https://github.com/p208p2002/bAbi-tasks-with-transformer-model)

| task_no | task_name              | score |
|---------|------------------------|-------|
| qa1     | single-supporting-fact | 100   |
| qa2     | two-supporting-facts   | 99.4  |
| qa3     | three-supporting-facts | 62.0  |
| qa4     | two-arg-relations      | 100   |
| qa5     | three-arg-relations    | 96.0  |
| qa6     | yes-no-questions       | 100   |
| qa7     | counting               | 100   |
| qa8     | lists-sets             | 95.6  |
| qa9     | simple-negation        | 100   |
| qa10    | indefinite-knowledge   | 100   |
| qa11    | basic-coreference      | 100   |
| qa12    | conjunction            | 100   |
| qa13    | compound-coreference   | 100   |
| qa14    | time-reasoning         | 100   |
| qa15    | basic-deduction        | 100   |
| qa16    | basic-induction        | 100   |
| qa17    | positional-reasoning   | 100   |
| qa18    | size-reasoning         | 100   |
| qa19    | path-finding           | 100   |
| qa20    | agents-motivations     | 100   |

```python
# Please use the following input template
INPUT_TEMPLATE = """
Context: {context}
Question: {question}
Answer: {answer}
"""

input_text = INPUT_TEMPLATE.format_map({
    "context": context,
    "question": question,
    "answer": answer,
}).strip()
```
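A minimal generation sketch built on the template above (assumptions: at inference time the `Answer:` field is left empty and completed by the model, and the example context/question are illustrative, not from the evaluation set):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("p208p2002/gpt2-babi")
model = AutoModelForCausalLM.from_pretrained("p208p2002/gpt2-babi")

INPUT_TEMPLATE = """
Context: {context}
Question: {question}
Answer: {answer}
"""

# Leave the answer slot empty so the model fills it in (assumed usage).
prompt = INPUT_TEMPLATE.format_map({
    "context": "Mary moved to the bathroom. John went to the hallway.",
    "question": "Where is Mary?",
    "answer": "",
}).strip()

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5)

# Decode only the newly generated tokens after the prompt.
answer = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                          skip_special_tokens=True)
print(answer)
```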
1,457
[ [ -0.04156494140625, -0.078857421875, 0.0313720703125, 0.01486968994140625, 0.01036834716796875, -0.005504608154296875, -0.00823974609375, 0.0018329620361328125, -0.004589080810546875, 0.02545166015625, -0.060302734375, -0.01239776611328125, -0.041412353515625, ...
YakovElm/Qt10Classic_Balance_DATA_ratio_2
2023-05-31T05:20:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T05:19:39
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt10Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5061
- Train Accuracy: 0.7460
- Validation Loss: 0.5543
- Validation Accuracy: 0.6905
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6105 | 0.6865 | 0.6598 | 0.6429 | 0 |
| 0.5750 | 0.6997 | 0.6021 | 0.6508 | 1 |
| 0.5061 | 0.7460 | 0.5543 | 0.6905 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.037841796875, -0.034271240234375, 0.0158538818359375, 0.005512237548828125, -0.03289794921875, -0.0234527587890625, -0.004878997802734375, -0.0176544189453125, 0.011993408203125, 0.0108489990234375, -0.0531005859375, -0.0382080078125, -0.047332763671875, ...
YakovElm/Qt10Classic_Balance_DATA_ratio_3
2023-05-31T05:34:01.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T05:33:25
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt10Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4685
- Train Accuracy: 0.7708
- Validation Loss: 0.5219
- Validation Accuracy: 0.7440
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5549 | 0.7520 | 0.5697 | 0.7381 | 0 |
| 0.5130 | 0.7510 | 0.5651 | 0.7440 | 1 |
| 0.4685 | 0.7708 | 0.5219 | 0.7440 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.039337158203125, -0.0343017578125, 0.017486572265625, 0.006855010986328125, -0.033050537109375, -0.0252532958984375, -0.0048675537109375, -0.0185089111328125, 0.01099395751953125, 0.01181793212890625, -0.052001953125, -0.040679931640625, -0.045928955078125, ...
Akira10/distilbert-base-uncased-finetuned-clinc
2023-05-31T07:12:13.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
Akira10
null
null
Akira10/distilbert-base-uncased-finetuned-clinc
0
2
transformers
2023-05-31T05:41:10
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: validation
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9145161290322581
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7724
- Accuracy: 0.9145

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.283 | 1.0 | 318 | 3.2777 | 0.7452 |
| 2.6225 | 2.0 | 636 | 1.8655 | 0.8371 |
| 1.5398 | 3.0 | 954 | 1.1527 | 0.8932 |
| 1.012 | 4.0 | 1272 | 0.8558 | 0.9090 |
| 0.7934 | 5.0 | 1590 | 0.7724 | 0.9145 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,932
[ [ -0.035308837890625, -0.041046142578125, 0.01372528076171875, 0.006389617919921875, -0.026824951171875, -0.0248260498046875, -0.01284027099609375, -0.00901031494140625, 0.003376007080078125, 0.022613525390625, -0.0467529296875, -0.04736328125, -0.057525634765625,...
YakovElm/Qt10Classic_Balance_DATA_ratio_4
2023-05-31T05:50:40.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T05:50:06
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt10Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4020
- Train Accuracy: 0.8230
- Validation Loss: 0.5593
- Validation Accuracy: 0.7619
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4751 | 0.8103 | 0.5714 | 0.7571 | 0 |
| 0.4383 | 0.8119 | 0.5485 | 0.7571 | 1 |
| 0.4020 | 0.8230 | 0.5593 | 0.7619 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.039398193359375, -0.033203125, 0.01824951171875, 0.005828857421875, -0.03228759765625, -0.0237579345703125, -0.004913330078125, -0.0181121826171875, 0.01195526123046875, 0.0115203857421875, -0.05279541015625, -0.040557861328125, -0.046051025390625, -0.026...
YakovElm/Qt15Classic_Balance_DATA_ratio_Half
2023-05-31T05:56:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T05:55:28
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt15Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5288
- Train Accuracy: 0.7700
- Validation Loss: 0.6552
- Validation Accuracy: 0.6600
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5977 | 0.6933 | 0.6622 | 0.6100 | 0 |
| 0.5840 | 0.7300 | 0.6653 | 0.6400 | 1 |
| 0.5288 | 0.7700 | 0.6552 | 0.6600 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,814
[ [ -0.039398193359375, -0.0361328125, 0.01364898681640625, 0.007396697998046875, -0.032867431640625, -0.0228424072265625, -0.003662109375, -0.017333984375, 0.01145172119140625, 0.01023101806640625, -0.055816650390625, -0.04156494140625, -0.04608154296875, -0.03...
YakovElm/Qt15Classic_Balance_DATA_ratio_1
2023-05-31T06:02:32.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T06:01:37
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt15Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6387
- Train Accuracy: 0.6225
- Validation Loss: 0.6190
- Validation Accuracy: 0.6642
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6983 | 0.5475 | 0.6646 | 0.4552 | 0 |
| 0.6680 | 0.5525 | 0.6700 | 0.6045 | 1 |
| 0.6387 | 0.6225 | 0.6190 | 0.6642 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.040252685546875, -0.035369873046875, 0.01495361328125, 0.006824493408203125, -0.032379150390625, -0.025238037109375, -0.005462646484375, -0.0178070068359375, 0.011688232421875, 0.01186370849609375, -0.05517578125, -0.040985107421875, -0.0465087890625, -0....
YakovElm/Qt15Classic_Balance_DATA_ratio_2
2023-05-31T06:11:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T06:10:38
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt15Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5878
- Train Accuracy: 0.7238
- Validation Loss: 0.6686
- Validation Accuracy: 0.5750
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6288 | 0.6739 | 0.6947 | 0.6100 | 0 |
| 0.5963 | 0.7138 | 0.6473 | 0.6050 | 1 |
| 0.5878 | 0.7238 | 0.6686 | 0.5750 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.038787841796875, -0.035919189453125, 0.0154876708984375, 0.00745391845703125, -0.03265380859375, -0.024261474609375, -0.005584716796875, -0.0190582275390625, 0.01073455810546875, 0.01131439208984375, -0.05438232421875, -0.039459228515625, -0.046661376953125, ...
anhdt-dsai-02/tuna_t0_v1.6
2023-05-31T12:30:02.000Z
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
anhdt-dsai-02
null
null
anhdt-dsai-02/tuna_t0_v1.6
0
2
transformers
2023-05-31T06:16:32
---
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: tuna_t0_v1.6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# tuna_t0_v1.6

This model is a fine-tuned version of [anhdt-dsai-02/tuna_t0_v1.4](https://huggingface.co/anhdt-dsai-02/tuna_t0_v1.4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2728
- Rouge1: 48.7673
- Rouge2: 21.6362
- Rougel: 33.1174
- Rougelsum: 36.77
- Bleu: 5.5664

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|
| 0.3131 | 1.0 | 7455 | 0.2811 | 47.5613 | 21.2753 | 32.3442 | 35.7895 | 5.3805 |
| 0.3524 | 2.0 | 14910 | 0.2757 | 48.8703 | 21.5877 | 33.0113 | 36.6099 | 6.0464 |
| 0.2785 | 3.0 | 22365 | 0.2734 | 48.9094 | 21.7508 | 33.3675 | 36.9757 | 5.9984 |
| 0.3074 | 4.0 | 29820 | 0.2728 | 48.7673 | 21.6362 | 33.1174 | 36.77 | 5.5664 |

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
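Since this is an mT5 checkpoint served as text2text-generation, a minimal inference sketch (assumptions: the standard seq2seq interface applies, and the prompt format used in training is not documented, so the instruction string below is a placeholder):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anhdt-dsai-02/tuna_t0_v1.6")
model = AutoModelForSeq2SeqLM.from_pretrained("anhdt-dsai-02/tuna_t0_v1.6")

# Prompt format assumed; the card does not record the training prompts.
inputs = tokenizer("Summarize the following text: ...", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```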
1,812
[ [ -0.04522705078125, -0.045013427734375, 0.0176849365234375, 0.0021800994873046875, -0.0279998779296875, -0.0291900634765625, -0.001312255859375, -0.0163421630859375, 0.027496337890625, 0.0303192138671875, -0.05609130859375, -0.035308837890625, -0.04827880859375, ...
YakovElm/Qt15Classic_Balance_DATA_ratio_3
2023-05-31T06:22:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T06:21:56
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt15Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4696
- Train Accuracy: 0.7753
- Validation Loss: 0.6356
- Validation Accuracy: 0.5206
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5718 | 0.7253 | 0.5475 | 0.7566 | 0 |
| 0.5318 | 0.7503 | 0.5253 | 0.7566 | 1 |
| 0.4696 | 0.7753 | 0.6356 | 0.5206 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.040008544921875, -0.035369873046875, 0.01715087890625, 0.007373809814453125, -0.033233642578125, -0.0256195068359375, -0.0056610107421875, -0.0198974609375, 0.00992584228515625, 0.01215362548828125, -0.05352783203125, -0.041534423828125, -0.0455322265625, ...
YakovElm/Qt15Classic_Balance_DATA_ratio_4
2023-05-31T06:36:11.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T06:35:36
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt15Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4523
- Train Accuracy: 0.8144
- Validation Loss: 0.4851
- Validation Accuracy: 0.7605
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4776 | 0.8114 | 0.5124 | 0.7605 | 0 |
| 0.4510 | 0.8144 | 0.6845 | 0.7605 | 1 |
| 0.4523 | 0.8144 | 0.4851 | 0.7605 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.0401611328125, -0.03460693359375, 0.0166015625, 0.0073699951171875, -0.0322265625, -0.0244140625, -0.006244659423828125, -0.0194091796875, 0.01126861572265625, 0.01197052001953125, -0.054229736328125, -0.04278564453125, -0.046173095703125, -0.027099609375...
YakovElm/Qt20Classic_Balance_DATA_ratio_Half
2023-05-31T06:40:56.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_Balance_DATA_ratio_Half
0
2
transformers
2023-05-31T06:40:21
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_Balance_DATA_ratio_Half
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt20Classic_Balance_DATA_ratio_Half

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5791
- Train Accuracy: 0.6944
- Validation Loss: 0.6140
- Validation Accuracy: 0.7024
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6662 | 0.5794 | 0.6231 | 0.6786 | 0 |
| 0.6139 | 0.6627 | 0.6083 | 0.6786 | 1 |
| 0.5791 | 0.6944 | 0.6140 | 0.7024 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,814
[ [ -0.038604736328125, -0.03375244140625, 0.01593017578125, 0.007022857666015625, -0.033721923828125, -0.0212860107421875, -0.000865936279296875, -0.0174560546875, 0.0107421875, 0.01215362548828125, -0.05712890625, -0.041412353515625, -0.04547119140625, -0.0301...
Akira10/distilbert-base-uncased-distilled-clinc
2023-05-31T08:00:03.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
Akira10
null
null
Akira10/distilbert-base-uncased-distilled-clinc
0
2
transformers
2023-05-31T06:43:22
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: validation
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9438709677419355
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2192
- Accuracy: 0.9439

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8173 | 1.0 | 318 | 1.2393 | 0.7426 |
| 0.9618 | 2.0 | 636 | 0.6148 | 0.8590 |
| 0.5073 | 3.0 | 954 | 0.3621 | 0.9158 |
| 0.3189 | 4.0 | 1272 | 0.2748 | 0.9319 |
| 0.2442 | 5.0 | 1590 | 0.2454 | 0.9394 |
| 0.2143 | 6.0 | 1908 | 0.2330 | 0.9419 |
| 0.1987 | 7.0 | 2226 | 0.2258 | 0.9432 |
| 0.1905 | 8.0 | 2544 | 0.2218 | 0.9442 |
| 0.1861 | 9.0 | 2862 | 0.2201 | 0.9439 |
| 0.1836 | 10.0 | 3180 | 0.2192 | 0.9439 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
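The "distilled" in this model's name suggests the standard soft-target knowledge-distillation objective. A minimal sketch of that loss (the temperature `T`, mixing weight `alpha`, and the choice of teacher are assumptions; none of them are recorded in the auto-generated card):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # KL divergence between temperature-softened student and teacher
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    # Ordinary cross-entropy against the hard intent labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```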
2,243
[ [ -0.0333251953125, -0.03802490234375, 0.01580810546875, 0.00600433349609375, -0.0238800048828125, -0.0182037353515625, -0.0095672607421875, -0.005504608154296875, 0.00818634033203125, 0.02093505859375, -0.04364013671875, -0.049285888671875, -0.061767578125, -...
YakovElm/Qt20Classic_Balance_DATA_ratio_1
2023-05-31T06:46:45.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_Balance_DATA_ratio_1
0
2
transformers
2023-05-31T06:46:11
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_Balance_DATA_ratio_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Qt20Classic_Balance_DATA_ratio_1

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5747
- Train Accuracy: 0.7054
- Validation Loss: 0.6443
- Validation Accuracy: 0.6607
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6829 | 0.5387 | 0.6446 | 0.6607 | 0 |
| 0.6460 | 0.6220 | 0.6451 | 0.6071 | 1 |
| 0.5747 | 0.7054 | 0.6443 | 0.6607 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.0390625, -0.034149169921875, 0.0164947509765625, 0.00652313232421875, -0.03289794921875, -0.0229949951171875, -0.0033893585205078125, -0.0169830322265625, 0.009796142578125, 0.0123138427734375, -0.05548095703125, -0.041107177734375, -0.04620361328125, -0....
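The record above (and the three `ratio_2`–`ratio_4` records that follow it) carries `pipeline_tag: text-classification` and TensorFlow BERT weights, but the card leaves "Intended uses & limitations" empty. A minimal inference sketch under those assumptions follows; the repo id comes from the record, while the input string is invented for illustration.

```python
from transformers import pipeline

# Repo id taken from the record above; requires TensorFlow, since the
# tags list "tf" weights. The example sentence is made up -- the card
# does not document the expected input format.
classifier = pipeline(
    "text-classification",
    model="YakovElm/Qt20Classic_Balance_DATA_ratio_1",
    framework="tf",
)
print(classifier("QTBUG-12345: crash when resizing a dock widget"))
```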
YakovElm/Qt20Classic_Balance_DATA_ratio_2
2023-05-31T06:54:50.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_Balance_DATA_ratio_2
0
2
transformers
2023-05-31T06:54:16
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_Balance_DATA_ratio_2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt20Classic_Balance_DATA_ratio_2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6028
- Train Accuracy: 0.6706
- Validation Loss: 0.5639
- Validation Accuracy: 0.6905
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6254 | 0.6667 | 0.5830 | 0.7143 | 0 |
| 0.6309 | 0.6429 | 0.5765 | 0.6905 | 1 |
| 0.6028 | 0.6706 | 0.5639 | 0.6905 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.037628173828125, -0.033599853515625, 0.0161590576171875, 0.006771087646484375, -0.03350830078125, -0.0228271484375, -0.00333404541015625, -0.0184173583984375, 0.00975799560546875, 0.0124359130859375, -0.054351806640625, -0.03961181640625, -0.046142578125, ...
YakovElm/Qt20Classic_Balance_DATA_ratio_3
2023-05-31T07:04:49.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_Balance_DATA_ratio_3
0
2
transformers
2023-05-31T07:04:14
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_Balance_DATA_ratio_3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt20Classic_Balance_DATA_ratio_3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4749
- Train Accuracy: 0.7560
- Validation Loss: 0.5236
- Validation Accuracy: 0.7277
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5616 | 0.7336 | 0.5621 | 0.7188 | 0 |
| 0.5182 | 0.7604 | 0.5585 | 0.7188 | 1 |
| 0.4749 | 0.7560 | 0.5236 | 0.7277 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.038909912109375, -0.034576416015625, 0.0181121826171875, 0.007904052734375, -0.032806396484375, -0.02423095703125, -0.0034313201904296875, -0.01971435546875, 0.008941650390625, 0.01358795166015625, -0.053436279296875, -0.042083740234375, -0.045135498046875, ...
YakovElm/Qt20Classic_Balance_DATA_ratio_4
2023-05-31T07:17:18.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_Balance_DATA_ratio_4
0
2
transformers
2023-05-31T07:16:42
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_Balance_DATA_ratio_4
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt20Classic_Balance_DATA_ratio_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3856
- Train Accuracy: 0.8202
- Validation Loss: 0.5670
- Validation Accuracy: 0.7607
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5112 | 0.7845 | 0.4737 | 0.7929 | 0 |
| 0.4574 | 0.7976 | 0.4595 | 0.7929 | 1 |
| 0.3856 | 0.8202 | 0.5670 | 0.7607 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.0394287109375, -0.03289794921875, 0.0177459716796875, 0.006870269775390625, -0.03271484375, -0.02264404296875, -0.0037078857421875, -0.0181427001953125, 0.00923919677734375, 0.01285552978515625, -0.05450439453125, -0.042144775390625, -0.045440673828125, -...
pradeepiisc/distilbert-base-uncased-finetuned-emotion
2023-05-31T08:53:27.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
pradeepiisc
null
null
pradeepiisc/distilbert-base-uncased-finetuned-emotion
0
2
transformers
2023-05-31T08:27:49
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9245
    - name: F1
      type: f1
      value: 0.924530571560491
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2262
- Accuracy: 0.9245
- F1: 0.9245

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.845 | 1.0 | 250 | 0.3265 | 0.9035 | 0.9010 |
| 0.253 | 2.0 | 500 | 0.2262 | 0.9245 | 0.9245 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
1,847
[ [ -0.0377197265625, -0.040924072265625, 0.01441192626953125, 0.021636962890625, -0.02655029296875, -0.019561767578125, -0.013397216796875, -0.00841522216796875, 0.0105133056640625, 0.008331298828125, -0.056396484375, -0.05169677734375, -0.060150146484375, -0.0...
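The card above lists the full hyperparameter set but no training code. A minimal sketch of how those hyperparameters might map onto the standard `Trainer` API follows; the tokenization step, the `weighted` F1 averaging, and the six-class label count are assumptions (the emotion dataset has six labels, but the card states none of this explicitly).

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

encoded = dataset.map(tokenize, batched=True)

# num_labels=6 assumes the standard emotion label set.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6)

def compute_metrics(pred):
    preds = np.argmax(pred.predictions, axis=-1)
    return {"accuracy": accuracy_score(pred.label_ids, preds),
            "f1": f1_score(pred.label_ids, preds, average="weighted")}

# Hyperparameters copied from the card; everything else is a default.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"],
                  tokenizer=tokenizer,
                  compute_metrics=compute_metrics)
trainer.train()
```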
MJ03/distilbert-base-uncased-finetuned-clinc
2023-05-31T08:48:25.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
MJ03
null
null
MJ03/distilbert-base-uncased-finetuned-clinc
0
2
transformers
2023-05-31T08:40:30
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: validation
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9180645161290323
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6282 | 2.0 | 636 | 1.8753 | 0.8371 |
| 1.548 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.7952 | 5.0 | 1590 | 0.7720 | 0.9181 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
1,932
[ [ -0.03466796875, -0.041015625, 0.0126190185546875, 0.00701141357421875, -0.0272216796875, -0.0245819091796875, -0.01291656494140625, -0.0086212158203125, 0.0030231475830078125, 0.02191162109375, -0.04644775390625, -0.048309326171875, -0.057861328125, -0.01176...
gokuls/hBERTv2_new_pretrain_cola
2023-06-06T06:27:33.000Z
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gokuls
null
null
gokuls/hBERTv2_new_pretrain_cola
0
2
transformers
2023-05-31T09:32:57
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: hBERTv2_new_pretrain_cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE COLA
      type: glue
      config: cola
      split: validation
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.0
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hBERTv2_new_pretrain_cola

This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Matthews Correlation: 0.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6294 | 1.0 | 67 | 0.6236 | 0.0 |
| 0.6169 | 2.0 | 134 | 0.6312 | 0.0 |
| 0.6115 | 3.0 | 201 | 0.6173 | 0.0 |
| 0.6372 | 4.0 | 268 | 0.6201 | 0.0 |
| 0.6087 | 5.0 | 335 | 0.6217 | 0.0 |
| 0.6086 | 6.0 | 402 | 0.6248 | 0.0 |
| 0.6113 | 7.0 | 469 | 0.6283 | 0.0 |
| 0.6109 | 8.0 | 536 | 0.6200 | 0.0 |

### Framework versions

- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
2,319
[ [ -0.02398681640625, -0.046295166015625, 0.003936767578125, 0.0194091796875, -0.018157958984375, -0.01042938232421875, 0.0009899139404296875, -0.016357421875, 0.026763916015625, 0.0197601318359375, -0.05291748046875, -0.03082275390625, -0.05322265625, -0.02339...
gokuls/hBERTv1_new_pretrain_w_init__cola
2023-06-06T06:30:52.000Z
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "endpoints_compatible", "region:us" ]
text-classification
gokuls
null
null
gokuls/hBERTv1_new_pretrain_w_init__cola
0
2
transformers
2023-05-31T10:08:33
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init__cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE COLA
      type: glue
      config: cola
      split: validation
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.6912751793861389
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hBERTv1_new_pretrain_w_init__cola

This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6171
- Matthews Correlation: 0.0
- Accuracy: 0.6913

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6355 | 1.0 | 67 | 0.6239 | 0.0 | 0.6913 |
| 0.6177 | 2.0 | 134 | 0.6211 | 0.0 | 0.6913 |
| 0.6142 | 3.0 | 201 | 0.6231 | 0.0 | 0.6913 |
| 0.6145 | 4.0 | 268 | 0.6171 | 0.0 | 0.6913 |
| 0.6102 | 5.0 | 335 | 0.6199 | 0.0 | 0.6913 |
| 0.6126 | 6.0 | 402 | 0.6184 | 0.0 | 0.6913 |
| 0.6127 | 7.0 | 469 | 0.6206 | 0.0 | 0.6913 |
| 0.6107 | 8.0 | 536 | 0.6185 | 0.0 | 0.6913 |
| 0.6086 | 9.0 | 603 | 0.6260 | 0.0 | 0.6913 |

### Framework versions

- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
2,650
[ [ -0.03076171875, -0.043121337890625, 0.00458526611328125, 0.01477813720703125, -0.0160980224609375, -0.01082611083984375, 0.0018854141235351562, -0.01373291015625, 0.029144287109375, 0.0175323486328125, -0.0557861328125, -0.034027099609375, -0.052459716796875, ...
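Both gokuls CoLA cards above report a Matthews correlation of exactly 0.0 while accuracy stays pinned at 0.6913 across every epoch; that pattern is what you get when a classifier collapses to predicting the majority class for every input. A short check of that arithmetic follows; the synthetic labels stand in for CoLA's validation split (1,043 examples, roughly 69% "acceptable") purely for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Synthetic stand-in for CoLA validation labels: ~69% positive class.
rng = np.random.default_rng(0)
y_true = (rng.random(1043) < 0.6913).astype(int)

# A degenerate classifier that always outputs the majority class.
y_pred = np.ones_like(y_true)

print(accuracy_score(y_true, y_pred))     # ~0.69, matching the cards
print(matthews_corrcoef(y_true, y_pred))  # 0.0 -- MCC collapses for
                                          # constant predictions
```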
Anmol0130/brand_bottle_prediction_v2
2023-05-31T10:52:27.000Z
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
Anmol0130
null
null
Anmol0130/brand_bottle_prediction_v2
0
2
transformers
2023-05-31T10:15:09
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: brand_bottle_prediction_v2
  results:
  - task:
      name: Image Classification
      type: image-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 1.0
---

# brand_bottle_prediction_v2

Autogenerated by HuggingPics🤗🖼️

Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).

Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).

## Example Images

#### bacardi_black

![bacardi_black](images/bacardi_black.jpg)

#### bacardi_carta_blanca

![bacardi_carta_blanca](images/bacardi_carta_blanca.jpg)

#### bombay_sapphire

![bombay_sapphire](images/bombay_sapphire.jpg)

#### coka_cola

![coka_cola](images/coka_cola.jpg)

#### martini

![martini](images/martini.jpg)
971
[ [ -0.0280609130859375, -0.028411865234375, 0.00473785400390625, 0.030517578125, -0.030181884765625, 0.0178680419921875, 0.0100860595703125, -0.040924072265625, 0.04522705078125, 0.01486968994140625, -0.033782958984375, -0.0360107421875, -0.04248046875, 0.00005...
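The record above is a ViT image classifier (`pipeline_tag: image-classification`, PyTorch weights), and the card lists its five brand classes but no usage code. A minimal inference sketch follows; the repo id comes from the record, while the image path is a placeholder.

```python
from transformers import pipeline

# Repo id taken from the record above; "some_bottle_photo.jpg" is a
# placeholder path, not a file shipped with the model.
classifier = pipeline("image-classification",
                      model="Anmol0130/brand_bottle_prediction_v2")
print(classifier("some_bottle_photo.jpg"))
# Expected labels, per the card: bacardi_black, bacardi_carta_blanca,
# bombay_sapphire, coka_cola, martini
```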