The source dump describes one row per Hub model, with the following per-column summary:

| Column | Type | Observed range |
|:---|:---|:---|
| modelId | string | lengths 4–111 |
| lastModified | string | length 24 (fixed) |
| tags | list | n/a |
| pipeline_tag | string | lengths 5–30 |
| author | string | lengths 2–34 |
| config | null | n/a |
| securityStatus | null | n/a |
| id | string | lengths 4–111 |
| likes | int64 | 0–9.53k |
| downloads | int64 | 2–73.6M |
| library_name | string | lengths 2–84 |
| created | timestamp[us] | n/a |
| card | string | lengths 101–901k |
| card_len | int64 | 101–901k |
| embeddings | list | n/a |
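Each record below follows this schema, with one flattened model card per row. As a quick way to inspect the rows programmatically, here is a minimal sketch; it assumes the dump has been exported to a local JSONL file (the `rows.jsonl` filename is hypothetical):

```python
import json

# Load the dumped rows from a local JSONL export (hypothetical path).
with open("rows.jsonl") as f:
    rows = [json.loads(line) for line in f]

# Each record carries the columns listed in the schema above.
for row in rows[:3]:
    print(row["modelId"], row["pipeline_tag"], row["downloads"], row["card_len"])
```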
modelId: seeoo/distilbert-base-uncased-finetuned-emotion
lastModified: 2023-05-22T04:59:55.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: seeoo
config: null
securityStatus: null
id: seeoo/distilbert-base-uncased-finetuned-emotion
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T04:53:42
card:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2138 - Accuracy: 0.926 - F1: 0.9261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8297 | 1.0 | 250 | 0.3079 | 0.905 | 0.9018 | | 0.2463 | 2.0 | 500 | 0.2138 | 0.926 | 0.9261 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
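The card above omits a usage section. A minimal inference sketch with the `transformers` pipeline API, assuming the checkpoint loads as a standard text-classification model (the example input is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier straight from the Hub.
classifier = pipeline("text-classification", model="seeoo/distilbert-base-uncased-finetuned-emotion")

# Illustrative input; the label set depends on the checkpoint's config.
print(classifier("I can't wait to see you again!"))
```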
card_len: 1,485
embeddings: [ [ -0.039031982421875, -0.04388427734375, 0.02069091796875, 0.025970458984375, -0.0288238525390625, -0.0196075439453125, -0.01389312744140625, -0.007419586181640625, 0.00870513916015625, 0.007022857666015625, -0.056427001953125, -0.04986572265625, -0.06195068359375...
modelId: fcuadra/distilbert_classifier_newsgroups
lastModified: 2023-05-22T05:59:26.000Z
tags: [ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: fcuadra
config: null
securityStatus: null
id: fcuadra/distilbert_classifier_newsgroups
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T05:32:19
card:
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilbert_classifier_newsgroups results: [] pipeline_tag: text-classification --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_classifier_newsgroups This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [20Newsgroups](http://qwone.com/~jason/20Newsgroups/) dataset. It achieves the following results on the evaluation set: ## Model description We have fine-tuned distilbert-base-uncased to classify news into 20 main topics based on the labeled [20Newsgroups](http://qwone.com/~jason/20Newsgroups/) dataset. ## Training and evaluation data The 20 newsgroups dataset comprises around 18000 newsgroup posts on 20 topics split into two subsets: one for training (or development) and the other for testing (or performance evaluation). The split between the train and test set is based upon messages posted before and after a specific date. These are the 20 topics we fine-tuned the model on: 'alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x', 'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball', 'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space', 'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast', 'talk.politics.misc', 'talk.religion.misc' ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results Epoch 1/3 637/637 [==============================] - 110s 131ms/step - loss: 1.3480 - accuracy: 0.6633 - val_loss: 0.6122 - val_accuracy: 0.8304 Epoch 2/3 637/637 [==============================] - 44s 70ms/step - loss: 0.4498 - accuracy: 0.8812 - val_loss: 0.4342 - val_accuracy: 0.8799 Epoch 3/3 637/637 [==============================] - 40s 64ms/step - loss: 0.2685 - accuracy: 0.9355 - val_loss: 0.3756 - val_accuracy: 0.8993 CPU times: user 3min 4s, sys: 8.76 s, total: 3min 13s Wall time: 3min 15s <keras.callbacks.History at 0x7f481afbfbb0> ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
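Since the tags list TensorFlow (`tf`) weights only, a hedged usage sketch that pins the framework explicitly (the sample sentence is illustrative):

```python
from transformers import pipeline

# framework="tf" because the repository tags suggest TensorFlow-only weights (an assumption).
classifier = pipeline(
    "text-classification",
    model="fcuadra/distilbert_classifier_newsgroups",
    framework="tf",
)

# The predicted label should map onto one of the 20 newsgroup topics listed in the card.
print(classifier("The new graphics card renders OpenGL scenes twice as fast."))
```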
card_len: 2,909
embeddings: [ [ -0.05279541015625, -0.052490234375, 0.014678955078125, 0.0036220550537109375, -0.0207977294921875, -0.007106781005859375, -0.0189666748046875, -0.0104217529296875, 0.00917816162109375, -0.005359649658203125, -0.0408935546875, -0.04803466796875, -0.06021118164062...
modelId: ehanJ/distilbert-base-uncased-finetuned-emotion
lastModified: 2023-05-22T06:25:51.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: ehanJ
config: null
securityStatus: null
id: ehanJ/distilbert-base-uncased-finetuned-emotion
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T06:20:44
card:
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9205 - name: F1 type: f1 value: 0.9205899308588681 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2240 - Accuracy: 0.9205 - F1: 0.9206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8441 | 1.0 | 250 | 0.3201 | 0.904 | 0.9018 | | 0.2551 | 2.0 | 500 | 0.2240 | 0.9205 | 0.9206 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
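The hyperparameter list in the card above maps directly onto `transformers.TrainingArguments`; a minimal sketch of the equivalent configuration (the output directory is hypothetical, and the Adam betas/epsilon reported in the card match the library defaults):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card; "out" is a hypothetical output directory.
args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,              # learning_rate: 2e-05
    per_device_train_batch_size=64,  # train_batch_size: 64
    per_device_eval_batch_size=64,   # eval_batch_size: 64
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```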
card_len: 1,848
embeddings: [ [ -0.037689208984375, -0.041534423828125, 0.01397705078125, 0.0213470458984375, -0.0260009765625, -0.019256591796875, -0.01346588134765625, -0.00859832763671875, 0.009796142578125, 0.007659912109375, -0.055999755859375, -0.0517578125, -0.0606689453125, -0.0080...
modelId: Afsara/fb_bart_large_cnn
lastModified: 2023-05-22T07:06:45.000Z
tags: [ "transformers", "pytorch", "tf", "jax", "rust", "bart", "text2text-generation", "summarization", "en", "dataset:cnn_dailymail", "arxiv:1910.13461", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
pipeline_tag: summarization
author: Afsara
config: null
securityStatus: null
id: Afsara/fb_bart_large_cnn
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T06:53:18
card:
--- language: - en tags: - summarization license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png datasets: - cnn_dailymail model-index: - name: facebook/bart-large-cnn results: - task: type: summarization name: Summarization dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: train metrics: - name: ROUGE-1 type: rouge value: 42.9486 verified: true - name: ROUGE-2 type: rouge value: 20.8149 verified: true - name: ROUGE-L type: rouge value: 30.6186 verified: true - name: ROUGE-LSUM type: rouge value: 40.0376 verified: true - name: loss type: loss value: 2.529000997543335 verified: true - name: gen_len type: gen_len value: 78.5866 verified: true --- # BART (large-sized model), fine-tuned on CNN Daily Mail BART model pre-trained on the English language, and fine-tuned on [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs. ## Intended uses & limitations You can use this model for text summarization. ### How to use Here is how to use this model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html): ```python from transformers import pipeline summarizer = pipeline("summarization", model="facebook/bart-large-cnn") ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York. A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband. Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other. In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage. Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the 2010 marriage license application, according to court documents. Prosecutors said the marriages were part of an immigration scam. On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further. After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective Annette Markowski, a police spokeswoman.
In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002. All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say. Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages. Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted. The case was referred to the Bronx District Attorney's Office by Immigration and Customs Enforcement and the Department of Homeland Security's Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali. Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force. If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18. """ print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False)) >>> [{'summary_text': 'Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men.'}] ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
card_len: 5,999
embeddings: [ [ -0.03515625, -0.055145263671875, 0.0284576416015625, 0.0275421142578125, -0.038055419921875, -0.019134521484375, 0.005401611328125, -0.0229339599609375, 0.0300445556640625, 0.046295166015625, -0.020721435546875, -0.02984619140625, -0.042327880859375, 0.03247...
modelId: Afsara/cse_buet_bangla_t5
lastModified: 2023-05-22T07:23:22.000Z
tags: [ "transformers", "pytorch", "t5", "text2text-generation", "bn", "arxiv:2205.11081", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
pipeline_tag: text2text-generation
author: Afsara
config: null
securityStatus: null
id: Afsara/cse_buet_bangla_t5
likes: 1
downloads: 2
library_name: transformers
created: 2023-05-22T07:14:46
card:
--- language: - bn licenses: - cc-by-nc-sa-4.0 --- # BanglaT5 This repository contains the pretrained checkpoint of the model **BanglaT5**. This is a sequence-to-sequence transformer model pretrained with the "Span Corruption" objective. Finetuned models using this checkpoint achieve state-of-the-art results on many of the NLG tasks in Bengali. For finetuning on different downstream tasks such as `Machine Translation`, `Abstractive Text Summarization`, `Question Answering` etc., refer to the scripts in the official GitHub [repository](https://github.com/csebuetnlp/BanglaNLG). **Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task, make sure the text units are normalized using this pipeline before tokenizing to get the best results. A basic example is given below: ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5") tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5", use_fast=False) input_sentence = "" input_ids = tokenizer(normalize(input_sentence), return_tensors="pt").input_ids generated_tokens = model.generate(input_ids) decoded_tokens = tokenizer.batch_decode(generated_tokens)[0] print(decoded_tokens) ``` ## Benchmarks * Supervised fine-tuning | Model | Params | MT (SacreBLEU) | TS (ROUGE-2) | QA (EM/F1) | MD (SacreBLEU-1) | NHG (ROUGE-2) | XLS (ROUGE-2) | BNLG score | |--------------------|------------|-----------------------|------------------------|-------------------|--------------------|----------------|----------------|---------------| |[mT5 (base)](https://huggingface.co/google/mt5-base) | 582M | 36.6/22.5 | 10.3 | 59.0/65.3 | 17.5 | 9.6 | 2.7/0.7 | 24.9 | |[XLM-ProphetNet](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased) | 616M | 23.3/16.4 | 7.8 | 53.0/57.3 | 20.0 | 9.5 | 6.2/2.7 | 21.8 | |[mBART-50](https://huggingface.co/facebook/mbart-large-50) | 611M | 23.6/16.7 | 10.4 | 53.4/58.9 | 18.5 | 11.2 | 5.4/3.7 | 22.4 | |[IndicBART](https://huggingface.co/ai4bharat/IndicBART) | 244M | 22.7/13.1 | 8.1 | 53.3/58.8 | 14.8 | 7.9 | 6.3/2.5 | 20.8 | |[BanglaT5](https://huggingface.co/csebuetnlp/banglat5) | 247M | 38.8/25.2 | 13.7 | 68.5/74.8 | 19.0 | 13.8 | 6.4/4.0 | 29.4 | The benchmarking datasets are as follows: * **MT:** **[Machine Translation](https://github.com/csebuetnlp/banglanmt#datasets)** * **TS:** **[Abstractive Text Summarization](https://huggingface.co/datasets/csebuetnlp/xlsum)** * **QA:** **[Question Answering](https://huggingface.co/datasets/csebuetnlp/squad_bn)** * **MD:** **[Multi Turn Dialogue Generation](https://drive.google.com/file/d/1qPmNN6qA4evbh4cD_BDDTCFOwMu4H2JS/view?usp=sharing)** * **NHG:** **[News Headline Generation](https://huggingface.co/datasets/csebuetnlp/xlsum)** * **XLS:** **[Cross-lingual Summarization](https://huggingface.co/datasets/csebuetnlp/CrossSum)** ## Citation If you use this model, please cite the following paper: ``` @article{bhattacharjee2022banglanlg, author = {Abhik Bhattacharjee and Tahmid Hasan and Wasi Uddin Ahmad and Rifat Shahriyar}, title = {BanglaNLG: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation
in Bangla}, journal = {CoRR}, volume = {abs/2205.11081}, year = {2022}, url = {https://arxiv.org/abs/2205.11081}, eprinttype = {arXiv}, eprint = {2205.11081} } ``` If you use the normalization module, please cite the following paper: ``` @inproceedings{hasan-etal-2020-low, title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation", author = "Hasan, Tahmid and Bhattacharjee, Abhik and Samin, Kazi and Hasan, Masum and Basak, Madhusudan and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.207", doi = "10.18653/v1/2020.emnlp-main.207", pages = "2612--2623", abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.", } ```
card_len: 6,179
embeddings: [ [ -0.041412353515625, -0.046356201171875, -0.00176239013671875, 0.03448486328125, -0.016937255859375, 0.00554656982421875, -0.0265350341796875, -0.023040771484375, 0.0185089111328125, 0.0217742919921875, -0.039276123046875, -0.042510986328125, -0.04833984375, ...
modelId: KeiHeityuu/my_awesome_model
lastModified: 2023-10-18T11:39:12.000Z
tags: [ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: KeiHeityuu
config: null
securityStatus: null
id: KeiHeityuu/my_awesome_model
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T09:05:08
card:
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: my_awesome_model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93104 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2349 - Accuracy: 0.9310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2301 | 1.0 | 1563 | 0.1888 | 0.9272 | | 0.1512 | 2.0 | 3126 | 0.2349 | 0.9310 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
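IMDB reviews often exceed a BERT-style 512-token limit, so a usage sketch should truncate; the `truncation=True` call-time kwarg is forwarded to the tokenizer (the review text is illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="KeiHeityuu/my_awesome_model")

# truncation=True trims inputs to the model's maximum length instead of failing on long reviews.
print(clf("One of the best films I have seen in years. The pacing never lets up...", truncation=True))
```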
card_len: 1,675
embeddings: [ [ -0.03936767578125, -0.043487548828125, 0.01485443115234375, 0.002330780029296875, -0.0266571044921875, -0.0171661376953125, 0.002964019775390625, -0.0098876953125, 0.01435089111328125, 0.0238494873046875, -0.051788330078125, -0.043975830078125, -0.05902099609375...
modelId: Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-3
lastModified: 2023-05-22T10:54:56.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: Ioanaaaaaaa
config: null
securityStatus: null
id: Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-3
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T10:31:33
card:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sexism-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sexism-3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6098 - Accuracy: 0.8396 - F1: 0.8374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3246 | 1.0 | 1876 | 0.3858 | 0.8534 | 0.8490 | | 0.2469 | 2.0 | 3752 | 0.6098 | 0.8396 | 0.8374 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
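Pipelines also accept a list of inputs, which is the natural way to score many sentences with a classifier like this one; a small batch-inference sketch (the example texts are illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-3")

# Passing a list returns one prediction dict per input.
texts = ["Example sentence one.", "Example sentence two."]
for pred in clf(texts):
    print(pred["label"], round(pred["score"], 3))
```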
card_len: 1,502
embeddings: [ [ -0.0272674560546875, -0.04351806640625, 0.0130767822265625, 0.02093505859375, -0.02276611328125, -0.0247802734375, -0.00467681884765625, -0.0067291259765625, 0.00200653076171875, 0.0181121826171875, -0.050323486328125, -0.049774169921875, -0.052734375, -0.00...
modelId: Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-4
lastModified: 2023-05-22T11:20:24.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: Ioanaaaaaaa
config: null
securityStatus: null
id: Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-4
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T10:59:37
card:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sexism-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sexism-4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7202 - Accuracy: 0.8406 - F1: 0.8399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.14 | 1.0 | 1876 | 0.7202 | 0.8406 | 0.8399 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,431
embeddings: [ [ -0.02667236328125, -0.043609619140625, 0.0146636962890625, 0.0221405029296875, -0.0242767333984375, -0.0244598388671875, -0.005458831787109375, -0.00710296630859375, 0.0016326904296875, 0.0171966552734375, -0.051544189453125, -0.049652099609375, -0.0497436523437...
modelId: Ztijn/bert-base-dutch-cased-finetuned-squad
lastModified: 2023-05-22T14:22:35.000Z
tags: [ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: question-answering
author: Ztijn
config: null
securityStatus: null
id: Ztijn/bert-base-dutch-cased-finetuned-squad
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T11:24:27
card:
--- tags: - generated_from_trainer model-index: - name: bert-base-dutch-cased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-squad This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.261 | 1.0 | 8350 | 1.1667 | | 0.9583 | 2.0 | 16700 | 1.2665 | | 0.6993 | 3.0 | 25050 | 1.4324 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
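As a question-answering checkpoint, this model takes a question/context pair; a minimal sketch with an illustrative Dutch example:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Ztijn/bert-base-dutch-cased-finetuned-squad")

# Illustrative Dutch question and context; the model extracts an answer span from the context.
result = qa(
    question="Waar werkt Anna?",
    context="Anna woont in Utrecht en werkt bij een universiteit.",
)
print(result["answer"], result["score"])
```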
card_len: 1,405
embeddings: [ [ -0.046234130859375, -0.046630859375, 0.0077972412109375, 0.0202178955078125, -0.022674560546875, -0.0201263427734375, -0.016265869140625, -0.018646240234375, 0.00942230224609375, 0.03265380859375, -0.06597900390625, -0.04193115234375, -0.04937744140625, -0.0...
modelId: phnghiapro/distilbert-base-uncased-finetuned-cola
lastModified: 2023-05-22T12:07:13.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: phnghiapro
config: null
securityStatus: null
id: phnghiapro/distilbert-base-uncased-finetuned-cola
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T11:26:04
card:
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5282404248888111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5703 - Matthews Correlation: 0.5282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5235 | 1.0 | 535 | 0.5332 | 0.4098 | | 0.3452 | 2.0 | 1070 | 0.4980 | 0.4899 | | 0.2301 | 3.0 | 1605 | 0.5703 | 0.5282 | | 0.1786 | 4.0 | 2140 | 0.7849 | 0.5126 | | 0.134 | 5.0 | 2675 | 0.8406 | 0.5185 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
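The Matthews correlation reported above can be reproduced from raw predictions with scikit-learn; a small sketch with made-up labels (1 = acceptable, 0 = unacceptable):

```python
from sklearn.metrics import matthews_corrcoef

# Toy gold labels and predictions purely for illustration.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# MCC ranges from -1 to 1, with 0 at chance level.
print(matthews_corrcoef(y_true, y_pred))
```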
card_len: 2,042
embeddings: [ [ -0.0220794677734375, -0.050323486328125, 0.0119781494140625, 0.0186767578125, -0.0226593017578125, -0.00872039794921875, -0.005985260009765625, -0.0031147003173828125, 0.0233917236328125, 0.01027679443359375, -0.045379638671875, -0.035552978515625, -0.0626831054...
modelId: AustinCarthy/Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75
lastModified: 2023-05-22T20:54:05.000Z
tags: [ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: AustinCarthy
config: null
securityStatus: null
id: AustinCarthy/Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T11:28:12
card:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0202 - Accuracy: 0.9974 - F1: 0.9722 - Precision: 0.9989 - Recall: 0.9468 - Roc Auc Score: 0.9734 - Tpr At Fpr 0.01: 0.9538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:| | 0.0187 | 1.0 | 91875 | 0.0418 | 0.9936 | 0.9283 | 0.9968 | 0.8686 | 0.9342 | 0.8558 | | 0.0035 | 2.0 | 183750 | 0.0279 | 0.9954 | 0.9488 | 0.9991 | 0.9034 | 0.9517 | 0.9336 | | 0.0021 | 3.0 | 275625 | 0.0237 | 0.9971 | 0.9688 | 0.9979 | 0.9414 | 0.9707 | 0.9384 | | 0.0021 | 4.0 | 367500 | 0.0202 | 0.9973 | 0.9713 | 0.9985 | 0.9456 | 0.9728 | 0.9532 | | 0.0003 | 5.0 | 459375 | 0.0202 | 0.9974 | 0.9722 | 0.9989 | 0.9468 | 0.9734 | 0.9538 | ### Framework versions - Transformers 4.29.1 - Pytorch 1.9.0+cu111 - Datasets 2.10.1 - Tokenizers 0.13.2
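The model name suggests a phishing-URL detector; treating a raw URL string as the input is an assumption, since the card does not document the input format:

```python
from transformers import pipeline

# Input format (a bare URL) is an assumption based on the model name, not the card.
detector = pipeline(
    "text-classification",
    model="AustinCarthy/Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75",
)
print(detector("http://secure-login.example-bank.verify-account.com/update"))
```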
card_len: 2,257
embeddings: [ [ -0.042938232421875, -0.04156494140625, 0.01031494140625, 0.0090179443359375, -0.0208740234375, -0.02325439453125, -0.00745391845703125, -0.0173492431640625, 0.030181884765625, 0.028564453125, -0.05377197265625, -0.053253173828125, -0.04913330078125, -0.01286...
modelId: Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-7
lastModified: 2023-05-22T12:26:43.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: Ioanaaaaaaa
config: null
securityStatus: null
id: Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-7
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T12:07:18
card:
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sexism-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sexism-7 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2813 - Accuracy: 0.8406 - F1: 0.8399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0586 | 1.0 | 938 | 1.1347 | 0.8316 | 0.8316 | | 0.0219 | 2.0 | 1876 | 1.2813 | 0.8406 | 0.8399 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,504
embeddings: [ [ -0.0282745361328125, -0.04229736328125, 0.01276397705078125, 0.01959228515625, -0.024444580078125, -0.024261474609375, -0.00537109375, -0.00582122802734375, 0.002414703369140625, 0.018218994140625, -0.05133056640625, -0.050048828125, -0.0531005859375, 0.0002...
modelId: igh197/distilbert-base-uncased-finetuned-emotion
lastModified: 2023-05-22T13:01:20.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: igh197
config: null
securityStatus: null
id: igh197/distilbert-base-uncased-finetuned-emotion
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T12:55:24
card:
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9242747341236085 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2153 - Accuracy: 0.9245 - F1: 0.9243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7987 | 1.0 | 250 | 0.3092 | 0.907 | 0.9034 | | 0.2449 | 2.0 | 500 | 0.2153 | 0.9245 | 0.9243 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,848
embeddings: [ [ -0.037689208984375, -0.041656494140625, 0.0137176513671875, 0.0219573974609375, -0.025909423828125, -0.018951416015625, -0.01355743408203125, -0.0087432861328125, 0.01044464111328125, 0.00814056396484375, -0.05609130859375, -0.051116943359375, -0.06005859375, ...
modelId: Stern5497/sbert-legal-xlm-roberta-base
lastModified: 2023-05-22T14:01:53.000Z
tags: [ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
pipeline_tag: sentence-similarity
author: Stern5497
config: null
securityStatus: null
id: Stern5497/sbert-legal-xlm-roberta-base
likes: 1
downloads: 2
library_name: sentence-transformers
created: 2023-05-22T13:59:42
card:
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Stern5497/sbert-legal-xlm-roberta-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Stern5497/sbert-legal-xlm-roberta-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Stern5497/sbert-legal-xlm-roberta-base') model = AutoModel.from_pretrained('Stern5497/sbert-legal-xlm-roberta-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Stern5497/sbert-legal-xlm-roberta-base) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 8301 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 5000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 830, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
card_len: 3,921
embeddings: [ [ -0.0200958251953125, -0.06353759765625, 0.022552490234375, 0.023162841796875, -0.01837158203125, -0.031219482421875, -0.020355224609375, 0.004352569580078125, 0.0151824951171875, 0.0279998779296875, -0.051361083984375, -0.04779052734375, -0.052276611328125, ...
modelId: Backdrive/distilbert-base-uncased-finetuned-emotion
lastModified: 2023-05-22T14:52:30.000Z
tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: Backdrive
config: null
securityStatus: null
id: Backdrive/distilbert-base-uncased-finetuned-emotion
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T14:39:04
card:
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9285 - name: F1 type: f1 value: 0.9285478749765623 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2158 - Accuracy: 0.9285 - F1: 0.9285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8112 | 1.0 | 250 | 0.3104 | 0.9005 | 0.8968 | | 0.2447 | 2.0 | 500 | 0.2158 | 0.9285 | 0.9285 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 1,848
embeddings: [ [ -0.037994384765625, -0.04156494140625, 0.01468658447265625, 0.021820068359375, -0.0258636474609375, -0.0192413330078125, -0.013458251953125, -0.0085601806640625, 0.0106353759765625, 0.008636474609375, -0.056396484375, -0.051483154296875, -0.05963134765625, -...
modelId: AlexC98/BertGoodCommitPreprocessed
lastModified: 2023-05-22T14:49:18.000Z
tags: [ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: AlexC98
config: null
securityStatus: null
id: AlexC98/BertGoodCommitPreprocessed
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T14:45:20
card:
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: BertGoodCommitPreprocessed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BertGoodCommitPreprocessed This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5242 - Accuracy: 0.8424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 24 | 0.5858 | 0.6788 | | No log | 2.0 | 48 | 0.5640 | 0.7273 | | No log | 3.0 | 72 | 0.5381 | 0.7394 | | No log | 4.0 | 96 | 0.5246 | 0.7394 | | No log | 5.0 | 120 | 0.5214 | 0.7394 | | No log | 6.0 | 144 | 0.5093 | 0.7394 | | No log | 7.0 | 168 | 0.4986 | 0.7515 | | No log | 8.0 | 192 | 0.5131 | 0.7455 | | No log | 9.0 | 216 | 0.5093 | 0.7697 | | No log | 10.0 | 240 | 0.5064 | 0.7758 | | No log | 11.0 | 264 | 0.5069 | 0.7697 | | No log | 12.0 | 288 | 0.4774 | 0.7818 | | No log | 13.0 | 312 | 0.5096 | 0.7879 | | No log | 14.0 | 336 | 0.4933 | 0.7939 | | No log | 15.0 | 360 | 0.4740 | 0.7939 | | No log | 16.0 | 384 | 0.4787 | 0.7939 | | No log | 17.0 | 408 | 0.4675 | 0.8 | | No log | 18.0 | 432 | 0.4971 | 0.8121 | | No log | 19.0 | 456 | 0.4935 | 0.8303 | | No log | 20.0 | 480 | 0.4947 | 0.8121 | | 0.3574 | 21.0 | 504 | 0.4968 | 0.8242 | | 0.3574 | 22.0 | 528 | 0.5158 | 0.8303 | | 0.3574 | 23.0 | 552 | 0.5146 | 0.8061 | | 0.3574 | 24.0 | 576 | 0.4963 | 0.8303 | | 0.3574 | 25.0 | 600 | 0.5024 | 0.8182 | | 0.3574 | 26.0 | 624 | 0.5069 | 0.8242 | | 0.3574 | 27.0 | 648 | 0.5242 | 0.8424 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
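The model name suggests this checkpoint (and its siblings below) classifies commit messages; since the card documents neither the input format nor the label set, both are assumptions in this sketch:

```python
from transformers import pipeline

# Treating the input as a commit message is an assumption from the model name;
# the label names depend on the checkpoint's config.
clf = pipeline("text-classification", model="AlexC98/BertGoodCommitPreprocessed")
print(clf("Fix null pointer dereference in the session cache"))
```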
card_len: 2,959
embeddings: [ [ -0.043426513671875, -0.044281005859375, 0.009185791015625, -0.004001617431640625, -0.005611419677734375, -0.01122283935546875, -0.004909515380859375, -0.0096893310546875, 0.038330078125, 0.0201873779296875, -0.052978515625, -0.04638671875, -0.052398681640625, ...
modelId: AlexC98/BertGoodCommitOriginal
lastModified: 2023-05-22T14:57:09.000Z
tags: [ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: AlexC98
config: null
securityStatus: null
id: AlexC98/BertGoodCommitOriginal
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T14:47:30
card:
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: BertGoodCommitOriginal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BertGoodCommitOriginal This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5639 - Accuracy: 0.8242 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 24 | 0.5844 | 0.7030 | | No log | 2.0 | 48 | 0.5566 | 0.7212 | | No log | 3.0 | 72 | 0.5375 | 0.7333 | | No log | 4.0 | 96 | 0.5321 | 0.7212 | | No log | 5.0 | 120 | 0.5221 | 0.7333 | | No log | 6.0 | 144 | 0.5112 | 0.7394 | | No log | 7.0 | 168 | 0.4828 | 0.7515 | | No log | 8.0 | 192 | 0.4857 | 0.7818 | | No log | 9.0 | 216 | 0.4672 | 0.7879 | | No log | 10.0 | 240 | 0.4740 | 0.7879 | | No log | 11.0 | 264 | 0.4758 | 0.7818 | | No log | 12.0 | 288 | 0.4554 | 0.8061 | | No log | 13.0 | 312 | 0.4697 | 0.8182 | | No log | 14.0 | 336 | 0.4810 | 0.8242 | | No log | 15.0 | 360 | 0.4612 | 0.8182 | | No log | 16.0 | 384 | 0.4663 | 0.8242 | | No log | 17.0 | 408 | 0.4757 | 0.8182 | | No log | 18.0 | 432 | 0.4928 | 0.8182 | | No log | 19.0 | 456 | 0.5371 | 0.8242 | | No log | 20.0 | 480 | 0.5345 | 0.8182 | | 0.3387 | 21.0 | 504 | 0.5341 | 0.8182 | | 0.3387 | 22.0 | 528 | 0.5639 | 0.8242 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,641
embeddings: [ [ -0.04180908203125, -0.0439453125, 0.0084228515625, -0.002185821533203125, -0.0117340087890625, -0.0175323486328125, -0.00673675537109375, -0.012542724609375, 0.0325927734375, 0.017425537109375, -0.053741455078125, -0.048065185546875, -0.0513916015625, -0.018...
modelId: AlexC98/BertWhyCommitPreprocessed
lastModified: 2023-05-22T15:29:54.000Z
tags: [ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: AlexC98
config: null
securityStatus: null
id: AlexC98/BertWhyCommitPreprocessed
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T15:20:33
card:
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: BertWhyCommitPreprocessed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BertWhyCommitPreprocessed This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4699 - Accuracy: 0.8848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 31 | 0.5237 | 0.7333 | | No log | 2.0 | 62 | 0.4632 | 0.7636 | | No log | 3.0 | 93 | 0.4243 | 0.8 | | No log | 4.0 | 124 | 0.3896 | 0.8182 | | No log | 5.0 | 155 | 0.3824 | 0.8242 | | No log | 6.0 | 186 | 0.3661 | 0.8182 | | No log | 7.0 | 217 | 0.3597 | 0.8242 | | No log | 8.0 | 248 | 0.3569 | 0.8364 | | No log | 9.0 | 279 | 0.3518 | 0.8606 | | No log | 10.0 | 310 | 0.3618 | 0.8485 | | No log | 11.0 | 341 | 0.3462 | 0.8545 | | No log | 12.0 | 372 | 0.3636 | 0.8485 | | No log | 13.0 | 403 | 0.3759 | 0.8485 | | No log | 14.0 | 434 | 0.3771 | 0.8727 | | No log | 15.0 | 465 | 0.3957 | 0.8727 | | No log | 16.0 | 496 | 0.4154 | 0.8788 | | 0.2682 | 17.0 | 527 | 0.3980 | 0.8606 | | 0.2682 | 18.0 | 558 | 0.4442 | 0.8667 | | 0.2682 | 19.0 | 589 | 0.4028 | 0.8788 | | 0.2682 | 20.0 | 620 | 0.4653 | 0.8606 | | 0.2682 | 21.0 | 651 | 0.4699 | 0.8848 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,585
embeddings: [ [ -0.04193115234375, -0.040618896484375, 0.012237548828125, -0.001842498779296875, -0.0087432861328125, -0.0188140869140625, -0.00969696044921875, -0.0132293701171875, 0.0305328369140625, 0.01995849609375, -0.057830810546875, -0.043670654296875, -0.050811767578125...
modelId: AlexC98/BertWhatCommitPreprocessed
lastModified: 2023-05-22T15:38:18.000Z
tags: [ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: AlexC98
config: null
securityStatus: null
id: AlexC98/BertWhatCommitPreprocessed
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T15:31:15
card:
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: BertWhatCommitPreprocessed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BertWhatCommitPreprocessed This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3631 - Accuracy: 0.9152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 38 | 0.5383 | 0.7333 | | No log | 2.0 | 76 | 0.4130 | 0.8485 | | No log | 3.0 | 114 | 0.3096 | 0.8727 | | No log | 4.0 | 152 | 0.3140 | 0.8788 | | No log | 5.0 | 190 | 0.2983 | 0.8970 | | No log | 6.0 | 228 | 0.3019 | 0.8848 | | No log | 7.0 | 266 | 0.3235 | 0.9030 | | No log | 8.0 | 304 | 0.3571 | 0.8970 | | No log | 9.0 | 342 | 0.3457 | 0.8970 | | No log | 10.0 | 380 | 0.3340 | 0.8909 | | No log | 11.0 | 418 | 0.3378 | 0.9091 | | No log | 12.0 | 456 | 0.3389 | 0.9091 | | No log | 13.0 | 494 | 0.3753 | 0.9030 | | 0.2144 | 14.0 | 532 | 0.3492 | 0.9152 | | 0.2144 | 15.0 | 570 | 0.3631 | 0.9152 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
card_len: 2,215
embeddings: [ [ -0.039154052734375, -0.03955078125, 0.01153564453125, 0.0018901824951171875, -0.0160369873046875, -0.0301361083984375, -0.01465606689453125, -0.018280029296875, 0.01678466796875, 0.0187835693359375, -0.062042236328125, -0.039947509765625, -0.04901123046875, ...
modelId: AlexC98/BertWhatCommitOriginal
lastModified: 2023-05-22T15:59:51.000Z
tags: [ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
author: AlexC98
config: null
securityStatus: null
id: AlexC98/BertWhatCommitOriginal
likes: 0
downloads: 2
library_name: transformers
created: 2023-05-22T15:51:45
card:
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BertWhatCommitOriginal
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BertWhatCommitOriginal

This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3530
- Accuracy: 0.9091

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 0.5485 | 0.7152 |
| No log | 2.0 | 76 | 0.4204 | 0.8364 |
| No log | 3.0 | 114 | 0.2951 | 0.8788 |
| No log | 4.0 | 152 | 0.2811 | 0.8848 |
| No log | 5.0 | 190 | 0.2628 | 0.8909 |
| No log | 6.0 | 228 | 0.2605 | 0.8970 |
| No log | 7.0 | 266 | 0.2790 | 0.8970 |
| No log | 8.0 | 304 | 0.2821 | 0.9030 |
| No log | 9.0 | 342 | 0.2724 | 0.9212 |
| No log | 10.0 | 380 | 0.2871 | 0.9091 |
| No log | 11.0 | 418 | 0.3067 | 0.9273 |
| No log | 12.0 | 456 | 0.3404 | 0.9273 |
| No log | 13.0 | 494 | 0.3645 | 0.9212 |
| 0.2027 | 14.0 | 532 | 0.3422 | 0.9152 |
| 0.2027 | 15.0 | 570 | 0.4038 | 0.9212 |
| 0.2027 | 16.0 | 608 | 0.3530 | 0.9091 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
2,269
[ [ -0.039642333984375, -0.041046142578125, 0.0099334716796875, 0.00234222412109375, -0.0157470703125, -0.0257110595703125, -0.01242828369140625, -0.0152740478515625, 0.02178955078125, 0.0147705078125, -0.058349609375, -0.0439453125, -0.048583984375, -0.01843261...
AlexC98/BertWhyCommitOriginal
2023-05-22T16:09:27.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
AlexC98
null
null
AlexC98/BertWhyCommitOriginal
0
2
transformers
2023-05-22T16:00:23
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BertWhyCommitOriginal
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BertWhyCommitOriginal

This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4881
- Accuracy: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 31 | 0.5058 | 0.7394 |
| No log | 2.0 | 62 | 0.4463 | 0.7758 |
| No log | 3.0 | 93 | 0.4260 | 0.7758 |
| No log | 4.0 | 124 | 0.3954 | 0.8061 |
| No log | 5.0 | 155 | 0.3745 | 0.8061 |
| No log | 6.0 | 186 | 0.3653 | 0.8303 |
| No log | 7.0 | 217 | 0.3533 | 0.8424 |
| No log | 8.0 | 248 | 0.3500 | 0.8364 |
| No log | 9.0 | 279 | 0.3416 | 0.8606 |
| No log | 10.0 | 310 | 0.3546 | 0.8424 |
| No log | 11.0 | 341 | 0.3469 | 0.8485 |
| No log | 12.0 | 372 | 0.3511 | 0.8606 |
| No log | 13.0 | 403 | 0.3883 | 0.8545 |
| No log | 14.0 | 434 | 0.4090 | 0.8485 |
| No log | 15.0 | 465 | 0.4301 | 0.8485 |
| No log | 16.0 | 496 | 0.4415 | 0.8606 |
| 0.2667 | 17.0 | 527 | 0.4732 | 0.8545 |
| 0.2667 | 18.0 | 558 | 0.4849 | 0.8727 |
| 0.2667 | 19.0 | 589 | 0.4881 | 0.8788 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
2,453
[ [ -0.041351318359375, -0.042236328125, 0.010894775390625, -0.0013885498046875, -0.0132598876953125, -0.0241546630859375, -0.0097808837890625, -0.0142364501953125, 0.027496337890625, 0.015350341796875, -0.05633544921875, -0.045623779296875, -0.05010986328125, -...
antonkurylo/t5-base-samsum
2023-05-22T18:06:51.000Z
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "generated_from_trainer", "dataset:samsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
antonkurylo
null
null
antonkurylo/t5-base-samsum
0
2
transformers
2023-05-22T16:18:30
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-base-samsum
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: validation
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 48.9131
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-base-samsum

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6172
- Rouge1: 48.9131
- Rouge2: 25.4942
- Rougel: 41.2363
- Rougelsum: 45.3434

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.7606 | 1.0 | 3683 | 0.6254 | 46.9778 | 23.8245 | 39.8294 | 43.4639 |
| 0.6273 | 2.0 | 7366 | 0.6119 | 48.2515 | 24.7534 | 40.4415 | 44.5567 |
| 0.5769 | 3.0 | 11049 | 0.6116 | 48.228 | 24.7865 | 40.7537 | 44.4026 |
| 0.5412 | 4.0 | 14732 | 0.6145 | 48.8563 | 25.356 | 41.1913 | 45.186 |
| 0.5199 | 5.0 | 18415 | 0.6172 | 48.9131 | 25.4942 | 41.2363 | 45.3434 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
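As a dialogue-summarization checkpoint, a model like this one can be exercised through the standard `pipeline` API. A minimal usage sketch; the example dialogue is invented:

```python
from transformers import pipeline

# Sketch only: load the checkpoint named in the card above.
summarizer = pipeline("summarization", model="antonkurylo/t5-base-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue)[0]["summary_text"])
```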
2,137
[ [ -0.0286712646484375, -0.024505615234375, 0.0188446044921875, 0.006610870361328125, -0.023651123046875, -0.017913818359375, -0.00350189208984375, -0.006458282470703125, 0.0207672119140625, 0.028839111328125, -0.053741455078125, -0.061126708984375, -0.058776855468...
cyrildever/distilbert-base-uncased-finetuned-emotion
2023-05-22T17:44:05.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
cyrildever
null
null
cyrildever/distilbert-base-uncased-finetuned-emotion
0
2
transformers
2023-05-22T16:25:44
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925
    - name: F1
      type: f1
      value: 0.9247451469405729
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2111
- Accuracy: 0.925
- F1: 0.9247

## Model description

More information needed

## Intended uses & limitations

This is a simple test from the O'Reilly book "Natural Language Processing with Transformers". Not to be used for anything but testing purposes.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7938 | 1.0 | 250 | 0.3038 | 0.9075 | 0.9054 |
| 0.2377 | 2.0 | 500 | 0.2111 | 0.925 | 0.9247 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
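For completeness, a hedged inference sketch for an emotion classifier like the one above; the example sentence is invented:

```python
from transformers import pipeline

# Sketch only: load the checkpoint named in the card above.
classifier = pipeline(
    "text-classification",
    model="cyrildever/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you this weekend!"))
# Output shape: [{'label': ..., 'score': ...}]
```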
1,957
[ [ -0.04193115234375, -0.050750732421875, 0.01282501220703125, 0.0192108154296875, -0.016510009765625, -0.0150909423828125, -0.0174713134765625, -0.01190185546875, 0.00440216064453125, 0.0125885009765625, -0.060089111328125, -0.04833984375, -0.0618896484375, 0....
soteroshanthi/distilbert_classifier_newsgroups
2023-05-22T19:45:49.000Z
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
soteroshanthi
null
null
soteroshanthi/distilbert_classifier_newsgroups
0
2
transformers
2023-05-22T19:45:39
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_classifier_newsgroups

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,471
[ [ -0.0386962890625, -0.042022705078125, 0.021240234375, 0.0084228515625, -0.033599853515625, -0.0068359375, -0.01174163818359375, -0.010833740234375, -0.002910614013671875, -0.00620269775390625, -0.041534423828125, -0.050445556640625, -0.067138671875, -0.01020...
michaelfeil/codegen2-1B-gptj
2023-06-22T13:13:46.000Z
[ "transformers", "pytorch", "safetensors", "gptj", "text-generation", "fauxpilot", "gpt-j", "float16", "arxiv:2305.02309", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
michaelfeil
null
null
michaelfeil/codegen2-1B-gptj
1
2
transformers
2023-05-22T20:41:37
---
tags:
- fauxpilot
- gpt-j
- float16
license: apache-2.0
---

# Conversion for FauxPilot, Codegen-2 as GPT-J

It feels like GPT-J, acts like any other GPT-J, but it's Codegen-2 weights under the hood.

Converted on 2023-05-22 using:
```
python /home/michael/fauxpilot/converter/codegen_gptj_convert.py --code_model Salesforce/codegen2-1B /home/michael/tmp-codegen2-1B-gptj
```

# Licence and other remarks:
Licence conditions are intended to be identical to the original huggingface repo.

# Original description
see https://huggingface.co/Salesforce/codegen2-1B

# CodeGen2 (CodeGen2-16B)

## Model description

[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper: [CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.

Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.

Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`.

## How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality.

### Causal sampling

For regular causal sampling, simply generate completions given the context:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-16B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-16B", trust_remote_code=True, revision="main")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

### Infill sampling

For **infill** sampling, we introduce three new special token types:

* `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` where you want to sample infill.
* `<sep>`: Separator token between the suffix and the infilled sample. See below.
* `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.

For example, if we want to generate infill for the following cursor position of a function:

```python
def hello_world():
    |
    return name
```

we construct an input to the model by

1. Inserting a `<mask_1>` token in place of the cursor position
2. Appending a `<sep>` token to indicate the boundary
3. Inserting another `<mask_1>` to indicate which mask we want to infill.

The final snippet looks as follows:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-16B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-16B", trust_remote_code=True, revision="main")

def format(prefix, suffix):
  return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"

prefix = "def hello_world():\n    "
suffix = "    return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])
```

You might want to truncate the model output with `<eom>`.

## Training data

This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup).

Supported languages (and frameworks) are as follows: `c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.

## Training procedure

CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption. Please refer to the paper for more details.

## Evaluation results

We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.

## Intended use and limitations

As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.

## BibTeX entry and citation info

```bibtex
@article{Nijkamp2023codegen2,
  title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
  author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
  journal={arXiv preprint},
  year={2023}
}
```
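The infill walkthrough in the card above ends with "You might want to truncate the model output with `<eom>`" but does not show the step. A minimal sketch, continuing from the variables of the infill example:

```python
# Sketch only: continues from `tokenizer`, `generated_ids`, and `text` in the
# infill example above. Keep the infilled span up to the first "<eom>" marker,
# if the model emitted one.
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
infill = completion.split("<eom>")[0]
print(infill)
```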
4,968
[ [ -0.016876220703125, -0.051513671875, 0.009521484375, 0.0241241455078125, -0.00820159912109375, 0.0057220458984375, -0.015594482421875, -0.040924072265625, -0.005496978759765625, 0.03265380859375, -0.047088623046875, -0.0248565673828125, -0.03533935546875, 0....
filoux/course_distilbert_classifier_newsgroups
2023-05-22T20:59:14.000Z
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
filoux
null
null
filoux/course_distilbert_classifier_newsgroups
0
2
transformers
2023-05-22T20:58:56
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: course_distilbert_classifier_newsgroups
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# course_distilbert_classifier_newsgroups

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,485
[ [ -0.039093017578125, -0.043304443359375, 0.02001953125, 0.005374908447265625, -0.033233642578125, -0.007450103759765625, -0.01239776611328125, -0.01036834716796875, -0.0034580230712890625, -0.005329132080078125, -0.040863037109375, -0.05078125, -0.0653076171875, ...
shinta0615/distilbert-base-uncased-distilled-clinc
2023-05-24T21:58:05.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
shinta0615
null
null
shinta0615/distilbert-base-uncased-distilled-clinc
0
2
transformers
2023-05-22T22:14:31
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: validation
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9483870967741935
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3469
- Accuracy: 0.9484

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.4602 | 0.7535 |
| 2.8954 | 2.0 | 636 | 1.2412 | 0.8558 |
| 2.8954 | 3.0 | 954 | 0.6810 | 0.9126 |
| 1.0885 | 4.0 | 1272 | 0.4728 | 0.9335 |
| 0.455 | 5.0 | 1590 | 0.4025 | 0.9439 |
| 0.455 | 6.0 | 1908 | 0.3754 | 0.9439 |
| 0.2936 | 7.0 | 2226 | 0.3600 | 0.9471 |
| 0.2422 | 8.0 | 2544 | 0.3522 | 0.9468 |
| 0.2422 | 9.0 | 2862 | 0.3493 | 0.9481 |
| 0.2251 | 10.0 | 3180 | 0.3469 | 0.9484 |

### Framework versions

- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
2,237
[ [ -0.032989501953125, -0.03558349609375, 0.0136566162109375, 0.00714111328125, -0.0246734619140625, -0.020294189453125, -0.00815582275390625, -0.004962921142578125, 0.0078582763671875, 0.02301025390625, -0.04376220703125, -0.048980712890625, -0.06036376953125, ...
dan21cg/distilbert-base-uncased-finetuned-clinc
2023-05-22T23:41:47.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
dan21cg
null
null
dan21cg/distilbert-base-uncased-finetuned-clinc
0
2
transformers
2023-05-22T23:11:50
Temporary Redirect. Redirecting to /jupitercoder/distilbert-base-uncased-finetuned-clinc/resolve/main/README.md
111
[ [ -0.045013427734375, -0.053680419921875, 0.05255126953125, -0.01006317138671875, -0.043365478515625, 0.041717529296875, -0.023529052734375, 0.0188751220703125, 0.0484619140625, 0.047821044921875, -0.0518798828125, -0.0548095703125, -0.043365478515625, 0.01434...
wiorz/legal_bert_small_summarized_defined
2023-05-23T23:21:46.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
text-classification
wiorz
null
null
wiorz/legal_bert_small_summarized_defined
0
2
transformers
2023-05-22T23:57:17
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_small_summarized_defined
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# legal_bert_small_summarized_defined

This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8897
- Accuracy: 0.835
- Precision: 0.5
- Recall: 0.1515
- F1: 0.2326
- D-index: 1.5181

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.4467 | 0.835 | 0.0 | 0.0 | 0.0 | 1.4607 |
| No log | 2.0 | 400 | 0.4909 | 0.835 | 0.0 | 0.0 | 0.0 | 1.4607 |
| 0.5409 | 3.0 | 600 | 0.4941 | 0.83 | 0.4545 | 0.1515 | 0.2273 | 1.5113 |
| 0.5409 | 4.0 | 800 | 0.5612 | 0.84 | 0.6 | 0.0909 | 0.1579 | 1.5021 |
| 0.4849 | 5.0 | 1000 | 0.6301 | 0.84 | 0.5714 | 0.1212 | 0.2 | 1.5135 |
| 0.4849 | 6.0 | 1200 | 0.8969 | 0.84 | 0.6 | 0.0909 | 0.1579 | 1.5021 |
| 0.4849 | 7.0 | 1400 | 1.3171 | 0.82 | 0.3636 | 0.1212 | 0.1818 | 1.4865 |
| 0.2104 | 8.0 | 1600 | 1.6653 | 0.775 | 0.2692 | 0.2121 | 0.2373 | 1.4593 |
| 0.2104 | 9.0 | 1800 | 1.7041 | 0.795 | 0.3182 | 0.2121 | 0.2545 | 1.4866 |
| 0.0314 | 10.0 | 2000 | 1.7495 | 0.815 | 0.3571 | 0.1515 | 0.2128 | 1.4911 |
| 0.0314 | 11.0 | 2200 | 1.7627 | 0.815 | 0.3571 | 0.1515 | 0.2128 | 1.4911 |
| 0.0314 | 12.0 | 2400 | 1.7892 | 0.825 | 0.375 | 0.0909 | 0.1463 | 1.4819 |
| 0.0067 | 13.0 | 2600 | 1.8211 | 0.83 | 0.4444 | 0.1212 | 0.1905 | 1.5000 |
| 0.0067 | 14.0 | 2800 | 1.8567 | 0.83 | 0.4444 | 0.1212 | 0.1905 | 1.5000 |
| 0.0 | 15.0 | 3000 | 1.8817 | 0.83 | 0.4444 | 0.1212 | 0.1905 | 1.5000 |
| 0.0 | 16.0 | 3200 | 1.8590 | 0.825 | 0.4167 | 0.1515 | 0.2222 | 1.5046 |
| 0.0 | 17.0 | 3400 | 1.8619 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |
| 0.0014 | 18.0 | 3600 | 1.8744 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |
| 0.0014 | 19.0 | 3800 | 1.8849 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |
| 0.0 | 20.0 | 4000 | 1.8897 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
3,612
[ [ -0.0438232421875, -0.041900634765625, 0.01499176025390625, 0.0081787109375, -0.00832366943359375, -0.011688232421875, 0.00048661231994628906, -0.01264190673828125, 0.044677734375, 0.0271453857421875, -0.043548583984375, -0.0526123046875, -0.046539306640625, ...
kdeeaz/distilbert_classifier_newsgroups
2023-05-23T00:59:39.000Z
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
kdeeaz
null
null
kdeeaz/distilbert_classifier_newsgroups
0
2
transformers
2023-05-23T00:59:24
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_classifier_newsgroups

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,471
[ [ -0.0386962890625, -0.042022705078125, 0.021240234375, 0.0084228515625, -0.033599853515625, -0.0068359375, -0.01174163818359375, -0.010833740234375, -0.002910614013671875, -0.00620269775390625, -0.041534423828125, -0.050445556640625, -0.067138671875, -0.01020...
mauhcs/distilbert-base-uncased-finetuned-emotion
2023-05-24T01:35:18.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
mauhcs
null
null
mauhcs/distilbert-base-uncased-finetuned-emotion
0
2
transformers
2023-05-23T01:56:39
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925
    - name: F1
      type: f1
      value: 0.9249666408719047
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.925
- F1: 0.9250

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8493 | 1.0 | 250 | 0.3120 | 0.9115 | 0.9084 |
| 0.2513 | 2.0 | 500 | 0.2147 | 0.925 | 0.9250 |

### Framework versions

- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,846
[ [ -0.037567138671875, -0.041290283203125, 0.0137176513671875, 0.021820068359375, -0.02581787109375, -0.0187225341796875, -0.0130767822265625, -0.00884246826171875, 0.01076507568359375, 0.007770538330078125, -0.05615234375, -0.0518798828125, -0.05987548828125, ...
AustinCarthy/MixGPT2_100KP_BFall_fromB_20KGen_topP_0.75
2023-05-23T10:17:41.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
AustinCarthy
null
null
AustinCarthy/MixGPT2_100KP_BFall_fromB_20KGen_topP_0.75
0
2
transformers
2023-05-23T02:02:04
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_100KP_BFall_fromB_20KGen_topP_0.75
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# MixGPT2_100KP_BFall_fromB_20KGen_topP_0.75

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0189
- Accuracy: 0.9972
- F1: 0.9700
- Precision: 0.9994
- Recall: 0.9424
- Roc Auc Score: 0.9712
- Tpr At Fpr 0.01: 0.9544

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0051 | 1.0 | 78750 | 0.0225 | 0.9964 | 0.9605 | 0.9961 | 0.9274 | 0.9636 | 0.9226 |
| 0.0044 | 2.0 | 157500 | 0.0219 | 0.9963 | 0.9593 | 0.9985 | 0.923 | 0.9615 | 0.933 |
| 0.0018 | 3.0 | 236250 | 0.0216 | 0.9969 | 0.9669 | 0.9991 | 0.9366 | 0.9683 | 0.9496 |
| 0.0012 | 4.0 | 315000 | 0.0233 | 0.9967 | 0.9646 | 0.9994 | 0.9322 | 0.9661 | 0.9448 |
| 0.0011 | 5.0 | 393750 | 0.0189 | 0.9972 | 0.9700 | 0.9994 | 0.9424 | 0.9712 | 0.9544 |

### Framework versions

- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
2,253
[ [ -0.044830322265625, -0.04193115234375, 0.00714111328125, 0.0156707763671875, -0.0213165283203125, -0.01885986328125, -0.0060882568359375, -0.0201873779296875, 0.0273284912109375, 0.024993896484375, -0.05181884765625, -0.0477294921875, -0.0543212890625, -0.01...
KINGeorge2000/sentiment_roberta_yu
2023-07-07T09:31:20.000Z
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
KINGeorge2000
null
null
KINGeorge2000/sentiment_roberta_yu
0
2
transformers
2023-05-23T05:49:16
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentiment_roberta_yu
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sentiment_roberta_yu

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2580
- Accuracy: 0.6668
- F1: 0.6668

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,173
[ [ -0.0242462158203125, -0.04583740234375, 0.0181884765625, 0.0175628662109375, -0.033111572265625, -0.034454345703125, -0.02008056640625, -0.017822265625, 0.0190582275390625, 0.0163726806640625, -0.057373046875, -0.054718017578125, -0.0511474609375, -0.0042152...
satyamverma/distilbert-base-uncased-finetuned-mrpc
2023-05-23T09:05:28.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
satyamverma
null
null
satyamverma/distilbert-base-uncased-finetuned-mrpc
0
2
transformers
2023-05-23T06:19:04
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: mrpc
      split: validation
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8480392156862745
    - name: F1
      type: f1
      value: 0.8945578231292517
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-mrpc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4304
- Accuracy: 0.8480
- F1: 0.8946

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.3851 | 0.8137 | 0.8652 |
| No log | 2.0 | 460 | 0.3614 | 0.8456 | 0.8948 |
| 0.4318 | 3.0 | 690 | 0.4304 | 0.8480 | 0.8946 |
| 0.4318 | 4.0 | 920 | 0.5555 | 0.8407 | 0.8900 |
| 0.1697 | 5.0 | 1150 | 0.5883 | 0.8456 | 0.8927 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
2,053
[ [ -0.0322265625, -0.040679931640625, 0.006938934326171875, 0.0115509033203125, -0.028594970703125, -0.01904296875, -0.007244110107421875, -0.00506591796875, 0.0121307373046875, 0.01490020751953125, -0.051605224609375, -0.039306640625, -0.05810546875, -0.013931...
songys/distilbert-base-uncased-finetuned-clinc
2023-06-05T06:59:46.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
songys
null
null
songys/distilbert-base-uncased-finetuned-clinc
0
2
transformers
2023-05-23T06:52:33
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: validation
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9180645161290323
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6282 | 2.0 | 636 | 1.8753 | 0.8371 |
| 1.548 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.7952 | 5.0 | 1590 | 0.7720 | 0.9181 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,932
[ [ -0.034637451171875, -0.041107177734375, 0.01275634765625, 0.007160186767578125, -0.0271453857421875, -0.02459716796875, -0.0129241943359375, -0.0089874267578125, 0.0028514862060546875, 0.021942138671875, -0.046356201171875, -0.048095703125, -0.0579833984375, ...
songys/distilbert-base-uncased-distilled-clinc
2023-06-05T09:03:08.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
songys
null
null
songys/distilbert-base-uncased-distilled-clinc
0
2
transformers
2023-05-23T07:23:37
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: validation
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9474193548387096
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2676
- Accuracy: 0.9474

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.1402 | 1.0 | 318 | 3.0979 | 0.7503 |
| 2.3572 | 2.0 | 636 | 1.5361 | 0.8577 |
| 1.1469 | 3.0 | 954 | 0.7670 | 0.9168 |
| 0.5652 | 4.0 | 1272 | 0.4659 | 0.9345 |
| 0.308 | 5.0 | 1590 | 0.3458 | 0.9448 |
| 0.1934 | 6.0 | 1908 | 0.3009 | 0.9448 |
| 0.1368 | 7.0 | 2226 | 0.2781 | 0.9471 |
| 0.1088 | 8.0 | 2544 | 0.2724 | 0.9484 |
| 0.0949 | 9.0 | 2862 | 0.2704 | 0.9468 |
| 0.0897 | 10.0 | 3180 | 0.2676 | 0.9474 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
2,243
[ [ -0.0330810546875, -0.037811279296875, 0.01433563232421875, 0.00614166259765625, -0.023773193359375, -0.019378662109375, -0.01038360595703125, -0.005222320556640625, 0.007251739501953125, 0.02099609375, -0.043243408203125, -0.049407958984375, -0.060516357421875, ...
YakovElm/test2
2023-05-23T08:38:43.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/test2
0
2
transformers
2023-05-23T08:37:33
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: test2
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# test2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
930
[ [ -0.03985595703125, -0.050201416015625, 0.0169830322265625, 0.006561279296875, -0.0462646484375, -0.0287017822265625, -0.01451873779296875, -0.0306396484375, -0.001171112060546875, 0.03192138671875, -0.048065185546875, -0.033782958984375, -0.057220458984375, ...
elftsdmr/5000
2023-05-23T09:11:57.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
elftsdmr
null
null
elftsdmr/5000
0
2
transformers
2023-05-23T08:58:53
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: '5000'
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 5000

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1912
- Accuracy: 0.952
- Precision: 0.9751
- Recall: 0.9287
- F1: 0.9513

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 63 | 0.1936 | 0.939 | 0.9890 | 0.8891 | 0.9364 |
| No log | 2.0 | 126 | 0.2011 | 0.946 | 0.9747 | 0.9168 | 0.9449 |
| No log | 3.0 | 189 | 0.1912 | 0.952 | 0.9751 | 0.9287 | 0.9513 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.1.0
- Tokenizers 0.13.3
1,674
[ [ -0.03436279296875, -0.042633056640625, 0.01015472412109375, 0.0222320556640625, -0.0258331298828125, -0.0274658203125, -0.0216827392578125, -0.0199432373046875, 0.01291656494140625, 0.020416259765625, -0.054840087890625, -0.048980712890625, -0.046600341796875, ...
darrel999/distilbert-base-uncased_emotion_ft_0523
2023-05-23T09:30:38.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
darrel999
null
null
darrel999/distilbert-base-uncased_emotion_ft_0523
0
2
transformers
2023-05-23T09:11:52
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft_0523
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.917
    - name: F1
      type: f1
      value: 0.9167815299071149
    - name: Precision
      type: precision
      value: 0.8882036697297124
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased_emotion_ft_0523

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2694
- Accuracy: 0.917
- F1: 0.9168
- Precision: 0.8882

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| No log | 1.0 | 63 | 0.9564 | 0.641 | 0.5522 | 0.5005 |
| No log | 2.0 | 126 | 0.4544 | 0.8635 | 0.8507 | 0.8714 |
| No log | 3.0 | 189 | 0.2987 | 0.91 | 0.9093 | 0.8805 |
| 0.67 | 4.0 | 252 | 0.2694 | 0.917 | 0.9168 | 0.8882 |

### Framework versions

- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
2,160
[ [ -0.03436279296875, -0.0335693359375, 0.01226043701171875, 0.0210723876953125, -0.0240631103515625, -0.0184326171875, -0.00860595703125, -0.00882720947265625, 0.01050567626953125, 0.0089874267578125, -0.05322265625, -0.052398681640625, -0.059173583984375, -0....
atrytone/scibert_claim_id_2e-05
2023-05-23T10:44:58.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
atrytone
null
null
atrytone/scibert_claim_id_2e-05
0
2
transformers
2023-05-23T10:04:04
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: scibert_claim_id_2e-05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# scibert_claim_id_2e-05

This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0162
- Accuracy: 0.9962
- F1: 0.9880
- Precision: 0.9889
- Recall: 0.9870

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3131 | 1.0 | 666 | 0.2551 | 0.8880 | 0.5518 | 0.7419 | 0.4392 |
| 0.267 | 2.0 | 1332 | 0.1821 | 0.9280 | 0.7636 | 0.7875 | 0.7410 |
| 0.2245 | 3.0 | 1998 | 0.0942 | 0.9695 | 0.9034 | 0.8968 | 0.9101 |
| 0.1135 | 4.0 | 2664 | 0.0514 | 0.9845 | 0.9517 | 0.9339 | 0.9702 |
| 0.0821 | 5.0 | 3330 | 0.0223 | 0.9944 | 0.9822 | 0.9808 | 0.9837 |
| 0.0618 | 6.0 | 3996 | 0.0162 | 0.9962 | 0.9880 | 0.9889 | 0.9870 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,972
[ [ -0.0293426513671875, -0.032623291015625, 0.01313018798828125, 0.0110321044921875, -0.01380157470703125, -0.0228271484375, 0.00039958953857421875, -0.0154876708984375, 0.0291748046875, 0.02081298828125, -0.0489501953125, -0.0472412109375, -0.0474853515625, -0...
Karlpy/LunarLander-v2
2023-05-23T10:05:17.000Z
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
Karlpy
null
null
Karlpy/LunarLander-v2
0
2
stable-baselines3
2023-05-23T10:04:17
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 296.84 +/- 13.13
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

The original card leaves this section as a TODO; a minimal loading sketch (the checkpoint filename is an assumption):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is a guess at the usual naming convention; the card does not state it.
checkpoint = load_from_hub("Karlpy/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
784
[ [ -0.00023484230041503906, -0.02716064453125, 0.017059326171875, 0.023345947265625, -0.00606536865234375, 0.002735137939453125, 0.034454345703125, -0.012115478515625, 0.019866943359375, 0.06500244140625, -0.043212890625, -0.035247802734375, -0.0343017578125, -...
AustinCarthy/MixGPT2_100KP_BFall_fromB_30KGen_topP_0.75
2023-05-24T14:23:52.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
AustinCarthy
null
null
AustinCarthy/MixGPT2_100KP_BFall_fromB_30KGen_topP_0.75
0
2
transformers
2023-05-23T10:18:04
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_100KP_BFall_fromB_30KGen_topP_0.75
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# MixGPT2_100KP_BFall_fromB_30KGen_topP_0.75

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0196
- Accuracy: 0.9968
- F1: 0.9656
- Precision: 0.9994
- Recall: 0.934
- Roc Auc Score: 0.9670
- Tpr At Fpr 0.01: 0.9596

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0021 | 1.0 | 85313 | 0.0313 | 0.9952 | 0.9469 | 0.9984 | 0.9004 | 0.9502 | 0.9028 |
| 0.0031 | 2.0 | 170626 | 0.0236 | 0.9970 | 0.9671 | 0.9987 | 0.9374 | 0.9687 | 0.9466 |
| 0.0039 | 3.0 | 255939 | 0.0182 | 0.9971 | 0.9688 | 0.9981 | 0.9412 | 0.9706 | 0.9394 |
| 0.002 | 4.0 | 341252 | 0.0199 | 0.9973 | 0.9709 | 0.9987 | 0.9446 | 0.9723 | 0.9508 |
| 0.001 | 5.0 | 426565 | 0.0196 | 0.9968 | 0.9656 | 0.9994 | 0.934 | 0.9670 | 0.9596 |

### Framework versions

- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
2,252
[ [ -0.045013427734375, -0.0416259765625, 0.007129669189453125, 0.015899658203125, -0.0218658447265625, -0.019287109375, -0.00568389892578125, -0.0196380615234375, 0.0272369384765625, 0.0236358642578125, -0.051910400390625, -0.045318603515625, -0.053466796875, -...
atrytone/scibert_claim_id_3e-05
2023-05-23T11:22:38.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
atrytone
null
null
atrytone/scibert_claim_id_3e-05
0
2
transformers
2023-05-23T10:45:10
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: scibert_claim_id_3e-05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# scibert_claim_id_3e-05

This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0071
- Accuracy: 0.9980
- F1: 0.9935
- Precision: 0.9957
- Recall: 0.9914

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3163 | 1.0 | 666 | 0.2554 | 0.8884 | 0.5534 | 0.7437 | 0.4407 |
| 0.2673 | 2.0 | 1332 | 0.1671 | 0.9361 | 0.7850 | 0.8309 | 0.7439 |
| 0.2188 | 3.0 | 1998 | 0.0689 | 0.9769 | 0.9268 | 0.9232 | 0.9303 |
| 0.0925 | 4.0 | 2664 | 0.0369 | 0.9879 | 0.9624 | 0.9428 | 0.9827 |
| 0.0635 | 5.0 | 3330 | 0.0109 | 0.9971 | 0.9909 | 0.9928 | 0.9889 |
| 0.038 | 6.0 | 3996 | 0.0071 | 0.9980 | 0.9935 | 0.9957 | 0.9914 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
1,972
[ [ -0.030029296875, -0.03289794921875, 0.013671875, 0.0129241943359375, -0.01213836669921875, -0.0214385986328125, 0.0004851818084716797, -0.0158233642578125, 0.027069091796875, 0.019744873046875, -0.04791259765625, -0.047698974609375, -0.047698974609375, -0.01...
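The scibert_claim_id card logs accuracy, F1, precision, and recall at every epoch. A sketch of the kind of `compute_metrics` callback that yields those four numbers with the 🤗 Trainer; binary averaging is an assumption, since the card does not state how the scores were averaged:

```python
# Sketch: a compute_metrics callback producing the four metrics in the card.
# average="binary" is an assumption (the card describes claim identification,
# which is plausibly a two-class task).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```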
kapilchauhan/fintuned-bert-free-speech-structure
2023-05-25T01:32:35.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
kapilchauhan
null
null
kapilchauhan/fintuned-bert-free-speech-structure
0
2
transformers
2023-05-23T10:47:05
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: fintuned-bert-free-speech-structure results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # fintuned-bert-free-speech-structure This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6916 - Train Sparse Categorical Accuracy: 0.5276 - Validation Loss: 0.6917 - Validation Sparse Categorical Accuracy: 0.5280 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.6709 | 0.5396 | 0.6917 | 0.5280 | 0 | | 0.6916 | 0.5275 | 0.6916 | 0.5280 | 1 | | 0.6916 | 0.5276 | 0.6917 | 0.5280 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
2,042
[ [ -0.046539306640625, -0.051605224609375, 0.01457977294921875, 0.0079498291015625, -0.036651611328125, -0.024932861328125, -0.0189056396484375, -0.0243377685546875, 0.021270751953125, 0.0177764892578125, -0.05755615234375, -0.047760009765625, -0.051727294921875, ...
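The Keras card above stores its optimizer as a serialized config dict rather than as flags. A sketch of reconstructing that optimizer (Adam, learning rate 5e-05, epsilon 1e-07) and compiling a TF BERT classifier with the sparse categorical metrics the card tracks; the base checkpoint comes from the card, but `num_labels=2` is an assumption since the dataset is unknown:

```python
# Sketch: rebuilding the optimizer from the card's serialized config and
# compiling a TF BERT classifier with it. num_labels=2 is assumed.
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-7, amsgrad=False
)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
```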
GhifSmile/distilbert-base-uncased-PINA-dfnew-tuning
2023-05-23T14:44:53.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
GhifSmile
null
null
GhifSmile/distilbert-base-uncased-PINA-dfnew-tuning
0
2
transformers
2023-05-23T11:22:58
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall model-index: - name: distilbert-base-uncased-PINA-dfnew-tuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-PINA-dfnew-tuning This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3403 - Accuracy: 0.9438 - Precision: 0.8528 - Recall: 0.8454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:| | 0.8484 | 1.0 | 1436 | 0.4963 | 0.8896 | 0.7918 | 0.7575 | | 0.3783 | 2.0 | 2872 | 0.4298 | 0.9114 | 0.8288 | 0.7918 | | 0.2649 | 3.0 | 4308 | 0.3808 | 0.9302 | 0.8484 | 0.8148 | | 0.1951 | 4.0 | 5744 | 0.3627 | 0.9363 | 0.8631 | 0.8205 | | 0.149 | 5.0 | 7180 | 0.3403 | 0.9438 | 0.8528 | 0.8454 | | 0.1061 | 6.0 | 8616 | 0.3415 | 0.9455 | 0.8571 | 0.8366 | | 0.0745 | 7.0 | 10052 | 0.3441 | 0.9467 | 0.8554 | 0.8418 | | 0.0452 | 8.0 | 11488 | 0.3850 | 0.9500 | 0.8697 | 0.8711 | | 0.0273 | 9.0 | 12924 | 0.3941 | 0.9506 | 0.8546 | 0.8469 | | 0.0166 | 10.0 | 14360 | 0.4046 | 0.9525 | 0.8621 | 0.8492 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
2,268
[ [ -0.040802001953125, -0.040802001953125, 0.0169677734375, 0.01084136962890625, -0.0203857421875, -0.01261138916015625, -0.00528717041015625, -0.00585174560546875, 0.0189666748046875, 0.019378662109375, -0.05364990234375, -0.049346923828125, -0.054779052734375, ...
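The hyperparameter list in the card above maps almost one-to-one onto `TrainingArguments`. A sketch of that mapping; the output directory is a placeholder, and per-epoch evaluation is an assumption inferred from the card's epoch-level results table:

```python
# Sketch: TrainingArguments mirroring the hyperparameters the card lists.
# output_dir is hypothetical; evaluation_strategy="epoch" is inferred from
# the per-epoch validation rows in the card's results table.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-pina-dfnew",   # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)
# Trainer(model=..., args=args, train_dataset=..., eval_dataset=...) would
# then reproduce a run of this shape, given the (unpublished) dataset.
```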
Sejan/bert-finetuned-mrpc
2023-05-23T12:25:07.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
Sejan
null
null
Sejan/bert-finetuned-mrpc
0
2
transformers
2023-05-23T12:20:01
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-finetuned-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
1,017
[ [ -0.046295166015625, -0.046295166015625, 0.005908966064453125, 0.00940704345703125, -0.040863037109375, -0.031463623046875, -0.020477294921875, -0.0191802978515625, 0.01096343994140625, 0.02960205078125, -0.06121826171875, -0.0280303955078125, -0.045806884765625,...
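The bert-finetuned-mrpc card reports no metrics and an "unknown dataset", but its name suggests the MRPC paraphrase task, which takes sentence pairs. A sketch of pair-input inference with the text-classification pipeline; the sentences are invented:

```python
# Sketch: paraphrase-style inference with a sentence pair. The pipeline
# accepts {"text", "text_pair"} dicts for two-sentence tasks like MRPC.
from transformers import pipeline

clf = pipeline("text-classification", model="Sejan/bert-finetuned-mrpc")
print(clf({
    "text": "The storm closed the airport.",
    "text_pair": "Flights were halted because of the storm.",
}))
```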
bhattronak14/distilbert-base-uncased-finetuned-Pre_requisite_finder
2023-05-24T13:41:32.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
bhattronak14
null
null
bhattronak14/distilbert-base-uncased-finetuned-Pre_requisite_finder
0
2
transformers
2023-05-23T12:22:06
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-Pre_requisite_finder results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-Pre_requisite_finder This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0416 | 1.0 | 810 | 0.0008 | 0.9997 | | 0.0013 | 2.0 | 1620 | 0.0000 | 1.0 | | 0.0 | 3.0 | 2430 | 0.0000 | 1.0 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,535
[ [ -0.035308837890625, -0.04522705078125, 0.01824951171875, 0.017578125, -0.0243988037109375, -0.0239105224609375, -0.0067291259765625, -0.005992889404296875, 0.00417327880859375, 0.020721435546875, -0.055084228515625, -0.046966552734375, -0.06036376953125, -0....
VinsmokeMir/FineTuning_Method_2_SC
2023-05-23T14:49:02.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
VinsmokeMir
null
null
VinsmokeMir/FineTuning_Method_2_SC
0
2
transformers
2023-05-23T13:55:32
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: FineTuning_Method_2_SC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FineTuning_Method_2_SC This model is a fine-tuned version of [rafsankabir/Pretrained_E13_Method2](https://huggingface.co/rafsankabir/Pretrained_E13_Method2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3223 - Accuracy: 0.6790 - F1 Macro: 0.6487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | No log | 0.32 | 500 | 1.0745 | 0.3976 | 0.1896 | | 1.0543 | 0.64 | 1000 | 0.9059 | 0.5967 | 0.4614 | | 1.0543 | 0.95 | 1500 | 0.8259 | 0.6414 | 0.5633 | | 0.8389 | 1.27 | 2000 | 0.8177 | 0.6394 | 0.5715 | | 0.8389 | 1.59 | 2500 | 0.8269 | 0.6356 | 0.5724 | | 0.7713 | 1.91 | 3000 | 0.7916 | 0.6631 | 0.6238 | | 0.7713 | 2.23 | 3500 | 0.7996 | 0.6745 | 0.6155 | | 0.6734 | 2.54 | 4000 | 0.7921 | 0.6624 | 0.6307 | | 0.6734 | 2.86 | 4500 | 0.7743 | 0.6726 | 0.6459 | | 0.6309 | 3.18 | 5000 | 0.8343 | 0.6803 | 0.6382 | | 0.6309 | 3.5 | 5500 | 0.8233 | 0.6784 | 0.6390 | | 0.5582 | 3.82 | 6000 | 0.8678 | 0.6631 | 0.6273 | | 0.5582 | 4.13 | 6500 | 0.8621 | 0.6758 | 0.6368 | | 0.4988 | 4.45 | 7000 | 0.9389 | 0.6720 | 0.6386 | | 0.4988 | 4.77 | 7500 | 0.9067 | 0.6918 | 0.6505 | | 0.4885 | 5.09 | 8000 | 0.9116 | 0.6937 | 0.6583 | | 0.4885 | 5.41 | 8500 | 1.0357 | 0.6822 | 0.6459 | | 0.427 | 5.73 | 9000 | 0.9428 | 0.6847 | 0.6479 | | 0.427 | 6.04 | 9500 | 1.0233 | 0.6752 | 0.6531 | | 0.4034 | 6.36 | 10000 | 1.1578 | 0.6835 | 0.6515 | | 0.4034 | 6.68 | 10500 | 1.1870 | 0.6790 | 0.6545 | | 0.4053 | 7.0 | 11000 | 1.0370 | 0.7007 | 0.6651 | | 0.4053 | 7.32 | 11500 | 1.2087 | 0.6822 | 0.6497 | | 0.3545 | 7.63 | 12000 | 1.2255 | 0.6847 | 0.6605 | | 0.3545 | 7.95 | 12500 | 1.2710 | 0.6905 | 0.6609 | | 0.3437 | 8.27 | 13000 | 1.3646 | 0.6918 | 0.6618 | | 0.3437 | 8.59 | 13500 | 1.3767 | 0.6879 | 0.6563 | | 0.3407 | 8.91 | 14000 | 1.2705 | 0.6796 | 0.6506 | | 0.3407 | 9.22 | 14500 | 1.4605 | 0.6803 | 0.6496 | | 0.2876 | 9.54 | 15000 | 1.4202 | 0.6860 | 0.6555 | | 0.2876 | 9.86 | 15500 | 1.4151 | 0.6847 | 0.6517 | | 0.3035 | 10.18 | 16000 | 1.4536 | 0.6713 | 0.6514 | | 0.3035 | 10.5 | 16500 | 1.4806 | 0.6828 | 0.6469 | | 0.2733 | 10.81 | 17000 | 1.4596 | 0.6899 | 0.6552 | | 0.2733 | 11.13 | 17500 | 1.6183 | 0.6886 | 0.6557 | | 0.2562 | 11.45 | 18000 | 1.6054 | 0.6771 | 0.6591 | | 0.2562 | 11.77 | 18500 | 1.5966 | 0.6701 | 0.6503 | | 0.2582 | 12.09 | 19000 | 1.5659 | 0.6822 | 0.6531 | | 0.2582 | 12.4 | 19500 | 1.6146 | 0.6867 | 0.6575 | | 0.2368 | 12.72 | 20000 | 1.6207 | 0.6899 | 0.6629 | | 0.2368 | 13.04 | 20500 | 1.5220 | 0.6918 | 0.6640 | | 0.245 | 13.36 | 21000 | 1.6572 | 0.6720 | 0.6489 | | 0.245 | 13.68 | 21500 | 1.6443 | 0.6860 | 0.6590 | | 0.2226 | 
13.99 | 22000 | 1.6238 | 0.6847 | 0.6589 | | 0.2226 | 14.31 | 22500 | 1.7241 | 0.6777 | 0.6521 | | 0.2117 | 14.63 | 23000 | 1.6134 | 0.6867 | 0.6580 | | 0.2117 | 14.95 | 23500 | 1.6723 | 0.6911 | 0.6618 | | 0.2056 | 15.27 | 24000 | 1.6257 | 0.6892 | 0.6529 | | 0.2056 | 15.59 | 24500 | 1.7072 | 0.6796 | 0.6531 | | 0.1859 | 15.9 | 25000 | 1.7174 | 0.6771 | 0.6554 | | 0.1859 | 16.22 | 25500 | 1.6951 | 0.6879 | 0.6555 | | 0.1725 | 16.54 | 26000 | 1.7240 | 0.6905 | 0.6632 | | 0.1725 | 16.86 | 26500 | 1.7126 | 0.6879 | 0.6608 | | 0.1817 | 17.18 | 27000 | 1.7949 | 0.6847 | 0.6520 | | 0.1817 | 17.49 | 27500 | 1.7694 | 0.6911 | 0.6622 | | 0.1617 | 17.81 | 28000 | 1.7891 | 0.6828 | 0.6527 | | 0.1617 | 18.13 | 28500 | 1.7860 | 0.6790 | 0.6526 | | 0.1628 | 18.45 | 29000 | 1.8127 | 0.6867 | 0.6605 | | 0.1628 | 18.77 | 29500 | 1.7317 | 0.6892 | 0.6610 | | 0.1736 | 19.08 | 30000 | 1.7273 | 0.6899 | 0.6569 | | 0.1736 | 19.4 | 30500 | 1.7853 | 0.6854 | 0.6584 | | 0.1441 | 19.72 | 31000 | 1.7866 | 0.6918 | 0.6624 | | 0.1441 | 20.04 | 31500 | 1.7842 | 0.6873 | 0.6580 | | 0.1392 | 20.36 | 32000 | 1.8669 | 0.6860 | 0.6597 | | 0.1392 | 20.67 | 32500 | 1.8392 | 0.6899 | 0.6639 | | 0.159 | 20.99 | 33000 | 1.8412 | 0.6784 | 0.6552 | | 0.159 | 21.31 | 33500 | 1.8673 | 0.6854 | 0.6584 | | 0.1275 | 21.63 | 34000 | 1.8622 | 0.6854 | 0.6571 | | 0.1275 | 21.95 | 34500 | 1.8622 | 0.6796 | 0.6583 | | 0.1216 | 22.26 | 35000 | 1.9509 | 0.6854 | 0.6604 | | 0.1216 | 22.58 | 35500 | 1.9425 | 0.6809 | 0.6550 | | 0.1351 | 22.9 | 36000 | 1.9496 | 0.6784 | 0.6559 | | 0.1351 | 23.22 | 36500 | 1.9685 | 0.6847 | 0.6582 | | 0.1221 | 23.54 | 37000 | 1.9112 | 0.6911 | 0.6642 | | 0.1221 | 23.85 | 37500 | 1.9341 | 0.6726 | 0.6526 | | 0.1155 | 24.17 | 38000 | 1.9573 | 0.6899 | 0.6614 | | 0.1155 | 24.49 | 38500 | 1.9853 | 0.6873 | 0.6580 | | 0.1139 | 24.81 | 39000 | 1.9915 | 0.6790 | 0.6533 | | 0.1139 | 25.13 | 39500 | 1.9997 | 0.6796 | 0.6539 | | 0.1166 | 25.45 | 40000 | 1.9994 | 0.6847 | 0.6592 | | 0.1166 | 25.76 | 40500 | 1.9848 | 0.6745 | 0.6513 | | 0.1128 | 26.08 | 41000 | 2.0095 | 0.6867 | 0.6578 | | 0.1128 | 26.4 | 41500 | 2.0585 | 0.6822 | 0.6547 | | 0.1048 | 26.72 | 42000 | 2.0293 | 0.6777 | 0.6510 | | 0.1048 | 27.04 | 42500 | 2.0797 | 0.6758 | 0.6512 | | 0.1 | 27.35 | 43000 | 2.1162 | 0.6822 | 0.6544 | | 0.1 | 27.67 | 43500 | 2.0569 | 0.6835 | 0.6538 | | 0.1106 | 27.99 | 44000 | 2.0991 | 0.6828 | 0.6565 | | 0.1106 | 28.31 | 44500 | 2.0976 | 0.6841 | 0.6563 | | 0.0886 | 28.63 | 45000 | 2.1305 | 0.6854 | 0.6532 | | 0.0886 | 28.94 | 45500 | 2.1015 | 0.6867 | 0.6564 | | 0.1027 | 29.26 | 46000 | 2.1105 | 0.6867 | 0.6559 | | 0.1027 | 29.58 | 46500 | 2.1396 | 0.6765 | 0.6499 | | 0.1057 | 29.9 | 47000 | 2.1237 | 0.6790 | 0.6501 | | 0.1057 | 30.22 | 47500 | 2.1849 | 0.6790 | 0.6518 | | 0.0876 | 30.53 | 48000 | 2.1346 | 0.6841 | 0.6533 | | 0.0876 | 30.85 | 48500 | 2.1441 | 0.6828 | 0.6540 | | 0.0856 | 31.17 | 49000 | 2.1528 | 0.6911 | 0.6600 | | 0.0856 | 31.49 | 49500 | 2.1725 | 0.6847 | 0.6509 | | 0.0869 | 31.81 | 50000 | 2.2085 | 0.6771 | 0.6503 | | 0.0869 | 32.12 | 50500 | 2.2606 | 0.6688 | 0.6434 | | 0.0848 | 32.44 | 51000 | 2.2510 | 0.6745 | 0.6451 | | 0.0848 | 32.76 | 51500 | 2.2528 | 0.6739 | 0.6496 | | 0.0816 | 33.08 | 52000 | 2.2532 | 0.6758 | 0.6503 | | 0.0816 | 33.4 | 52500 | 2.2356 | 0.6803 | 0.6500 | | 0.0793 | 33.72 | 53000 | 2.2579 | 0.6745 | 0.6483 | | 0.0793 | 34.03 | 53500 | 2.2126 | 0.6816 | 0.6520 | | 0.0767 | 34.35 | 54000 | 2.2504 | 0.6803 | 0.6497 | | 0.0767 | 34.67 | 54500 | 2.2601 | 0.6803 | 0.6524 | | 0.0844 | 
34.99 | 55000 | 2.2785 | 0.6733 | 0.6470 | | 0.0844 | 35.31 | 55500 | 2.2756 | 0.6784 | 0.6520 | | 0.0755 | 35.62 | 56000 | 2.2813 | 0.6816 | 0.6542 | | 0.0755 | 35.94 | 56500 | 2.2752 | 0.6803 | 0.6518 | | 0.077 | 36.26 | 57000 | 2.2815 | 0.6796 | 0.6518 | | 0.077 | 36.58 | 57500 | 2.2861 | 0.6803 | 0.6514 | | 0.0752 | 36.9 | 58000 | 2.2929 | 0.6771 | 0.6505 | | 0.0752 | 37.21 | 58500 | 2.2859 | 0.6816 | 0.6537 | | 0.0698 | 37.53 | 59000 | 2.3117 | 0.6796 | 0.6525 | | 0.0698 | 37.85 | 59500 | 2.3038 | 0.6816 | 0.6511 | | 0.0613 | 38.17 | 60000 | 2.3176 | 0.6765 | 0.6477 | | 0.0613 | 38.49 | 60500 | 2.3131 | 0.6796 | 0.6493 | | 0.0706 | 38.8 | 61000 | 2.3161 | 0.6777 | 0.6477 | | 0.0706 | 39.12 | 61500 | 2.3127 | 0.6784 | 0.6484 | | 0.0678 | 39.44 | 62000 | 2.3174 | 0.6765 | 0.6467 | | 0.0678 | 39.76 | 62500 | 2.3223 | 0.6790 | 0.6487 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
10,654
[ [ -0.0484619140625, -0.0362548828125, 0.017578125, 0.006458282470703125, -0.00391387939453125, 0.006481170654296875, 0.001689910888671875, 0.005908966064453125, 0.0504150390625, 0.0288238525390625, -0.042938232421875, -0.03668212890625, -0.038787841796875, -0....
Xenova/all-roberta-large-v1
2023-09-01T21:43:11.000Z
[ "transformers.js", "onnx", "roberta", "fill-mask", "feature-extraction", "region:us" ]
feature-extraction
Xenova
null
null
Xenova/all-roberta-large-v1
0
2
transformers.js
2023-05-23T14:27:43
--- library_name: transformers.js pipeline_tag: feature-extraction --- https://huggingface.co/sentence-transformers/all-roberta-large-v1 with ONNX weights to be compatible with Transformers.js. Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
552
[ [ -0.023193359375, -0.0028228759765625, 0.041534423828125, 0.054229736328125, -0.006961822509765625, -0.0164337158203125, -0.0186767578125, -0.0164642333984375, 0.0298614501953125, 0.04754638671875, -0.057098388671875, -0.030517578125, -0.0526123046875, 0.0117...
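The Xenova card above recommends converting checkpoints to ONNX with 🤗 Optimum before structuring the repo for Transformers.js. A sketch of that export from Python; note the export keyword has changed across Optimum releases (older versions used `from_transformers=True`), so treat this as indicative rather than exact:

```python
# Sketch: exporting the upstream PyTorch checkpoint to ONNX with Optimum,
# then saving the weights into an `onnx` folder as the card describes.
from optimum.onnxruntime import ORTModelForFeatureExtraction

ort_model = ORTModelForFeatureExtraction.from_pretrained(
    "sentence-transformers/all-roberta-large-v1", export=True
)
ort_model.save_pretrained("onnx/")
```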
Xenova/paraphrase-multilingual-mpnet-base-v2
2023-05-30T22:29:55.000Z
[ "transformers.js", "onnx", "xlm-roberta", "feature-extraction", "region:us" ]
feature-extraction
Xenova
null
null
Xenova/paraphrase-multilingual-mpnet-base-v2
1
2
transformers.js
2023-05-23T14:31:51
--- library_name: "transformers.js" --- https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2 with ONNX weights to be compatible with Transformers.js. Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
538
[ [ -0.02069091796875, 0.0092620849609375, 0.038421630859375, 0.0572509765625, -0.01311492919921875, -0.01100921630859375, -0.002773284912109375, 0.0010585784912109375, 0.021881103515625, 0.052703857421875, -0.046844482421875, -0.022186279296875, -0.043609619140625,...
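For the record above, the Python counterpart of what the ONNX repo enables in the browser is simply encoding sentences with the upstream sentence-transformers checkpoint. A short sketch with made-up inputs:

```python
# Sketch: sentence embeddings from the upstream checkpoint the card links to.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
)
embeddings = model.encode(["Hello world", "Bonjour le monde"])  # made-up inputs
print(embeddings.shape)  # (2, 768) for this mpnet-base architecture
```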
P3ps/bert-finetuned-cross-ner
2023-05-24T11:36:32.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
P3ps
null
null
P3ps/bert-finetuned-cross-ner
0
2
transformers
2023-05-23T15:00:41
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-cross-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-cross-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1761 - Precision: 0.8267 - Recall: 0.8619 - F1: 0.8439 - Accuracy: 0.9561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2037 | 1.0 | 2607 | 0.1973 | 0.7633 | 0.8122 | 0.7870 | 0.9449 | | 0.1264 | 2.0 | 5214 | 0.1709 | 0.8102 | 0.8484 | 0.8289 | 0.9542 | | 0.0817 | 3.0 | 7821 | 0.1761 | 0.8267 | 0.8619 | 0.8439 | 0.9561 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
1,723
[ [ -0.043975830078125, -0.047821044921875, 0.006214141845703125, 0.007175445556640625, -0.0292510986328125, -0.036956787109375, -0.0157318115234375, -0.0195465087890625, 0.022186279296875, 0.02520751953125, -0.06024169921875, -0.045684814453125, -0.04791259765625, ...
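Unlike most records in this stretch of the dump, bert-finetuned-cross-ner is a token-classification model. A sketch of entity tagging with it; the sentence and the choice of `aggregation_strategy` are illustrative, not from the card:

```python
# Sketch: NER inference with grouped entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="P3ps/bert-finetuned-cross-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```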
VinsmokeMir/Fine_Tuning_SC_Method_2_Epoch_13B
2023-05-23T15:44:19.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
VinsmokeMir
null
null
VinsmokeMir/Fine_Tuning_SC_Method_2_Epoch_13B
0
2
transformers
2023-05-23T15:28:29
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: Fine_Tuning_SC_Method_2_Epoch_13B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fine_Tuning_SC_Method_2_Epoch_13B This model is a fine-tuned version of [rafsankabir/Pretrained_E13B_Method2](https://huggingface.co/rafsankabir/Pretrained_E13B_Method2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4244 - Accuracy: 0.6873 - F1 Macro: 0.6544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | No log | 1.27 | 500 | 1.0673 | 0.3976 | 0.1896 | | 1.0138 | 2.54 | 1000 | 0.8217 | 0.6331 | 0.5569 | | 1.0138 | 3.82 | 1500 | 0.7889 | 0.6662 | 0.6049 | | 0.7305 | 5.09 | 2000 | 0.7821 | 0.6765 | 0.6382 | | 0.7305 | 6.36 | 2500 | 0.7867 | 0.6918 | 0.6457 | | 0.5856 | 7.63 | 3000 | 0.8236 | 0.6892 | 0.6623 | | 0.5856 | 8.91 | 3500 | 0.8490 | 0.6835 | 0.6551 | | 0.4723 | 10.18 | 4000 | 0.9057 | 0.6854 | 0.6533 | | 0.4723 | 11.45 | 4500 | 0.9237 | 0.6796 | 0.6455 | | 0.3896 | 12.72 | 5000 | 0.9814 | 0.6879 | 0.6499 | | 0.3896 | 13.99 | 5500 | 0.9984 | 0.6745 | 0.6487 | | 0.3299 | 15.27 | 6000 | 1.0226 | 0.6822 | 0.6545 | | 0.3299 | 16.54 | 6500 | 1.0579 | 0.6758 | 0.6485 | | 0.2783 | 17.81 | 7000 | 1.0932 | 0.6796 | 0.6487 | | 0.2783 | 19.08 | 7500 | 1.1047 | 0.6950 | 0.6609 | | 0.2455 | 20.36 | 8000 | 1.1643 | 0.6860 | 0.6559 | | 0.2455 | 21.63 | 8500 | 1.1953 | 0.6841 | 0.6548 | | 0.2181 | 22.9 | 9000 | 1.2043 | 0.6835 | 0.6516 | | 0.2181 | 24.17 | 9500 | 1.2603 | 0.6867 | 0.6502 | | 0.1894 | 25.45 | 10000 | 1.2652 | 0.6860 | 0.6552 | | 0.1894 | 26.72 | 10500 | 1.2860 | 0.6790 | 0.6474 | | 0.1757 | 27.99 | 11000 | 1.2892 | 0.6854 | 0.6541 | | 0.1757 | 29.26 | 11500 | 1.3400 | 0.6803 | 0.6496 | | 0.1599 | 30.53 | 12000 | 1.3630 | 0.6828 | 0.6493 | | 0.1599 | 31.81 | 12500 | 1.3688 | 0.6854 | 0.6538 | | 0.1531 | 33.08 | 13000 | 1.3962 | 0.6854 | 0.6534 | | 0.1531 | 34.35 | 13500 | 1.4021 | 0.6841 | 0.6523 | | 0.1452 | 35.62 | 14000 | 1.4029 | 0.6847 | 0.6524 | | 0.1452 | 36.9 | 14500 | 1.4130 | 0.6886 | 0.6562 | | 0.1391 | 38.17 | 15000 | 1.4203 | 0.6879 | 0.6553 | | 0.1391 | 39.44 | 15500 | 1.4244 | 0.6873 | 0.6544 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
3,724
[ [ -0.048248291015625, -0.03875732421875, 0.0099945068359375, 0.00829315185546875, -0.01190948486328125, -0.008697509765625, -0.00424957275390625, -0.0106353759765625, 0.0330810546875, 0.0262603759765625, -0.05267333984375, -0.046966552734375, -0.046844482421875, ...
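Several VinsmokeMir cards in this dump, including the one above, pair a linear scheduler with `lr_scheduler_warmup_steps: 500`. A sketch of that schedule in isolation; the dummy parameter stands in for real model weights, and 15500 matches the final step in this card's training table:

```python
# Sketch: linear decay with 500 warmup steps, as listed in the card.
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=3e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=15500
)
for _ in range(3):  # a few steps just to show the warmup ramp
    optimizer.step()
    scheduler.step()
    print(scheduler.get_last_lr())
```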
YakovElm/Apache5Classic_with_cleaning
2023-05-23T15:56:00.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_with_cleaning
0
2
transformers
2023-05-23T15:55:23
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_with_cleaning results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_with_cleaning This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2193 - Train Accuracy: 0.9235 - Validation Loss: 0.6107 - Validation Accuracy: 0.8194 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3142 | 0.9001 | 0.4816 | 0.8233 | 0 | | 0.2820 | 0.9099 | 0.4622 | 0.8233 | 1 | | 0.2193 | 0.9235 | 0.6107 | 0.8194 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,800
[ [ -0.047943115234375, -0.045257568359375, 0.0209503173828125, -0.0019140243530273438, -0.035858154296875, -0.03179931640625, -0.0160064697265625, -0.0297393798828125, 0.006927490234375, 0.018096923828125, -0.053314208984375, -0.0498046875, -0.053070068359375, ...
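The YakovElm Keras cards, starting with the one above, reuse a common recipe but add `clipnorm: 1.0` and disable jit compilation, unlike the earlier Keras record in this dump. A sketch of just that optimizer configuration:

```python
# Sketch: the Adam configuration serialized in the YakovElm cards, including
# per-tensor gradient-norm clipping.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    amsgrad=False,
    clipnorm=1.0,  # clip each gradient tensor to an L2 norm of at most 1.0
)
```

Clipping the gradient norm is a common guard against unstable updates when fine-tuning BERT-size models at small batch sizes, which may be why these runs enable it.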
VinsmokeMir/Hinton_SC_BS32_LR3e5
2023-05-23T16:22:55.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
VinsmokeMir
null
null
VinsmokeMir/Hinton_SC_BS32_LR3e5
0
2
transformers
2023-05-23T16:07:35
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: Hinton_SC_BS32_LR3e5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hinton_SC_BS32_LR3e5 This model is a fine-tuned version of [rafsankabir/Pretrained_Final_E6](https://huggingface.co/rafsankabir/Pretrained_Final_E6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4069 - Accuracy: 0.6790 - F1 Macro: 0.6473 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | No log | 1.27 | 500 | 1.0674 | 0.3976 | 0.1896 | | 1.0108 | 2.54 | 1000 | 0.8270 | 0.6426 | 0.5565 | | 1.0108 | 3.82 | 1500 | 0.8016 | 0.6522 | 0.5753 | | 0.7423 | 5.09 | 2000 | 0.7922 | 0.6611 | 0.6099 | | 0.7423 | 6.36 | 2500 | 0.8057 | 0.6726 | 0.6155 | | 0.6098 | 7.63 | 3000 | 0.8303 | 0.6860 | 0.6456 | | 0.6098 | 8.91 | 3500 | 0.8322 | 0.6847 | 0.6481 | | 0.5049 | 10.18 | 4000 | 0.8775 | 0.6994 | 0.6603 | | 0.5049 | 11.45 | 4500 | 0.9122 | 0.6956 | 0.6510 | | 0.4132 | 12.72 | 5000 | 0.9451 | 0.6879 | 0.6564 | | 0.4132 | 13.99 | 5500 | 0.9600 | 0.6809 | 0.6433 | | 0.3571 | 15.27 | 6000 | 1.0050 | 0.6854 | 0.6515 | | 0.3571 | 16.54 | 6500 | 1.0671 | 0.6847 | 0.6496 | | 0.2952 | 17.81 | 7000 | 1.0836 | 0.6873 | 0.6525 | | 0.2952 | 19.08 | 7500 | 1.0993 | 0.6873 | 0.6558 | | 0.2577 | 20.36 | 8000 | 1.1465 | 0.6924 | 0.6613 | | 0.2577 | 21.63 | 8500 | 1.2137 | 0.6828 | 0.6541 | | 0.2314 | 22.9 | 9000 | 1.1916 | 0.6924 | 0.6610 | | 0.2314 | 24.17 | 9500 | 1.2445 | 0.6860 | 0.6525 | | 0.2044 | 25.45 | 10000 | 1.2564 | 0.6867 | 0.6554 | | 0.2044 | 26.72 | 10500 | 1.2770 | 0.6828 | 0.6509 | | 0.1899 | 27.99 | 11000 | 1.3005 | 0.6854 | 0.6553 | | 0.1899 | 29.26 | 11500 | 1.3149 | 0.6816 | 0.6519 | | 0.1777 | 30.53 | 12000 | 1.3320 | 0.6835 | 0.6512 | | 0.1777 | 31.81 | 12500 | 1.3456 | 0.6847 | 0.6538 | | 0.1652 | 33.08 | 13000 | 1.3620 | 0.6796 | 0.6486 | | 0.1652 | 34.35 | 13500 | 1.3808 | 0.6796 | 0.6500 | | 0.1544 | 35.62 | 14000 | 1.3878 | 0.6841 | 0.6533 | | 0.1544 | 36.9 | 14500 | 1.3989 | 0.6790 | 0.6490 | | 0.1521 | 38.17 | 15000 | 1.4031 | 0.6822 | 0.6501 | | 0.1521 | 39.44 | 15500 | 1.4069 | 0.6790 | 0.6473 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
3,690
[ [ -0.04571533203125, -0.03302001953125, 0.0079498291015625, 0.00583648681640625, -0.0081024169921875, -0.00862884521484375, -0.0003750324249267578, -0.007701873779296875, 0.036285400390625, 0.0247650146484375, -0.0491943359375, -0.046905517578125, -0.0469055175781...
VinsmokeMir/Method2_E13B_SC_BS4_LR3e5
2023-05-23T18:18:39.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
VinsmokeMir
null
null
VinsmokeMir/Method2_E13B_SC_BS4_LR3e5
0
2
transformers
2023-05-23T16:27:50
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: Method2_E13B_SC_BS4_LR3e5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Method2_E13B_SC_BS4_LR3e5 This model is a fine-tuned version of [rafsankabir/Pretrained_E13B_Method2](https://huggingface.co/rafsankabir/Pretrained_E13B_Method2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5641 - Accuracy: 0.6803 - F1 Macro: 0.6446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:------:|:---------------:|:--------:|:--------:| | No log | 0.16 | 500 | 1.0767 | 0.3976 | 0.1896 | | 1.075 | 0.32 | 1000 | 1.0769 | 0.3976 | 0.1896 | | 1.075 | 0.48 | 1500 | 1.0183 | 0.5539 | 0.4151 | | 1.0246 | 0.64 | 2000 | 0.8956 | 0.5916 | 0.4745 | | 1.0246 | 0.8 | 2500 | 0.8743 | 0.6082 | 0.5120 | | 0.8948 | 0.95 | 3000 | 0.8365 | 0.6216 | 0.5546 | | 0.8948 | 1.11 | 3500 | 0.8635 | 0.6311 | 0.5752 | | 0.8069 | 1.27 | 4000 | 0.9060 | 0.6158 | 0.5398 | | 0.8069 | 1.43 | 4500 | 0.8231 | 0.6388 | 0.5924 | | 0.7969 | 1.59 | 5000 | 0.8368 | 0.6331 | 0.5935 | | 0.7969 | 1.75 | 5500 | 0.8262 | 0.6477 | 0.5981 | | 0.7804 | 1.91 | 6000 | 0.8299 | 0.6579 | 0.6208 | | 0.7804 | 2.07 | 6500 | 0.8197 | 0.6579 | 0.6364 | | 0.715 | 2.23 | 7000 | 0.8498 | 0.6624 | 0.5955 | | 0.715 | 2.39 | 7500 | 0.8357 | 0.6669 | 0.6218 | | 0.6953 | 2.54 | 8000 | 0.8438 | 0.6560 | 0.6269 | | 0.6953 | 2.7 | 8500 | 0.8528 | 0.6669 | 0.6022 | | 0.7074 | 2.86 | 9000 | 0.8009 | 0.6745 | 0.6457 | | 0.7074 | 3.02 | 9500 | 0.8222 | 0.6720 | 0.6402 | | 0.6598 | 3.18 | 10000 | 0.9347 | 0.6650 | 0.6062 | | 0.6598 | 3.34 | 10500 | 0.9053 | 0.6803 | 0.6510 | | 0.6634 | 3.5 | 11000 | 0.8902 | 0.6720 | 0.6434 | | 0.6634 | 3.66 | 11500 | 0.9370 | 0.6733 | 0.6415 | | 0.6182 | 3.82 | 12000 | 0.8914 | 0.6745 | 0.6519 | | 0.6182 | 3.98 | 12500 | 0.8938 | 0.6752 | 0.6389 | | 0.6043 | 4.13 | 13000 | 1.0143 | 0.6745 | 0.6413 | | 0.6043 | 4.29 | 13500 | 1.0768 | 0.6765 | 0.6543 | | 0.587 | 4.45 | 14000 | 1.1154 | 0.6790 | 0.6421 | | 0.587 | 4.61 | 14500 | 1.1295 | 0.6828 | 0.6525 | | 0.6345 | 4.77 | 15000 | 1.1210 | 0.6822 | 0.6390 | | 0.6345 | 4.93 | 15500 | 1.0062 | 0.6726 | 0.6380 | | 0.6 | 5.09 | 16000 | 1.1504 | 0.6739 | 0.6369 | | 0.6 | 5.25 | 16500 | 1.3298 | 0.6733 | 0.6280 | | 0.5667 | 5.41 | 17000 | 1.2751 | 0.6662 | 0.6308 | | 0.5667 | 5.57 | 17500 | 1.4070 | 0.6567 | 0.6069 | | 0.614 | 5.73 | 18000 | 1.2956 | 0.6694 | 0.6284 | | 0.614 | 5.88 | 18500 | 1.2795 | 0.6822 | 0.6382 | | 0.5651 | 6.04 | 19000 | 1.3021 | 0.6739 | 0.6478 | | 0.5651 | 6.2 | 19500 | 1.4076 | 0.6682 | 0.6333 | | 0.5347 | 6.36 | 20000 | 1.3917 | 0.6733 | 0.6344 | | 0.5347 | 6.52 | 20500 | 1.4203 | 0.6790 | 0.6285 | | 0.5278 | 6.68 | 21000 | 1.3340 | 0.6860 | 0.6628 | | 0.5278 | 6.84 | 21500 | 1.3521 | 0.6873 | 0.6489 | | 0.5796 | 7.0 | 22000 | 
1.2835 | 0.6847 | 0.6567 | | 0.5796 | 7.16 | 22500 | 1.4437 | 0.6879 | 0.6563 | | 0.4627 | 7.32 | 23000 | 1.5052 | 0.6835 | 0.6435 | | 0.4627 | 7.47 | 23500 | 1.4991 | 0.6707 | 0.6434 | | 0.518 | 7.63 | 24000 | 1.5436 | 0.6656 | 0.6403 | | 0.518 | 7.79 | 24500 | 1.5247 | 0.6784 | 0.6433 | | 0.5373 | 7.95 | 25000 | 1.4743 | 0.6835 | 0.6537 | | 0.5373 | 8.11 | 25500 | 1.5379 | 0.6777 | 0.6385 | | 0.4539 | 8.27 | 26000 | 1.5548 | 0.6739 | 0.6393 | | 0.4539 | 8.43 | 26500 | 1.6174 | 0.6669 | 0.6378 | | 0.4519 | 8.59 | 27000 | 1.5949 | 0.6816 | 0.6504 | | 0.4519 | 8.75 | 27500 | 1.5558 | 0.6816 | 0.6357 | | 0.4813 | 8.91 | 28000 | 1.5826 | 0.6739 | 0.6553 | | 0.4813 | 9.06 | 28500 | 1.5929 | 0.6867 | 0.6540 | | 0.4121 | 9.22 | 29000 | 1.6260 | 0.6886 | 0.6545 | | 0.4121 | 9.38 | 29500 | 1.5950 | 0.6841 | 0.6500 | | 0.4451 | 9.54 | 30000 | 1.6146 | 0.6854 | 0.6481 | | 0.4451 | 9.7 | 30500 | 1.6587 | 0.6796 | 0.6493 | | 0.4039 | 9.86 | 31000 | 1.6173 | 0.6758 | 0.6400 | | 0.4039 | 10.02 | 31500 | 1.5952 | 0.6803 | 0.6517 | | 0.3921 | 10.18 | 32000 | 1.7298 | 0.6694 | 0.6413 | | 0.3921 | 10.34 | 32500 | 1.7106 | 0.6796 | 0.6467 | | 0.3799 | 10.5 | 33000 | 1.6695 | 0.6867 | 0.6505 | | 0.3799 | 10.66 | 33500 | 1.6907 | 0.6803 | 0.6550 | | 0.4003 | 10.81 | 34000 | 1.6811 | 0.6809 | 0.6413 | | 0.4003 | 10.97 | 34500 | 1.6644 | 0.6771 | 0.6352 | | 0.3812 | 11.13 | 35000 | 1.7371 | 0.6822 | 0.6386 | | 0.3812 | 11.29 | 35500 | 1.7405 | 0.6841 | 0.6516 | | 0.3399 | 11.45 | 36000 | 1.6981 | 0.6822 | 0.6503 | | 0.3399 | 11.61 | 36500 | 1.6536 | 0.6847 | 0.6483 | | 0.3653 | 11.77 | 37000 | 1.7461 | 0.6790 | 0.6475 | | 0.3653 | 11.93 | 37500 | 1.7247 | 0.6790 | 0.6485 | | 0.338 | 12.09 | 38000 | 1.7433 | 0.6905 | 0.6532 | | 0.338 | 12.25 | 38500 | 1.7331 | 0.6765 | 0.6558 | | 0.3302 | 12.4 | 39000 | 1.7603 | 0.6796 | 0.6456 | | 0.3302 | 12.56 | 39500 | 1.7784 | 0.6726 | 0.6505 | | 0.3195 | 12.72 | 40000 | 1.8032 | 0.6784 | 0.6469 | | 0.3195 | 12.88 | 40500 | 1.7869 | 0.6822 | 0.6553 | | 0.3508 | 13.04 | 41000 | 1.7761 | 0.6752 | 0.6506 | | 0.3508 | 13.2 | 41500 | 1.7806 | 0.6847 | 0.6454 | | 0.2915 | 13.36 | 42000 | 1.8542 | 0.6707 | 0.6528 | | 0.2915 | 13.52 | 42500 | 1.8365 | 0.6796 | 0.6520 | | 0.3023 | 13.68 | 43000 | 1.8563 | 0.6828 | 0.6524 | | 0.3023 | 13.84 | 43500 | 1.7947 | 0.6752 | 0.6495 | | 0.3213 | 13.99 | 44000 | 1.8130 | 0.6796 | 0.6546 | | 0.3213 | 14.15 | 44500 | 1.8288 | 0.6841 | 0.6502 | | 0.2644 | 14.31 | 45000 | 1.8140 | 0.6726 | 0.6453 | | 0.2644 | 14.47 | 45500 | 1.8711 | 0.6809 | 0.6552 | | 0.2739 | 14.63 | 46000 | 1.8439 | 0.6873 | 0.6534 | | 0.2739 | 14.79 | 46500 | 1.8302 | 0.6828 | 0.6460 | | 0.3012 | 14.95 | 47000 | 1.8708 | 0.6752 | 0.6454 | | 0.3012 | 15.11 | 47500 | 1.8498 | 0.6822 | 0.6487 | | 0.2805 | 15.27 | 48000 | 1.8908 | 0.6803 | 0.6453 | | 0.2805 | 15.43 | 48500 | 1.9480 | 0.6790 | 0.6406 | | 0.2895 | 15.59 | 49000 | 1.8994 | 0.6675 | 0.6392 | | 0.2895 | 15.74 | 49500 | 1.9135 | 0.6790 | 0.6461 | | 0.2444 | 15.9 | 50000 | 1.9387 | 0.6841 | 0.6480 | | 0.2444 | 16.06 | 50500 | 1.9175 | 0.6745 | 0.6463 | | 0.2569 | 16.22 | 51000 | 1.9332 | 0.6745 | 0.6472 | | 0.2569 | 16.38 | 51500 | 1.9400 | 0.6771 | 0.6445 | | 0.2251 | 16.54 | 52000 | 1.9596 | 0.6745 | 0.6441 | | 0.2251 | 16.7 | 52500 | 1.9959 | 0.6835 | 0.6464 | | 0.2686 | 16.86 | 53000 | 1.9879 | 0.6777 | 0.6456 | | 0.2686 | 17.02 | 53500 | 1.9882 | 0.6828 | 0.6471 | | 0.2168 | 17.18 | 54000 | 2.0254 | 0.6886 | 0.6520 | | 0.2168 | 17.33 | 54500 | 2.0432 | 0.6777 | 0.6442 | | 0.2735 | 17.49 | 55000 | 1.9843 | 0.6745 | 
0.6443 | | 0.2735 | 17.65 | 55500 | 2.0330 | 0.6828 | 0.6451 | | 0.2159 | 17.81 | 56000 | 2.0698 | 0.6682 | 0.6423 | | 0.2159 | 17.97 | 56500 | 1.9797 | 0.6771 | 0.6426 | | 0.245 | 18.13 | 57000 | 2.0008 | 0.6726 | 0.6383 | | 0.245 | 18.29 | 57500 | 2.0425 | 0.6816 | 0.6473 | | 0.2036 | 18.45 | 58000 | 2.0482 | 0.6720 | 0.6356 | | 0.2036 | 18.61 | 58500 | 2.0950 | 0.6675 | 0.6384 | | 0.2336 | 18.77 | 59000 | 2.0167 | 0.6854 | 0.6458 | | 0.2336 | 18.92 | 59500 | 1.9984 | 0.6809 | 0.6406 | | 0.2332 | 19.08 | 60000 | 2.0552 | 0.6739 | 0.6441 | | 0.2332 | 19.24 | 60500 | 2.0450 | 0.6784 | 0.6459 | | 0.1984 | 19.4 | 61000 | 2.0599 | 0.6752 | 0.6434 | | 0.1984 | 19.56 | 61500 | 2.0704 | 0.6784 | 0.6417 | | 0.1945 | 19.72 | 62000 | 2.0755 | 0.6758 | 0.6445 | | 0.1945 | 19.88 | 62500 | 2.0660 | 0.6809 | 0.6428 | | 0.2143 | 20.04 | 63000 | 2.0670 | 0.6739 | 0.6448 | | 0.2143 | 20.2 | 63500 | 2.0581 | 0.6873 | 0.6509 | | 0.1878 | 20.36 | 64000 | 2.1272 | 0.6752 | 0.6452 | | 0.1878 | 20.52 | 64500 | 2.1002 | 0.6803 | 0.6511 | | 0.2144 | 20.67 | 65000 | 2.1383 | 0.6713 | 0.6438 | | 0.2144 | 20.83 | 65500 | 2.1070 | 0.6809 | 0.6419 | | 0.2121 | 20.99 | 66000 | 2.1273 | 0.6726 | 0.6412 | | 0.2121 | 21.15 | 66500 | 2.1605 | 0.6707 | 0.6395 | | 0.1835 | 21.31 | 67000 | 2.2891 | 0.6567 | 0.6331 | | 0.1835 | 21.47 | 67500 | 2.2472 | 0.6765 | 0.6402 | | 0.1991 | 21.63 | 68000 | 2.2238 | 0.6752 | 0.6412 | | 0.1991 | 21.79 | 68500 | 2.1965 | 0.6669 | 0.6372 | | 0.2018 | 21.95 | 69000 | 2.2050 | 0.6669 | 0.6395 | | 0.2018 | 22.11 | 69500 | 2.1795 | 0.6803 | 0.6467 | | 0.151 | 22.26 | 70000 | 2.2214 | 0.6777 | 0.6430 | | 0.151 | 22.42 | 70500 | 2.1754 | 0.6867 | 0.6513 | | 0.2078 | 22.58 | 71000 | 2.1959 | 0.6822 | 0.6488 | | 0.2078 | 22.74 | 71500 | 2.1933 | 0.6860 | 0.6481 | | 0.2004 | 22.9 | 72000 | 2.2001 | 0.6816 | 0.6500 | | 0.2004 | 23.06 | 72500 | 2.2159 | 0.6784 | 0.6490 | | 0.1773 | 23.22 | 73000 | 2.2603 | 0.6790 | 0.6462 | | 0.1773 | 23.38 | 73500 | 2.2331 | 0.6777 | 0.6470 | | 0.174 | 23.54 | 74000 | 2.2554 | 0.6765 | 0.6471 | | 0.174 | 23.7 | 74500 | 2.2000 | 0.6854 | 0.6517 | | 0.2071 | 23.85 | 75000 | 2.1896 | 0.6790 | 0.6500 | | 0.2071 | 24.01 | 75500 | 2.2270 | 0.6828 | 0.6479 | | 0.1419 | 24.17 | 76000 | 2.2776 | 0.6765 | 0.6426 | | 0.1419 | 24.33 | 76500 | 2.2895 | 0.6809 | 0.6437 | | 0.1564 | 24.49 | 77000 | 2.2746 | 0.6828 | 0.6515 | | 0.1564 | 24.65 | 77500 | 2.3156 | 0.6765 | 0.6356 | | 0.1802 | 24.81 | 78000 | 2.2891 | 0.6726 | 0.6426 | | 0.1802 | 24.97 | 78500 | 2.2610 | 0.6835 | 0.6502 | | 0.1795 | 25.13 | 79000 | 2.2856 | 0.6777 | 0.6478 | | 0.1795 | 25.29 | 79500 | 2.2410 | 0.6828 | 0.6478 | | 0.1753 | 25.45 | 80000 | 2.2738 | 0.6701 | 0.6451 | | 0.1753 | 25.6 | 80500 | 2.2679 | 0.6847 | 0.6440 | | 0.1517 | 25.76 | 81000 | 2.2667 | 0.6796 | 0.6525 | | 0.1517 | 25.92 | 81500 | 2.3471 | 0.6682 | 0.6455 | | 0.1593 | 26.08 | 82000 | 2.2945 | 0.6816 | 0.6504 | | 0.1593 | 26.24 | 82500 | 2.3202 | 0.6841 | 0.6456 | | 0.1332 | 26.4 | 83000 | 2.3667 | 0.6733 | 0.6405 | | 0.1332 | 26.56 | 83500 | 2.3295 | 0.6771 | 0.6377 | | 0.1765 | 26.72 | 84000 | 2.3680 | 0.6720 | 0.6394 | | 0.1765 | 26.88 | 84500 | 2.3246 | 0.6828 | 0.6456 | | 0.1578 | 27.04 | 85000 | 2.3192 | 0.6745 | 0.6453 | | 0.1578 | 27.19 | 85500 | 2.3216 | 0.6822 | 0.6471 | | 0.1355 | 27.35 | 86000 | 2.3730 | 0.6796 | 0.6490 | | 0.1355 | 27.51 | 86500 | 2.3650 | 0.6758 | 0.6415 | | 0.1308 | 27.67 | 87000 | 2.4015 | 0.6784 | 0.6471 | | 0.1308 | 27.83 | 87500 | 2.3700 | 0.6809 | 0.6403 | | 0.1446 | 27.99 | 88000 | 2.3748 | 0.6796 | 
0.6483 | | 0.1446 | 28.15 | 88500 | 2.3575 | 0.6809 | 0.6497 | | 0.1135 | 28.31 | 89000 | 2.3663 | 0.6835 | 0.6438 | | 0.1135 | 28.47 | 89500 | 2.3817 | 0.6809 | 0.6490 | | 0.1354 | 28.63 | 90000 | 2.4026 | 0.6739 | 0.6436 | | 0.1354 | 28.78 | 90500 | 2.3825 | 0.6745 | 0.6392 | | 0.1661 | 28.94 | 91000 | 2.3461 | 0.6771 | 0.6482 | | 0.1661 | 29.1 | 91500 | 2.3496 | 0.6771 | 0.6422 | | 0.1188 | 29.26 | 92000 | 2.3568 | 0.6790 | 0.6488 | | 0.1188 | 29.42 | 92500 | 2.3496 | 0.6828 | 0.6430 | | 0.1433 | 29.58 | 93000 | 2.4252 | 0.6707 | 0.6378 | | 0.1433 | 29.74 | 93500 | 2.3805 | 0.6847 | 0.6459 | | 0.1328 | 29.9 | 94000 | 2.3918 | 0.6860 | 0.6495 | | 0.1328 | 30.06 | 94500 | 2.4026 | 0.6828 | 0.6495 | | 0.1317 | 30.22 | 95000 | 2.4319 | 0.6841 | 0.6483 | | 0.1317 | 30.38 | 95500 | 2.4375 | 0.6828 | 0.6492 | | 0.122 | 30.53 | 96000 | 2.4401 | 0.6822 | 0.6475 | | 0.122 | 30.69 | 96500 | 2.4397 | 0.6860 | 0.6473 | | 0.1266 | 30.85 | 97000 | 2.4572 | 0.6847 | 0.6504 | | 0.1266 | 31.01 | 97500 | 2.4506 | 0.6847 | 0.6513 | | 0.1437 | 31.17 | 98000 | 2.4251 | 0.6822 | 0.6496 | | 0.1437 | 31.33 | 98500 | 2.4420 | 0.6822 | 0.6521 | | 0.1205 | 31.49 | 99000 | 2.4446 | 0.6816 | 0.6464 | | 0.1205 | 31.65 | 99500 | 2.4408 | 0.6790 | 0.6450 | | 0.1188 | 31.81 | 100000 | 2.4522 | 0.6765 | 0.6487 | | 0.1188 | 31.97 | 100500 | 2.4313 | 0.6828 | 0.6495 | | 0.1326 | 32.12 | 101000 | 2.4577 | 0.6784 | 0.6466 | | 0.1326 | 32.28 | 101500 | 2.4524 | 0.6822 | 0.6479 | | 0.1103 | 32.44 | 102000 | 2.4665 | 0.6765 | 0.6426 | | 0.1103 | 32.6 | 102500 | 2.4642 | 0.6777 | 0.6431 | | 0.118 | 32.76 | 103000 | 2.4628 | 0.6771 | 0.6451 | | 0.118 | 32.92 | 103500 | 2.4671 | 0.6835 | 0.6474 | | 0.1214 | 33.08 | 104000 | 2.4613 | 0.6771 | 0.6503 | | 0.1214 | 33.24 | 104500 | 2.4833 | 0.6771 | 0.6475 | | 0.0965 | 33.4 | 105000 | 2.4888 | 0.6803 | 0.6450 | | 0.0965 | 33.56 | 105500 | 2.4910 | 0.6816 | 0.6476 | | 0.1207 | 33.72 | 106000 | 2.4806 | 0.6860 | 0.6482 | | 0.1207 | 33.87 | 106500 | 2.4741 | 0.6771 | 0.6445 | | 0.1277 | 34.03 | 107000 | 2.5050 | 0.6790 | 0.6409 | | 0.1277 | 34.19 | 107500 | 2.4809 | 0.6777 | 0.6402 | | 0.1164 | 34.35 | 108000 | 2.5006 | 0.6777 | 0.6428 | | 0.1164 | 34.51 | 108500 | 2.4889 | 0.6822 | 0.6474 | | 0.1103 | 34.67 | 109000 | 2.4852 | 0.6822 | 0.6457 | | 0.1103 | 34.83 | 109500 | 2.4923 | 0.6771 | 0.6418 | | 0.1013 | 34.99 | 110000 | 2.4662 | 0.6784 | 0.6437 | | 0.1013 | 35.15 | 110500 | 2.4755 | 0.6822 | 0.6483 | | 0.0922 | 35.31 | 111000 | 2.4908 | 0.6816 | 0.6465 | | 0.0922 | 35.46 | 111500 | 2.4922 | 0.6809 | 0.6502 | | 0.0856 | 35.62 | 112000 | 2.5096 | 0.6828 | 0.6422 | | 0.0856 | 35.78 | 112500 | 2.5035 | 0.6828 | 0.6463 | | 0.1005 | 35.94 | 113000 | 2.5231 | 0.6828 | 0.6452 | | 0.1005 | 36.1 | 113500 | 2.5196 | 0.6796 | 0.6469 | | 0.0884 | 36.26 | 114000 | 2.5187 | 0.6796 | 0.6444 | | 0.0884 | 36.42 | 114500 | 2.5180 | 0.6790 | 0.6454 | | 0.0891 | 36.58 | 115000 | 2.5407 | 0.6771 | 0.6442 | | 0.0891 | 36.74 | 115500 | 2.5349 | 0.6765 | 0.6417 | | 0.1082 | 36.9 | 116000 | 2.5451 | 0.6777 | 0.6427 | | 0.1082 | 37.05 | 116500 | 2.5349 | 0.6803 | 0.6469 | | 0.1072 | 37.21 | 117000 | 2.5507 | 0.6816 | 0.6457 | | 0.1072 | 37.37 | 117500 | 2.5485 | 0.6790 | 0.6459 | | 0.0882 | 37.53 | 118000 | 2.5477 | 0.6809 | 0.6448 | | 0.0882 | 37.69 | 118500 | 2.5620 | 0.6790 | 0.6401 | | 0.0852 | 37.85 | 119000 | 2.5597 | 0.6790 | 0.6447 | | 0.0852 | 38.01 | 119500 | 2.5545 | 0.6796 | 0.6436 | | 0.1029 | 38.17 | 120000 | 2.5519 | 0.6796 | 0.6436 | | 0.1029 | 38.33 | 120500 | 2.5539 | 0.6822 | 0.6463 | | 
0.0903 | 38.49 | 121000 | 2.5590 | 0.6822 | 0.6490 | | 0.0903 | 38.65 | 121500 | 2.5658 | 0.6803 | 0.6457 | | 0.092 | 38.8 | 122000 | 2.5590 | 0.6803 | 0.6433 | | 0.092 | 38.96 | 122500 | 2.5620 | 0.6803 | 0.6449 | | 0.094 | 39.12 | 123000 | 2.5634 | 0.6796 | 0.6436 | | 0.094 | 39.28 | 123500 | 2.5677 | 0.6790 | 0.6435 | | 0.0801 | 39.44 | 124000 | 2.5662 | 0.6803 | 0.6445 | | 0.0801 | 39.6 | 124500 | 2.5648 | 0.6796 | 0.6440 | | 0.103 | 39.76 | 125000 | 2.5641 | 0.6809 | 0.6451 | | 0.103 | 39.92 | 125500 | 2.5641 | 0.6803 | 0.6446 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
20,239
[ [ -0.049652099609375, -0.03302001953125, 0.0169677734375, 0.00710296630859375, -0.001514434814453125, 0.00897979736328125, 0.004123687744140625, 0.0033054351806640625, 0.04986572265625, 0.0263519287109375, -0.042449951171875, -0.03961181640625, -0.037445068359375,...
YakovElm/Apache10Classic_with_cleaning
2023-05-23T17:00:42.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache10Classic_with_cleaning
0
2
transformers
2023-05-23T17:00:05
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache10Classic_with_cleaning results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache10Classic_with_cleaning This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1824 - Train Accuracy: 0.9385 - Validation Loss: 0.5452 - Validation Accuracy: 0.8644 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2431 | 0.9340 | 0.4461 | 0.8644 | 0 | | 0.2183 | 0.9383 | 0.4053 | 0.8644 | 1 | | 0.1824 | 0.9385 | 0.5452 | 0.8644 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,802
[ [ -0.0482177734375, -0.047515869140625, 0.021514892578125, -0.0003936290740966797, -0.036468505859375, -0.03228759765625, -0.01763916015625, -0.0282440185546875, 0.0098724365234375, 0.0190887451171875, -0.0526123046875, -0.04681396484375, -0.0533447265625, -0....
YakovElm/Qt10Classic
2023-05-23T17:48:15.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic
0
2
transformers
2023-05-23T17:47:40
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt10Classic results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt10Classic This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2261 - Train Accuracy: 0.9202 - Validation Loss: 0.2375 - Validation Accuracy: 0.9408 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2779 | 0.9208 | 0.2090 | 0.9416 | 0 | | 0.2558 | 0.9210 | 0.2075 | 0.9416 | 1 | | 0.2261 | 0.9202 | 0.2375 | 0.9408 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,766
[ [ -0.038726806640625, -0.032928466796875, 0.02471923828125, 0.0018625259399414062, -0.0345458984375, -0.01873779296875, -0.01093292236328125, -0.019195556640625, 0.00662994384765625, 0.0113677978515625, -0.05267333984375, -0.047576904296875, -0.048583984375, -...
YakovElm/Apache15Classic_with_cleaning
2023-05-23T18:05:44.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache15Classic_with_cleaning
0
2
transformers
2023-05-23T18:04:54
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache15Classic_with_cleaning results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache15Classic_with_cleaning This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1583 - Train Accuracy: 0.9535 - Validation Loss: 0.3355 - Validation Accuracy: 0.8924 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1921 | 0.9542 | 0.3429 | 0.8924 | 0 | | 0.1792 | 0.9542 | 0.3336 | 0.8924 | 1 | | 0.1583 | 0.9535 | 0.3355 | 0.8924 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,802
[ [ -0.047576904296875, -0.04730224609375, 0.020416259765625, -0.0008416175842285156, -0.0369873046875, -0.031341552734375, -0.0174102783203125, -0.026885986328125, 0.008941650390625, 0.0185546875, -0.05316162109375, -0.048187255859375, -0.052734375, -0.02278137...
bahmanreza/keras-dummy-sequential-demo
2023-05-23T18:17:12.000Z
[ "keras", "region:us" ]
null
bahmanreza
null
null
bahmanreza/keras-dummy-sequential-demo
0
2
keras
2023-05-23T18:17:09
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
840
[ [ -0.03759765625, -0.0401611328125, 0.0321044921875, 0.007656097412109375, -0.0433349609375, -0.017974853515625, 0.01090240478515625, -0.0037326812744140625, 0.020172119140625, 0.0307464599609375, -0.043670654296875, -0.051025390625, -0.039306640625, 0.0002460...
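The keras-dummy-sequential-demo card gives only the optimizer table and a model plot, so any reconstruction is speculative. A minimal Sequential model compiled with the listed Adam settings (learning rate ≈ 0.001, the Keras default); the layer sizes below are arbitrary assumptions:

```python
# Sketch: a minimal Sequential model with the card's Adam settings.
# Architecture details are not in the card; these layers are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(
        learning_rate=1e-3, beta_1=0.9, beta_2=0.999, epsilon=1e-7, amsgrad=False
    ),
    loss="mse",
)
model.summary()
```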
bahmanreza/keras-dummy-functional-demo
2023-05-23T18:19:23.000Z
[ "keras", "region:us" ]
null
bahmanreza
null
null
bahmanreza/keras-dummy-functional-demo
0
2
keras
2023-05-23T18:19:20
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
840
[ [ -0.03759765625, -0.0401611328125, 0.0321044921875, 0.007633209228515625, -0.043304443359375, -0.0179443359375, 0.01091766357421875, -0.0037364959716796875, 0.02020263671875, 0.030731201171875, -0.043670654296875, -0.051025390625, -0.039306640625, 0.000247001...
YakovElm/Qt15Classic
2023-05-23T18:39:04.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic
0
2
transformers
2023-05-23T18:38:29
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt15Classic results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt15Classic This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2046 - Train Accuracy: 0.9367 - Validation Loss: 0.2038 - Validation Accuracy: 0.9505 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2400 | 0.9354 | 0.1896 | 0.9505 | 0 | | 0.2235 | 0.9367 | 0.1826 | 0.9505 | 1 | | 0.2046 | 0.9367 | 0.2038 | 0.9505 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
1,766
[ [ -0.040802001953125, -0.036529541015625, 0.0227813720703125, 0.0045928955078125, -0.03509521484375, -0.022796630859375, -0.0131072998046875, -0.021636962890625, 0.005573272705078125, 0.01251983642578125, -0.054656982421875, -0.049591064453125, -0.04852294921875, ...
AustinCarthy/MixGPT2_100KP_BFall_fromB_40KGen_topP_0.75
2023-05-24T04:28:24.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
AustinCarthy
null
null
AustinCarthy/MixGPT2_100KP_BFall_fromB_40KGen_topP_0.75
0
2
transformers
2023-05-23T18:55:41
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_100KP_BFall_fromB_40KGen_topP_0.75
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# MixGPT2_100KP_BFall_fromB_40KGen_topP_0.75

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0185
- Accuracy: 0.9973
- F1: 0.9712
- Precision: 0.9996
- Recall: 0.9444
- Roc Auc Score: 0.9722
- Tpr At Fpr 0.01: 0.9588

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0052 | 1.0 | 91875 | 0.0700 | 0.9904 | 0.8876 | 0.9995 | 0.7982 | 0.8991 | 0.7842 |
| 0.0055 | 2.0 | 183750 | 0.0208 | 0.9968 | 0.9652 | 0.9985 | 0.934 | 0.9670 | 0.9362 |
| 0.0029 | 3.0 | 275625 | 0.0209 | 0.9970 | 0.9674 | 0.9991 | 0.9376 | 0.9688 | 0.9544 |
| 0.0006 | 4.0 | 367500 | 0.0290 | 0.9962 | 0.9579 | 0.9996 | 0.9196 | 0.9598 | 0.9528 |
| 0.001 | 5.0 | 459375 | 0.0185 | 0.9973 | 0.9712 | 0.9996 | 0.9444 | 0.9722 | 0.9588 |

### Framework versions

- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
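As a rough editorial sketch (the actual training script is not part of this card), the hyperparameter bullets above map to transformers `TrainingArguments` like so; the output directory is a placeholder:

```
# Hedged sketch of the Trainer setup implied by the hyperparameters above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                  # placeholder, not documented in the card
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,                         # "mixed_precision_training: Native AMP"
)
```

The other generated_from_trainer cards in this section follow the same pattern with their own values.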
2,253
[ [ -0.04461669921875, -0.041961669921875, 0.0078125, 0.015380859375, -0.021484375, -0.01885986328125, -0.006397247314453125, -0.0202789306640625, 0.0281829833984375, 0.023681640625, -0.0521240234375, -0.04656982421875, -0.0538330078125, -0.0179595947265625, ...
YakovElm/Apache20Classic_with_cleaning
2023-05-23T19:11:07.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache20Classic_with_cleaning
0
2
transformers
2023-05-23T19:10:31
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Apache20Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1300
- Train Accuracy: 0.9622
- Validation Loss: 0.4258
- Validation Accuracy: 0.9055
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1764 | 0.9548 | 0.3066 | 0.9055 | 0 |
| 0.1518 | 0.9624 | 0.3933 | 0.9055 | 1 |
| 0.1300 | 0.9622 | 0.4258 | 0.9055 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,802
[ [ -0.047943115234375, -0.047576904296875, 0.0211944580078125, -0.0004603862762451172, -0.035400390625, -0.032867431640625, -0.017303466796875, -0.02911376953125, 0.00812530517578125, 0.0199737548828125, -0.054718017578125, -0.0487060546875, -0.053466796875, -0...
oransom48/pretrained_bert_fordiseaseclassif_1
2023-05-23T19:34:04.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
oransom48
null
null
oransom48/pretrained_bert_fordiseaseclassif_1
0
2
transformers
2023-05-23T19:12:22
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: pretrained_bert_fordiseaseclassif_1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# pretrained_bert_fordiseaseclassif_1

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
1,280
[ [ -0.040679931640625, -0.05010986328125, 0.0290679931640625, 0.01085662841796875, -0.043060302734375, -0.02227783203125, -0.0144805908203125, -0.01910400390625, 0.0162200927734375, 0.01328277587890625, -0.0611572265625, -0.044830322265625, -0.053131103515625, ...
damapika/roberta-base_mod_quoref
2023-05-23T21:20:20.000Z
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:quoref", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
damapika
null
null
damapika/roberta-base_mod_quoref
0
2
transformers
2023-05-23T19:19:39
---
license: mit
tags:
- generated_from_trainer
datasets:
- quoref
model-index:
- name: roberta-base_mod_quoref
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base_mod_quoref

This model is a fine-tuned version of [damapika/roberta-base_mod_squad](https://huggingface.co/damapika/roberta-base_mod_squad) on the quoref dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5566

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1263 | 1.0 | 1213 | 1.2665 |
| 0.7404 | 2.0 | 2426 | 1.3567 |
| 0.5172 | 3.0 | 3639 | 1.5566 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
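A hedged inference sketch for this extractive QA checkpoint (editorial addition; the question and context strings are invented for illustration and assume the repo is public):

```
# Minimal extractive question-answering usage sketch.
from transformers import pipeline

qa = pipeline("question-answering", model="damapika/roberta-base_mod_quoref")
result = qa(
    question="Who wrote the report?",  # placeholder question
    context="The report was written by the review board in 2021.",
)
print(result["answer"], result["score"])  # best span and its confidence
```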
1,410
[ [ -0.031524658203125, -0.0506591796875, 0.01222991943359375, 0.01263427734375, -0.0304718017578125, -0.0255279541015625, -0.008270263671875, -0.01247406005859375, -0.0020503997802734375, 0.025115966796875, -0.0660400390625, -0.036712646484375, -0.049896240234375, ...
YakovElm/Qt20Classic
2023-05-23T19:29:42.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic
0
2
transformers
2023-05-23T19:29:07
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt20Classic

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1836
- Train Accuracy: 0.9462
- Validation Loss: 0.1813
- Validation Accuracy: 0.9594
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2163 | 0.9454 | 0.1596 | 0.9586 | 0 |
| 0.2044 | 0.9462 | 0.1554 | 0.9586 | 1 |
| 0.1836 | 0.9462 | 0.1813 | 0.9594 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,766
[ [ -0.038909912109375, -0.0330810546875, 0.024505615234375, 0.0041656494140625, -0.03521728515625, -0.0181121826171875, -0.00885772705078125, -0.020416259765625, 0.0029697418212890625, 0.0134124755859375, -0.05413818359375, -0.04815673828125, -0.04742431640625, ...
YakovElm/Qt5Classic_with_cleaning
2023-05-23T19:40:41.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt5Classic_with_cleaning
0
2
transformers
2023-05-23T19:39:30
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt5Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt5Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2601
- Train Accuracy: 0.8948
- Validation Loss: 0.2534
- Validation Accuracy: 0.9262
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3465 | 0.8910 | 0.2510 | 0.9294 | 0 |
| 0.3091 | 0.8943 | 0.2427 | 0.9294 | 1 |
| 0.2601 | 0.8948 | 0.2534 | 0.9262 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,792
[ [ -0.040069580078125, -0.0300445556640625, 0.025909423828125, -0.00838470458984375, -0.036773681640625, -0.0213623046875, -0.006999969482421875, -0.02099609375, 0.0004801750183105469, 0.0156097412109375, -0.052886962890625, -0.052490234375, -0.047332763671875, ...
DraiP/NELA-GT_Classifier
2023-05-30T15:41:50.000Z
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "endpoints_compatible", "region:us" ]
text-classification
DraiP
null
null
DraiP/NELA-GT_Classifier
0
2
transformers
2023-05-23T19:52:04
---
tags:
- generated_from_trainer
model-index:
- name: NELA-GT_Classifier
  results: []
metrics:
- f1
- accuracy
- roc_auc
pipeline_tag: text-classification
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# NELA-GT_Classifier

This model was fine-tuned on the NELA-GT dataset.

## Model description

This is a pretrained DistilBERT (uncased) model fine-tuned for fake-news classification.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 5

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
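A minimal inference sketch (editorial addition, assuming the checkpoint is publicly downloadable; the returned label names depend on how the NELA-GT labels were encoded, which the card does not document):

```
# Hedged usage sketch for this fake-news classifier; the input sentence is
# a made-up example, not taken from the NELA-GT dataset.
from transformers import pipeline

clf = pipeline("text-classification", model="DraiP/NELA-GT_Classifier")
print(clf("Scientists discover a new exoplanet in a nearby system."))
```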
1,080
[ [ -0.0269775390625, -0.057342529296875, 0.00739288330078125, 0.006839752197265625, -0.0226287841796875, -0.00942230224609375, 0.0014657974243164062, -0.020751953125, 0.0220489501953125, 0.01140594482421875, -0.036865234375, -0.039886474609375, -0.05670166015625, ...
YakovElm/Qt10Classic_with_cleaning
2023-05-23T20:30:37.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt10Classic_with_cleaning
0
2
transformers
2023-05-23T20:29:00
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt10Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2156
- Train Accuracy: 0.9208
- Validation Loss: 0.2238
- Validation Accuracy: 0.9416
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2830 | 0.9159 | 0.2121 | 0.9416 | 0 |
| 0.2515 | 0.9210 | 0.2015 | 0.9416 | 1 |
| 0.2156 | 0.9208 | 0.2238 | 0.9416 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,794
[ [ -0.038421630859375, -0.03466796875, 0.02471923828125, -0.005828857421875, -0.037933349609375, -0.0214080810546875, -0.00955963134765625, -0.018768310546875, 0.00634002685546875, 0.01519012451171875, -0.05230712890625, -0.049163818359375, -0.04901123046875, -...
YakovElm/Qt15Classic_with_cleaning
2023-05-23T21:20:39.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt15Classic_with_cleaning
0
2
transformers
2023-05-23T21:19:37
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt15Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2075
- Train Accuracy: 0.9367
- Validation Loss: 0.1841
- Validation Accuracy: 0.9505
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2493 | 0.9319 | 0.1890 | 0.9505 | 0 |
| 0.2289 | 0.9367 | 0.1823 | 0.9505 | 1 |
| 0.2075 | 0.9367 | 0.1841 | 0.9505 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,794
[ [ -0.041168212890625, -0.0361328125, 0.0245513916015625, -0.0038776397705078125, -0.036834716796875, -0.0240478515625, -0.010223388671875, -0.0207061767578125, 0.005218505859375, 0.0168914794921875, -0.053192138671875, -0.05023193359375, -0.049072265625, -0.02...
YakovElm/Hyperledger5Classic_with_cleaning
2023-05-23T21:26:48.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger5Classic_with_cleaning
0
2
transformers
2023-05-23T21:25:29
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger5Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2460
- Train Accuracy: 0.8983
- Validation Loss: 0.4738
- Validation Accuracy: 0.8102
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4027 | 0.8537 | 0.4117 | 0.8361 | 0 |
| 0.3585 | 0.8571 | 0.4243 | 0.8330 | 1 |
| 0.2460 | 0.8983 | 0.4738 | 0.8102 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,810
[ [ -0.049591064453125, -0.0400390625, 0.022735595703125, -0.007717132568359375, -0.03204345703125, -0.0261688232421875, -0.01546478271484375, -0.0277252197265625, 0.0081939697265625, 0.0179901123046875, -0.053192138671875, -0.05322265625, -0.053680419921875, -0...
nakker/bert-base-banking77-pt2
2023-05-23T22:00:22.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:banking77", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
nakker
null
null
nakker/bert-base-banking77-pt2
0
2
transformers
2023-05-23T21:45:48
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: banking77
      type: banking77
      config: default
      split: test
      args: default
    metrics:
    - name: F1
      type: f1
      value: 0.9287229411281823
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-banking77-pt2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3041
- F1: 0.9287

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0427 | 1.0 | 626 | 0.7423 | 0.8439 |
| 0.3703 | 2.0 | 1252 | 0.3573 | 0.9200 |
| 0.174 | 3.0 | 1878 | 0.3041 | 0.9287 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
1,723
[ [ -0.0293731689453125, -0.039154052734375, 0.01153564453125, 0.0139007568359375, -0.04315185546875, -0.0280303955078125, -0.00904083251953125, -0.0180816650390625, -0.005184173583984375, 0.040771484375, -0.042694091796875, -0.0433349609375, -0.05242919921875, ...
futuredatascience/strat_call_followup_prod
2023-05-23T21:55:37.000Z
[ "transformers", "pytorch", "deberta-v2", "text-classification", "autotrain", "en", "dataset:futuredatascience/autotrain-data-strat_call_follow_up", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
futuredatascience
null
null
futuredatascience/strat_call_followup_prod
0
2
transformers
2023-05-23T21:54:48
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- futuredatascience/autotrain-data-strat_call_follow_up
co2_eq_emissions:
  emissions: 0.1979200898207588
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 61102134664
- CO2 Emissions (in grams): 0.1979

## Validation Metrics

- Loss: 0.266
- Accuracy: 0.939
- Precision: 0.955
- Recall: 0.913
- AUC: 0.952
- F1: 0.933

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/futuredatascience/autotrain-strat_call_follow_up-61102134664
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("futuredatascience/autotrain-strat_call_follow_up-61102134664", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("futuredatascience/autotrain-strat_call_follow_up-61102134664", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")

outputs = model(**inputs)
```
1,214
[ [ -0.0272216796875, -0.029388427734375, 0.015533447265625, 0.00724029541015625, -0.00241851806640625, -0.0003056526184082031, 0.0067138671875, -0.0228271484375, 0.00469207763671875, 0.01531982421875, -0.0626220703125, -0.033905029296875, -0.05584716796875, -0....
YakovElm/Hyperledger10Classic_with_cleaning
2023-05-23T22:05:19.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger10Classic_with_cleaning
0
2
transformers
2023-05-23T22:04:44
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger10Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2350
- Train Accuracy: 0.9004
- Validation Loss: 0.4827
- Validation Accuracy: 0.7552
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3534 | 0.8831 | 0.3701 | 0.8600 | 0 |
| 0.3162 | 0.8841 | 0.3594 | 0.8600 | 1 |
| 0.2350 | 0.9004 | 0.4827 | 0.7552 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.048797607421875, -0.043426513671875, 0.0218048095703125, -0.00762176513671875, -0.03009033203125, -0.027984619140625, -0.0191802978515625, -0.0247344970703125, 0.01360321044921875, 0.0186614990234375, -0.05096435546875, -0.048126220703125, -0.0538330078125, ...
YakovElm/Qt20Classic_with_cleaning
2023-05-23T22:11:51.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Qt20Classic_with_cleaning
0
2
transformers
2023-05-23T22:10:56
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Qt20Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1619
- Train Accuracy: 0.9500
- Validation Loss: 0.1838
- Validation Accuracy: 0.9554
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2142 | 0.9462 | 0.1640 | 0.9586 | 0 |
| 0.1934 | 0.9462 | 0.1576 | 0.9586 | 1 |
| 0.1619 | 0.9500 | 0.1838 | 0.9554 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,794
[ [ -0.038970947265625, -0.032867431640625, 0.02520751953125, -0.0042724609375, -0.03863525390625, -0.0196990966796875, -0.00699615478515625, -0.0184783935546875, 0.00284576416015625, 0.0169830322265625, -0.053375244140625, -0.048614501953125, -0.046966552734375, ...
wiorz/legal_bert_small
2023-05-23T22:37:57.000Z
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
text-classification
wiorz
null
null
wiorz/legal_bert_small
0
2
transformers
2023-05-23T22:34:52
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_small
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# legal_bert_small

This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0455
- Accuracy: 0.815
- Precision: 0.5
- Recall: 0.3784
- F1: 0.4308
- D-index: 1.5791

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.4205 | 0.84 | 0.7778 | 0.1892 | 0.3043 | 1.5473 |
| No log | 2.0 | 400 | 0.5287 | 0.785 | 0.425 | 0.4595 | 0.4416 | 1.5664 |
| 0.4788 | 3.0 | 600 | 0.8663 | 0.78 | 0.4146 | 0.4595 | 0.4359 | 1.5597 |
| 0.4788 | 4.0 | 800 | 1.0432 | 0.8 | 0.4681 | 0.5946 | 0.5238 | 1.6309 |
| 0.2168 | 5.0 | 1000 | 1.2325 | 0.795 | 0.375 | 0.1622 | 0.2264 | 1.4766 |
| 0.2168 | 6.0 | 1200 | 1.3369 | 0.815 | 0.5 | 0.2432 | 0.3273 | 1.5326 |
| 0.2168 | 7.0 | 1400 | 1.4949 | 0.785 | 0.4286 | 0.4865 | 0.4557 | 1.5754 |
| 0.0682 | 8.0 | 1600 | 1.4499 | 0.815 | 0.5 | 0.3514 | 0.4127 | 1.5700 |
| 0.0682 | 9.0 | 1800 | 1.7761 | 0.8 | 0.4348 | 0.2703 | 0.3333 | 1.5218 |
| 0.0154 | 10.0 | 2000 | 1.8939 | 0.805 | 0.4375 | 0.1892 | 0.2642 | 1.5000 |
| 0.0154 | 11.0 | 2200 | 1.9630 | 0.8 | 0.4211 | 0.2162 | 0.2857 | 1.5028 |
| 0.0154 | 12.0 | 2400 | 1.9712 | 0.805 | 0.4545 | 0.2703 | 0.3390 | 1.5286 |
| 0.0132 | 13.0 | 2600 | 1.9184 | 0.805 | 0.4737 | 0.4865 | 0.4800 | 1.6021 |
| 0.0132 | 14.0 | 2800 | 1.9261 | 0.805 | 0.4706 | 0.4324 | 0.4507 | 1.5841 |
| 0.0 | 15.0 | 3000 | 1.9619 | 0.815 | 0.5 | 0.4054 | 0.4478 | 1.5883 |
| 0.0 | 16.0 | 3200 | 1.9798 | 0.82 | 0.5172 | 0.4054 | 0.4545 | 1.5949 |
| 0.0 | 17.0 | 3400 | 2.0126 | 0.815 | 0.5 | 0.3784 | 0.4308 | 1.5791 |
| 0.0 | 18.0 | 3600 | 2.0203 | 0.82 | 0.5185 | 0.3784 | 0.4375 | 1.5858 |
| 0.0 | 19.0 | 3800 | 2.0286 | 0.82 | 0.5185 | 0.3784 | 0.4375 | 1.5858 |
| 0.0 | 20.0 | 4000 | 2.0455 | 0.815 | 0.5 | 0.3784 | 0.4308 | 1.5791 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
3,574
[ [ -0.043609619140625, -0.042633056640625, 0.01337432861328125, 0.006153106689453125, -0.006214141845703125, -0.01091766357421875, -0.002399444580078125, -0.00943756103515625, 0.0478515625, 0.0238494873046875, -0.04193115234375, -0.05029296875, -0.04638671875, ...
YakovElm/Hyperledger15Classic_with_cleaning
2023-05-23T22:44:21.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger15Classic_with_cleaning
0
2
transformers
2023-05-23T22:43:21
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger15Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2347
- Train Accuracy: 0.9045
- Validation Loss: 0.3515
- Validation Accuracy: 0.8651
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3126 | 0.9031 | 0.3318 | 0.8807 | 0 |
| 0.2844 | 0.9028 | 0.3275 | 0.8807 | 1 |
| 0.2347 | 0.9045 | 0.3515 | 0.8651 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.050750732421875, -0.04473876953125, 0.0212249755859375, -0.005512237548828125, -0.031341552734375, -0.02716064453125, -0.018585205078125, -0.0252685546875, 0.01122283935546875, 0.01910400390625, -0.054351806640625, -0.050933837890625, -0.052001953125, -0....
Gaivoronsky/ppo-Worm
2023-05-23T22:58:10.000Z
[ "ml-agents", "tensorboard", "onnx", "Worm", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Worm", "region:us" ]
reinforcement-learning
Gaivoronsky
null
null
Gaivoronsky/ppo-Worm
0
2
ml-agents
2023-05-23T22:58:04
---
library_name: ml-agents
tags:
- Worm
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
---

# **ppo** Agent playing **Worm**

This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Step 1: Find your model_id: Gaivoronsky/ppo-Worm
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
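For instance, a hypothetical resume invocation (editorial addition; the config path and run id below are placeholders, not values recorded for this run):

```
mlagents-learn ./config/ppo/Worm.yaml --run-id=ppo-Worm --resume
```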
930
[ [ -0.025634765625, -0.0343017578125, 0.0212554931640625, 0.00347137451171875, -0.016937255859375, 0.00716400146484375, 0.031402587890625, -0.01073455810546875, 0.048065185546875, 0.04486083984375, -0.036102294921875, -0.05224609375, -0.038848876953125, -0.0025...
wiorz/legal_bert_small_defined_summarized
2023-05-23T23:21:17.000Z
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
text-classification
wiorz
null
null
wiorz/legal_bert_small_defined_summarized
0
2
transformers
2023-05-23T23:19:50
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_small_defined_summarized
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# legal_bert_small_defined_summarized

This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4178
- Accuracy: 0.87
- Precision: 0.6
- Recall: 0.2143
- F1: 0.3158
- D-index: 1.5771

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.4008 | 0.86 | 0.0 | 0.0 | 0.0 | 1.4803 |
| No log | 2.0 | 400 | 0.3871 | 0.86 | 0.0 | 0.0 | 0.0 | 1.4803 |
| 0.5179 | 3.0 | 600 | 0.4293 | 0.87 | 0.625 | 0.1786 | 0.2778 | 1.5635 |
| 0.5179 | 4.0 | 800 | 0.6702 | 0.87 | 0.625 | 0.1786 | 0.2778 | 1.5635 |
| 0.3816 | 5.0 | 1000 | 0.7388 | 0.865 | 0.5455 | 0.2143 | 0.3077 | 1.5706 |
| 0.3816 | 6.0 | 1200 | 1.0422 | 0.86 | 0.5 | 0.1786 | 0.2632 | 1.5504 |
| 0.3816 | 7.0 | 1400 | 1.0804 | 0.875 | 0.7143 | 0.1786 | 0.2857 | 1.5700 |
| 0.0567 | 8.0 | 1600 | 1.1490 | 0.875 | 0.6364 | 0.25 | 0.3590 | 1.5970 |
| 0.0567 | 9.0 | 1800 | 1.3190 | 0.865 | 0.5556 | 0.1786 | 0.2703 | 1.5570 |
| 0.0125 | 10.0 | 2000 | 1.4220 | 0.835 | 0.3913 | 0.3214 | 0.3529 | 1.5718 |
| 0.0125 | 11.0 | 2200 | 1.3567 | 0.855 | 0.4706 | 0.2857 | 0.3556 | 1.5845 |
| 0.0125 | 12.0 | 2400 | 1.3349 | 0.875 | 0.7143 | 0.1786 | 0.2857 | 1.5700 |
| 0.0021 | 13.0 | 2600 | 1.3494 | 0.87 | 0.5714 | 0.2857 | 0.3810 | 1.6038 |
| 0.0021 | 14.0 | 2800 | 1.3747 | 0.87 | 0.6 | 0.2143 | 0.3158 | 1.5771 |
| 0.0 | 15.0 | 3000 | 1.3890 | 0.87 | 0.6 | 0.2143 | 0.3158 | 1.5771 |
| 0.0 | 16.0 | 3200 | 1.4069 | 0.875 | 0.6667 | 0.2143 | 0.3243 | 1.5835 |
| 0.0 | 17.0 | 3400 | 1.4185 | 0.875 | 0.6667 | 0.2143 | 0.3243 | 1.5835 |
| 0.0 | 18.0 | 3600 | 1.3945 | 0.865 | 0.5385 | 0.25 | 0.3415 | 1.5840 |
| 0.0 | 19.0 | 3800 | 1.3921 | 0.87 | 0.6 | 0.2143 | 0.3158 | 1.5771 |
| 0.0037 | 20.0 | 4000 | 1.4178 | 0.87 | 0.6 | 0.2143 | 0.3158 | 1.5771 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
3,611
[ [ -0.041351318359375, -0.037567138671875, 0.017974853515625, 0.0082855224609375, -0.0123443603515625, -0.02044677734375, -0.00395965576171875, -0.016754150390625, 0.03826904296875, 0.02703857421875, -0.041595458984375, -0.0540771484375, -0.045257568359375, -0....
YakovElm/Hyperledger20Classic_with_cleaning
2023-05-23T23:22:59.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Hyperledger20Classic_with_cleaning
0
2
transformers
2023-05-23T23:22:20
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger20Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Hyperledger20Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2203
- Train Accuracy: 0.9253
- Validation Loss: 0.3795
- Validation Accuracy: 0.8102
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2855 | 0.9139 | 0.2936 | 0.8983 | 0 |
| 0.2684 | 0.9132 | 0.2944 | 0.8983 | 1 |
| 0.2203 | 0.9253 | 0.3795 | 0.8102 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,812
[ [ -0.049530029296875, -0.04400634765625, 0.022369384765625, -0.006893157958984375, -0.0309295654296875, -0.0268402099609375, -0.016448974609375, -0.0265350341796875, 0.010772705078125, 0.0203399658203125, -0.0538330078125, -0.049774169921875, -0.0546875, -0.01...
YakovElm/Apache5Classic_Unbalance
2023-05-24T00:53:55.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache5Classic_Unbalance
0
2
transformers
2023-05-24T00:52:53
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache5Classic_Unbalance
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Apache5Classic_Unbalance

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2052
- Train Accuracy: 0.9296
- Validation Loss: 0.6112
- Validation Accuracy: 0.7634
- Epoch: 3

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3100 | 0.9086 | 0.4565 | 0.8233 | 0 |
| 0.2939 | 0.9094 | 0.4991 | 0.8233 | 1 |
| 0.2656 | 0.9096 | 0.5105 | 0.8214 | 2 |
| 0.2052 | 0.9296 | 0.6112 | 0.7634 | 3 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,873
[ [ -0.045806884765625, -0.03741455078125, 0.00998687744140625, 0.0162200927734375, -0.035919189453125, -0.0225830078125, -0.00865936279296875, -0.0238037109375, 0.0128173828125, 0.0186767578125, -0.055389404296875, -0.045562744140625, -0.05303955078125, -0.0250...
YakovElm/IntelDAOS5Classic_with_cleaning
2023-05-24T01:55:07.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS5Classic_with_cleaning
0
2
transformers
2023-05-24T01:54:32
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS5Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3107
- Train Accuracy: 0.8770
- Validation Loss: 0.4998
- Validation Accuracy: 0.8168
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4073 | 0.8740 | 0.4326 | 0.8438 | 0 |
| 0.3668 | 0.8740 | 0.4437 | 0.8438 | 1 |
| 0.3107 | 0.8770 | 0.4998 | 0.8168 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,806
[ [ -0.045745849609375, -0.03778076171875, 0.022003173828125, -0.00878143310546875, -0.034088134765625, -0.025665283203125, -0.017608642578125, -0.029144287109375, 0.011322021484375, 0.01403045654296875, -0.0538330078125, -0.050994873046875, -0.051788330078125, ...
YakovElm/IntelDAOS10Classic_with_cleaning
2023-05-24T02:08:53.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS10Classic_with_cleaning
0
2
transformers
2023-05-24T02:08:18
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS10Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2283
- Train Accuracy: 0.9210
- Validation Loss: 0.4310
- Validation Accuracy: 0.8739
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3141 | 0.9200 | 0.3738 | 0.8739 | 0 |
| 0.2612 | 0.9200 | 0.4105 | 0.8739 | 1 |
| 0.2283 | 0.9210 | 0.4310 | 0.8739 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.0462646484375, -0.03924560546875, 0.02154541015625, -0.009490966796875, -0.033477783203125, -0.025604248046875, -0.01885986328125, -0.02825927734375, 0.01447296142578125, 0.0140838623046875, -0.053192138671875, -0.048431396484375, -0.051910400390625, -0.0...
YakovElm/Apache10Classic_Unbalance
2023-05-24T02:16:00.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache10Classic_Unbalance
0
2
transformers
2023-05-24T02:15:25
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic_Unbalance
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Apache10Classic_Unbalance

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1445
- Train Accuracy: 0.9474
- Validation Loss: 0.5445
- Validation Accuracy: 0.8449
- Epoch: 3

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2424 | 0.9374 | 0.3809 | 0.8644 | 0 |
| 0.2210 | 0.9383 | 0.4042 | 0.8644 | 1 |
| 0.2036 | 0.9387 | 0.4134 | 0.8611 | 2 |
| 0.1445 | 0.9474 | 0.5445 | 0.8449 | 3 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,875
[ [ -0.0455322265625, -0.040618896484375, 0.0099945068359375, 0.018585205078125, -0.036712646484375, -0.0238037109375, -0.0107879638671875, -0.0221405029296875, 0.016082763671875, 0.01898193359375, -0.053619384765625, -0.041259765625, -0.052764892578125, -0.0252...
YakovElm/IntelDAOS15Classic_with_cleaning
2023-05-24T02:22:39.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS15Classic_with_cleaning
0
2
transformers
2023-05-24T02:22:05
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS15Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1497
- Train Accuracy: 0.9460
- Validation Loss: 0.4865
- Validation Accuracy: 0.8859
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2546 | 0.9420 | 0.3716 | 0.8859 | 0 |
| 0.1920 | 0.9460 | 0.3766 | 0.8859 | 1 |
| 0.1497 | 0.9460 | 0.4865 | 0.8859 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.045074462890625, -0.0418701171875, 0.02056884765625, -0.00637054443359375, -0.03570556640625, -0.026763916015625, -0.01922607421875, -0.0264739990234375, 0.0137939453125, 0.01363372802734375, -0.05413818359375, -0.0498046875, -0.05145263671875, -0.0250091...
YakovElm/IntelDAOS20Classic_with_cleaning
2023-05-24T02:36:25.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/IntelDAOS20Classic_with_cleaning
0
2
transformers
2023-05-24T02:35:50
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS20Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1286
- Train Accuracy: 0.9610
- Validation Loss: 0.3677
- Validation Accuracy: 0.9099
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2178 | 0.9600 | 0.3604 | 0.9099 | 0 |
| 0.1502 | 0.9610 | 0.3197 | 0.9099 | 1 |
| 0.1286 | 0.9610 | 0.3677 | 0.9099 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,808
[ [ -0.0458984375, -0.040252685546875, 0.0223236083984375, -0.008026123046875, -0.032928466796875, -0.0263824462890625, -0.0178680419921875, -0.029144287109375, 0.01361083984375, 0.01458740234375, -0.054779052734375, -0.04962158203125, -0.051971435546875, -0.024...
YakovElm/Jira5Classic_with_cleaning
2023-05-24T03:37:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira5Classic_with_cleaning
0
2
transformers
2023-05-24T03:36:17
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira5Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2646
- Train Accuracy: 0.8919
- Validation Loss: 1.1625
- Validation Accuracy: 0.5584
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5257 | 0.7639 | 0.6646 | 0.5931 | 0 |
| 0.4200 | 0.7901 | 1.2433 | 0.4890 | 1 |
| 0.2646 | 0.8919 | 1.1625 | 0.5584 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
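The optimizer entry above is a raw Keras config dump. Reconstructed as code, and hedged with the assumption of a binary head (the card never states the label count), the training setup would look roughly like:

```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

# Mirror the dumped config: Adam, learning_rate=3e-05, clipnorm=1.0,
# beta_1=0.9, beta_2=0.999, epsilon=1e-08, no weight decay.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
)

model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # assumption: binary classification; the card does not say
)

# With no explicit loss passed, transformers' TF models fall back to their
# built-in task loss (sparse categorical cross-entropy for this head).
model.compile(optimizer=optimizer, metrics=["accuracy"])
```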
1,796
[ [ -0.0386962890625, -0.03851318359375, 0.022186279296875, -0.0092926025390625, -0.03546142578125, -0.022430419921875, -0.01493072509765625, -0.0262908935546875, 0.0141448974609375, 0.0158538818359375, -0.05096435546875, -0.0513916015625, -0.050079345703125, -0...
YakovElm/Jira10Classic_with_cleaning
2023-05-24T03:49:14.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira10Classic_with_cleaning
0
2
transformers
2023-05-24T03:48:37
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira10Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2784
- Train Accuracy: 0.8961
- Validation Loss: 1.2932
- Validation Accuracy: 0.5773
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5051 | 0.7807 | 0.6951 | 0.4921 | 0 |
| 0.4049 | 0.8048 | 1.1332 | 0.5079 | 1 |
| 0.2784 | 0.8961 | 1.2932 | 0.5773 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,798
[ [ -0.0390625, -0.0426025390625, 0.0220947265625, -0.007266998291015625, -0.033782958984375, -0.0252685546875, -0.0167999267578125, -0.0243988037109375, 0.0175628662109375, 0.0159759521484375, -0.048614501953125, -0.04779052734375, -0.050750732421875, -0.026229...
YakovElm/Jira15Classic_with_cleaning
2023-05-24T04:01:31.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira15Classic_with_cleaning
0
2
transformers
2023-05-24T04:00:56
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira15Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3366
- Train Accuracy: 0.8384
- Validation Loss: 0.8679
- Validation Accuracy: 0.5868
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5010 | 0.7891 | 0.7283 | 0.5205 | 0 |
| 0.4284 | 0.8006 | 0.9625 | 0.5205 | 1 |
| 0.3366 | 0.8384 | 0.8679 | 0.5868 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,798
[ [ -0.040771484375, -0.042938232421875, 0.021392822265625, -0.007022857666015625, -0.034271240234375, -0.026611328125, -0.0166778564453125, -0.0252227783203125, 0.01526641845703125, 0.016815185546875, -0.050811767578125, -0.0499267578125, -0.05029296875, -0.026...
YakovElm/Apache15Classic_Unbalance
2023-05-24T04:09:32.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache15Classic_Unbalance
0
2
transformers
2023-05-24T04:08:55
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_Unbalance
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Apache15Classic_Unbalance

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0303
- Train Accuracy: 0.9896
- Validation Loss: 0.7388
- Validation Accuracy: 0.8625
- Epoch: 5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1954 | 0.9537 | 0.3336 | 0.8924 | 0 |
| 0.1814 | 0.9542 | 0.3277 | 0.8924 | 1 |
| 0.1669 | 0.9542 | 0.3218 | 0.8924 | 2 |
| 0.1210 | 0.9555 | 0.4820 | 0.8716 | 3 |
| 0.0538 | 0.9828 | 0.5766 | 0.8716 | 4 |
| 0.0303 | 0.9896 | 0.7388 | 0.8625 | 5 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
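Reading the table above, validation loss bottoms out at epoch 2 (0.3218) and then climbs to 0.7388 while training accuracy approaches 0.99, a textbook overfitting curve. A standard Keras guard for reruns, sketched here rather than taken from the card, is early stopping with weight restoration:

```python
import tensorflow as tf

# Stop once val_loss has failed to improve for two epochs and roll the
# weights back to the best epoch (epoch 2 in the run reported above).
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=2,
    restore_best_weights=True,
)

# model.fit(train_ds, validation_data=val_ds, epochs=6, callbacks=[early_stop])
```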
2,035
[ [ -0.0455322265625, -0.039154052734375, 0.00925445556640625, 0.01352691650390625, -0.03424072265625, -0.02142333984375, -0.00949859619140625, -0.0206298828125, 0.0167999267578125, 0.0197601318359375, -0.055267333984375, -0.0447998046875, -0.05255126953125, -0....
YakovElm/Jira20Classic_with_cleaning
2023-05-24T04:13:50.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Jira20Classic_with_cleaning
0
2
transformers
2023-05-24T04:13:15
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Jira20Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1768
- Train Accuracy: 0.9339
- Validation Loss: 0.2889
- Validation Accuracy: 0.9085
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3741 | 0.8720 | 0.2881 | 0.9306 | 0 |
| 0.2767 | 0.8793 | 0.2442 | 0.9338 | 1 |
| 0.1768 | 0.9339 | 0.2889 | 0.9085 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,798
[ [ -0.03790283203125, -0.041961669921875, 0.0215606689453125, -0.0084228515625, -0.034088134765625, -0.0229644775390625, -0.01593017578125, -0.025115966796875, 0.017120361328125, 0.0178375244140625, -0.051361083984375, -0.0491943359375, -0.050323486328125, -0.0...
srglnjmb/mongolian-xlm-roberta-large-ner
2023-05-24T10:04:55.000Z
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "mn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
srglnjmb
null
null
srglnjmb/mongolian-xlm-roberta-large-ner
1
2
transformers
2023-05-24T05:38:53
---
language:
- mn
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mongolian-xlm-roberta-large-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mongolian-xlm-roberta-large-ner

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1256
- Precision: 0.9361
- Recall: 0.9423
- F1: 0.9392
- Accuracy: 0.9824

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1837 | 1.0 | 477 | 0.0939 | 0.8524 | 0.8895 | 0.8705 | 0.9745 |
| 0.0736 | 2.0 | 954 | 0.0731 | 0.9318 | 0.9370 | 0.9344 | 0.9809 |
| 0.0525 | 3.0 | 1431 | 0.0724 | 0.9244 | 0.9311 | 0.9278 | 0.9795 |
| 0.036 | 4.0 | 1908 | 0.0807 | 0.9312 | 0.9409 | 0.9361 | 0.9819 |
| 0.0248 | 5.0 | 2385 | 0.0855 | 0.9314 | 0.9407 | 0.9360 | 0.9814 |
| 0.0163 | 6.0 | 2862 | 0.1014 | 0.9327 | 0.9397 | 0.9362 | 0.9815 |
| 0.0112 | 7.0 | 3339 | 0.0997 | 0.9354 | 0.9433 | 0.9393 | 0.9822 |
| 0.0064 | 8.0 | 3816 | 0.1171 | 0.9384 | 0.9432 | 0.9408 | 0.9824 |
| 0.0049 | 9.0 | 4293 | 0.1237 | 0.9355 | 0.9418 | 0.9387 | 0.9822 |
| 0.0024 | 10.0 | 4770 | 0.1256 | 0.9361 | 0.9423 | 0.9392 | 0.9824 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
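The card lists metrics but no usage snippet. A minimal inference sketch for running the tagger (the example sentence is illustrative, and the aggregation setting merges XLM-RoBERTa's subword pieces back into whole entity spans):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="srglnjmb/mongolian-xlm-roberta-large-ner",
    aggregation_strategy="simple",  # merge subword pieces into full entity spans
)

# Illustrative Mongolian input: "...held in Ulaanbaatar, the capital of Mongolia."
print(ner("Монгол Улсын нийслэл Улаанбаатар хотод болсон."))
```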
2,357
[ [ -0.034423828125, -0.039337158203125, 0.01490020751953125, 0.0023593902587890625, -0.012359619140625, -0.0172119140625, -0.009246826171875, -0.01316070556640625, 0.0281982421875, 0.031036376953125, -0.0504150390625, -0.060638427734375, -0.051422119140625, -0....
YakovElm/Apache20Classic_Unbalance
2023-05-24T05:40:05.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/Apache20Classic_Unbalance
0
2
transformers
2023-05-24T05:39:29
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_Unbalance
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Apache20Classic_Unbalance

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0512
- Train Accuracy: 0.9824
- Validation Loss: 0.4866
- Validation Accuracy: 0.8748
- Epoch: 4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1714 | 0.9611 | 0.3033 | 0.9055 | 0 |
| 0.1558 | 0.9624 | 0.2976 | 0.9055 | 1 |
| 0.1447 | 0.9624 | 0.3133 | 0.9055 | 2 |
| 0.1024 | 0.9666 | 0.4150 | 0.8598 | 3 |
| 0.0512 | 0.9824 | 0.4866 | 0.8748 | 4 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
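The "Unbalance" in the model name suggests a skewed label distribution, but the card does not say how (or whether) the imbalance was compensated. Purely as a hypothetical illustration, a common Keras remedy is inverse-frequency class weighting:

```python
import numpy as np

# Made-up 9:1 skew purely for the demo; real counts would come from the data.
train_labels = np.array([0] * 900 + [1] * 100)
counts = np.bincount(train_labels)
total = train_labels.size

# Inverse-frequency weights: rarer classes contribute more to the loss.
class_weight = {i: total / (len(counts) * c) for i, c in enumerate(counts)}
print(class_weight)  # {0: ~0.56, 1: 5.0} -- the rare class weighted ~9x heavier

# model.fit(train_ds, validation_data=val_ds, class_weight=class_weight)
```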
1,955
[ [ -0.0458984375, -0.0401611328125, 0.00894927978515625, 0.0178985595703125, -0.036102294921875, -0.022979736328125, -0.00916290283203125, -0.0220794677734375, 0.0159912109375, 0.0207977294921875, -0.056243896484375, -0.043182373046875, -0.053558349609375, -0.0...
YakovElm/MariaDB5Classic_with_cleaning
2023-05-24T05:49:05.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB5Classic_with_cleaning
0
2
transformers
2023-05-24T05:48:30
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB5Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2274
- Train Accuracy: 0.9079
- Validation Loss: 0.2936
- Validation Accuracy: 0.9271
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3439 | 0.8962 | 0.2474 | 0.9322 | 0 |
| 0.2799 | 0.8979 | 0.2671 | 0.9322 | 1 |
| 0.2274 | 0.9079 | 0.2936 | 0.9271 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
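This repository is tagged `tf` only. PyTorch users can still load it by converting the checkpoint on the fly — a sketch assuming `tf_model.h5` is the only weights file present (the conversion requires TensorFlow to be installed alongside PyTorch):

```python
from transformers import BertForSequenceClassification

# from_tf=True converts the TensorFlow checkpoint to PyTorch at load time.
model = BertForSequenceClassification.from_pretrained(
    "YakovElm/MariaDB5Classic_with_cleaning", from_tf=True
)
```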
1,802
[ [ -0.0435791015625, -0.042816162109375, 0.022216796875, -0.00482940673828125, -0.03436279296875, -0.0294647216796875, -0.012481689453125, -0.0265655517578125, 0.0148468017578125, 0.0191802978515625, -0.05712890625, -0.053863525390625, -0.04986572265625, -0.023...
YakovElm/MariaDB10Classic_with_cleaning
2023-05-24T06:04:11.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB10Classic_with_cleaning
0
2
transformers
2023-05-24T06:03:32
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB10Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1906
- Train Accuracy: 0.9163
- Validation Loss: 0.2498
- Validation Accuracy: 0.9523
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3088 | 0.9138 | 0.1970 | 0.9523 | 0 |
| 0.2364 | 0.9163 | 0.2051 | 0.9523 | 1 |
| 0.1906 | 0.9163 | 0.2498 | 0.9523 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,804
[ [ -0.042877197265625, -0.04388427734375, 0.021209716796875, -0.0034732818603515625, -0.03594970703125, -0.029815673828125, -0.01331329345703125, -0.0245819091796875, 0.017333984375, 0.0186309814453125, -0.05694580078125, -0.052093505859375, -0.049652099609375, ...
YakovElm/MariaDB15Classic_with_cleaning
2023-05-24T06:18:59.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB15Classic_with_cleaning
0
2
transformers
2023-05-24T06:18:23
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB15Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1825
- Train Accuracy: 0.9381
- Validation Loss: 0.1702
- Validation Accuracy: 0.9598
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2807 | 0.9264 | 0.1655 | 0.9598 | 0 |
| 0.2227 | 0.9339 | 0.1533 | 0.9598 | 1 |
| 0.1825 | 0.9381 | 0.1702 | 0.9598 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
1,804
[ [ -0.04443359375, -0.043914794921875, 0.0213775634765625, -0.002410888671875, -0.0343017578125, -0.0298919677734375, -0.01457977294921875, -0.025390625, 0.01438140869140625, 0.01904296875, -0.055755615234375, -0.050628662109375, -0.051239013671875, -0.02590942...
YakovElm/MariaDB20Classic_with_cleaning
2023-05-24T06:34:03.000Z
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
YakovElm
null
null
YakovElm/MariaDB20Classic_with_cleaning
0
2
transformers
2023-05-24T06:33:08
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic_with_cleaning
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# MariaDB20Classic_with_cleaning

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1994
- Train Accuracy: 0.9356
- Validation Loss: 0.1398
- Validation Accuracy: 0.9698
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2732 | 0.9331 | 0.1507 | 0.9698 | 0 |
| 0.2300 | 0.9356 | 0.1264 | 0.9698 | 1 |
| 0.1994 | 0.9356 | 0.1398 | 0.9698 | 2 |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
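For completeness, a pipeline-free TensorFlow inference sketch for this family of checkpoints — assuming the tokenizer was pushed alongside the weights (if not, `bert-base-uncased`'s tokenizer is the natural fallback):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "YakovElm/MariaDB20Classic_with_cleaning"
tokenizer = AutoTokenizer.from_pretrained(repo)  # fallback: "bert-base-uncased"
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

batch = tokenizer(
    ["First issue text", "Second issue text"],
    padding=True, truncation=True, return_tensors="tf",
)
logits = model(**batch).logits
probs = tf.nn.softmax(logits, axis=-1)    # per-class probabilities
print(tf.argmax(probs, axis=-1).numpy())  # predicted label ids
```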
1,804
[ [ -0.043121337890625, -0.04595947265625, 0.0218658447265625, -0.0036411285400390625, -0.03582763671875, -0.0298004150390625, -0.01293182373046875, -0.025665283203125, 0.0167388916015625, 0.0198822021484375, -0.0584716796875, -0.05267333984375, -0.049652099609375, ...
MJ03/distilbert-base-uncased-finetuned-emotion
2023-05-24T07:13:05.000Z
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emo", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
MJ03
null
null
MJ03/distilbert-base-uncased-finetuned-emotion
0
2
transformers
2023-05-24T06:55:05
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emo
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emo
      type: emo
      config: emo2019
      split: test
      args: emo2019
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8718460700671629
    - name: F1
      type: f1
      value: 0.8831861224754917
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emo dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
- Accuracy: 0.8718
- F1: 0.8832

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5285 | 1.0 | 472 | 0.3616 | 0.8673 | 0.8792 |
| 0.2833 | 2.0 | 944 | 0.3598 | 0.8718 | 0.8832 |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
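The hyperparameters above map one-to-one onto `TrainingArguments`. A hedged reconstruction of the recipe — `load_dataset("emo", "emo2019")` matches the dataset/config named in the model-index (verify it still resolves on the Hub), and `num_labels=4` assumes emo2019's four classes (others/happy/sad/angry):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("emo", "emo2019")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoded = raw.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=4,  # assumption: emo2019's four emotion classes
)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-05,             # the values below mirror the card's
    per_device_train_batch_size=64,  # hyperparameter list
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    tokenizer=tokenizer,
)
# trainer.train()
```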
1,842
[ [ -0.035125732421875, -0.052581787109375, 0.0233917236328125, 0.01415252685546875, -0.024261474609375, -0.0197296142578125, -0.013519287109375, -0.00481414794921875, 0.019195556640625, 0.0108795166015625, -0.058349609375, -0.05584716796875, -0.057098388671875, ...
sadra-barikbin/ppo-UnityPyramids
2023-05-24T07:17:04.000Z
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
sadra-barikbin
null
null
sadra-barikbin/ppo-UnityPyramids
0
2
ml-agents
2023-05-24T07:16:32
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: sadra-barikbin/ppo-UnityPyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
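The template above only covers resuming an existing run. For completeness, a fresh training run and the Hub upload typically look like the following — the config path and run id are illustrative, and the `mlagents-push-to-hf` flags should be checked against the installed ML-Agents version:

```
mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id=Pyramids1 --no-graphics

mlagents-push-to-hf --run-id=Pyramids1 --local-dir=./results/Pyramids1 --repo-id=sadra-barikbin/ppo-UnityPyramids --commit-message="Trained Pyramids agent"
```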
962
[ [ -0.026580810546875, -0.01898193359375, -0.0006728172302246094, 0.0252838134765625, -0.0106048583984375, 0.0068511962890625, 0.0279998779296875, -0.003414154052734375, 0.035125732421875, 0.034271240234375, -0.035369873046875, -0.05126953125, -0.035675048828125, ...
Avitas8485/speecht5_tts_commonvoice_en
2023-09-12T22:34:44.000Z
[ "transformers", "pytorch", "tensorboard", "safetensors", "speecht5", "text-to-audio", "text-to-speech", "generated_from_trainer", "en", "dataset:mozilla/commonvoice", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
Avitas8485
null
null
Avitas8485/speecht5_tts_commonvoice_en
1
2
transformers
2023-05-24T07:31:08
---
language:
- en
license: mit
tags:
- text-to-speech
- generated_from_trainer
datasets:
- mozilla/commonvoice
base_model: microsoft/speecht5_tts
model-index:
- name: SpeechT5 TTS English
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# SpeechT5 TTS English

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the commonvoice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4261

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4543 | 13.61 | 1000 | 0.4225 |
| 0.4525 | 27.21 | 2000 | 0.4203 |
| 0.4359 | 40.82 | 3000 | 0.4228 |
| 0.4324 | 54.42 | 4000 | 0.4261 |

### Framework versions

- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
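No usage example is included above; the standard SpeechT5 inference path applies. A sketch — the zero speaker embedding is only a placeholder to keep the snippet self-contained, and a real 512-dim x-vector (e.g. from `Matthijs/cmu-arctic-xvectors`) gives far better output:

```python
import soundfile as sf
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "Avitas8485/speecht5_tts_commonvoice_en"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker x-vector;
# zeros are a placeholder, not a realistic voice.
speaker_embeddings = torch.zeros(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```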
1,667
[ [ -0.0251922607421875, -0.03179931640625, -0.0002244710922241211, 0.01505279541015625, -0.031280517578125, -0.019927978515625, -0.010955810546875, -0.019866943359375, -0.0004382133483886719, 0.0226287841796875, -0.049163818359375, -0.05462646484375, -0.05215454101...