Dataset schema:

- license — string (length 2–30)
- tags — string (length 2–513)
- is_nc — bool (1 class)
- readme_section — string (length 201–597k)
- hash — string (length 32)
cc-by-4.0
['norwegian', 'bert', 'ner']
false
Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-ner")
model = AutoModelForTokenClassification.from_pretrained("NbAiLab/nb-bert-base-ner")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jeg heter Kjell og bor i Oslo."

ner_results = nlp(example)
print(ner_results)
```
a8b1b3b1bf66f94290554cc4978339ee
apache-2.0
['bert', 'cola', 'glue', 'torchdistill']
false
`bert-large-uncased` fine-tuned on the CoLA dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb). The hyperparameters are the same as those used in Hugging Face's example and/or the original BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/ce/bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
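Loading the resulting checkpoint for inference might look like the sketch below; the hub id `yoshitomo-matsubara/bert-large-uncased-cola` is an assumption, since this section does not state where the fine-tuned weights were published:

```python
from transformers import pipeline

# Assumed hub id -- adjust to wherever the fine-tuned checkpoint actually lives.
classifier = pipeline("text-classification", model="yoshitomo-matsubara/bert-large-uncased-cola")

# CoLA is a binary linguistic-acceptability task, so the pipeline returns one
# label with a confidence score per input sentence.
result = classifier("The boy quickly ran home.")
print(result)
```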
b8175ede373ff5212287c4a0c5669aaf
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-big-cat_oci_spa-en

Neural machine translation model for translating from Catalan, Occitan and Spanish (cat+oci+spa) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)

```
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```
6ac38f5eb5413c660b0d5cf73d5b9591
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info

* Release: 2022-03-13
* source language(s): cat spa
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat+oci+spa-eng/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information on released models: [OPUS-MT cat+oci+spa-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat+oci+spa-eng/README.md)
5c3ab09244a753dc9155936097c2f889
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage

A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "¿Puedo hacerte una pregunta?",
    "Toca algo de música."
]

model_name = "pytorch-models/opus-mt-tc-big-cat_oci_spa-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
d496bab19c39b0c4a4ccaafbb87a12d4
cc-by-4.0
['translation', 'opus-mt-tc']
false
He plays some music.

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-cat_oci_spa-en")
print(pipe("¿Puedo hacerte una pregunta?"))
```
7858a39993786413528ea605a6a5f6b6
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks

* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat+oci+spa-eng/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat+oci+spa-eng/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU |
1c2738681406fd55f582b9379eaa7c00
cc-by-4.0
['translation', 'opus-mt-tc']
false
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| cat-eng | tatoeba-test-v2021-08-07 | 0.72019 | 57.3 | 1631 | 12627 |
| spa-eng | tatoeba-test-v2021-08-07 | 0.76017 | 62.3 | 16583 | 138123 |
| cat-eng | flores101-devtest | 0.69572 | 45.4 | 1012 | 24721 |
| oci-eng | flores101-devtest | 0.63347 | 37.5 | 1012 | 24721 |
| spa-eng | flores101-devtest | 0.59696 | 29.9 | 1012 | 24721 |
| spa-eng | newssyscomb2009 | 0.57104 | 30.8 | 502 | 11818 |
| spa-eng | news-test2008 | 0.55440 | 27.9 | 2051 | 49380 |
| spa-eng | newstest2009 | 0.57153 | 30.2 | 2525 | 65399 |
| spa-eng | newstest2010 | 0.61890 | 36.8 | 2489 | 61711 |
| spa-eng | newstest2011 | 0.60278 | 34.7 | 3003 | 74681 |
| spa-eng | newstest2012 | 0.62760 | 38.6 | 3003 | 72812 |
| spa-eng | newstest2013 | 0.60994 | 35.3 | 3000 | 64505 |
| spa-eng | tico19-test | 0.74033 | 51.8 | 2100 | 56315 |
dc2591e73926d512c65fa72f5da9967b
mit
['generated_from_trainer', 'nlu', 'text-classification']
false
multilingual_minilm-amazon_massive-intent_eu7

This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [MASSIVE 1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset. It achieves the following results on the evaluation set:
- Loss: 0.8238
- Accuracy: 0.8623
- F1: 0.8623
9110d479cb581915f85cd3e793ff730a
mit
['generated_from_trainer', 'nlu', 'text-classification']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.3523        | 1.0   | 5038  | 1.3058          | 0.6937   | 0.6937 |
| 0.7842        | 2.0   | 10076 | 0.8434          | 0.8059   | 0.8059 |
| 0.5359        | 3.0   | 15114 | 0.7231          | 0.8302   | 0.8302 |
| 0.4106        | 4.0   | 20152 | 0.7121          | 0.8443   | 0.8443 |
| 0.3294        | 5.0   | 25190 | 0.7366          | 0.8497   | 0.8497 |
| 0.2621        | 6.0   | 30228 | 0.7702          | 0.8528   | 0.8528 |
| 0.2164        | 7.0   | 35266 | 0.7773          | 0.8577   | 0.8577 |
| 0.1756        | 8.0   | 40304 | 0.8080          | 0.8569   | 0.8569 |
| 0.1625        | 9.0   | 45342 | 0.8162          | 0.8624   | 0.8624 |
| 0.1448        | 10.0  | 50380 | 0.8238          | 0.8623   | 0.8623 |
9231f1997f48c5fe45c5a4952d8f7676
apache-2.0
['generated_from_trainer']
false
whisper-dpv-finetuned

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set:
- epoch: 13.07
- eval_loss: 0.0002
- eval_runtime: 8695.8511
- eval_samples_per_second: 0.458
- eval_steps_per_second: 0.458
- eval_wer: 0.0112
- step: 13000
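A minimal transcription sketch for a checkpoint like this; since the fine-tuned repo id is not given in this section, the lightweight `openai/whisper-tiny` checkpoint stands in for it here:

```python
import numpy as np
from transformers import pipeline

# The fine-tuned repo id is not stated in this card section, so a small
# public Whisper checkpoint is used as a stand-in.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# One second of silence at 16 kHz; replace with a real recording in practice.
audio = {"raw": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000}
print(asr(audio)["text"])
```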
d499af1e4e367c2528f12e5356a0a350
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
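The schedule listed above (`lr_scheduler_type: linear` with 50 warmup steps) can be sketched in plain Python to show its shape; `total_steps` below is a placeholder, since the actual step count depends on the dataset size:

```python
def linear_schedule_with_warmup(step, warmup_steps=50, total_steps=48750, peak_lr=1e-05):
    """Linear warmup to peak_lr, then linear decay to 0 -- an illustration of the
    'linear' lr_scheduler_type, not the exact transformers implementation."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Decay linearly from peak_lr (end of warmup) down to 0 at total_steps.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_with_warmup(0))      # zero at the first step
print(linear_schedule_with_warmup(50))     # peak learning rate after warmup
print(linear_schedule_with_warmup(48750))  # fully decayed
```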
f3182ee67a6da48ef0cdfa15bc246207
mit
['generated_from_trainer']
false
bert-base-historic-multilingual-64k-td-cased-squad-fr

This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-64k-td-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.7419
026bce2b2e6f50d94e5c42b6b6ddd653
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9834        | 1.0   | 3569 | 1.8605          |
| 1.663         | 2.0   | 7138 | 1.7419          |
7d7ed4166556b3b565718a7d25a3c676
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5108
- Wer: 0.3342
7286ac5d04db1503e8748124337f9955
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6383        | 1.0   | 500   | 2.3747          | 1.0    |
| 0.9624        | 2.01  | 1000  | 0.5724          | 0.5213 |
| 0.4521        | 3.01  | 1500  | 0.4892          | 0.4794 |
| 0.3126        | 4.02  | 2000  | 0.4250          | 0.3991 |
| 0.2299        | 5.02  | 2500  | 0.4288          | 0.3929 |
| 0.195         | 6.02  | 3000  | 0.4707          | 0.3974 |
| 0.1602        | 7.03  | 3500  | 0.4731          | 0.4034 |
| 0.1477        | 8.03  | 4000  | 0.4405          | 0.3896 |
| 0.1284        | 9.04  | 4500  | 0.4663          | 0.3850 |
| 0.1114        | 10.04 | 5000  | 0.4814          | 0.3759 |
| 0.1024        | 11.04 | 5500  | 0.4821          | 0.3701 |
| 0.0973        | 12.05 | 6000  | 0.4718          | 0.3709 |
| 0.0832        | 13.05 | 6500  | 0.5257          | 0.3678 |
| 0.0741        | 14.06 | 7000  | 0.4741          | 0.3621 |
| 0.0696        | 15.06 | 7500  | 0.5073          | 0.3710 |
| 0.0664        | 16.06 | 8000  | 0.4886          | 0.3651 |
| 0.0613        | 17.07 | 8500  | 0.5300          | 0.3588 |
| 0.0612        | 18.07 | 9000  | 0.4983          | 0.3543 |
| 0.049         | 19.08 | 9500  | 0.5158          | 0.3592 |
| 0.0455        | 20.08 | 10000 | 0.5213          | 0.3525 |
| 0.042         | 21.08 | 10500 | 0.4979          | 0.3474 |
| 0.0376        | 22.09 | 11000 | 0.5335          | 0.3493 |
| 0.0331        | 23.09 | 11500 | 0.5276          | 0.3451 |
| 0.0346        | 24.1  | 12000 | 0.5106          | 0.3428 |
| 0.0294        | 25.1  | 12500 | 0.5414          | 0.3426 |
| 0.0265        | 26.1  | 13000 | 0.5234          | 0.3363 |
| 0.0273        | 27.11 | 13500 | 0.5207          | 0.3356 |
| 0.0255        | 28.11 | 14000 | 0.5092          | 0.3354 |
| 0.0248        | 29.12 | 14500 | 0.5108          | 0.3342 |
f56859379225c84a9e9667ec1af5f5c6
apache-2.0
['generated_from_keras_callback']
false
ksabeh/distilbert-base-uncased-mlm-electronics-attribute-correction-qa-mlm

This model is a fine-tuned version of [ksabeh/distilbert-base-uncased-mlm-electronics](https://huggingface.co/ksabeh/distilbert-base-uncased-mlm-electronics) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0607
- Validation Loss: 0.0609
- Epoch: 1
1a9ceaaa21380daf31846915fbce7bf4
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36794, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
e45f06f27b4dab6a0115be06cc416836
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_qqp_384

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set:
- Loss: 0.5363
- Accuracy: 0.7995
- F1: 0.7338
- Combined Score: 0.7666
8e8c1d67e4e664114c61184c954af243
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step   | Accuracy | Combined Score | F1     | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:--------------:|:------:|:---------------:|
| 0.3538        | 1.0   | 29671  | 0.7995   | 0.7666         | 0.7338 | 0.5363          |
| 0.1571        | 2.0   | 59342  | 0.7215   | 0.8000         | 0.7396 | 0.7698          |
| 0.0894        | 3.0   | 89013  | 0.7922   | 0.7998         | 0.7407 | 0.7702          |
| 0.0596        | 4.0   | 118684 | 0.8829   | 0.8045         | 0.7399 | 0.7722          |
| 0.0433        | 5.0   | 148355 | 0.8505   | 0.8110         | 0.7443 | 0.7777          |
| 0.0334        | 6.0   | 178026 | 1.0843   | 0.8047         | 0.7446 | 0.7746          |
d633a5831c490a3e6036e7a9b6a729f4
apache-2.0
['automatic-speech-recognition', 'id']
false
exp_w2v2t_id_vp-100k_s842 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
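A transcription sketch for a HuggingSound-trained CTC checkpoint like this one, using the plain transformers pipeline; the namespace in the hub id below is an assumption (the section names the checkpoint but not where it is hosted):

```python
import numpy as np
from transformers import pipeline

# Assumed hub id -- the card names the checkpoint but not its namespace.
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_id_vp-100k_s842")

# As noted above, input must be sampled at 16 kHz; a second of silence
# stands in for a real recording here.
audio = {"raw": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000}
print(asr(audio)["text"])
```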
784d3bfcb703489ee48ad78594a8ee6d
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
ff97d13dfc0427fe137136787104765c
mit
['generated_from_trainer']
false
Train

```bash
python run_qa.py \
    --model_name_or_path roberta-large \
    --dataset_name squad \
    --do_eval \
    --do_train \
    --evaluation_strategy steps \
    --eval_steps 500 \
    --learning_rate 3e-5 \
    --fp16 \
    --num_train_epochs 2 \
    --per_device_eval_batch_size 64 \
    --per_device_train_batch_size 16 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --save_steps 1000 \
    --logging_steps 1 \
    --overwrite_output_dir \
    --run_name $RUNID \
    --output_dir $OUTDIR
```
16f2f9f15a699a16fde1f13aa1952702
mit
['generated_from_trainer']
false
Eval

```bash
export CUDA_VISIBLE_DEVICES=0
MODEL=vuiseng9/roberta-l-squadv1.1
OUTDIR=eval-$(basename $MODEL)
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR

nohup python run_qa.py \
    --model_name_or_path $MODEL \
    --dataset_name squad \
    --do_eval \
    --per_device_eval_batch_size 16 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --overwrite_output_dir \
    --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```

```bash
eval_exact_match = 88.4674
eval_f1 = 94.3001
eval_samples = 10790
```
fc1302745f6f6d334f995fc144f298ff
apache-2.0
['automatic-speech-recognition', 'ru']
false
exp_w2v2t_ru_xlsr-53_s303 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
f7cb7f198fca36d3141cffbe0d5f333e
apache-2.0
['translation']
false
opus-mt-es-ru

* source languages: es
* target languages: ru
* OPUS readme: [es-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.eval.txt)
841330209bc4c136361aa73a2ce98212
apache-2.0
['translation']
false
Benchmarks

| testset               | BLEU  | chr-F |
|-----------------------|-------|-------|
| newstest2012.es.ru    | 20.9  | 0.489 |
| newstest2013.es.ru    | 23.4  | 0.504 |
| Tatoeba.es.ru         | 47.0  | 0.657 |
203524ead4ffcefb680e21757e27c477
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the rotten_tomatoes_movie_review dataset. It achieves the following results on the evaluation set:
- Loss: 0.8692
- Accuracy: 0.8433
- F1: 0.8407
d6d6a4267d2779a0ff74e6eec15743e4
apache-2.0
['generated_from_trainer']
false
wav2vec-base-All

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.0545
- Wer: 0.8861
- Cer: 0.5014
cf7383243027a263b661965d57d0d21d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 120
- mixed_precision_training: Native AMP
bdada70946f46d7b74ca77655b717a7c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| No log        | 3.33   | 500   | 4.0654          | 1.0    | 0.9823 |
| No log        | 6.67   | 1000  | 3.4532          | 1.0    | 0.9823 |
| No log        | 10.0   | 1500  | 3.0707          | 0.9992 | 0.9781 |
| No log        | 13.33  | 2000  | 2.7335          | 1.0017 | 0.9027 |
| No log        | 16.67  | 2500  | 2.5896          | 1.0690 | 0.7302 |
| No log        | 20.0   | 3000  | 2.3315          | 1.0690 | 0.6677 |
| No log        | 23.33  | 3500  | 2.2217          | 1.0150 | 0.5966 |
| No log        | 26.67  | 4000  | 2.3802          | 1.0549 | 0.5948 |
| No log        | 30.0   | 4500  | 2.2208          | 0.9975 | 0.5681 |
| 2.4224        | 33.33  | 5000  | 2.2687          | 0.9800 | 0.5537 |
| 2.4224        | 36.67  | 5500  | 2.3169          | 0.9476 | 0.5493 |
| 2.4224        | 40.0   | 6000  | 2.5196          | 0.9900 | 0.5509 |
| 2.4224        | 43.33  | 6500  | 2.4816          | 0.9501 | 0.5272 |
| 2.4224        | 46.67  | 7000  | 2.4894          | 0.9485 | 0.5276 |
| 2.4224        | 50.0   | 7500  | 2.4555          | 0.9418 | 0.5305 |
| 2.4224        | 53.33  | 8000  | 2.7326          | 0.9559 | 0.5255 |
| 2.4224        | 56.67  | 8500  | 2.5514          | 0.9227 | 0.5209 |
| 2.4224        | 60.0   | 9000  | 2.9135          | 0.9717 | 0.5455 |
| 2.4224        | 63.33  | 9500  | 3.0465          | 0.8346 | 0.5002 |
| 0.8569        | 66.67  | 10000 | 2.8177          | 0.9302 | 0.5216 |
| 0.8569        | 70.0   | 10500 | 2.9908          | 0.9310 | 0.5128 |
| 0.8569        | 73.33  | 11000 | 3.1752          | 0.9235 | 0.5284 |
| 0.8569        | 76.67  | 11500 | 2.7412          | 0.8886 | 0.5    |
| 0.8569        | 80.0   | 12000 | 2.7362          | 0.9127 | 0.5040 |
| 0.8569        | 83.33  | 12500 | 2.9636          | 0.9152 | 0.5093 |
| 0.8569        | 86.67  | 13000 | 3.0139          | 0.9011 | 0.5097 |
| 0.8569        | 90.0   | 13500 | 2.8325          | 0.8853 | 0.5032 |
| 0.8569        | 93.33  | 14000 | 3.0383          | 0.8845 | 0.5056 |
| 0.8569        | 96.67  | 14500 | 2.7931          | 0.8795 | 0.4965 |
| 0.3881        | 100.0  | 15000 | 2.8972          | 0.8928 | 0.5012 |
| 0.3881        | 103.33 | 15500 | 2.7780          | 0.8736 | 0.4947 |
| 0.3881        | 106.67 | 16000 | 3.1081          | 0.9036 | 0.5109 |
| 0.3881        | 110.0  | 16500 | 3.0078          | 0.8928 | 0.5032 |
| 0.3881        | 113.33 | 17000 | 3.0245          | 0.8886 | 0.5009 |
| 0.3881        | 116.67 | 17500 | 3.0739          | 0.8928 | 0.5065 |
| 0.3881        | 120.0  | 18000 | 3.0545          | 0.8861 | 0.5014 |
db1c13782ce9262c75d64b4d18b170e3
apache-2.0
['translation']
false
Model description

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set:
- Bleu: 40.70

More information needed
87ea98ef8b05b3922109950b2edeee86
apache-2.0
['translation']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
4348705f85f226b06e31cdf848a4785e
apache-2.0
['generated_from_keras_callback']
false
HanSSH/mt5-small-finetuned-amazon-en-es

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 3.2684
- Validation Loss: 3.2288
- Epoch: 3
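A summarization sketch for this checkpoint; the hub id is taken from the card title and assumed to be the published repo:

```python
from transformers import pipeline

# Hub id taken from the card title; assumed to be the published repo.
summarizer = pipeline("summarization", model="HanSSH/mt5-small-finetuned-amazon-en-es")

review = (
    "I bought this e-reader for my daughter and she loves it. "
    "The battery lasts for weeks and the screen is easy on the eyes."
)
print(summarizer(review)[0]["summary_text"])
```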
d24c887f0d65cea559caf8d5734a571f
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.00056, 'decay_steps': 4836, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
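With `power=1.0` and `cycle=False`, the `PolynomialDecay` schedule in the optimizer config above reduces to a straight line from the initial rate down to `end_learning_rate`. A plain-Python sketch of that formula (an illustration, not the Keras implementation):

```python
def polynomial_decay(step, initial_lr=0.00056, decay_steps=4836, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False: clamp step, then interpolate."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # initial learning rate
print(polynomial_decay(2418))  # halfway: half the initial rate (since power=1.0)
print(polynomial_decay(4836))  # fully decayed to end_lr
```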
0262b354fa1acb6a48615c10662f8182
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.8144     | 3.5283          | 0     |
| 3.8758     | 3.2971          | 1     |
| 3.4741     | 3.2452          | 2     |
| 3.2684     | 3.2288          | 3     |
e8cd4ba75def0ce0f6d0e31fe036d9e5
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-sst2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.3739
- Accuracy: 0.9128
6c9a54a4194a9295beb64b6b6cf0e7f3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1885        | 1.0   | 4210  | 0.3092          | 0.9083   |
| 0.1311        | 2.0   | 8420  | 0.3809          | 0.9071   |
| 0.1036        | 3.0   | 12630 | 0.3739          | 0.9128   |
| 0.0629        | 4.0   | 16840 | 0.4623          | 0.9083   |
| 0.036         | 5.0   | 21050 | 0.5198          | 0.9048   |
4c940e330d45bf4c5f34400de270c8eb
apache-2.0
['Quality Estimation', 'siamesetransquest', 'da']
false
Using Pre-trained Models

```python
import torch
from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel

model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-ro_en-wiki")
predictions = model.predict([
    [
        "Reducerea acestor conflicte este importantă pentru conservare.",
        "Reducing these conflicts is not important for preservation.",
    ]
])
print(predictions)
```
1ba7300c010981228ed81e3db30bb179
mit
['generated_from_trainer']
false
PubMedBert-abstract-cord19-v2

This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [pritamdeka/cord-19-abstract](https://huggingface.co/datasets/pritamdeka/cord-19-abstract) dataset. It achieves the following results on the evaluation set:
- Loss: 1.2371
- Accuracy: 0.7247
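Since this is a masked-language model (the loss and accuracy above are MLM metrics), a fill-mask sketch is the natural usage example; the hub id below is an assumption inferred from the card title and the dataset namespace:

```python
from transformers import pipeline

# Assumed hub id, inferred from the card title and dataset namespace.
fill = pipeline("fill-mask", model="pritamdeka/PubMedBert-abstract-cord19-v2")

preds = fill("The [MASK] spreads primarily through respiratory droplets.")
for p in preds:
    print(p["token_str"], round(p["score"], 4))
```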
a76cf904b39dedc9a4e1bbf4799f7b57
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 4.0
- mixed_precision_training: Native AMP
428c15931e1e4bf025306ed821e0e5d0
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.27          | 0.53  | 5000  | 1.2425          | 0.7236   |
| 1.2634        | 1.06  | 10000 | 1.3123          | 0.7141   |
| 1.3041        | 1.59  | 15000 | 1.3583          | 0.7072   |
| 1.3829        | 2.12  | 20000 | 1.3590          | 0.7121   |
| 1.3069        | 2.65  | 25000 | 1.3506          | 0.7154   |
| 1.2921        | 3.18  | 30000 | 1.3448          | 0.7160   |
| 1.2731        | 3.7   | 35000 | 1.3375          | 0.7178   |
a6b611dddac622e992f72d5630a0fc52
mit
['generated_from_trainer']
false
roberta-large-unlabeled-gab-reddit-semeval2023-task10-57000sample

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.8874
c87fa783cd5e47b0033d2d6dea6e620a
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
aef32128bf736527755eda4334964ccc
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1999        | 1.0   | 3563  | 2.0576          |
| 2.0587        | 2.0   | 7126  | 1.9371          |
| 1.9591        | 3.0   | 10689 | 1.8823          |
| 1.8652        | 4.0   | 14252 | 1.8874          |
26f4605333151fb50b852aece2258ff1
apache-2.0
['tapas', 'table-question-answering']
false
TAPAS large model fine-tuned on WikiTable Questions (WTQ)

This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).

The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_large` (intermediate pre-training, absolute position embeddings).

Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors.
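A minimal table-question-answering sketch for the default (reset) checkpoint of this card; the table contents below are made up for illustration:

```python
from transformers import pipeline

# Default (reset) checkpoint described above.
tqa = pipeline("table-question-answering", model="google/tapas-large-finetuned-wtq")

# TAPAS expects every table cell as a string; the values here are illustrative.
table = {
    "Repository": ["transformers", "datasets", "tokenizers"],
    "Stars": ["36542", "4512", "3934"],
}
out = tqa(table=table, query="How many stars does the transformers repository have?")
print(out["answer"])
```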
041fc6d93702c19d16144c1dd268010b
apache-2.0
['tapas', 'table-question-answering']
false
Results

Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
**LARGE** | **noreset** | **0.5062** | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
**LARGE** | **reset** | **0.5097** | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
a9e5d366fd6b2e201f4601ef29c4d125
apache-2.0
['generated_from_trainer']
false
mobilebert_add_GLUE_Experiment_mnli

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set:
- Loss: 1.0985
- Accuracy: 0.3522
5ad93e7b70ce670c6b7a3f1e80c27f97
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0988        | 1.0   | 3068  | 1.0988          | 0.3182   |
| 1.0987        | 2.0   | 6136  | 1.0986          | 0.3184   |
| 1.0987        | 3.0   | 9204  | 1.0989          | 0.3274   |
| 1.0987        | 4.0   | 12272 | 1.0987          | 0.3182   |
| 1.0987        | 5.0   | 15340 | 1.0984          | 0.3545   |
| 1.0986        | 6.0   | 18408 | 1.0987          | 0.3274   |
| 1.0986        | 7.0   | 21476 | 1.0993          | 0.3274   |
| 1.0986        | 8.0   | 24544 | 1.0985          | 0.3545   |
| 1.0986        | 9.0   | 27612 | 1.0985          | 0.3545   |
| 1.0986        | 10.0  | 30680 | 1.0987          | 0.3182   |
0a982cb475006b8cec8be1621a6fadcf
gpl-2.0
['corenlp']
false
Core NLP model for english-extra

CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations. Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).

This card and repo were automatically prepared with `hugging_corenlp.py` in the `stanfordnlp/huggingface-models` repo.

Last updated 2023-01-21 01:36:25.611
ee5bc5be3e30b3edc1a3f88617ac40de
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
lacroix_can_plus_van_gogh

Dreambooth model trained by spooncats with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

Sample pictures of this concept:

![0](https://huggingface.co/spooncats/lacroix-can-plus-van-gogh/resolve/main/sample_images/grid-0020.png)
778d25df0b1e78f16c5ffb4307b4e368
cc-by-4.0
['answer extraction']
false
Model Card of `lmqg/mt5-base-jaquad-ae`

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for answer extraction on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).
a7bf2eff3f42942a6f85d223df0af7d0
cc-by-4.0
['answer extraction']
false
Overview

- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
5ec06ba90c37cf2ea4dea48c53bf237b
cc-by-4.0
['answer extraction']
false
model prediction
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="ja", model="lmqg/mt5-base-jaquad-ae")

# model prediction
answers = model.generate_a("フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。")
```
- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-jaquad-ae")
output = pipe("『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。<hl>前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる3万5000部が刷られた。<hl>他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。")
```
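The `<hl>` markers in the `transformers` example above delimit the sentence from which answers should be extracted. A minimal sketch of that preprocessing step — the helper function is illustrative and not part of the `lmqg` API:

```python
def highlight_sentence(paragraph: str, sentence: str, hl_token: str = "<hl>") -> str:
    """Wrap the target sentence in highlight tokens so the model knows
    which sentence to extract answers from (illustrative helper)."""
    if sentence not in paragraph:
        raise ValueError("sentence must occur verbatim in the paragraph")
    return paragraph.replace(sentence, f"{hl_token}{sentence}{hl_token}", 1)

paragraph = "前半の文です。イギリスでは初版は3万5000部が刷られた。後半の文です。"
sentence = "イギリスでは初版は3万5000部が刷られた。"
print(highlight_sentence(paragraph, sentence))
```

The highlighted paragraph is then passed to the model as a single string, as in the pipeline call above.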
ec1b8982d1567ac4159e0cf4c14e95df
cc-by-4.0
['answer extraction']
false
Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-jaquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_jaquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 28.33 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | AnswerF1Score | 28.33 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | BERTScore | 77.33 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_1 | 33.75 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_2 | 30.74 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_3 | 28.29 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_4 | 26.48 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | METEOR | 25.61 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | MoverScore | 64.96 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | ROUGE_L | 35.58 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
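AnswerF1Score above measures token overlap between predicted and gold answers. A minimal SQuAD-style reimplementation for illustration — whitespace tokenization is an assumption here; Japanese evaluation typically scores at the character level:

```python
from collections import Counter

def answer_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred, ref = prediction.split(), reference.split()
    common = sum((Counter(pred) & Counter(ref)).values())  # multiset overlap
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(answer_f1("new york city", "new york"), 4))  # 0.8
```

Exact match is the stricter variant: 1.0 when the two strings are identical after normalization, else 0.0.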
5e0028168536a7aed387be9262af9952
cc-by-4.0
['answer extraction']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_jaquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 9 - batch: 8 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-jaquad-ae/raw/main/trainer_config.json).
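With batch 8 and gradient_accumulation_steps 8, each optimizer update effectively sees 64 examples. A trivial sketch of that bookkeeping:

```python
def effective_batch_size(per_step_batch: int, accumulation_steps: int) -> int:
    """Examples contributing to one optimizer update under gradient accumulation."""
    return per_step_batch * accumulation_steps

# values taken from the hyperparameter list above
print(effective_batch_size(8, 8))  # 64
```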
ead771333e23a3236237a33d67b42be2
mit
['classification']
false
inference_nsmc.py
```python
import json
import sys
import logging
import torch
from torch import nn
from transformers import ElectraConfig
from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification

logging.basicConfig(
    level=logging.INFO,
    format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler(filename='tmp.log'),
        logging.StreamHandler(sys.stdout)
    ]
)
logger = logging.getLogger(__name__)

max_seq_length = 128
classes = ['Neg', 'Pos']

tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-nsmc")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def model_fn(model_path=None):
```
8d55b31dc29182e3b30c715246b28196
mit
['classification']
false
Download model from the Huggingface hub
```python
    model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-nsmc')
    model.to(device)
    return model

def input_fn(input_data, content_type="application/jsonlines"):
    data_str = input_data.decode("utf-8")
    jsonlines = data_str.split("\n")
    transformed_inputs = []

    for jsonline in jsonlines:
        text = json.loads(jsonline)["text"][0]
        logger.info("input text: {}".format(text))
        encode_plus_token = tokenizer.encode_plus(
            text,
            max_length=max_seq_length,
            add_special_tokens=True,
            return_token_type_ids=False,
            padding="max_length",
            return_attention_mask=True,
            return_tensors="pt",
            truncation=True,
        )
        transformed_inputs.append(encode_plus_token)

    return transformed_inputs

def predict_fn(transformed_inputs, model):
    predicted_classes = []

    for data in transformed_inputs:
        data = data.to(device)
        output = model(**data)

        softmax_fn = nn.Softmax(dim=1)
        softmax_output = softmax_fn(output[0])
        _, prediction = torch.max(softmax_output, dim=1)

        predicted_class_idx = prediction.item()
        predicted_class = classes[predicted_class_idx]
        score = softmax_output[0][predicted_class_idx]
        logger.info("predicted_class: {}".format(predicted_class))

        prediction_dict = {}
        prediction_dict["predicted_label"] = predicted_class
        prediction_dict['score'] = score.cpu().detach().numpy().tolist()

        jsonline = json.dumps(prediction_dict)
        logger.info("jsonline: {}".format(jsonline))
        predicted_classes.append(jsonline)

    predicted_classes_jsonlines = "\n".join(predicted_classes)
    return predicted_classes_jsonlines

def output_fn(outputs, accept="application/jsonlines"):
    return outputs, accept
```
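`predict_fn` above converts logits to class probabilities with `nn.Softmax(dim=1)` before taking the argmax. The same computation in pure Python, for illustration — the logit values are invented:

```python
import math

def softmax(logits):
    """Numerically stable softmax, mirroring nn.Softmax in predict_fn."""
    m = max(logits)                           # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [-1.2, 2.3]                          # hypothetical [Neg, Pos] logits
probs = softmax(logits)
print(["Neg", "Pos"][probs.index(max(probs))])  # Pos
```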
094024285cf0a75a8b1fb9697e265ca9
mit
['classification']
false
test.py ```python >>> from inference_nsmc import model_fn, input_fn, predict_fn, output_fn >>> with open('samples/nsmc.txt', mode='rb') as file: >>> model_input_data = file.read() >>> model = model_fn() >>> transformed_inputs = input_fn(model_input_data) >>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model) >>> model_outputs = output_fn(predicted_classes_jsonlines) >>> print(model_outputs[0]) [{inference_nsmc.py:47} INFO - input text: 이 영화는 최고의 영화입니다 [{inference_nsmc.py:47} INFO - input text: 최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다 [{inference_nsmc.py:77} INFO - predicted_class: Pos [{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Pos", "score": 0.9619030952453613} [{inference_nsmc.py:77} INFO - predicted_class: Neg [{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Neg", "score": 0.9994170665740967} {"predicted_label": "Pos", "score": 0.9619030952453613} {"predicted_label": "Neg", "score": 0.9994170665740967} ```
54f705951112cf04d7953f7c84f72669
apache-2.0
['vision', 'image-classification']
false
RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
cd7e80c3ba5c4a5bf435ae9ab5e9bd35
apache-2.0
['vision', 'image-classification']
false
Model description The authors trained [RegNet](https://huggingface.co/?models=regnet) models in a self-supervised fashion on a billion uncurated Instagram images. The model was later fine-tuned on ImageNet. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png)
baf1520e8db9ca3664738a776264ee5f
apache-2.0
['vision', 'image-classification']
false
How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-320-seer-in1k")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-320-seer-in1k")

>>> inputs = feature_extractor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
```
198e4e12693578b232ac58fa6d19f6dc
apache-2.0
['vision', 'image-classification']
false
model predicts one of the 1000 ImageNet classes
```python
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
7028894206cbaa8aa274713838884c14
mit
['question-answering', 'generated_from_trainer', 'bert', 'jaquad']
false
roberta_qa_japanese (Japanese caption : 日本語の (抽出型) 質問応答のモデル) This model is a fine-tuned version of [rinna/japanese-roberta-base](https://huggingface.co/rinna/japanese-roberta-base) (pre-trained RoBERTa model provided by rinna Co., Ltd.) trained for extractive question answering. The model is fine-tuned on [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD) dataset provided by Skelter Labs, in which data is collected from Japanese Wikipedia articles and annotated by a human.
03571aebf04806ececcc3802e56bbe87
mit
['question-answering', 'generated_from_trainer', 'bert', 'jaquad']
false
Intended uses
When running with a dedicated pipeline:
```python
from transformers import pipeline

model_name = "tsmatz/roberta_qa_japanese"
qa_pipeline = pipeline(
    "question-answering",
    model=model_name,
    tokenizer=model_name)
result = qa_pipeline(
    question = "決勝トーナメントで日本に勝ったのはどこでしたか。",
    context = "日本は予選リーグで強豪のドイツとスペインに勝って決勝トーナメントに進んだが、クロアチアと対戦して敗れた。",
    align_to_words = False,
)
print(result)
```
When manually running a forward pass:
```python
import torch
import numpy as np
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "tsmatz/roberta_qa_japanese"
model = (AutoModelForQuestionAnswering
    .from_pretrained(model_name))
tokenizer = AutoTokenizer.from_pretrained(model_name)

def inference_answer(question, context):
    test_feature = tokenizer(
        question,
        context,
        max_length=318,
    )
    with torch.no_grad():
        outputs = model(torch.tensor([test_feature["input_ids"]]))
    start_logits = outputs.start_logits.cpu().numpy()
    end_logits = outputs.end_logits.cpu().numpy()
    answer_ids = test_feature["input_ids"][np.argmax(start_logits):np.argmax(end_logits)+1]
    return "".join(tokenizer.batch_decode(answer_ids))

question = "決勝トーナメントで日本に勝ったのはどこでしたか。"
context = "日本は予選リーグで強豪のドイツとスペインに勝って決勝トーナメントに進んだが、クロアチアと対戦して敗れた。"
answer_pred = inference_answer(question, context)
print(answer_pred)
```
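The manual forward pass above picks the answer span by taking independent argmaxes of the start and end logits and decoding the tokens in between. A pure-Python sketch of that selection step — the tokens and logit values are invented, and a production decoder would additionally rule out spans with end before start:

```python
def extract_span(tokens, start_logits, end_logits):
    """Select an answer span via independent argmax of start/end logits,
    mirroring the forward-pass example above (no valid-span search)."""
    start = max(range(len(start_logits)), key=start_logits.__getitem__)
    end = max(range(len(end_logits)), key=end_logits.__getitem__)
    return "".join(tokens[start:end + 1])  # joined without spaces, as for Japanese

tokens = ["日本", "は", "クロアチア", "に", "敗れ", "た"]
start_logits = [0.1, 0.0, 3.2, 0.2, 0.1, 0.0]
end_logits   = [0.0, 0.1, 2.8, 0.3, 0.1, 0.2]
print(extract_span(tokens, start_logits, end_logits))  # クロアチア
```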
36025330b6059c525dc59b431ed0e6f9
mit
['question-answering', 'generated_from_trainer', 'bert', 'jaquad']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3
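The `linear` scheduler with 100 warmup steps ramps the learning rate from 0 up to 7e-05, then decays it linearly back to 0 over the remaining steps. An illustrative reimplementation — the total step count used below is arbitrary, not taken from this run:

```python
def linear_warmup_lr(step, peak_lr, warmup_steps, total_steps):
    """Linear warmup to peak_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# peak_lr and warmup_steps from the config above; total_steps chosen for illustration
for step in (0, 50, 100, 1050, 2000):
    print(step, linear_warmup_lr(step, 7e-05, warmup_steps=100, total_steps=2000))
```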
9935c386d516477df08648628fb720ee
mit
['question-answering', 'generated_from_trainer', 'bert', 'jaquad']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1293 | 0.13 | 150 | 1.0311 | | 1.1965 | 0.26 | 300 | 0.6723 | | 1.022 | 0.39 | 450 | 0.4838 | | 0.9594 | 0.53 | 600 | 0.5174 | | 0.9187 | 0.66 | 750 | 0.4671 | | 0.8229 | 0.79 | 900 | 0.4650 | | 0.71 | 0.92 | 1050 | 0.2648 | | 0.5436 | 1.05 | 1200 | 0.2665 | | 0.5045 | 1.19 | 1350 | 0.2686 | | 0.5025 | 1.32 | 1500 | 0.2082 | | 0.5213 | 1.45 | 1650 | 0.1715 | | 0.4648 | 1.58 | 1800 | 0.1563 | | 0.4698 | 1.71 | 1950 | 0.1488 | | 0.4823 | 1.84 | 2100 | 0.1050 | | 0.4482 | 1.97 | 2250 | 0.0821 | | 0.2755 | 2.11 | 2400 | 0.0898 | | 0.2834 | 2.24 | 2550 | 0.0964 | | 0.2525 | 2.37 | 2700 | 0.0533 | | 0.2606 | 2.5 | 2850 | 0.0561 | | 0.2467 | 2.63 | 3000 | 0.0601 | | 0.2799 | 2.77 | 3150 | 0.0562 | | 0.2497 | 2.9 | 3300 | 0.0516 |
ae2d4dd4c4a0c6267be64d29b96a8e02
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2258 - Accuracy: 0.9245 - F1: 0.9248
542df236c11b5ffca836fd7c0300a919
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8359 | 1.0 | 250 | 0.3316 | 0.901 | 0.8967 | | 0.2584 | 2.0 | 500 | 0.2258 | 0.9245 | 0.9248 |
5a1b131a9af14f8865f1de3381737474
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1
2383239655d9d1d305c52878023fa8f2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 334 | 0.1500 | 24.5024 | 21.4979 | 24.0227 | 24.0303 | 19.0 |
b28fe2d7c2861f1c6be28a50effc597d
mit
['conversational']
false
How to use
```python
from transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
model = GPT2LMHeadModel.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
```
For more details, please refer to our [repo](https://github.com/thu-coai/CDial-GPT) on GitHub.
ae536100804fcbc68d379e6191e1bd83
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-query This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3668 - Accuracy: 0.8936 - F1: 0.8924
bb9d379f250d594ac5939f6dce833acc
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
b059eb55b037dfe3d76cec041143ee77
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6511 | 1.0 | 30 | 0.5878 | 0.7234 | 0.6985 | | 0.499 | 2.0 | 60 | 0.4520 | 0.8723 | 0.8683 | | 0.3169 | 3.0 | 90 | 0.3668 | 0.8936 | 0.8924 |
b5577323b62fa1caf72d038242c63cf7
apache-2.0
['generated_from_trainer']
false
test_ner-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0623 - Precision: 0.9242 - Recall: 0.9349 - F1: 0.9295 - Accuracy: 0.9834
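The reported F1 is the harmonic mean of the precision and recall above, which can be checked directly:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9242, 0.9349), 4))  # 0.9295 — matches the reported F1
```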
107618a3cfb587f6104d8c85afac15ac
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2385 | 1.0 | 878 | 0.0708 | 0.9140 | 0.9216 | 0.9178 | 0.9808 | | 0.055 | 2.0 | 1756 | 0.0626 | 0.9209 | 0.9340 | 0.9274 | 0.9828 | | 0.0309 | 3.0 | 2634 | 0.0623 | 0.9242 | 0.9349 | 0.9295 | 0.9834 |
d6d887094e8c07397bf89f3941e70256
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Western_Armenian (hyw) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo. Last updated 2022-09-25 01:32:11.573
1ad8aeff121610b7d67530b77b2dae83
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
`pyf98/tedlium2_transducer_e_branchformer` This model was trained by Yifan Peng using tedlium2 recipe in [espnet](https://github.com/espnet/espnet/). References: - [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077) - [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
1547b8b5eb2307a89b0ea9bbc5e9e20e
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 478ba004e114e7862b05fb01112de7f7e1da3996 pip install -e . cd egs2/tedlium2/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_transducer_e_branchformer ``` <!-- Generated by scripts/utils/show_asr_result.sh -->
46b272c7054671eee9ad4f8bb437a5fc
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments - date: `Thu Feb 9 01:29:33 CST 2023` - python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]` - espnet version: `espnet 202301` - pytorch version: `pytorch 1.13.1` - Git hash: `478ba004e114e7862b05fb01112de7f7e1da3996` - Commit date: `Tue Feb 7 00:50:49 2023 +0000`
b31d347b6543e66b5ca955d728683e9e
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|14671|93.4|4.3|2.3|1.0|7.6|71.7| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|27500|93.6|4.0|2.4|1.0|7.4|63.5|
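The WER values above are the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal reimplementation for illustration — real scoring toolkits also report the per-operation breakdown shown in the table:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # distances for an empty reference prefix
    for i in range(1, len(ref) + 1):
        cur = [i] + [0] * len(hyp)
        for j in range(1, len(hyp) + 1):
            sub = prev[j - 1] + (ref[i - 1] != hyp[j - 1])   # match or substitution
            cur[j] = min(sub, prev[j] + 1, cur[j - 1] + 1)   # deletion / insertion
        prev = cur
    return prev[len(hyp)] / max(1, len(ref))

print(wer("but it was a big deal", "but it was big ordeal"))  # 2 errors / 6 words
```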
1c918dfb39e1a212f8e7c9996c61d3d1
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|78259|97.1|0.9|2.0|0.9|3.8|71.7| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|145066|97.1|0.9|2.1|0.9|3.9|63.5|
822b2f5f9042c314e05972fcc536ec0d
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|28296|94.7|3.1|2.3|0.8|6.2|71.7| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|52113|95.1|2.6|2.2|0.9|5.8|63.5|
c25ec079bfb9d8aa85ecb5dbe979086b
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transducer_e_branchformer_e12.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transducer_e_branchformer_e12_raw_en_bpe500_sp ngpu: 1 seed: 2022 num_workers: 6 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 45753 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 5 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 10000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe500_sp/train/speech_shape - exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - kaldi_ark - - 
dump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - s - ▁the - t - ▁a - ▁and - ▁to - d - e - ▁of - '''' - n - ing - ▁in - ▁i - ▁that - i - a - l - p - m - y - o - ▁it - ▁we - c - u - ▁you - ed - ▁ - r - ▁is - re - ▁this - ar - g - ▁so - al - b - ▁s - or - ▁f - ▁c - in - k - f - ▁for - ic - er - le - ▁be - ▁do - ▁re - ve - ▁e - ▁w - ▁was - es - ▁they - ly - h - ▁on - v - ▁are - ri - ▁have - an - ▁what - ▁with - ▁t - w - ur - it - ent - ▁can - ▁he - ▁but - ra - ce - ▁me - ▁b - ▁ma - ▁p - ll - ▁st - ▁one - 'on' - ▁about - th - ▁de - en - ▁all - ▁not - il - ▁g - ch - at - ▁there - ▁mo - ter - ation - tion - ▁at - ▁my - ro - ▁as - te - ▁le - ▁con - ▁like - ▁people - ▁or - ▁an - el - ▁if - ▁from - ver - ▁su - ▁co - ate - ▁these - ol - ci - ▁now - ▁see - ▁out - ▁our - ion - ▁know - ect - ▁just - as - ▁ex - ▁ch - ▁d - ▁when - ▁very - ▁think - ▁who - ▁because - ▁go - ▁up - ▁us - ▁pa - ▁no - ies - ▁di - ▁ho - om - ive - ▁get - id - ▁o - ▁hi - un - ▁how - ▁by - ir - et - ck - ity - ▁po - ul - ▁which - ▁mi - ▁some - z - ▁sp - ▁un - ▁going - ▁pro - ist - ▁se - ▁look - ▁time - ment - de - ▁more - ▁had - ng - ▁would - ge - la - ▁here - ▁really - x - ▁your - ▁them - us - me - ▁en - ▁two - ▁k - ▁li - ▁world - ne - ow - ▁way - ▁want - ▁work - ▁don - ▁lo - ▁fa - ▁were - ▁their - age - vi - ▁ha - ac - der - est - ▁bo - am - ▁other - able - ▁actually - ▁sh - ▁make - ▁ba - ▁la - ine - ▁into - ▁where - ▁could - ▁comp - ting - ▁has - ▁will - ▁ne - j - ical - ally - ▁vi - ▁things - ▁te - igh - ▁say - ▁years - ers - ▁ra - ther - ▁than - ru - ▁ro - op - ▁did - ▁any - ▁new - ound - ig - ▁well - mo - 
▁she - ▁na - ▁been - he - ▁thousand - ▁car - ▁take - ▁right - ▁then - ▁need - ▁start - ▁hundred - ▁something - ▁over - ▁com - ia - ▁kind - um - if - ▁those - ▁first - ▁pre - ta - ▁said - ize - end - ▁even - ▁thing - one - ▁back - ite - ▁every - ▁little - ry - ▁life - ▁much - ke - ▁also - ▁most - ant - per - ▁three - ▁come - ▁lot - ance - ▁got - ▁talk - ▁per - ▁inter - ▁sa - ▁use - ▁mu - ▁part - ish - ence - ▁happen - ▁bi - ▁mean - ough - ▁qu - ▁bu - ▁day - ▁ga - ▁only - ▁many - ▁different - ▁dr - ▁th - ▁show - ful - ▁down - ated - ▁good - ▁tra - ▁around - ▁idea - ▁human - ous - ▁put - ▁through - ▁five - ▁why - ▁change - ▁real - ff - ible - ▁fact - ▁same - ▁jo - ▁live - ▁year - ▁problem - ▁ph - ▁four - ▁give - ▁big - ▁tell - ▁great - ▁try - ▁va - ▁ru - ▁system - ▁six - ▁plan - ▁place - ▁build - ▁called - ▁again - ▁point - ▁twenty - ▁percent - ▁nine - ▁find - ▁app - ▁after - ▁long - ▁eight - ▁imp - ▁gene - ▁design - ▁today - ▁should - ▁made - ious - ▁came - ▁learn - ▁last - ▁own - way - ▁turn - ▁seven - ▁high - ▁question - ▁person - ▁brain - ▁important - ▁another - ▁thought - ▁trans - ▁create - ness - ▁hu - ▁power - ▁act - land - ▁play - ▁sort - ▁old - ▁before - ▁course - ▁understand - ▁feel - ▁might - ▁each - ▁million - ▁better - ▁together - ▁ago - ▁example - ▁help - ▁story - ▁next - ▁hand - ▁school - ▁water - ▁develop - ▁technology - que - ▁second - ▁grow - ▁still - ▁cell - ▁believe - ▁number - ▁small - ▁between - qui - ▁data - ▁become - ▁america - ▁maybe - ▁space - ▁project - ▁organ - ▁vo - ▁children - ▁book - graph - ▁open - ▁fifty - ▁picture - ▁health - ▁thirty - ▁africa - ▁reason - ▁large - ▁hard - ▁computer - ▁always - ▁sense - ▁money - ▁women - ▁everything - ▁information - ▁country - ▁teach - ▁energy - ▁experience - ▁food - ▁process - qua - ▁interesting - ▁future - ▁science - q - '0' - '5' - '6' - '9' - '3' - '8' - '4' - N - A - '7' - S - G - F - R - L - U - E - T - H - _ - B - D - J - M - ă - ō - ť - '2' - '-' - '1' - C - <sos/eos> init: null input_size: 
null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: joint_space_size: 320 use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 report_cer: false report_wer: false preencoder: null preencoder_conf: {} encoder: e_branchformer encoder_conf: output_size: 256 attention_heads: 4 attention_layer_type: rel_selfattn pos_enc_layer_type: rel_pos rel_pos_type: latest cgmlp_linear_units: 1024 cgmlp_conv_kernel: 31 use_linear_after_conv: false gate_activation: identity num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d layer_drop_rate: 0.0 linear_units: 1024 positionwise_layer_type: linear use_ffn: true macaron_ffn: true merge_conv_kernel: 31 postencoder: null postencoder_conf: {} decoder: transducer decoder_conf: rnn_type: lstm num_layers: 1 hidden_size: 256 dropout: 0.1 dropout_embed: 0.2 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202301' distributed: true ``` </details>
e5e590bdbb0790158a27b29d3e0f2ded
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
glpn-nyu-finetuned-diode-230119-100058 This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset. It achieves the following results on the evaluation set: - Loss: 0.4305 - Mae: 0.4203 - Rmse: 0.6123 - Abs Rel: 0.4280 - Log Mae: 0.1694 - Log Rmse: 0.2214 - Delta1: 0.3813 - Delta2: 0.6446 - Delta3: 0.8152
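Delta1 above is the fraction of pixels whose predicted depth is within a factor of 1.25 of the ground truth (Delta2 and Delta3 use 1.25² and 1.25³). A sketch on toy values — the depth numbers below are invented for illustration:

```python
def delta_accuracy(pred, gt, threshold=1.25):
    """Fraction of pixels with max(pred/gt, gt/pred) < threshold (Delta1);
    Delta2 and Delta3 use threshold**2 and threshold**3."""
    ok = sum(1 for p, g in zip(pred, gt) if max(p / g, g / p) < threshold)
    return ok / len(pred)

pred = [1.0, 2.0, 4.0, 0.5]   # toy predicted depths
gt   = [1.1, 2.1, 2.0, 0.52]  # toy ground-truth depths
print(delta_accuracy(pred, gt))  # 0.75
```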
a18e462204edbb03ccb3a0ed7f32093f
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 48 - seed: 2022 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - num_epochs: 75 - mixed_precision_training: Native AMP
5e02d0b2d137b67e0749e071b4c540fe
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:| | 1.2807 | 1.0 | 72 | 0.9866 | 0.8312 | 1.0131 | 0.7179 | 0.5655 | 0.5924 | 0.0087 | 0.0200 | 0.0552 | | 0.7396 | 2.0 | 144 | 0.4976 | 0.4741 | 0.6670 | 0.5279 | 0.1989 | 0.2567 | 0.3070 | 0.5470 | 0.7943 | | 0.5018 | 3.0 | 216 | 0.4811 | 0.4630 | 0.6367 | 0.5198 | 0.1929 | 0.2446 | 0.3211 | 0.5440 | 0.7506 | | 0.482 | 4.0 | 288 | 0.4726 | 0.4556 | 0.6337 | 0.4951 | 0.1893 | 0.2410 | 0.3306 | 0.5636 | 0.7663 | | 0.4874 | 5.0 | 360 | 0.4813 | 0.4662 | 0.6355 | 0.5265 | 0.1941 | 0.2446 | 0.3179 | 0.5385 | 0.7278 | | 0.4648 | 6.0 | 432 | 0.4681 | 0.4512 | 0.6309 | 0.4783 | 0.1869 | 0.2383 | 0.3430 | 0.5757 | 0.7527 | | 0.4346 | 7.0 | 504 | 0.4637 | 0.4499 | 0.6292 | 0.4710 | 0.1859 | 0.2357 | 0.3453 | 0.5671 | 0.7644 | | 0.4018 | 8.0 | 576 | 0.4790 | 0.4638 | 0.6349 | 0.5161 | 0.1928 | 0.2436 | 0.3255 | 0.5408 | 0.7338 | | 0.4092 | 9.0 | 648 | 0.4559 | 0.4449 | 0.6267 | 0.4540 | 0.1827 | 0.2319 | 0.3541 | 0.5814 | 0.7692 | | 0.3891 | 10.0 | 720 | 0.4619 | 0.4433 | 0.6259 | 0.4748 | 0.1823 | 0.2351 | 0.3579 | 0.5870 | 0.7742 | | 0.3707 | 11.0 | 792 | 0.4624 | 0.4500 | 0.6269 | 0.4828 | 0.1851 | 0.2350 | 0.3421 | 0.5672 | 0.7638 | | 0.4129 | 12.0 | 864 | 0.4648 | 0.4468 | 0.6265 | 0.4836 | 0.1836 | 0.2358 | 0.3533 | 0.5786 | 0.7625 | | 0.4108 | 13.0 | 936 | 0.4474 | 0.4312 | 0.6187 | 0.4501 | 0.1752 | 0.2280 | 0.3801 | 0.6088 | 0.7887 | | 0.3948 | 14.0 | 1008 | 0.4619 | 0.4498 | 0.6263 | 0.4853 | 0.1844 | 0.2344 | 0.3401 | 0.5721 | 0.7645 | | 0.4009 | 15.0 | 1080 | 0.4619 | 0.4440 | 0.6244 | 0.4889 | 0.1820 | 0.2351 | 0.3563 | 0.5841 | 0.7751 | | 0.3657 | 16.0 | 1152 | 0.4636 | 0.4491 | 0.6260 | 0.4936 | 0.1846 | 0.2360 | 0.3422 | 0.5734 | 0.7644 | | 0.3605 | 17.0 | 1224 | 0.4353 | 0.4255 | 0.6153 | 0.4248 | 
0.1715 | 0.2218 | 0.3844 | 0.6207 | 0.8008 |
| 0.3937 | 18.0 | 1296 | 0.4756 | 0.4609 | 0.6310 | 0.5281 | 0.1909 | 0.2423 | 0.3220 | 0.5461 | 0.7538 |
| 0.3453 | 19.0 | 1368 | 0.4698 | 0.4517 | 0.6270 | 0.5145 | 0.1863 | 0.2392 | 0.3360 | 0.5702 | 0.7689 |
| 0.3883 | 20.0 | 1440 | 0.4349 | 0.4240 | 0.6145 | 0.4311 | 0.1712 | 0.2230 | 0.3841 | 0.6321 | 0.8030 |
| 0.3482 | 21.0 | 1512 | 0.4339 | 0.4209 | 0.6146 | 0.4223 | 0.1694 | 0.2223 | 0.3967 | 0.6337 | 0.8036 |
| 0.3374 | 22.0 | 1584 | 0.4400 | 0.4289 | 0.6167 | 0.4431 | 0.1737 | 0.2254 | 0.3743 | 0.6191 | 0.7971 |
| 0.3516 | 23.0 | 1656 | 0.4395 | 0.4280 | 0.6171 | 0.4426 | 0.1737 | 0.2259 | 0.3710 | 0.6241 | 0.7998 |
| 0.3901 | 24.0 | 1728 | 0.4444 | 0.4324 | 0.6184 | 0.4562 | 0.1758 | 0.2280 | 0.3665 | 0.6118 | 0.7991 |
| 0.3587 | 25.0 | 1800 | 0.4326 | 0.4200 | 0.6129 | 0.4281 | 0.1690 | 0.2222 | 0.3920 | 0.6403 | 0.8073 |
| 0.3425 | 26.0 | 1872 | 0.4371 | 0.4231 | 0.6152 | 0.4341 | 0.1709 | 0.2242 | 0.3852 | 0.6372 | 0.7974 |
| 0.3252 | 27.0 | 1944 | 0.4381 | 0.4225 | 0.6140 | 0.4399 | 0.1705 | 0.2245 | 0.3851 | 0.6396 | 0.8065 |
| 0.3586 | 28.0 | 2016 | 0.4441 | 0.4304 | 0.6162 | 0.4488 | 0.1746 | 0.2258 | 0.3674 | 0.6179 | 0.7929 |
| 0.3389 | 29.0 | 2088 | 0.4240 | 0.4112 | 0.6100 | 0.4017 | 0.1640 | 0.2173 | 0.4152 | 0.6599 | 0.8128 |
| 0.3418 | 30.0 | 2160 | 0.4312 | 0.4195 | 0.6126 | 0.4211 | 0.1687 | 0.2206 | 0.3899 | 0.6435 | 0.8123 |
| 0.3454 | 31.0 | 2232 | 0.4301 | 0.4176 | 0.6126 | 0.4167 | 0.1674 | 0.2203 | 0.3974 | 0.6479 | 0.8089 |
| 0.3499 | 32.0 | 2304 | 0.4262 | 0.4154 | 0.6115 | 0.4081 | 0.1661 | 0.2184 | 0.3997 | 0.6578 | 0.8083 |
| 0.3649 | 33.0 | 2376 | 0.4429 | 0.4313 | 0.6171 | 0.4507 | 0.1753 | 0.2263 | 0.3641 | 0.6134 | 0.7982 |
| 0.3341 | 34.0 | 2448 | 0.4292 | 0.4207 | 0.6127 | 0.4161 | 0.1689 | 0.2192 | 0.3874 | 0.6415 | 0.8007 |
| 0.3323 | 35.0 | 2520 | 0.4402 | 0.4266 | 0.6148 | 0.4434 | 0.1728 | 0.2247 | 0.3754 | 0.6254 | 0.7983 |
| 0.3374 | 36.0 | 2592 | 0.4336 | 0.4233 | 0.6139 | 0.4277 | 0.1706 | 0.2219 | 0.3810 | 0.6362 | 0.8008 |
| 0.334 | 37.0 | 2664 | 0.4310 | 0.4230 | 0.6138 | 0.4240 | 0.1703 | 0.2209 | 0.3826 | 0.6345 | 0.8034 |
| 0.3471 | 38.0 | 2736 | 0.4372 | 0.4250 | 0.6144 | 0.4397 | 0.1720 | 0.2240 | 0.3780 | 0.6303 | 0.8046 |
| 0.3283 | 39.0 | 2808 | 0.4421 | 0.4301 | 0.6168 | 0.4497 | 0.1743 | 0.2259 | 0.3654 | 0.6209 | 0.7993 |
| 0.3418 | 40.0 | 2880 | 0.4340 | 0.4224 | 0.6137 | 0.4334 | 0.1703 | 0.2228 | 0.3857 | 0.6351 | 0.8054 |
| 0.3455 | 41.0 | 2952 | 0.4294 | 0.4174 | 0.6118 | 0.4212 | 0.1675 | 0.2203 | 0.3959 | 0.6469 | 0.8109 |
| 0.3229 | 42.0 | 3024 | 0.4291 | 0.4165 | 0.6121 | 0.4199 | 0.1671 | 0.2207 | 0.4035 | 0.6464 | 0.8103 |
| 0.352 | 43.0 | 3096 | 0.4393 | 0.4266 | 0.6154 | 0.4462 | 0.1729 | 0.2253 | 0.3744 | 0.6287 | 0.8049 |
| 0.3163 | 44.0 | 3168 | 0.4250 | 0.4113 | 0.6098 | 0.4112 | 0.1647 | 0.2187 | 0.4041 | 0.6620 | 0.8201 |
| 0.3284 | 45.0 | 3240 | 0.4358 | 0.4245 | 0.6138 | 0.4379 | 0.1716 | 0.2233 | 0.3745 | 0.6306 | 0.8106 |
| 0.3359 | 46.0 | 3312 | 0.4321 | 0.4217 | 0.6124 | 0.4283 | 0.1699 | 0.2210 | 0.3770 | 0.6412 | 0.8129 |
| 0.3406 | 47.0 | 3384 | 0.4238 | 0.4127 | 0.6104 | 0.4084 | 0.1653 | 0.2183 | 0.3982 | 0.6617 | 0.8177 |
| 0.3207 | 48.0 | 3456 | 0.4375 | 0.4275 | 0.6147 | 0.4435 | 0.1733 | 0.2243 | 0.3658 | 0.6262 | 0.8071 |
| 0.3338 | 49.0 | 3528 | 0.4331 | 0.4223 | 0.6142 | 0.4310 | 0.1705 | 0.2228 | 0.3846 | 0.6374 | 0.8071 |
| 0.3203 | 50.0 | 3600 | 0.4308 | 0.4212 | 0.6136 | 0.4253 | 0.1695 | 0.2213 | 0.3878 | 0.6407 | 0.8054 |
| 0.3238 | 51.0 | 3672 | 0.4379 | 0.4267 | 0.6148 | 0.4416 | 0.1727 | 0.2241 | 0.3723 | 0.6244 | 0.8036 |
| 0.3209 | 52.0 | 3744 | 0.4289 | 0.4187 | 0.6121 | 0.4178 | 0.1681 | 0.2198 | 0.3920 | 0.6461 | 0.8096 |
| 0.3198 | 53.0 | 3816 | 0.4376 | 0.4264 | 0.6145 | 0.4402 | 0.1724 | 0.2237 | 0.3708 | 0.6279 | 0.8066 |
| 0.3137 | 54.0 | 3888 | 0.4294 | 0.4180 | 0.6115 | 0.4242 | 0.1681 | 0.2208 | 0.3888 | 0.6494 | 0.8152 |
| 0.3238 | 55.0 | 3960 | 0.4416 | 0.4294 | 0.6158 | 0.4521 | 0.1743 | 0.2261 | 0.3645 | 0.6205 | 0.8069 |
| 0.3173 | 56.0 | 4032 | 0.4257 | 0.4142 | 0.6116 | 0.4145 | 0.1661 | 0.2198 | 0.4016 | 0.6586 | 0.8136 |
| 0.3173 | 57.0 | 4104 | 0.4303 | 0.4193 | 0.6123 | 0.4246 | 0.1687 | 0.2210 | 0.3879 | 0.6451 | 0.8118 |
| 0.3297 | 58.0 | 4176 | 0.4302 | 0.4219 | 0.6132 | 0.4259 | 0.1700 | 0.2211 | 0.3792 | 0.6394 | 0.8122 |
| 0.3261 | 59.0 | 4248 | 0.4319 | 0.4220 | 0.6131 | 0.4312 | 0.1702 | 0.2221 | 0.3781 | 0.6407 | 0.8142 |
| 0.3082 | 60.0 | 4320 | 0.4340 | 0.4234 | 0.6136 | 0.4346 | 0.1710 | 0.2228 | 0.3754 | 0.6373 | 0.8106 |
| 0.31 | 61.0 | 4392 | 0.4225 | 0.4120 | 0.6104 | 0.4073 | 0.1646 | 0.2181 | 0.4054 | 0.6626 | 0.8168 |
| 0.3065 | 62.0 | 4464 | 0.4313 | 0.4197 | 0.6125 | 0.4280 | 0.1690 | 0.2216 | 0.3854 | 0.6472 | 0.8127 |
| 0.3046 | 63.0 | 4536 | 0.4316 | 0.4202 | 0.6127 | 0.4268 | 0.1691 | 0.2213 | 0.3849 | 0.6448 | 0.8131 |
| 0.303 | 64.0 | 4608 | 0.4352 | 0.4241 | 0.6137 | 0.4373 | 0.1712 | 0.2231 | 0.3760 | 0.6364 | 0.8097 |
| 0.3094 | 65.0 | 4680 | 0.4318 | 0.4205 | 0.6128 | 0.4304 | 0.1695 | 0.2220 | 0.3828 | 0.6438 | 0.8140 |
| 0.3035 | 66.0 | 4752 | 0.4351 | 0.4233 | 0.6136 | 0.4386 | 0.1709 | 0.2235 | 0.3781 | 0.6388 | 0.8099 |
| 0.327 | 67.0 | 4824 | 0.4307 | 0.4203 | 0.6131 | 0.4280 | 0.1693 | 0.2216 | 0.3828 | 0.6463 | 0.8143 |
| 0.3175 | 68.0 | 4896 | 0.4325 | 0.4219 | 0.6137 | 0.4314 | 0.1701 | 0.2222 | 0.3809 | 0.6406 | 0.8135 |
| 0.3188 | 69.0 | 4968 | 0.4299 | 0.4203 | 0.6126 | 0.4271 | 0.1694 | 0.2214 | 0.3827 | 0.6440 | 0.8141 |
| 0.3158 | 70.0 | 5040 | 0.4304 | 0.4203 | 0.6126 | 0.4274 | 0.1694 | 0.2215 | 0.3832 | 0.6443 | 0.8133 |
| 0.3298 | 71.0 | 5112 | 0.4315 | 0.4219 | 0.6135 | 0.4292 | 0.1700 | 0.2218 | 0.3792 | 0.6423 | 0.8136 |
| 0.3246 | 72.0 | 5184 | 0.4323 | 0.4219 | 0.6129 | 0.4322 | 0.1703 | 0.2223 | 0.3769 | 0.6418 | 0.8133 |
| 0.3116 | 73.0 | 5256 | 0.4301 | 0.4198 | 0.6124 | 0.4264 | 0.1691 | 0.2213 | 0.3833 | 0.6459 | 0.8141 |
| 0.3192 | 74.0 | 5328 | 0.4301 | 0.4200 | 0.6125 | 0.4266 | 0.1691 | 0.2213 | 0.3819 | 0.6464 | 0.8156 |
| 0.3172 | 75.0 | 5400 | 0.4305 | 0.4203 | 0.6123 | 0.4280 | 0.1694 | 0.2214 | 0.3813 | 0.6446 | 0.8152 |
a8f17a03923eabd0963c6687d30f4cff
mit
[]
false
Dragonborn on Stable Diffusion

This is the `<dragonborn>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<dragonborn> 0](https://huggingface.co/sd-concepts-library/dragonborn/resolve/main/concept_images/5.jpeg)
![<dragonborn> 1](https://huggingface.co/sd-concepts-library/dragonborn/resolve/main/concept_images/6.jpeg)
![<dragonborn> 2](https://huggingface.co/sd-concepts-library/dragonborn/resolve/main/concept_images/3.jpeg)
![<dragonborn> 3](https://huggingface.co/sd-concepts-library/dragonborn/resolve/main/concept_images/0.jpeg)
![<dragonborn> 4](https://huggingface.co/sd-concepts-library/dragonborn/resolve/main/concept_images/2.jpeg)
![<dragonborn> 5](https://huggingface.co/sd-concepts-library/dragonborn/resolve/main/concept_images/1.jpeg)
![<dragonborn> 6](https://huggingface.co/sd-concepts-library/dragonborn/resolve/main/concept_images/4.jpeg)
525fe8138f74e200ba36d25f3d72b15f
apache-2.0
['generated_from_keras_callback']
false
tf_bert_uncased_emotion_detection

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0659
- Train Accuracy: 0.9661
- Validation Loss: 0.1150
- Validation Accuracy: 0.9370
- Epoch: 9
88b4e441f59e0d105eb319b8a4a31efc
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 6000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
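With `power: 1.0` and `cycle: False`, the `PolynomialDecay` schedule above is simply a linear ramp from the initial to the final learning rate. A minimal plain-Python sketch of that schedule (the function name is ours, not part of Keras):

```python
def polynomial_decay(step, initial_lr=5e-05, end_lr=0.0, decay_steps=6000, power=1.0):
    """Learning rate at a given step, mirroring PolynomialDecay with cycle=False:
    steps beyond decay_steps stay clamped at end_lr."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # start of training: 5e-05
print(polynomial_decay(3000))  # halfway: 2.5e-05
print(polynomial_decay(6000))  # fully decayed: 0.0
```

Since `power` is 1.0, the rate drops by the same amount every step until it reaches zero at step 6000.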
9fdf746970d5cd359f6ebdfa7a32466f
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3703 | 0.8683 | 0.1511 | 0.9315 | 0 |
| 0.1208 | 0.9414 | 0.1145 | 0.9380 | 1 |
| 0.0820 | 0.9561 | 0.1150 | 0.9370 | 2 |
| 0.0656 | 0.9681 | 0.1150 | 0.9370 | 3 |
| 0.0643 | 0.9671 | 0.1150 | 0.9370 | 4 |
| 0.0652 | 0.9697 | 0.1150 | 0.9370 | 5 |
| 0.0646 | 0.9689 | 0.1150 | 0.9370 | 6 |
| 0.0651 | 0.9678 | 0.1150 | 0.9370 | 7 |
| 0.0651 | 0.9691 | 0.1150 | 0.9370 | 8 |
| 0.0659 | 0.9661 | 0.1150 | 0.9370 | 9 |
e1662cd5de3be4800313567f7eb926cc
apache-2.0
['generated_from_trainer']
false
function-arg-swap-model-148k-files-365k-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4783
- Accuracy: 0.7679
- Precision: 0.7641
- Recall: 0.7812
- F1 score: 0.7725
fd225a788eedc8fb1018dbe7e9378931
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
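The `linear` scheduler decays the learning rate from its initial value to zero over the course of training. A plain-Python sketch of that behavior (the total step count below is illustrative — the real value depends on dataset size and batch size, and this run used no warmup):

```python
def linear_lr(step, base_lr=2e-05, total_steps=10000, warmup_steps=0):
    """Linear schedule: ramp up during warmup (none here), then decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0))      # full base learning rate: 2e-05
print(linear_lr(5000))   # halfway through training: 1e-05
print(linear_lr(10000))  # end of training: 0.0
```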
105a8dfe8c796f4c0e5c7dce8467715f
cc-by-sa-3.0
['spacy']
false
xx_sent_ud_sm

Multi-language pipeline optimized for CPU. Components: senter.

| Feature | Description |
| --- | --- |
| **Name** | `xx_sent_ud_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `senter` |
| **Components** | `senter` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.8 (UD_Afrikaans-AfriBooms, UD_Croatian-SET, UD_Czech-CAC, UD_Czech-CLTT, UD_Danish-DDT, UD_Dutch-Alpino, UD_Dutch-LassySmall, UD_English-EWT, UD_Finnish-FTB, UD_Finnish-TDT, UD_French-GSD, UD_French-Spoken, UD_German-GSD, UD_Indonesian-GSD, UD_Irish-IDT, UD_Italian-TWITTIRO, UD_Korean-GSD, UD_Korean-Kaist, UD_Latvian-LVTB, UD_Lithuanian-ALKSNIS, UD_Lithuanian-HSE, UD_Marathi-UFAL, UD_Norwegian-Bokmaal, UD_Norwegian-Nynorsk, UD_Norwegian-NynorskLIA, UD_Persian-Seraji, UD_Portuguese-Bosque, UD_Portuguese-GSD, UD_Romanian-Nonstandard, UD_Romanian-RRT, UD_Russian-GSD, UD_Russian-Taiga, UD_Serbian-SET, UD_Slovak-SNK, UD_Spanish-GSD, UD_Swedish-Talbanken, UD_Telugu-MTG, UD_Vietnamese-VTB)](https://universaldependencies.org/) (Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell; et al.) |
| **License** | `CC BY-SA 3.0` |
| **Author** | [Explosion](https://explosion.ai) |
ab6c6e1c4fb91a6131eb8486e128a651
apache-2.0
[]
false
Model Description

FLIPPED uses a unique meta-learning method to show zero-shot task generalization on classification natural language prompts, outperforming GPT-3 and T0-11B on many tasks at a 4x smaller scale. It is a series of encoder-decoder models trained on numerous classification datasets. We show inputs and their corresponding outputs for each instance in each dataset to FLIPPED, and train it to generate a plausible instruction. We add an unlikelihood loss in order **not** to generate the instruction when given the same input but a wrong output. To obtain FLIPPED, we fine-tune a T5 model at a given scale on a multitask mixture covering many different classification NLP tasks.
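The combined objective can be illustrated numerically: the likelihood term pushes up the probability of the instruction given the correct output, while the unlikelihood term pushes it down given a wrong output. A toy sketch (the function name and probabilities are ours, purely for illustration):

```python
import math

def flipped_style_loss(p_correct, p_wrong):
    """Toy training signal: maximize p(instruction | input, correct output),
    minimize p(instruction | input, wrong output) via an unlikelihood term."""
    likelihood_loss = -math.log(p_correct)
    unlikelihood_loss = -math.log(1.0 - p_wrong)
    return likelihood_loss + unlikelihood_loss

# Loss is low when the model assigns high probability to the instruction
# under the correct output and low probability under the wrong one.
good = flipped_style_loss(p_correct=0.9, p_wrong=0.1)
bad = flipped_style_loss(p_correct=0.5, p_wrong=0.5)
print(good < bad)  # True
```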
0e5f405664f2e719e766432ba970ce78
apache-2.0
[]
false
Intended uses

You can use the models to perform inference on tasks by specifying your input-output NLP query in an "input: {input}\noutput: {output}" form, and the model will predict the instruction. For example, you can try *"input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\noutput: Positive"* as an input, and the model will hopefully generate *"Title: Review:"*.
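A small helper for building queries in the form described above (the helper name is ours; the sentinel tokens follow the card's own example):

```python
def build_flipped_query(input_text, output_label):
    """Format an (input, output) pair into the query form the card describes,
    so the model can predict the matching instruction."""
    return f"input: {input_text}\noutput: {output_label}"

query = build_flipped_query(
    "<extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>",
    "Positive",
)
print(query)
```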
d20a0f6634d9114f2d2eb4e140a3676a
apache-2.0
[]
false
How to use

Our overall explanation models along with ablations can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](https://huggingface.co/seonghyeonye/flipped_11B) checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.

|Model|Number of parameters|
|-|-|
|[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion|
|[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion|

Here is how to download the model in PyTorch:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/flipped_11B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/flipped_11B")
```

If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`. We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can inference with our method.

**Note: the model was trained with bfloat16 activations. As such, we highly discourage running inference with fp16.**
1e5130755d5bc84101ae4cc1e3c0d31b
apache-2.0
[]
false
Training procedure

FLIPPED models are based on [T5](https://huggingface.co/google/t5-v1_1-xxl), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). At a high level, the input text along with the output label is fed to the encoder and the instruction text is produced by the decoder. The model is fine-tuned to autoregressively generate the target. We also feed the input text along with a wrong output, adding an unlikelihood loss so that the model does not produce the proper instruction in that case.

Training details:
- Fine-tuning steps: 5'000
- Input sequence length: 384
- Target sequence length: 64
- Batch size: 240
- Optimizer: Adafactor
- Learning rate: 5e-5
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we randomly sampled any dataset if it has over 500'000 examples so that it has at most 500'000 examples. Also, we randomly choose which instruction to generate for each training step, so ideally each instruction appears *num_examples/num_templates* times during training.)
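The capped proportional sampling strategy can be sketched as follows — each dataset contributes in proportion to its example count, with very large datasets counted as at most 500'000 examples (the function name and dataset sizes below are illustrative):

```python
def sampling_weights(dataset_sizes, cap=500_000):
    """Sampling probability per dataset: proportional to example count,
    with any dataset larger than `cap` counted as exactly `cap` examples."""
    capped = {name: min(n, cap) for name, n in dataset_sizes.items()}
    total = sum(capped.values())
    return {name: n / total for name, n in capped.items()}

# Toy mixture: the oversized dataset is capped before normalizing.
weights = sampling_weights({"imdb": 25_000, "big_corpus": 2_000_000, "mrpc": 3_668})
print(weights)
```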
93f125a3e6515db51772329bf612983b
apache-2.0
[]
false
Training data

We trained the different variants of FLIPPED with different mixtures of datasets.

|Model|Training datasets|
|--|--|
|FLIPPED-11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|FLIPPED_3B|Same as FLIPPED-11B|

We only choose prompt examples that have output labels, which can be found on each dataset page.
9c70884910692cec06617308a2b1fa55