Columns:
- license: string, length 2–30
- tags: string, length 2–513
- is_nc: bool, 1 class
- readme_section: string, length 201–597k
- hash: string, length 32
apache-2.0
['t5', 'seq2seq']
false
eval samples | 1000 | 1000 | Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation are not saved, since they were trained for comparison of pre-trained models only. The numbers for summarization are the Rouge scores on 1000 documents from the test split. | | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 | | *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 | | *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 | | *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | 
**32.12** | 31.12 | 30.15 | | *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 | The models below have been evaluated for English to Dutch translation. Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because the translation direction is English to Dutch. The numbers reported are the BLEU scores on 1000 documents from the test split. | | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 | | *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 | |
*precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 | | *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 | | *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | | *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 | | *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
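The *score* row can be reproduced from the *precision_ng1..4* and *bp* rows: BLEU is the brevity penalty times the geometric mean of the four n-gram precisions. A minimal plain-Python sketch (not the evaluation code used here; the small residual versus the table comes from rounding of the reported values):

```python
import math

def bleu(precisions, bp):
    """BLEU = bp * exp(mean(log p_n)): brevity penalty times the
    geometric mean of the n-gram precisions (given here in %)."""
    log_mean = sum(math.log(p) for p in precisions) / len(precisions)
    return bp * math.exp(log_mean)

# t5-base-dutch column: precision_ng1..4 and bp from the table above
score = bleu([74.17, 52.42, 39.55, 30.23], 0.99)
print(round(score, 2))  # close to the reported 45.88
```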
9745fb1f7e01750a38cd002e426f3ad6
apache-2.0
['t5', 'seq2seq']
false
Translation models The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language directions on the first 25M samples from CCMatrix, giving a total of 50M training samples. Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books. The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the BLEU score averaged over all three evaluation datasets. The best scores are displayed in bold for both translation directions. | | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | |:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------| | *source_lang* | en | nl | en | nl | | *target_lang* | nl | en | nl | en | | *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: | | *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** | | *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 | | *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 | | *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 | | *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 | | *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 | | *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 | | *max_source_length* | 128 | 128 | 128 | 128 | | *max_target_length* | 128 | 128 | 128 | 128 | | *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 | | *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 | | *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 | | *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 | | *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 | | *train_batch_size* | 128 | 128 | 128 | 128 | |
*warmup_steps* | 2000 | 2000 | 2000 | 2000 | | *total steps* | 390625 | 390625 | 390625 | 390625 | | *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h | | *num parameters* | 729M | 729M | 250M | 250M |
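The *avg_bleu* row is, as stated, the plain mean of the three per-dataset BLEU scores; a quick check against the first column (small deviations come from rounding of the per-dataset scores):

```python
# t5-base-36L-ccmatrix-multi, en -> nl column from the table above
ccmatrix_bleu, tatoeba_bleu, opus_books_bleu = 56.8, 46.6, 13.5
avg_bleu = (ccmatrix_bleu + tatoeba_bleu + opus_books_bleu) / 3
print(round(avg_bleu, 2))  # close to the reported 38.96
```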
69ce8dd34a7c3fcc788ad8058bd8ac57
apache-2.0
['t5', 'seq2seq']
false
Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts of the training. Weights & Biases made it possible to keep track of many training sessions and orchestrate hyper-parameter sweeps with insightful visualizations. The following repositories were helpful in setting up the TPU-VM and getting an idea of sensible hyper-parameters for training T5 from scratch: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
2214969866d4dacd9cfb7fe9d9576fcc
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-tr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2891 - Wer: 0.4741
c26e7539be5d36849b6bcc98c2536eac
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP
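The relationship between these values can be sketched in plain Python: the total train batch size is the per-device batch size times the gradient-accumulation steps, and the linear scheduler warms up over the first 500 steps before decaying to zero (the total step count below is a placeholder, since the real value depends on dataset size):

```python
train_batch_size = 16
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32, as listed

def linear_lr(step, base_lr=3e-4, warmup_steps=500, total_steps=10_000):
    """Linear warmup to base_lr, then linear decay to 0 (hypothetical total_steps)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```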
186a046fe3eaddfaab4f81a4b7606ae5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.4933 | 0.39 | 400 | 1.0543 | 0.9316 | | 0.7039 | 0.78 | 800 | 0.6927 | 0.7702 | | 0.4768 | 1.17 | 1200 | 0.4779 | 0.6774 | | 0.4004 | 1.57 | 1600 | 0.4462 | 0.6450 | | 0.3739 | 1.96 | 2000 | 0.4287 | 0.6296 | | 0.317 | 2.35 | 2400 | 0.4395 | 0.6248 | | 0.3027 | 2.74 | 2800 | 0.4052 | 0.6027 | | 0.2633 | 3.13 | 3200 | 0.4026 | 0.5938 | | 0.245 | 3.52 | 3600 | 0.3814 | 0.5902 | | 0.2415 | 3.91 | 4000 | 0.3691 | 0.5708 | | 0.2193 | 4.31 | 4400 | 0.3626 | 0.5623 | | 0.2057 | 4.7 | 4800 | 0.3591 | 0.5551 | | 0.1874 | 5.09 | 5200 | 0.3670 | 0.5512 | | 0.1782 | 5.48 | 5600 | 0.3483 | 0.5406 | | 0.1706 | 5.87 | 6000 | 0.3392 | 0.5338 | | 0.153 | 6.26 | 6400 | 0.3189 | 0.5207 | | 0.1493 | 6.65 | 6800 | 0.3185 | 0.5164 | | 0.1381 | 7.05 | 7200 | 0.3199 | 0.5185 | | 0.1244 | 7.44 | 7600 | 0.3082 | 0.4993 | | 0.1182 | 7.83 | 8000 | 0.3122 | 0.4998 | | 0.1136 | 8.22 | 8400 | 0.3003 | 0.4936 | | 0.1047 | 8.61 | 8800 | 0.2945 | 0.4858 | | 0.0986 | 9.0 | 9200 | 0.2827 | 0.4809 | | 0.0925 | 9.39 | 9600 | 0.2894 | 0.4786 | | 0.0885 | 9.78 | 10000 | 0.2891 | 0.4741 |
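The Wer column is the word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal plain-Python sketch (not the exact evaluation code used for this model):

```python
def wer(reference, hypothesis):
    """Word error rate via word-level edit distance (one DP row kept)."""
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev_diag, d[j] = d[j], min(d[j] + 1,               # deletion
                                        d[j - 1] + 1,           # insertion
                                        prev_diag + (r != h))   # substitution
    return d[-1] / len(ref)
```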
6d89ac86e848fbf310be76e92a030d23
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.0442 - Mae: 0.5385
00b4c795435b3b4bd9f9ff680d0a02ba
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0371 | 1.0 | 1105 | 1.0522 | 0.5256 | | 0.8925 | 2.0 | 2210 | 1.0442 | 0.5385 |
ef335397dbe8ad6edb50c7e97ab3319a
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
This model transcribes speech into the lowercase Latin alphabet including space and apostrophe, and is trained on around 2000 hours of Kinyarwanda speech data. It is an autoregressive "large" variant of Conformer, with around 120 million parameters. See the [model architecture](
5c9343f71480df5538b2b38ce5e94024
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_rw_conformer_transducer_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
22ec1817293a56d27715ad2c6f109203
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Model Architecture The Conformer-Transducer model is an autoregressive variant of the Conformer model [1] for Automatic Speech Recognition which uses Transducer loss/decoding. You can find more details on this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
4012f0d9d9e999048c88db1d56363c92
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Training The NeMo toolkit [3] was used for training the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml). The vocabulary we use contains 28 characters: ```python [' ', "'", 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'] ``` Rare symbols with diacritics were replaced during preprocessing. The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). For the vocabulary of size 1024, we restrict the maximum subtoken length to 4 symbols to avoid populating the vocabulary with specific frequent words from the dataset. This does not affect model performance and potentially helps the model adapt to other domains without retraining the tokenizer. The full config can be found inside the .nemo files.
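The 28-character vocabulary above is just the lowercase Latin alphabet plus space and apostrophe, so it can also be generated programmatically:

```python
# space + apostrophe + 'a'..'z' -> the 28 characters listed above
vocab = [' ', "'"] + [chr(c) for c in range(ord('a'), ord('z') + 1)]
print(len(vocab))  # 28
```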
e0fd9e80010e896319397c059e90f926
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Performance The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding. | Version | Tokenizer | Vocabulary Size | Dev WER| Test WER| Train Dataset | |---------|-----------------------|-----------------|--------|---------|-----------------| | 1.11.0 | SentencePiece BPE, maxlen=4 | 1024 |13.82 | 16.19 | MCV-9.0 Train set|
872142480053cdb41ba49941ce0fd68e
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Limitations Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
94515706e911d6d4734a3e1a2fe406ff
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Deployment with NVIDIA Riva [NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded. Additionally, Riva provides: * World-class out-of-the-box accuracy for the most common languages, with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours * Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization * Streaming speech recognition, Kubernetes-compatible scaling, and enterprise-grade support Although this model isn't yet supported by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva). Check out [Riva live demo](https://developer.nvidia.com/riva
5fe1ecb0c6e48ed7d8e6a45019e85294
cc-by-sa-4.0
['deberta', 'deberta-v2', 'fill-mask']
false
How to use You can use this model for masked language modeling as follows:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-tiny-japanese')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-tiny-japanese')

# input must be pre-segmented into words by Juman++ beforehand
sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。'
encoding = tokenizer(sentence, return_tensors='pt')
output = model(**encoding)
```
f1453826f7eb3834ece85726d1b56c86
cc-by-sa-4.0
['deberta', 'deberta-v2', 'fill-mask']
false
Tokenization The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) was used for pre-training. Each word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
7d9f859128eb381e49af18abf53851d7
cc-by-sa-4.0
['deberta', 'deberta-v2', 'fill-mask']
false
Training data We used the following corpora for pre-training: - Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents) - Japanese portion of CC-100 (85GB, 619M sentences, 66M documents) - Japanese portion of OSCAR (54GB, 326M sentences, 25M documents) Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR. Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
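The stated 171 GB total follows directly from the listed sizes once the tenfold Wikipedia duplication is taken into account:

```python
wikipedia_gb, cc100_gb, oscar_gb = 3.2, 85, 54
total_gb = wikipedia_gb * 10 + cc100_gb + oscar_gb  # Wikipedia duplicated 10 times
print(round(total_gb))  # 171, as stated above
```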
f7ba4e8a81cb271090d16b9f0a797dc9
cc-by-sa-4.0
['deberta', 'deberta-v2', 'fill-mask']
false
Training procedure We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp). Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece). We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese DeBERTa model using [transformers](https://github.com/huggingface/transformers) library. The training took 33 hours using 8 NVIDIA A100-SXM4-40GB GPUs. The following hyperparameters were used during pre-training: - learning_rate: 1e-3 - per_device_train_batch_size: 128 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 6 - total_train_batch_size: 6,144 - max_seq_length: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear schedule with warmup - training_steps: 100,000 - warmup_steps: 10,000 The accuracy of the trained model on the masked language modeling task was 0.593. The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
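The listed total train batch size is the per-device batch size times the number of devices times the gradient-accumulation steps:

```python
per_device_train_batch_size = 128
num_devices = 8
gradient_accumulation_steps = 6
total_train_batch_size = (per_device_train_batch_size
                          * num_devices
                          * gradient_accumulation_steps)
print(total_train_batch_size)  # 6144, matching the listed 6,144
```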
c1bab0c0b65c1a16ad3950fa7e6ef3de
cc-by-sa-4.0
['deberta', 'deberta-v2', 'fill-mask']
false
Acknowledgments This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models". For training models, we used the mdx: a platform for the data-driven future.
a8e31ba2191061476db5f76e924c24b7
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-sst2 This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 4.2692
52cb66ffcc85fb308db73ecfe6efa3a3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.0578 | 0.4 | 500 | 4.3208 | | 4.9384 | 0.8 | 1000 | 4.2217 | | 4.723 | 1.2 | 1500 | 4.2379 | | 4.7743 | 1.6 | 2000 | 4.1685 | | 4.7412 | 2.0 | 2500 | 4.2323 | | 4.6544 | 2.4 | 3000 | 4.1379 | | 4.5779 | 2.8 | 3500 | 4.2603 | | 4.5658 | 3.2 | 4000 | 4.2627 | | 4.5364 | 3.6 | 4500 | 4.2692 |
ae11fad3075ad013ed28a7ed27d12f7b
mit
['uzbek', 'cyrillic', 'news category classifier']
false
Uzbek news category classifier (based on UzBERT) UzBERT fine-tuned to classify news articles into one of the following categories: - дунё - жамият - жиноят - иқтисодиёт - маданият - реклама - саломатлик - сиёсат - спорт - фан ва техника - шоу-бизнес
8f3095dd7330db2040af5d44191571d7
mit
['uzbek', 'cyrillic', 'news category classifier']
false
How to use ```python >>> from transformers import pipeline >>> classifier = pipeline('text-classification', model='coppercitylabs/uzbek-news-category-classifier') >>> text = """Маҳоратли пара-енгил атлетикачимиз Ҳусниддин Норбеков Токио-2020 Паралимпия ўйинларида ғалаба қозониб, делегациямиз ҳисобига навбатдаги олтин медални келтирди. Бу ҳақда МОҚ хабар берди. Норбеков ҳозиргина ядро улоқтириш дастурида ўз ғалабасини тантана қилди. Ушбу машқда вакилимиз 16:13 метр натижа билан энг яхши кўрсаткични қайд этди. Шу тариқа, делегациямиз ҳисобидаги медаллар сони 16 (6 та олтин, 4 та кумуш ва 6 та бронза) тага етди. Кейинги кун дастурларида иштирок этадиган ҳамюртларимизга омад тилаб қоламиз!""" >>> classifier(text) [{'label': 'спорт', 'score': 0.9865401983261108}] ```
902df5c2a9127dc8d751d7456d7cd5e7
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Diffusers

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")  # move the pipeline to the GPU

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```

For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers
875f8f9e0e626dc07cfa2a753df49377
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4725
b480ae9b6f84ab0497f59c42fa6c0d00
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4897 | | 2.5756 | 2.0 | 314 | 2.4230 | | 2.5395 | 3.0 | 471 | 2.4358 |
d54c116b928f88635e58ddbcf6db830c
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1335 - F1: 0.8652
4147fc0b72ca378d856c6a3dbbb1770c
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2566 | 1.0 | 525 | 0.1632 | 0.8292 | | 0.1276 | 2.0 | 1050 | 0.1340 | 0.8475 | | 0.0816 | 3.0 | 1575 | 0.1335 | 0.8652 |
8ddf772220da1183c13a3764b0b82fb5
mit
['text-generation']
false
Hungarian GPT-2 news generator For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained on Hungarian Wikipedia - Fine-tuned on the hin corpus (hvg.hu, index.hu, nol.hu)
439fc5a7a01ee2120ca65d58d7c85672
mit
['text-generation']
false
Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-gpt2, title = {{"Az invazív medvék nem tolerálják a suzukis agressziót" - Magyar GPT-2 kísérleti modell}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {463--476} } ```
c48ed031744ddcf5a377ddb845eede14
apache-2.0
['translation']
false
opus-mt-fr-lg * source languages: fr * target languages: lg * OPUS readme: [fr-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.eval.txt)
7dffe361ce282447918d034cb341df66
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2200
85ab130ba83fc26bb4c081a3b2f78aa0
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 3 - training precision: Mixed Precision
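Unlike the warmup-steps configuration used elsewhere in this document, this run specifies warmup as a ratio; the scheduler converts it into an absolute step count from the (dataset-dependent) total number of training steps. A sketch with a hypothetical total:

```python
def warmup_steps_from_ratio(total_steps, warmup_ratio=0.25):
    """Convert lr_scheduler_warmup_ratio into an absolute warmup step count."""
    return int(total_steps * warmup_ratio)

# hypothetical total of 2,000 optimizer steps
print(warmup_steps_from_ratio(2_000))  # 500
```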
07b9f66682367c4517ab7ee5439c06ad
mit
['roberta-base', 'roberta-base-epoch_16']
false
RoBERTa, Intermediate Checkpoint - Epoch 16 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, among other possible use-cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_16.
f7bc4f2753d5929981a7d4849a46a49e
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1725
85fb29f6bbb7682e6639b936cece9432
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2194 | 1.0 | 5533 | 1.1700 | | 0.9533 | 2.0 | 11066 | 1.1341 | | 0.7452 | 3.0 | 16599 | 1.1725 |
2011a2afd8056984f76f3d14a79aad78
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
ttoottoogg Dreambooth model trained by Tomasgomezdelfresno with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
fb42304452ea1b71114382e7ecc0850b
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7720 - Accuracy: 0.9184
7b92809a3254a73607abbd2d26105b55
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2891 | 0.7429 | | 3.7868 | 2.0 | 636 | 1.8755 | 0.8374 | | 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 | | 1.6928 | 4.0 | 1272 | 0.8573 | 0.9132 | | 0.9056 | 5.0 | 1590 | 0.7720 | 0.9184 |
b75c392056c2c19bc6e99f95fc9253d8
afl-3.0
['generated_from_trainer']
false
covid-general-news-bert This model is a fine-tuned version of [bvrau/covid-twitter-bert-v2-struth](https://huggingface.co/bvrau/covid-twitter-bert-v2-struth) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0688 - Accuracy: 0.9774 - Precision: 0.9781 - Recall: 0.9738 - F1: 0.9760
e1d51fba3bb2eecd4a59c455a4cb75ed
afl-3.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2183 | 1.0 | 365 | 0.0688 | 0.9774 | 0.9781 | 0.9738 | 0.9760 | | 0.0783 | 2.0 | 730 | 0.0754 | 0.9842 | 0.9812 | 0.9855 | 0.9833 | | 0.0354 | 3.0 | 1095 | 0.0766 | 0.9856 | 0.9785 | 0.9913 | 0.9848 | | 0.0185 | 4.0 | 1460 | 0.0956 | 0.9822 | 0.9715 | 0.9913 | 0.9813 | | 0.0227 | 5.0 | 1825 | 0.0693 | 0.9870 | 0.9827 | 0.9898 | 0.9862 | | 0.0084 | 6.0 | 2190 | 0.0870 | 0.9849 | 0.9926 | 0.9753 | 0.9839 | | 0.0021 | 7.0 | 2555 | 0.0729 | 0.9877 | 0.9883 | 0.9855 | 0.9869 | | 0.0002 | 8.0 | 2920 | 0.1197 | 0.9808 | 0.9688 | 0.9913 | 0.9799 | | 0.0033 | 9.0 | 3285 | 0.0768 | 0.9884 | 0.9912 | 0.9840 | 0.9876 | | 0.0009 | 10.0 | 3650 | 0.1013 | 0.9863 | 0.9869 | 0.9840 | 0.9854 | | 0.0 | 11.0 | 4015 | 0.1069 | 0.9863 | 0.9869 | 0.9840 | 0.9854 | | 0.0 | 12.0 | 4380 | 0.1124 | 0.9856 | 0.9854 | 0.9840 | 0.9847 | | 0.0 | 13.0 | 4745 | 0.1175 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 14.0 | 5110 | 0.1221 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 15.0 | 5475 | 0.1256 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 16.0 | 5840 | 0.1286 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 17.0 | 6205 | 0.1300 | 0.9856 | 0.9854 | 0.9840 | 0.9847 | | 0.0 | 18.0 | 6570 | 0.1293 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 19.0 | 6935 | 0.1304 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 20.0 | 7300 | 0.1308 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
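The F1 column is the harmonic mean of precision and recall; reproducing the epoch-1 row from its own precision and recall columns (agreement is up to rounding of the reported values):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# epoch-1 row of the table above
print(round(f1(0.9781, 0.9738), 4))  # close to the reported 0.9760
```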
aa935c46c77753ccc65113281ea8f4ce
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
DreamBooth model of Kratos from God of War <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/grid_hub_512px.png"> This is a Stable Diffusion model fine-tuned on the person concept with DreamBooth. It can be used by adding the string `krts person` to any prompt. Check out the examples below ☟ to see a few practical ways to use it. If you are curious to learn more about the training script, I suggest you visit the [report](https://wandb.ai/matt24/dreambooth-kratos/reports/Kratos-Dreambooth--VmlldzozMzQyMjQ4)📝 I created with Weights & Biases 🐝. This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
359f13641ad6a4cff5718b5b81cca3e2
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
Description This is a Stable Diffusion model fine-tuned on [`matteopilotto/kratos`](https://huggingface.co/datasets/matteopilotto/kratos) dataset containing 10 images of **Kratos** 🪓 from **God of War** for the wildcard theme using [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) pre-trained model.
4aeff4903bc913ebe71a32511edd64fc
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
Example Output <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/sample_outputs/245581956f83dc275e5d.png"> **Prompt:** "An illustration of **krts** **person** punk playing electric guitar, tristan eaton, victo ngai, artgerm, rhads, ross draws"\ **Negative prompt:** "low contrast, blurry, low resolution, warped"\ **Resolution:** 512 x 512\ **Guidance Scale:** 7\ **Inference steps:** 50\ **Seeds:** [556850, 459286, 768745, 594109] --- <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/sample_outputs/4c4a87edbc0d5f03469a.png"> **Prompt:** "a drawing of **krts** **person** wearing a Spider-man costume in the style of Marvel comics"\ **Negative prompt:** "low contrast, blurry, low resolution, warped"\ **Resolution:** 512 x 512\ **Guidance Scale:** 7\ **Inference steps:** 50\ **Seeds:** [553766, 537908, 147395, 343240] --- <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/sample_outputs/4dae428d30bddcc70967.png"> **Prompt:** "an illustration of **krts** **person** sitting in a movie theater eating popcorn watching a movie, unreal engine, cozy indoor lighting, artstation, detailed, digital painting, cinematic, character design by mark ryden and pixar and hayao miyazaki, unreal 5, daz, hyperrealistic, octane render"\ **Negative prompt:** "low contrast, blurry, low resolution, warped"\ **Resolution:** 512 x 512\ **Guidance Scale:** 7\ **Inference steps:** 50\ **Seeds:** [737986, 488711, 799063, 121111]
f01e88d79cdf0df05e085869997a803a
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
stable diffusion hyperparameters

```python
unique_token = 'krts'
class_type = 'person'
prompt = f'An illustration of {unique_token} {class_type} punk playing electric guitar, tristan eaton, victo ngai, artgerm, rhads, ross draws'
negative_prompt = 'low contrast, blurry, low resolution, warped'
guidance_scale = 7
h = 512
w = 512
inference_steps = 50
seed = 594109
```
e2a14d4338ecf2fc60f6294e22547565
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
generate image

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline and seed a generator
# (repo name taken from this model card).
pipeline = StableDiffusionPipeline.from_pretrained("matteopilotto/kratos-sd-v1-4-dreambooth")
generator = torch.Generator().manual_seed(seed)

image = pipeline(
    prompt,
    negative_prompt=negative_prompt,
    guidance_scale=guidance_scale,
    height=h,
    width=w,
    num_inference_steps=inference_steps,
    generator=generator,
).images[0]
```
d1718b7660afb2395ad91b48a31fde90
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1372 - F1: 0.8621
b66d6667f58b00160e0fbbac6ed76abf
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 | | 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 | | 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
ea4099c9d8ecce3e78aae5ee988c3fe6
apache-2.0
[]
false
Model Description DIRECT is a strong baseline for FLIPPED, based on the training objective of [T0-3B](https://huggingface.co/bigscience/T0_3B). With only 5% of the token updates and half the training datasets of T0-3B, DIRECT outperforms T0-3B (+6.38% mean accuracy on 14 NLP tasks, +1.19% mean accuracy on 14 BIG-bench tasks).
bf04327ef367acb25d386b17a1050a42
apache-2.0
[]
false
How to use Our overall explanation models along with ablations can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](https://huggingface.co/seonghyeonye/flipped_11B) checkpoint, as it leads (on average) to the best performance on a variety of NLP tasks. |Model|Number of parameters| |-|-| |[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion| |[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion| Here is how to download the model in PyTorch: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B") tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B") ``` If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`. We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can run inference with our method. **Note: the model was trained with fp32 activations. As such, we highly discourage running inference with fp16.**
0548970bdb1dcd69a1d733dfbbd4b75e
apache-2.0
[]
false
Training procedure DIRECT is based on [T5+LM](https://huggingface.co/google/t5-xl-lm-adapt), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling objective and further pre-trained with a standard language modeling objective on [C4](https://huggingface.co/datasets/c4). Training details: - Fine-tuning steps: 5'000 - Input sequence length: 512 - Target sequence length: 128 - Batch size: 240 - Optimizer: Adafactor - Learning rate: 1e-4 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (any dataset with over 500'000 examples was randomly subsampled down to 500'000 examples; we also randomly choose which instruction to generate at each training step, so ideally each instruction appears about *num_examples/num_templates* times during training).
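The capped proportional sampling strategy above can be sketched in a few lines (a minimal illustration with hypothetical dataset sizes, not the authors' actual code):

```python
def sampling_weights(dataset_sizes, cap=500_000):
    """Proportional sampling weights, with each dataset capped at `cap` examples."""
    capped = {name: min(size, cap) for name, size in dataset_sizes.items()}
    total = sum(capped.values())
    return {name: n / total for name, n in capped.items()}

# Hypothetical sizes: the largest dataset is capped at 500,000 before normalizing.
weights = sampling_weights({"imdb": 25_000, "qqp": 364_000, "dbpedia": 560_000})
```

A dataset twice the cap contributes no more than one exactly at the cap, which keeps very large datasets from dominating the mixture.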
5e6d9911687e8cb75488e6f8ac19a82a
apache-2.0
[]
false
Training data We trained different variants of T0 with different mixtures of datasets. |Model|Training datasets| |--|--| |FLIPPED_11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP| |FLIPPED_3B|Same as FLIPPED-11B| |DIRECT_3B|Same as FLIPPED-11B| We only chose prompt examples that have output labels; these can be found on the dataset page.
3da64df30402cf96804589d8dad93767
apache-2.0
[]
false
Evaluation data We evaluate our models on the following datasets: |Task category|Datasets| |-|-| |Natural language inference|ANLI(R1, R2, R3), CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| |QA|PIQA, ARC-Challenge, OpenbookQA| We also evaluate FLIPPED on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice
68c850c568651df955771eadf4aa9ab8
apache-2.0
[]
false
Label generalization We evaluate the robustness of models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969). |Task category|(Datasets, Template name)| |-|-| |Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)| |Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning) | The template names we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates).
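Concretely, label generalization replaces each dataset's verbalized labels before evaluation; a minimal sketch (the substitution map below is purely illustrative — the actual substitute words are listed in the paper):

```python
# Hypothetical substitution map for a yes/no task.
label_map = {"yes": "true", "no": "false"}

def substitute_labels(examples, label_map):
    """Swap each example's target label according to the map, leaving unmapped labels as-is."""
    return [{**ex, "target": label_map.get(ex["target"], ex["target"])} for ex in examples]

examples = [{"input": "Does the pronoun refer to the manager?", "target": "yes"}]
swapped = substitute_labels(examples, label_map)
```

A robust model should score similarly whether the verbalizer is "yes"/"no" or the substituted pair.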
6e64ac071251ee5a18f3b3203c56b8a3
apache-2.0
[]
false
BibTeX entry and citation info ```bibtex @article{ye2022guess, title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners}, author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon}, journal={arXiv preprint arXiv:2210.02969}, year={2022} } ```
5249ebf967adc30ecbba5859a2a47f0a
mit
['generated_from_trainer']
false
deberta-base-finetuned-aqa-newsqa This model is a fine-tuned version of [stevemobs/deberta-base-finetuned-aqa](https://huggingface.co/stevemobs/deberta-base-finetuned-aqa) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7657
f63ac94f23e40491b9aca69d47d846ab
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
7f4be0b2aaee79da5ff9c0e114fad55b
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.6883 | 1.0 | 17307 | 0.7325 | | 0.4807 | 2.0 | 34614 | 0.7657 |
c4c2daf9e311699e841e1e00e9303668
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
Description: This model was trained by Mathieu Hu using the librimix/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `enh_single` task of the Libri1Mix dataset.
70de07ec22e42ca6b79894b1e69b1aa4
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
Training config: ```yaml data: n_src: 1 sample_rate: 16000 segment: 3 task: enh_single train_dir: data/wav16k/min/train-100 valid_dir: data/wav16k/min/dev filterbank: kernel_size: 16 n_filters: 512 stride: 8 main_args: exp_dir: exp/train_convtasnet_f34664b9 help: None masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 1 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 2 early_stop: True epochs: 200 half_lr: True num_workers: 4 ```
b31b7f3b5947735674655fc9c9861aac
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
Results: ```yaml si_sdr: 13.938355526049932 si_sdr_imp: 10.488574220190232 sdr: 14.567380104207393 sdr_imp: 11.064717304994337 sir: inf sir_imp: nan sar: 14.567380104207393 sar_imp: 11.064717304994337 stoi: 0.9201010933251715 stoi_imp: 0.1241812697846321 ```
8b07aae0db72b1b215f074ac57311319
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
License notice: This work "ConvTasNet_Libri1Mix_enhsingle" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A) by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only). "ConvTasNet_Libri1Mix_enhsingle" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Mathieu Hu.
d4439b5bc0f83d4d12fd5d7f540a7049
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-subreddit_classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2958 - Accuracy: 0.91
61ada5fdf3e727b5573418d15b83155d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4142 | 0.6 | 30 | 1.2653 | 0.45 | | 0.9856 | 1.2 | 60 | 0.7754 | 0.87 | | 0.5056 | 1.8 | 90 | 0.4413 | 0.9 | | 0.2248 | 2.4 | 120 | 0.2984 | 0.92 | | 0.1352 | 3.0 | 150 | 0.3265 | 0.89 | | 0.0856 | 3.6 | 180 | 0.2958 | 0.91 | | 0.0715 | 4.2 | 210 | 0.2611 | 0.92 | | 0.0615 | 4.8 | 240 | 0.2738 | 0.93 |
b2330e224a8771f350fcfd3400017a4d
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-marc-en-j-run This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9189 - Mae: 0.4634
9c0b36540980703ca26d622252e90080
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.2327 | 1.0 | 235 | 1.0526 | 0.6341 | | 0.9943 | 2.0 | 470 | 0.9189 | 0.4634 |
9f478abbabb04b85eb5ce6f33c80c5b1
apache-2.0
['generated_from_trainer']
false
small-vanilla-target-glue-wnli This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.2398 - Accuracy: 0.0845
bf2448bd25986de648d9c4303b1afa8d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6354 | 25.0 | 500 | 2.5362 | 0.0845 | | 0.3043 | 50.0 | 1000 | 5.1175 | 0.0986 | | 0.138 | 75.0 | 1500 | 6.7552 | 0.0986 | | 0.0732 | 100.0 | 2000 | 7.6533 | 0.0986 | | 0.0413 | 125.0 | 2500 | 8.2398 | 0.0845 |
d49469a15caf31f6e60eda16ed43522d
mit
[]
false
kanovt on Stable Diffusion This is the `kanovt` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:
![kanovt 0](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/34.jpeg)
![kanovt 1](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/8.jpeg)
![kanovt 2](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/33.jpeg)
![kanovt 3](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/3.jpeg)
![kanovt 4](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/12.jpeg)
![kanovt 5](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/14.jpeg)
![kanovt 6](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/28.jpeg)
![kanovt 7](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/29.jpeg)
![kanovt 8](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/18.jpeg)
![kanovt 9](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/26.jpeg)
![kanovt 10](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/1.jpeg)
![kanovt 11](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/27.jpeg)
![kanovt 12](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/16.jpeg)
![kanovt 13](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/20.jpeg)
![kanovt 14](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/24.jpeg)
![kanovt 15](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/11.jpeg)
![kanovt 16](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/5.jpeg)
![kanovt 17](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/0.jpeg)
![kanovt 18](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/21.jpeg)
![kanovt 19](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/7.jpeg)
![kanovt 20](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/15.jpeg)
![kanovt 21](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/17.jpeg)
![kanovt 22](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/23.jpeg)
![kanovt 23](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/31.jpeg)
![kanovt 24](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/22.jpeg)
![kanovt 25](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/32.jpeg)
![kanovt 26](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/10.jpeg)
![kanovt 27](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/30.jpeg)
![kanovt 28](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/25.jpeg)
![kanovt 29](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/19.jpeg)
![kanovt 30](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/6.jpeg)
![kanovt 31](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/4.jpeg)
![kanovt 32](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/9.jpeg)
![kanovt 33](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/2.jpeg)
![kanovt 34](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/13.jpeg)
7b15ab939a449f91a62fafdc31fefea0
gpl-3.0
[]
false
ConfliBERT is a pre-trained language model for political conflict and violence. We provide four versions of ConfliBERT: <ol> <li>ConfliBERT-scr-uncased: &nbsp;&nbsp;&nbsp;&nbsp; Pretraining from scratch with our own uncased vocabulary (preferred)</li> <li>ConfliBERT-scr-cased: &nbsp;&nbsp;&nbsp;&nbsp; Pretraining from scratch with our own cased vocabulary</li> <li>ConfliBERT-cont-uncased: &nbsp;&nbsp;&nbsp;&nbsp; Continual pretraining with original BERT's uncased vocabulary</li> <li>ConfliBERT-cont-cased: &nbsp;&nbsp;&nbsp;&nbsp; Continual pretraining with original BERT's cased vocabulary</li> </ol> See more details in https://github.com/eventdata/ConfliBERT/
7bc7aaa41f0b9be67566824a862ff63b
apache-2.0
['automatic-speech-recognition', 'pt']
false
exp_w2v2t_pt_xlsr-53_s677 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
57f9b7cb7d7a63eb0468703035f10251
apache-2.0
['generated_from_trainer']
false
mt5-small-finetuned-2epochs-opus_books-en-to-it This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: 3.0110
0cb905099d6a0bdb6a0d9f39c48d59b3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.957 | 1.0 | 3638 | 3.0675 | | 3.8286 | 2.0 | 7276 | 3.0110 |
8b1fd502a0eb4a08dde0f0342cfaf8ac
apache-2.0
['generated_from_trainer']
false
wav2vec2-xls-r-300m-ar-8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 76.6942 - Wer: 0.2108
a3b8e0001b9274ac3444c0d70a6130cd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP
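The reported total train batch size follows from gradient accumulation; as a quick arithmetic check:

```python
# Values from the hyperparameter list above.
train_batch_size = 64
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 256, matching the reported total_train_batch_size
```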
0e6dce829ca163de36f45a9fac3409dd
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6295.0487 | 4.71 | 400 | 615.8572 | 1.0 | | 1464.0058 | 9.41 | 800 | 111.7187 | 0.5361 | | 425.6333 | 14.12 | 1200 | 80.7770 | 0.3446 | | 280.069 | 18.82 | 1600 | 74.0422 | 0.2980 | | 213.0118 | 23.53 | 2000 | 78.4876 | 0.2783 | | 175.6819 | 28.24 | 2400 | 70.4845 | 0.2491 | | 148.5846 | 32.94 | 2800 | 70.5758 | 0.2443 | | 131.1029 | 37.65 | 3200 | 75.3770 | 0.2371 | | 116.7131 | 42.35 | 3600 | 78.7061 | 0.2268 | | 105.369 | 47.06 | 4000 | 76.4783 | 0.2210 | | 97.0829 | 51.76 | 4400 | 76.6051 | 0.2153 | | 90.4009 | 56.47 | 4800 | 76.6942 | 0.2108 |
a3d3750514b3b621afb1dab79ae3c8da
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Model Description FIM-1.3B is the first of a series of large-scale infilling-enabled autoregressive language models trained by CarperAI. Future models, both larger and smaller, trained on greater quantities of code data, will be released, potentially with different architectural variations optimized for code. This is a preliminary release of an experimental artifact and should be treated as such. We are releasing these results and this model in the hopes that they may be useful to the greater research community, especially those interested in LMs for code and pair programming tools. CarperAI will be releasing larger LMs better tuned for code in the near future, building on these experiments.
863717e692f27b85d5fbf278eef5908e
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Model Dimensions

| Hyperparameter       | Value                                                                 |
|----------------------|-----------------------------------------------------------------------|
| \\(n_{parameters}\\) | 1,331,810,304                                                         |
| \\(n_{layers}\\)     | 24                                                                    |
| \\(d_{model}\\)      | 2048                                                                  |
| \\(d_{ff}\\)         | 8192                                                                  |
| \\(n_{heads}\\)      | 16                                                                    |
| \\(d_{head}\\)       | 128                                                                   |
| \\(n_{ctx}\\)        | 2048                                                                  |
| \\(n_{vocab}\\)      | 50280                                                                 |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864)  |

The model consists of 24 transformer layers with a hidden dimension of 2048, and a feedforward intermediate dimension of 8192. The hidden dimension is split into 16 heads for self-attention, each with a dimension of 128. Rotary Position Embedding (RoPE) is used. The model is trained with the same tokenizer as [GPT-NeoX-20b](https://arxiv.org/abs/2204.06745), for a vocabulary size of 50254 tokens.
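The head dimension follows from splitting the hidden dimension across the attention heads, and the feedforward width follows the standard 4× expansion; a quick consistency check of the table:

```python
# Values from the dimensions table above.
d_model, n_heads, d_ff = 2048, 16, 8192
d_head = d_model // n_heads
assert d_head == 128        # matches the table entry for d_head
assert d_ff == 4 * d_model  # the standard 4x feedforward expansion
print("dimensions consistent")
```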
721390f40bae03ed9940cf8886c0b7a8
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Training Data The model was trained on the Pile, an 800Gb dataset composed of varied web corpora. The datasheet and paper for the Pile can be found [here](https://arxiv.org/abs/2201.07311) and [here](https://arxiv.org/abs/2101.00027) respectively.
c5681200c81080817be68065f1e799f1
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Training Details This model was trained for 47,000 steps at a batch size of 6,291,456 tokens per step in the [GPT-NeoX codebase](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. Following [Bavarian et al. 2022](https://arxiv.org/abs/2207.14255), we train the model to additionally perform infilling via a data transformation applied randomly to 90% of input contexts at train-time. Middle segments “to infill” were selected uniformly at random from contexts at the character level, and these contexts were then reformatted as \<SUF\> {last 1/3rd of the context} \<PRE\> {first 1/3rd of the context} \<MID\> {middle 1/3rd of the context} \<EOD\>
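The transformation above can be sketched in a few lines (a simplified character-level illustration using literal sentinel strings; the actual training pipeline operates on token ids with the dedicated sentinel tokens):

```python
import random

def fim_transform(text, fim_rate=0.9, rng=random):
    """With probability `fim_rate`, pick a middle span uniformly at the character
    level and reorder the context as <SUF>suffix<PRE>prefix<MID>middle<EOD>."""
    if rng.random() >= fim_rate:
        return text  # the remaining ~10% of contexts are left untransformed
    i, j = sorted(rng.sample(range(len(text) + 1), 2))
    prefix, middle, suffix = text[:i], text[i:j], text[j:]
    return f"<SUF>{suffix}<PRE>{prefix}<MID>{middle}<EOD>"
```

Because the suffix is moved in front of the prefix (the SPM ordering), an autoregressive model conditioned on `<SUF>…<PRE>…<MID>` learns to generate the missing middle span.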
72f904f20cc3384dc46ac1c0a2b1bf55
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
How to use This model can be easily loaded using the `AutoModelForCausalLM` class: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B") model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B") ```
8d505a5516d3a53632c3b9098dd6939e
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Performing Infilling

Suppose we have some text that we would like to perform infilling on at a certain “cursor location”. This would have the form {some prelude text here} \<INFILLING LOCATION\> {some text following cursor}. The way to perform infilling generation is to place the input text into this format:

\<SUF\> {some text following cursor} \<PRE\> {some prelude text here} \<MID\> ... language model output is generated after the \<MID\> token!

As a concrete example, here is a code snippet that should allow a model to perform infilling. (Note: there was an issue where the sentinel `<|SUF|>`, `<|PRE|>`, and `<|MID|>` tokens did not map to the correct ids in the uploaded tokenizer and model card! Please try clearing the Hugging Face cache and redownloading the model :))

Here is a minimal example of performing open-ended generation with this model, on a simple function `score(x, y)`:

```
def score(x,y) -> int:
"""
```

and also infilling with the function and end of docstring already placed:

```
def score(x,y) -> int:
"""
<|MID|> (infill here)
"""
score = x + y
return score
```

```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B")
tok = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B")
```
bde0d73e437683ec768b6ce18d4b3da7
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
infilling demo

```
prefix = 'def score(x, y) -> int:\n"""\n'
suffix = '"""\n\n score = x + y\n return score'
model_input = [50277, *tok(suffix)["input_ids"], 50278, *tok(prefix)["input_ids"], 50279]
output = tok.decode(model.generate(torch.IntTensor(model_input).unsqueeze(0), max_length=40)[0])
print(output)
```

outputs: `'<|SUF|>"""\n\n score = x + y\n return score<|PRE|>def score(x, y) -> int:\n"""\n<|MID|> score(x, y) -> int\n<|endoftext|>'`

```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
```
fb909b4f758388f8bde3664a8cde4cbe
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
non-infilling demo

```
prefix = 'def score(x, y) -> int:\n"""\n'
model_input = [*tok(prefix)["input_ids"]]
output = tok.decode(model.generate(torch.IntTensor(model_input).unsqueeze(0), max_length=100)[0])
print(output)
```

outputs: `'def score(x, y) -> int:\n"""\n Return the score of the given point.\n """\n return sum(x * y for x, y in zip(x_list, y_list))\n\ndef get_point_score(x, y) -> int:\n """\n Return the score of the given point.\n """\n return sum(x * y for x, y in zip(x_list, y'`

The sentinel tokens are now accessible via `tokenizer.decode(50277) = "<|SUF|>"`, `tokenizer.decode(50278) = "<|PRE|>"`, `tokenizer.decode(50279) = "<|MID|>"`.
9b90e4bb6571466f1eb9bf3da5de066b
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Intended Uses and Limitations FIM-1.3B learns a representation of the English language that can be used to extract features useful for downstream NLP and Code generation tasks. However, the model has solely been trained on a standard next-token-prediction language modeling task on its training data.
ba1946e2815631832aa43d38942aad2a
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Limitations and Biases FIM-1.3B was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. FIM-1.3B may produce socially unacceptable or otherwise harmful text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how FIM-1.3B will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. Code generated by FIM-1.3B should also be checked for security errors by a human before use in production.
dddd3c48c6b08b0e3b89c0d5e817e8b2
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Evaluation results We evaluate our model on a number of standard NLP datasets to verify that our infilling model performs on par with a comparable autoregressive model. We use the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) library developed by EleutherAI for all evaluations except for HumanEval-infilling, for which we use the code in [https://github.com/openai/human-eval-infilling](https://github.com/openai/human-eval-infilling) to evaluate performance. All 3 models here are trained using the same configuration with differing FIM hyperparameters and/or different positional embeddings. "AR-1.3B" refers to a model trained without FIM and with rotary positional embeddings, "CarperAI/FIM-NeoX-1.3B" refers to this model (trained with a FIM rate of 0.9 in SPM mode according to [Bavarian et al. 2022](https://arxiv.org/abs/2207.14255)), and "FIM-1.3B-alibi" refers to a model trained with [AliBi](https://arxiv.org/abs/2108.12409) positional embeddings but otherwise the same as this model. | Model | HumanEval-infilling | arc\_easy | arc\_challenge | lambada | piqa | sciq | wsc | winogrande | |-----------------|---------------------|----------|---------------|---------|--------|-------|--------|------------| | AR-1.3B | 0.0029 | 0.5816 | 0.2465 | 7.03 | 0.7116 | 0.85 | 0.3654 | 0.5651 | | CarperAI/FIM-NeoX-1.3B | 0.0155 | 0.5829 | 0.2457 | 7.08 | 0.7029 | 0.861 | 0.3654 | 0.5390 | | FIM-1.3B-alibi | 0.0029 | 0.5589 | 0.25 | 7.49 | 0.6926 | 0.856 | 0.3654 | 0.5406 | Here HumanEval-infilling is reported as Pass@10 with a temperature of 0.8 (such that 100 times the score reported here = Pass@10 as a percentage), Lambada is reported as perplexity, and all other benchmarks report accuracy as a number between 0 and 1. These results are subject to change, but appear to indicate that AliBi with FIM does not enable infilling, while rotary positional embeddings do allow for infilling to be learned.
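Pass@10 at temperature 0.8 is presumably computed with the standard unbiased pass@k estimator from the HumanEval paper (an assumption on our part; the sketch below takes `n` samples per task, `c` of which pass):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k failures: every size-k draw contains a pass
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 10, 10))  # 1.0 when every sample passes
print(pass_at_k(10, 0, 10))   # 0.0 when no sample passes
```

Multiplying the resulting fraction by 100 gives Pass@10 as a percentage, matching the scaling described above.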
44e2bdc1dc23830409ebf47b7a966c08
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Licensing This model is licensed under the terms of the Apache License 2.0. ``` Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ```
aac9b7e544a5f0d725f6fb9ea1974217
apache-2.0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
Acknowledgements This project would not have been possible without compute resources provided by [Stability.ai](https://stability.ai) and [CarperAI](https://carper.ai/). This model was trained by Hailey Schoelkopf, and would also not have been possible without help, guidance, and feedback by many including Louis Castricato, Stella Biderman, Shivanshu Purohit, Quentin Anthony, and others.
528370a0e0388d8c71bcaf8a5956bce6
apache-2.0
['automatic-speech-recognition', 'id']
false
exp_w2v2t_id_xlsr-53_s358 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
139484fd1d2ccaafdcb0e9ae8b92df84
apache-2.0
['generated_from_trainer']
false
kd-distilBERT-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7752 - Accuracy: 0.9129
09957362e050575fd9624b98a653bd8b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.3211 | 1.0 | 318 | 3.3313 | 0.7235 | | 2.6568 | 2.0 | 636 | 1.9016 | 0.8452 | | 1.5575 | 3.0 | 954 | 1.1668 | 0.8955 | | 1.0094 | 4.0 | 1272 | 0.8619 | 0.9087 | | 0.7914 | 5.0 | 1590 | 0.7752 | 0.9129 |
912b48cab36b4da88ba589b4f2e807e3
apache-2.0
['question_answering', 'qa', 'answer_consolidation']
false
QA Consolidation Model

Model card for the QA Consolidation model (step 3) of the Discord Questions framework (EMNLP 2022 - Findings). Given a question Q and two answers (a1, a2), the model outputs an answer similarity score on a scale from 1 (most dissimilar) to 5 (most similar). See the example below for input formatting. The model is a RoBERTa-large model, fine-tuned on the [MOCHA dataset](https://arxiv.org/abs/2010.03636) and a 5-point version of the [Answer Equivalence](https://arxiv.org/abs/2202.07654v1) dataset.

Example usage of the model:

```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import itertools

ae_tokenizer = AutoTokenizer.from_pretrained("Salesforce/qa_consolidation")
ae_model = AutoModelForSequenceClassification.from_pretrained("Salesforce/qa_consolidation").eval()

question = "When will the recession happen?"
answers = ["probably next January", "never", "we're already in a recession", "it won't happen", "it's going on right now", "not before next year", "upcoming January-March"]

dataset = [{"a1": a1, "a2": a2, "input": "%s <sep> %s <sep> %s" % (question, a1, a2)} for a1, a2 in itertools.combinations(answers, 2)]
input_ids = ae_tokenizer.batch_encode_plus([d["input"] for d in dataset], add_special_tokens=False, padding=True, return_tensors="pt")["input_ids"]
scores = ae_model(input_ids=input_ids)["logits"][:, 0].tolist()

for d, score in zip(dataset, scores):
    d["score"] = score

for d in sorted(dataset, key=lambda d: -d["score"]):
    print("[Score: %.3f] %s" % (d["score"], d["input"]))
```

The output then looks like:

```
[Score: 4.980] When will the recession happen? <sep> never <sep> it won't happen
[Score: 3.831] When will the recession happen? <sep> probably next January <sep> upcoming January-March
[Score: 3.366] When will the recession happen? <sep> we're already in a recession <sep> it's going on right now
[Score: 2.302] When will the recession happen? <sep> never <sep> not before next year
[Score: 1.899] When will the recession happen? <sep> probably next January <sep> not before next year
[Score: 1.290] When will the recession happen? <sep> it won't happen <sep> not before next year
[Score: 1.230] When will the recession happen? <sep> we're already in a recession <sep> it won't happen
[Score: 1.187] When will the recession happen? <sep> not before next year <sep> upcoming January-March
[Score: 1.126] When will the recession happen? <sep> it won't happen <sep> it's going on right now
[Score: 1.108] When will the recession happen? <sep> never <sep> we're already in a recession
[Score: 1.099] When will the recession happen? <sep> we're already in a recession <sep> not before next year
[Score: 1.091] When will the recession happen? <sep> probably next January <sep> it's going on right now
[Score: 1.084] When will the recession happen? <sep> never <sep> it's going on right now
[Score: 1.048] When will the recession happen? <sep> probably next January <sep> we're already in a recession
[Score: 1.023] When will the recession happen? <sep> probably next January <sep> it won't happen
[Score: 1.017] When will the recession happen? <sep> probably next January <sep> never
[Score: 1.006] When will the recession happen? <sep> it's going on right now <sep> not before next year
[Score: 0.994] When will the recession happen? <sep> we're already in a recession <sep> upcoming January-March
[Score: 0.917] When will the recession happen? <sep> it's going on right now <sep> upcoming January-March
[Score: 0.903] When will the recession happen? <sep> it won't happen <sep> upcoming January-March
[Score: 0.896] When will the recession happen? <sep> never <sep> upcoming January-March
```

In the paper, we find that a threshold of `T=2.75` achieves the highest F1 score on the validation portions of the two datasets.
In the above example, only the first three pairs would be classified as equivalent answers, and all pairs below would be labeled as non-equivalent answers.
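To consolidate answers rather than just label pairs, the above-threshold pairs can be merged into groups. The sketch below is illustrative, not part of the released model: it hardcodes a few (answer1, answer2, score) triples mirroring the example output above, applies the paper's `T=2.75` threshold, and groups connected answers with a small union-find.

```python
# Hypothetical scored pairs mirroring the example output above; only pairs
# scoring >= T are treated as equivalent.
T = 2.75
scored_pairs = [
    ("never", "it won't happen", 4.980),
    ("probably next January", "upcoming January-March", 3.831),
    ("we're already in a recession", "it's going on right now", 3.366),
    ("never", "not before next year", 2.302),  # below threshold, not merged
]

# Minimal union-find: answers connected by an above-threshold pair end up
# in the same group.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

answers = {a for pair in scored_pairs for a in pair[:2]}
for a in answers:
    find(a)  # register every answer, including ones never merged
for a1, a2, score in scored_pairs:
    if score >= T:
        union(a1, a2)

groups = {}
for a in answers:
    groups.setdefault(find(a), []).append(a)
print(sorted(len(g) for g in groups.values()))  # three merged pairs, one singleton
```

With these scores, the seven answers collapse into three two-answer groups plus the unmatched "not before next year".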
27e6bbd9fe74001d1bfc09437bbdb2dd
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__sst2__train-32-7

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.6736
- Accuracy: 0.5931
8306bbdaefdccf2aabcb1e000576b706
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7094        | 1.0   | 13   | 0.6887          | 0.5385   |
| 0.651         | 2.0   | 26   | 0.6682          | 0.6923   |
| 0.6084        | 3.0   | 39   | 0.6412          | 0.6923   |
| 0.4547        | 4.0   | 52   | 0.6095          | 0.6923   |
| 0.2903        | 5.0   | 65   | 0.6621          | 0.6923   |
| 0.1407        | 6.0   | 78   | 0.7130          | 0.7692   |
| 0.0444        | 7.0   | 91   | 0.9007          | 0.6923   |
| 0.0176        | 8.0   | 104  | 0.9525          | 0.7692   |
| 0.0098        | 9.0   | 117  | 1.0289          | 0.7692   |
| 0.0071        | 10.0  | 130  | 1.0876          | 0.7692   |
| 0.0052        | 11.0  | 143  | 1.1431          | 0.6923   |
| 0.0038        | 12.0  | 156  | 1.1687          | 0.7692   |
| 0.0034        | 13.0  | 169  | 1.1792          | 0.7692   |
| 0.0031        | 14.0  | 182  | 1.2033          | 0.7692   |
4e3c6fdaf47ef1876fa052cd00049972
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-wikitext2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 3.6421
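For a causal language model, the reported evaluation loss is the mean per-token cross-entropy, so a corresponding perplexity can be read off as its exponential. This is a back-of-the-envelope conversion, not a figure from the card:

```python
import math

eval_loss = 3.6421  # final validation loss reported above
perplexity = math.exp(eval_loss)
print(f"perplexity = {perplexity:.1f}")  # roughly 38.2
```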
79facca4f583cdfccfd561df07015026
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602        | 1.0   | 2334 | 3.6669          |
| 3.653         | 2.0   | 4668 | 3.6472          |
| 3.6006        | 3.0   | 7002 | 3.6421          |
9624009e2d2d17982901bbc0b5ae6ecd
mit
['generated_from_trainer']
false
my-finetuned-xml-roberta2

This model is a fine-tuned version of [knurm/my-finetuned-xml-roberta](https://huggingface.co/knurm/my-finetuned-xml-roberta) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.4644
c60f0c5e4b15e2251f2262c303e9f4b1
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.4491        | 1.0   | 5652  | 3.3339          |
| 3.171         | 2.0   | 11304 | 3.2681          |
| 2.9518        | 3.0   | 16956 | 3.3003          |
| 2.7305        | 4.0   | 22608 | 3.3447          |
| 2.5974        | 5.0   | 28260 | 3.4644          |
df077594a19b0136a48f93d50d36a743