license: string (2-30 chars)
tags: string (2-513 chars)
is_nc: bool (1 class)
readme_section: string (201 chars-597k chars)
hash: string (32 chars)
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.2651
- F1: 0.8355
be7922389680d525525bcc77a86ed5db
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 |
| 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 |
| 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 |
fb333b01f29ab276c080dad435bbee24
apache-2.0
['audio', 'automatic-speech-recognition', 'en', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Fine-tuned XLSR-53 large model for speech recognition in English

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16 kHz. This model has been fine-tuned thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)

The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
11237fdac48ec3d7da80a618d2c4ed3f
apache-2.0
['audio', 'automatic-speech-recognition', 'en', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Citation

If you want to cite this model, you can use this:

```bibtex
@misc{grosman2021xlsr53-large-english,
  title={Fine-tuned {XLSR}-53 large model for speech recognition in {E}nglish},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english}},
  year={2021}
}
```
85037d71df1a3c32ea59b6062eccc329
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4779
- Wer: 0.3453
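The Wer figure above is word error rate: the word-level edit distance between hypothesis and reference, divided by the reference length. As an illustration only, here is a minimal pure-Python WER (the `wer` helper below is ours, not the implementation that produced the score above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution

    return d[len(ref)][len(hyp)] / len(ref)

# 1 substitution + 1 deletion over 6 reference words
print(wer("the cat sat on the mat", "the cat sit on mat"))
```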
e3caa5e5e9c35442a7547643a81e1e05
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4307 | 4.0 | 500 | 1.4129 | 0.9980 |
| 0.626 | 8.0 | 1000 | 0.4605 | 0.4499 |
| 0.2199 | 12.0 | 1500 | 0.4457 | 0.3898 |
| 0.1303 | 16.0 | 2000 | 0.4418 | 0.3771 |
| 0.0851 | 20.0 | 2500 | 0.4647 | 0.3548 |
| 0.0604 | 24.0 | 3000 | 0.4603 | 0.3499 |
| 0.0461 | 28.0 | 3500 | 0.4779 | 0.3453 |
2b168032950c52394e2957c83ff2c5a6
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-wikitext2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.6652
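For a causal language model the eval loss is mean cross-entropy in nats, so the corresponding perplexity is exp(loss). The card reports only the loss; a quick conversion:

```python
import math

eval_loss = 3.6652  # final validation loss reported above
perplexity = math.exp(eval_loss)
print(round(perplexity, 1))  # ≈ 39.1
```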
5c33011e6a45928c12249483ac6a3c64
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9109 | 1.0 | 584 | 3.6956 |
| 3.7555 | 2.0 | 1168 | 3.6712 |
| 3.7002 | 3.0 | 1752 | 3.6652 |
eb7f839c1d3481df3c775f79629c06b7
apache-2.0
['generated_from_trainer']
false
bert-tiny-finetuned-xglue-ner

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the xglue dataset. It achieves the following results on the evaluation set:
- Loss: 0.2489
- Precision: 0.6308
- Recall: 0.6681
- F1: 0.6489
- Accuracy: 0.9274
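F1 here is the harmonic mean of precision and recall, and the reported numbers are self-consistent:

```python
precision, recall = 0.6308, 0.6681
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.6489, matching the reported F1
```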
26cb79006343a05cc67b91701b11f8c9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4082 | 1.0 | 1756 | 0.3326 | 0.5600 | 0.5798 | 0.5697 | 0.9118 |
| 0.2974 | 2.0 | 3512 | 0.2635 | 0.6143 | 0.6562 | 0.6346 | 0.9248 |
| 0.2741 | 3.0 | 5268 | 0.2489 | 0.6308 | 0.6681 | 0.6489 | 0.9274 |
2d99ad2d61d8355c1b74bf7b8b8deb8f
mit
[]
false
YB Anime on Stable Diffusion

This is the `<anime-character>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<anime-character> 0](https://huggingface.co/sd-concepts-library/yb-anime/resolve/main/concept_images/5.jpeg)
![<anime-character> 1](https://huggingface.co/sd-concepts-library/yb-anime/resolve/main/concept_images/6.jpeg)
![<anime-character> 2](https://huggingface.co/sd-concepts-library/yb-anime/resolve/main/concept_images/3.jpeg)
![<anime-character> 3](https://huggingface.co/sd-concepts-library/yb-anime/resolve/main/concept_images/0.jpeg)
![<anime-character> 4](https://huggingface.co/sd-concepts-library/yb-anime/resolve/main/concept_images/2.jpeg)
![<anime-character> 5](https://huggingface.co/sd-concepts-library/yb-anime/resolve/main/concept_images/1.jpeg)
![<anime-character> 6](https://huggingface.co/sd-concepts-library/yb-anime/resolve/main/concept_images/4.jpeg)
fcba532151973baea816b278b9a5ab7b
cc-by-4.0
[]
false
Model description

This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Catalan-Spanish datasets totaling up to 92 million sentences. Additionally, the model is evaluated on several public datasets comprising five different domains (general, administrative, technology, biomedical, and news).
7795a95f7cc24b1ba95632664fd4d3fb
cc-by-4.0
[]
false
Usage

Required libraries:

```bash
pip install ctranslate2 pyonmttok
```

Translate a sentence using Python:

```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-ca-es", revision="main")

tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Benvingut al projecte Aina!")

translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
e6775b681e1f9d3a8fbbf485b0f25784
cc-by-4.0
[]
false
Training data

The model was trained on a combination of the following datasets:

| Dataset | Sentences | Tokens |
|-------------------|----------------|-------------------|
| DOCG v2 | 8,472,786 | 188,929,206 |
| El Periodico | 6,483,106 | 145,591,906 |
| EuroParl | 1,876,669 | 49,212,670 |
| WikiMatrix | 1,421,077 | 34,902,039 |
| Wikimedia | 335,955 | 8,682,025 |
| QED | 71,867 | 1,079,705 |
| TED2020 v1 | 52,177 | 836,882 |
| CCMatrix v1 | 56,103,820 | 1,064,182,320 |
| MultiCCAligned v1 | 2,433,418 | 48,294,144 |
| ParaCrawl | 15,327,808 | 334,199,408 |
| **Total** | **92,578,683** | **1,875,910,305** |
8831daa1cb0d7d719e86ba450ebc17b8
cc-by-4.0
[]
false
Hyperparameters

The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf) The following hyperparameters were set on the Fairseq toolkit:

| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Architecture | transformer_vaswani_wmt_en_de_bi |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 96,000 |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 1e-3 |
| LR scheduler | inverse sqrt |
| Warmup updates | 4000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |

The model was trained using shards of 10 million sentences, for a total of 13,000 updates. Weights were saved every 1,000 updates, and the reported results are the average of the last 6 checkpoints.
69352965c1b8edc456f2d596ae01d225
cc-by-4.0
[]
false
Evaluation results

Below are the evaluation results on machine translation from Catalan to Spanish, compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es):

| Test set | SoftCatalà | Google Translate | mt-aina-ca-es |
|----------------------|------------|------------------|---------------|
| Spanish Constitution | 70.7 | **77.1** | 75.5 |
| United Nations | 78.1 | 84.3 | **86.3** |
| Flores 101 dev | 23.5 | 24.0 | **24.1** |
| Flores 101 devtest | 24.1 | 24.2 | **24.4** |
| Cybersecurity | 67.3 | **76.9** | 75.1 |
| WMT 19 biomedical | 60.4 | 62.7 | **63.0** |
| WMT 13 news | 22.5 | 23.1 | **23.4** |
| aina_aapp_ca-es | 80.9 | 81.4 | **82.8** |
| Average | 53.4 | 56.7 | **56.8** |
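The Average row is the arithmetic mean of the eight test sets, rounded to one decimal as in the table. A quick check on the mt-aina-ca-es column:

```python
# Per-test-set scores for mt-aina-ca-es, in table order
mt_aina = [75.5, 86.3, 24.1, 24.4, 75.1, 63.0, 23.4, 82.8]
print(round(sum(mt_aina) / len(mt_aina), 1))  # 56.8, matching the Average row
```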
5ec0ed911be59fd5025baec2b5d89086
apache-2.0
['roberta', 'NLU', 'NLI', 'Chinese']
false
模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | Roberta | 110M | 自然语言推理 NLI |
9af8381b0cd0eade00c88ad74fcc6e40
apache-2.0
['roberta', 'NLU', 'NLI', 'Chinese']
false
模型信息 Model Information

基于[chinese-roberta-wwm-ext-base](https://huggingface.co/hfl/chinese-roberta-wwm-ext),我们在收集的4个中文领域的NLI(自然语言推理)数据集,总计1014787个样本上微调了一个NLI版本。

Based on [chinese-roberta-wwm-ext-base](https://huggingface.co/hfl/chinese-roberta-wwm-ext), we fine-tuned an NLI version on 4 Chinese Natural Language Inference (NLI) datasets, totaling 1,014,787 samples.
299c1c95d5a3cb5a0d8875d7731a573e
apache-2.0
['roberta', 'NLU', 'NLI', 'Chinese']
false
使用 Usage

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-NLI')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-NLI')

texta = '今天的饭不好吃'
textb = '今天心情不好'

output = model(torch.tensor([tokenizer.encode(texta, textb)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```
76b571bfc127450c6d8bdfd31926c65d
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4170
- Wer: 0.4282
570cb998b4e06c27f3a1cba601d4a28f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
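The total_train_batch_size above is derived rather than set directly: it is the per-device batch size times the gradient accumulation steps (times the number of devices, assumed here to be one for a Colab run):

```python
train_batch_size = 16            # per-device batch size from the list above
gradient_accumulation_steps = 2  # from the list above
num_devices = 1                  # assumption: single-GPU Colab run

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32, as listed above
```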
e9386cfb1347c4c4d9b28bdf9fe407c3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.7049 | 0.8 | 200 | 3.0234 | 0.9683 |
| 2.9496 | 1.6 | 400 | 2.9348 | 0.9683 |
| 2.6582 | 2.4 | 600 | 1.2843 | 0.9818 |
| 1.0417 | 3.2 | 800 | 0.6061 | 0.5853 |
| 0.7853 | 4.0 | 1000 | 0.5113 | 0.5013 |
| 0.681 | 4.8 | 1200 | 0.4723 | 0.4695 |
| 0.6074 | 5.6 | 1400 | 0.4528 | 0.4491 |
| 0.5539 | 6.4 | 1600 | 0.4818 | 0.4555 |
| 0.5257 | 7.2 | 1800 | 0.4439 | 0.4298 |
| 0.5038 | 8.0 | 2000 | 0.4495 | 0.4398 |
| 0.4868 | 8.8 | 2200 | 0.4467 | 0.4392 |
| 0.4727 | 9.6 | 2400 | 0.4076 | 0.4045 |
| 0.4493 | 10.4 | 2600 | 0.4559 | 0.4452 |
| 0.4452 | 11.2 | 2800 | 0.4174 | 0.4124 |
| 0.4407 | 12.0 | 3000 | 0.4188 | 0.4098 |
| 0.4385 | 12.8 | 3200 | 0.4123 | 0.4098 |
| 0.4192 | 13.6 | 3400 | 0.4090 | 0.4199 |
| 0.4061 | 14.4 | 3600 | 0.4170 | 0.4282 |
6a5de14e0cad48ef13d36ed7944bff42
mit
['generated_from_trainer']
false
poem-gen-spanish-t5-small-d2

This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.9027
5a7309fbe6f35c12f61a40de614d8f3b
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
5f83028ba19c558a93702a18fbc4d1a0
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.223 | 0.73 | 30000 | 3.1479 |
| 3.0109 | 1.46 | 60000 | 3.0544 |
| 2.8649 | 2.19 | 90000 | 2.9730 |
| 2.7603 | 2.93 | 120000 | 2.9301 |
| 2.6343 | 3.66 | 150000 | 2.9188 |
| 2.5094 | 4.39 | 180000 | 2.9064 |
| 2.391 | 5.12 | 210000 | 2.9073 |
| 2.3592 | 5.85 | 240000 | 2.9022 |
56a9fa3609aefce95b13a3d4cec0c8ca
apache-2.0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
false
wav2vec2-large-xls-r-300m-hi-wx1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset. It achieves the following results on the evaluation set:
- Loss: 0.6552
- Wer: 0.3200

Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_7_0 with the test split:

```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
```

2. To evaluate on speech-recognition-community-v2/dev_data: NA
437e2fe7a25f2fcb173274f9b9bd3c81
apache-2.0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1800
- num_epochs: 50
- mixed_precision_training: Native AMP
16ae1813dfc8df68e92b4879886b65b5
apache-2.0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.2663 | 1.36 | 200 | 5.9245 | 1.0 |
| 4.1856 | 2.72 | 400 | 3.4968 | 1.0 |
| 3.3908 | 4.08 | 600 | 2.9970 | 1.0 |
| 1.5444 | 5.44 | 800 | 0.9071 | 0.6139 |
| 0.7237 | 6.8 | 1000 | 0.6508 | 0.4862 |
| 0.5323 | 8.16 | 1200 | 0.6217 | 0.4647 |
| 0.4426 | 9.52 | 1400 | 0.5785 | 0.4288 |
| 0.3933 | 10.88 | 1600 | 0.5935 | 0.4217 |
| 0.3532 | 12.24 | 1800 | 0.6358 | 0.4465 |
| 0.3319 | 13.6 | 2000 | 0.5789 | 0.4118 |
| 0.2877 | 14.96 | 2200 | 0.6163 | 0.4056 |
| 0.2663 | 16.33 | 2400 | 0.6176 | 0.3893 |
| 0.2511 | 17.68 | 2600 | 0.6065 | 0.3999 |
| 0.2275 | 19.05 | 2800 | 0.6183 | 0.3842 |
| 0.2098 | 20.41 | 3000 | 0.6486 | 0.3864 |
| 0.1943 | 21.77 | 3200 | 0.6365 | 0.3885 |
| 0.1877 | 23.13 | 3400 | 0.6013 | 0.3677 |
| 0.1679 | 24.49 | 3600 | 0.6451 | 0.3795 |
| 0.1667 | 25.85 | 3800 | 0.6410 | 0.3635 |
| 0.1514 | 27.21 | 4000 | 0.6000 | 0.3577 |
| 0.1453 | 28.57 | 4200 | 0.6020 | 0.3518 |
| 0.134 | 29.93 | 4400 | 0.6531 | 0.3517 |
| 0.1354 | 31.29 | 4600 | 0.6874 | 0.3578 |
| 0.1224 | 32.65 | 4800 | 0.6519 | 0.3492 |
| 0.1199 | 34.01 | 5000 | 0.6553 | 0.3490 |
| 0.1077 | 35.37 | 5200 | 0.6621 | 0.3429 |
| 0.0997 | 36.73 | 5400 | 0.6641 | 0.3413 |
| 0.0964 | 38.09 | 5600 | 0.6722 | 0.3385 |
| 0.0931 | 39.45 | 5800 | 0.6365 | 0.3363 |
| 0.0944 | 40.81 | 6000 | 0.6454 | 0.3326 |
| 0.0862 | 42.18 | 6200 | 0.6497 | 0.3256 |
| 0.0848 | 43.54 | 6400 | 0.6599 | 0.3226 |
| 0.0793 | 44.89 | 6600 | 0.6625 | 0.3232 |
| 0.076 | 46.26 | 6800 | 0.6463 | 0.3186 |
| 0.0749 | 47.62 | 7000 | 0.6559 | 0.3225 |
| 0.0663 | 48.98 | 7200 | 0.6552 | 0.3200 |
398e4a8ddba8a09ac6ca3218e5b6bcb7
apache-2.0
['generated_from_trainer']
false
distilbert-IMDB

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1905
- Accuracy: 0.9295
b537ff4f835bc7af3386da5a726afe3b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1928 | 1.0 | 2000 | 0.1905 | 0.9295 |
25c7f9785785faf50f12a7be6370d657
creativeml-openrail-m
['text-to-image']
false
It can be used by modifying the `instance_prompt`: **ari** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face007.png) ![image 1](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face001.png) ![image 2](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/waist005.png) ![image 3](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/knees005.png) ![image 4](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face011.png) ![image 5](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full012.png) ![image 6](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders002.png) ![image 7](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/waist002.png) ![image 8](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders007.png) ![image 9](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/knees002.png) ![image 
10](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/waist007.png) ![image 11](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full013.png) ![image 12](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face021.png) ![image 13](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full002.png) ![image 14](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full001.png) ![image 15](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face005.png) ![image 16](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full007.png) ![image 17](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face022.png) ![image 18](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/waist003.png) ![image 19](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face013.png) ![image 20](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders006.png) ![image 21](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face006.png) ![image 22](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/knees004.png) ![image 23](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face002.png) ![image 24](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders001.png) ![image 25](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full011.png) ![image 
26](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full014.png) ![image 27](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full006.png) ![image 28](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full010.png) ![image 29](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face018.png) ![image 30](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders009.png) ![image 31](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/waist001.png) ![image 32](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face020.png) ![image 33](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face003.png) ![image 34](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full007_117774334_2664767070430577_5662611452087096913_n.png) ![image 35](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/waist006.png) ![image 36](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face010.png) ![image 37](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face015.png) ![image 38](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face023_media_ERMRbdRWoAEl-N1.png) ![image 39](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face024_media_ETs8vMdVAAEiE5H.png) ![image 40](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders010.png) ![image 
41](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full008.png) ![image 42](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face019.png) ![image 43](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face008.png) ![image 44](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders003.png) ![image 45](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face004.png) ![image 46](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full005.png) ![image 47](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full009.png) ![image 48](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face014.png) ![image 49](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face009.png) ![image 50](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/knees003.png) ![image 51](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders008.png) ![image 52](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face017.png) ![image 53](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face016.png) ![image 54](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/knees001.png) ![image 55](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders005.png) ![image 56](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/shoulders004.png) ![image 
57](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full003.png) ![image 58](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/full004.png) ![image 59](https://huggingface.co/NickKolok/ari-20230205-2130-dlpr2-4800-steps_1/resolve/main/concept_images/face012.png)
6f20fb9512ee3ad6afccdcf3034a7e5a
apache-2.0
['bert', 'sst2', 'glue', 'torchdistill']
false
`bert-large-uncased` fine-tuned on the SST-2 dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb). The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/ce/bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
82753cc0b7d81c9934149c53ad64f604
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-analysis-en

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0792
- Accuracy: 0.9803
- F1: 0.9856
- Precision: 0.9875
- Recall: 0.9837
89e6a5301a27255a5d0f2d3e9b61cfa6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.426 | 1.0 | 1408 | 0.2718 | 0.8910 | 0.9201 | 0.9251 | 0.9151 |
| 0.3247 | 2.0 | 2816 | 0.1552 | 0.9540 | 0.9665 | 0.9656 | 0.9674 |
| 0.1582 | 3.0 | 4224 | 0.0792 | 0.9803 | 0.9856 | 0.9875 | 0.9837 |
41bf9785aacc4f1217f7cec496d29abe
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53-torgo-demo-m02-nolm

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0260
- Wer: 0.4968
4c111ab712515b12309b3506c3a8498c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4385 | 0.91 | 500 | 4.0160 | 1.0 |
| 3.0413 | 1.81 | 1000 | 3.2881 | 1.0 |
| 3.0011 | 2.72 | 1500 | 3.2401 | 1.0 |
| 2.8653 | 3.62 | 2000 | 3.0338 | 1.0 |
| 2.6386 | 4.53 | 2500 | 2.7806 | 1.0492 |
| 2.5376 | 5.43 | 3000 | 2.5253 | 1.3647 |
| 2.2722 | 6.34 | 3500 | 2.1425 | 1.3252 |
| 1.627 | 7.25 | 4000 | 1.4101 | 1.3658 |
| 1.2689 | 8.15 | 4500 | 0.9284 | 1.2448 |
| 1.0197 | 9.06 | 5000 | 0.6370 | 1.1254 |
| 0.8198 | 9.96 | 5500 | 0.4743 | 0.9947 |
| 0.7357 | 10.87 | 6000 | 0.3423 | 0.8820 |
| 0.5532 | 11.78 | 6500 | 0.2764 | 0.8203 |
| 0.5133 | 12.68 | 7000 | 0.2158 | 0.7580 |
| 0.4943 | 13.59 | 7500 | 0.1872 | 0.7195 |
| 0.3741 | 14.49 | 8000 | 0.1529 | 0.6762 |
| 0.3524 | 15.4 | 8500 | 0.1269 | 0.6527 |
| 0.3086 | 16.3 | 9000 | 0.1049 | 0.6254 |
| 0.3141 | 17.21 | 9500 | 0.0887 | 0.6012 |
| 0.2879 | 18.12 | 10000 | 0.0829 | 0.5863 |
| 0.3141 | 19.02 | 10500 | 0.0660 | 0.5688 |
| 0.2609 | 19.93 | 11000 | 0.0732 | 0.5591 |
| 0.2707 | 20.83 | 11500 | 0.0552 | 0.5434 |
| 0.2307 | 21.74 | 12000 | 0.0524 | 0.5406 |
| 0.1863 | 22.64 | 12500 | 0.0466 | 0.5281 |
| 0.2211 | 23.55 | 13000 | 0.0426 | 0.5226 |
| 0.1827 | 24.46 | 13500 | 0.0365 | 0.5129 |
| 0.1782 | 25.36 | 14000 | 0.0356 | 0.5099 |
| 0.1799 | 26.27 | 14500 | 0.0323 | 0.5049 |
| 0.1481 | 27.17 | 15000 | 0.0300 | 0.5034 |
| 0.1609 | 28.08 | 15500 | 0.0278 | 0.5030 |
| 0.1752 | 28.99 | 16000 | 0.0269 | 0.4978 |
| 0.1541 | 29.89 | 16500 | 0.0260 | 0.4968 |
d9d5095328e90678a0c3352dc5cf4261
cc-by-sa-4.0
['asteroid', 'audio', 'DPRNNTasNet', 'audio-to-audio']
false
Asteroid model `JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k`

Description:

This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `enh_single` task of the Libri1Mix dataset.

Training config:

```yml
data:
  n_src: 1
  sample_rate: 16000
  segment: 1
  task: enh_single
  train_dir: data/wav16k/min/train-360
  valid_dir: data/wav16k/min/dev
filterbank:
  kernel_size: 2
  n_filters: 64
  stride: 1
masknet:
  bidirectional: true
  bn_chan: 128
  chunk_size: 250
  dropout: 0
  hid_size: 128
  hop_size: 125
  in_chan: 64
  mask_act: sigmoid
  n_repeats: 6
  n_src: 1
  out_chan: 64
optim:
  lr: 0.001
  optimizer: adam
  weight_decay: 1.0e-05
training:
  batch_size: 2
  early_stop: true
  epochs: 200
  gradient_clipping: 5
  half_lr: true
  num_workers: 4
```

Results:

On the Libri1Mix min test set:

```yml
si_sdr: 14.7228101708889
si_sdr_imp: 11.2730288650292
sdr: 15.35661405197161
sdr_imp: 11.853951252758595
sir: Infinity
sir_imp: NaN
sar: 15.35661405197161
sar_imp: 11.853951252758595
stoi: 0.9300461826351578
stoi_imp: 0.13412635909461715
```

License notice:

This work "DPRNNTasNet_Libri1Mix_enhsignle_16k" is a derivative of the [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/), and of the WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (research only). "DPRNNTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino.
937501e25f9370fe87e05b396a547589
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.5657
- Matthews Correlation: 0.5470
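Matthews correlation is computed from the full confusion matrix, which makes it more robust than accuracy on CoLA's unbalanced labels. A minimal pure-Python version for reference (the `matthews_corrcoef` helper and the toy counts below are ours for illustration, not CoLA results):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy confusion-matrix counts, made up for illustration
print(round(matthews_corrcoef(tp=90, tn=40, fp=20, fn=10), 4))
```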
ddf0f6b101175d3df1761cc327874a43
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.521 | 1.0 | 535 | 0.5159 | 0.4152 |
| 0.3445 | 2.0 | 1070 | 0.4905 | 0.5022 |
| 0.2317 | 3.0 | 1605 | 0.5657 | 0.5470 |
| 0.1774 | 4.0 | 2140 | 0.7557 | 0.5282 |
| 0.1323 | 5.0 | 2675 | 0.8073 | 0.5455 |
41b75a9120fc32eb53b256f9ffccf0c5
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small hi- HYDDCSEZ

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.6357
- Wer: 18.7986
bd0892f52d0c87aac787addbca9cb60a
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0037 | 14.01 | 1000 | 0.4715 | 19.1786 | | 0.0001 | 28.01 | 2000 | 0.5589 | 18.5377 | | 0.0001 | 43.01 | 3000 | 0.6008 | 18.5903 | | 0.0 | 57.01 | 4000 | 0.6234 | 18.7735 | | 0.0 | 72.01 | 5000 | 0.6357 | 18.7986 |
cb3919ad013cbaf822929bfaf1d9086d
apache-2.0
['generated_from_trainer', 'HC3', 'chatGPT', 'assistant']
false
pythia-6.9b-deduped for general QA <a href="https://colab.research.google.com/gist/pszemraj/351f04fc2afb6346c763885f127284ef/pythia-6-9b-deduped-for-general-qa.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> This model is a fine-tuned version of [EleutherAI/pythia-6.9b-deduped](https://huggingface.co/EleutherAI/pythia-6.9b-deduped) on the pszemraj/HC3-textgen-qa dataset. It achieves the following results on the evaluation set: - Loss: 1.2372 - Accuracy: 0.6769 - perplexity: 3.446
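The reported perplexity follows directly from the evaluation loss (perplexity = exp(loss) for a causal language model), which is a quick sanity check on the numbers above:

```python
import math

eval_loss = 1.2372  # validation loss reported above
perplexity = math.exp(eval_loss)
print(round(perplexity, 3))
```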
05675c92d9cfbc730802d058454a96ea
apache-2.0
['generated_from_trainer', 'HC3', 'chatGPT', 'assistant']
false
Usage Install necessary packages for inference (_unless you have a big boi GPU_) ```bash pip install -U -q transformers bitsandbytes accelerate ``` Basic inference example: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("pszemraj/pythia-6.9b-HC3") model = AutoModelForCausalLM.from_pretrained( "pszemraj/pythia-6.9b-HC3", load_in_8bit=True, device_map="auto" )
208d676adb8d2da6c72dd6d1bbad5b23
apache-2.0
['generated_from_trainer', 'HC3', 'chatGPT', 'assistant']
false
shards are ~4GB each, there are eight total prompt = "I was wondering how much wood a woodchuck could chuck? <answer>" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") outputs = model.generate( **inputs, max_new_tokens=300 )
b6632983815fa20bd793243905cf15d6
apache-2.0
['generated_from_trainer', 'HC3', 'chatGPT', 'assistant']
false
default generation config (+ 300 tokens) result = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0] result = result.split("<end_answer>")[0].strip() import pprint as pp pp.pprint(result) ``` The default `GenerationConfig` uses contrastive search with `top_k=4` and `penalty_alpha=0.6`. For more information on inference and parameters to use, see [the transformers docs](https://huggingface.co/docs/transformers/generation_strategies).
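Contrastive search trades model confidence against a degeneration penalty: each of the `top_k` candidates is scored by its probability minus `penalty_alpha` times its maximum similarity to the tokens already generated. A toy sketch of that scoring rule, with made-up probabilities and hidden vectors (not the actual Transformers implementation):

```python
def cosine(u, v):
    # Cosine similarity between two plain-tuple vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def contrastive_pick(candidates, history, penalty_alpha=0.6):
    # candidates: list of (probability, hidden_vector) for the top-k tokens.
    # history: hidden vectors of previously generated tokens.
    best_idx, best_score = None, float("-inf")
    for idx, (prob, hidden) in enumerate(candidates):
        degeneration = max(cosine(hidden, h) for h in history)
        score = (1 - penalty_alpha) * prob - penalty_alpha * degeneration
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```

With `penalty_alpha=0` this reduces to greedy decoding over the top-k set; larger values push the model away from repeating itself.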
bb2fa7df9891e490f6893f17ae054d5d
apache-2.0
['generated_from_trainer', 'HC3', 'chatGPT', 'assistant']
false
Intended uses & limitations - **Intended use:** research/exploration into comparing RLHF tuning vs. "guided"/specific tuning on "quality" datasets/responses of _"what the human would want as an answer anyway"_ - This is **not** trained/fine-tuned with RLHF and therefore will not be as helpful/generalizable/safe as chatGPT (_outside of the fact that this model is ~30x smaller_)
7cc2b35a0e83d1e1589362b43201dfd8
apache-2.0
['generated_from_trainer', 'HC3', 'chatGPT', 'assistant']
false
Training and evaluation data ```yaml model-index: - name: pythia-6.9b-hc3-qa-assistant results: - task: name: Causal Language Modeling type: text-generation dataset: name: pszemraj/HC3-textgen-qa metrics: - name: Accuracy type: accuracy value: 0.6768941789814655 ```
991ef5e74ecb7b631db55df82ceb091d
apache-2.0
['generated_from_trainer', 'HC3', 'chatGPT', 'assistant']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2598 | 0.99 | 79 | 1.3291 | 0.6496 | | 0.7446 | 1.99 | 158 | 1.2372 | 0.6769 |
76cfca513f5f902202bc5734cffedc40
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0
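The cosine schedule with `warmup_ratio: 0.1` listed above can be sketched as follows (an approximation of the Transformers scheduler for illustration, not its exact implementation):

```python
import math

def lr_at(step, total_steps, peak_lr=5e-5, warmup_ratio=0.1):
    # Linear warmup to peak_lr over the first warmup_ratio of steps,
    # then cosine decay from peak_lr down to 0.
    warmup = int(total_steps * warmup_ratio)
    if step < warmup:
        return peak_lr * step / max(1, warmup)
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))
```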
17a15f5da81f4c33be2ed7a86ea41417
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3962 | 1.0 | 18050 | 3.3250 | | 3.2561 | 2.0 | 36100 | 3.2652 | | 3.1727 | 3.0 | 54150 | 3.2572 |
406cff7225149c526e4284efbfe36669
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-whole-word-word-ids-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6573
33404189ea823ba9553c75386797248f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7261 | 1.0 | 157 | 0.6532 | | 0.6766 | 2.0 | 314 | 0.6514 | | 0.6677 | 3.0 | 471 | 0.6555 |
3813ce00fe2e63d53319cfdf5b08aa18
apache-2.0
['italian', 'sequence-to-sequence', 'style-transfer', 'formality-style-transfer']
false
IT5 Base for Informal-to-formal Style Transfer 🧐 This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
18c729ec00c8338d1de3365d904a4914
apache-2.0
['italian', 'sequence-to-sequence', 'style-transfer', 'formality-style-transfer']
false
Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline i2f = pipeline("text2text-generation", model='it5/it5-base-informal-to-formal') i2f("nn capisco xke tt i ragazzi lo fanno") >>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-informal-to-formal") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-informal-to-formal") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
8c9540c44530ee3b128a64686825dba1
cc-by-sa-4.0
[]
false
This is a model for correcting spelling and grammar errors in Icelandic text. It is based on the pretrained ByT5 model (https://arxiv.org/abs/2105.13626) and finetuned on Icelandic error correction data along with synthetic error data. The model is trained using the HuggingFace and PyTorch libraries. The model is trained to correct a single sentence at a time, but may work on longer contexts. The model performs well on correcting a variety of common issues in Icelandic text. This README will be updated soon along with citation reference.
5a355d9098dadffc3bd1dab5227984cc
mit
['generated_from_trainer']
false
deberta-v3-base-finetuned-imdb This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 3.0016
624530b66d491b5bd5a3169fe04a8cb3
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.2666 | 1.0 | 6690 | 3.4001 | | 3.3574 | 2.0 | 13380 | 3.1174 | | 3.1715 | 3.0 | 20070 | 3.0034 |
4e0c3e7ffbfccef476e7366a3cefbc0c
apache-2.0
['generated_from_trainer']
false
bert-base-cased-wikitext2-test-mlm This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.8438
b99d81469ed6fa3f6fd626e178b5ff23
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - total_eval_batch_size: 5 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - training precision: Mixed Precision
954116cb72064b6dc4f11195da25a0f2
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3097 - Accuracy: 0.8633 - F1: 0.8647
dd45ff1ca97e7b6fe6a1163a6716fb43
mit
['vision', 'image-to-text', 'image-captioning', 'visual-question-answering']
false
BLIP-2, OPT-2.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
7e73350f20221d16ee6f720037a32b7b
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
663380; padding-top:0px;" span title="Protogen x5.3 Raw Output"></center> <center><h1>Protogen x5.3 (Photorealism) Official Release</h1></center> <center><p><em>Research Model by <a href="https://instagram.com/officialvictorespinoza">darkstorm2150</a></em></p></center> </div>
3a22bb12ffd1a7842e00bf3b9f046ce1
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
General info Protogen x5.3 - One Step Closer to Reality by [darkstorm2150](https://instagram.com/officialvictorespinoza) Protogen was warm-started with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and fine-tuning was continued from [darkstorm2150/Protogen_x3.4_Official_Release](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release). Robodiffusion has been removed and 10% Dreamlike-PhotoReal V.2 added; the result is better sampling at 768px to 1024px of humans and surroundings. The results are immediate!!! Also this bad boy comes with a license, so do please read it, thank you! * Model control Now it's recommended that you add nude, naked to your negative prompts; it's a horny model (well, 10%, but still... can't be too careful!). As for realism, you can use this template: modelshoot style, (extremely detailed 8k wallpaper), a medium shot photo of a (what you want here), Intricate, High Detail, dramatic. It should also be very "dreambooth-able", being able to generate high-fidelity faces with a small number of steps (see [dreambooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)).
ac58c839834caf615878e67cef232354
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
Granular Adaptive Learning Granular adaptive learning is a machine learning technique that focuses on adjusting the learning process at a fine-grained level, rather than making global adjustments to the model. This approach allows the model to adapt to specific patterns or features in the data, rather than making assumptions based on general trends. Granular adaptive learning can be achieved through techniques such as active learning, which allows the model to select the data it wants to learn from, or through the use of reinforcement learning, where the model receives feedback on its performance and adapts based on that feedback. It can also be achieved through techniques such as online learning, where the model adjusts itself as it receives more data. Granular adaptive learning is often used in situations where the data is highly diverse or non-stationary and where the model needs to adapt quickly to changing patterns. This is often the case in dynamic environments such as robotics, financial markets, and natural language processing.
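The online-learning variant mentioned above can be illustrated with a classic example: a perceptron that updates one example at a time, and only on the examples it gets wrong (a minimal sketch for illustration, not tied to any particular library):

```python
def perceptron_online(stream, lr=1.0, dim=2):
    # Online (granular) learning: process the stream one example at a time
    # and adapt the weights only when the current prediction is wrong.
    w, b = [0.0] * dim, 0.0
    for x, y in stream:  # y is +1 or -1
        activation = sum(wi * xi for wi, xi in zip(w, x)) + b
        pred = 1 if activation > 0 else -1
        if pred != y:  # adapt to this specific example
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return w, b
```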
c5bc22bff3ab3b16bb37536026cca5ba
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
CKPT [Download ProtoGen x5.3.ckpt (4.27GB)](https://huggingface.co/darkstorm2150/Protogen_v5.3_Official_Release/blob/main/ProtoGen_X5.3.ckpt) [Download ProtoGen x5.3-pruned-fp16.ckpt (1.89GB)](https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release/resolve/main/ProtoGen_X5.3-pruned-fp16.ckpt)
6ba4609ed3a796dadff680ca7c4a08ad
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
Safetensors [Download ProtoGen x5.3.safetensors (4.27GB)](https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release/resolve/main/ProtoGen_X5.3.safetensors) [Download ProtoGen x5.3-pruned-fp16.safetensors (1.89GB)](https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release/resolve/main/ProtoGen_X5.3-pruned-fp16.safetensors)
e111623c17fd7b7e384c91aa2543340a
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). ```python from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler import torch prompt = ( "modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, " "english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, medieval era, " "photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, " "trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski" ) model_id = "darkstorm2150/Protogen_v5.3_Official_Release" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") image = pipe(prompt, num_inference_steps=25).images[0] image.save("./result.jpg") ```
257e0a1711a10bc023629cdfd4c85566
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
663380; } </style> <table class="myTable"> <tr> <th>Models</th> <th>Protogen v2.2 (Anime)</th> <th>Protogen x3.4 (Photo)</th> <th>Protogen x5.3 (Photo)</th> <th>Protogen x5.8 (Sci-fi/Anime)</th> <th>Protogen x5.9 (Dragon)</th> <th>Protogen x7.4 (Eclipse)</th> <th>Protogen x8.0 (Nova)</th> <th>Protogen x8.6 (Infinity)</th> </tr> <tr> <td>seek_art_mega v1</td> <td>52.50%</td> <td>42.76%</td> <td>42.63%</td> <td></td> <td></td> <td></td> <td>25.21%</td> <td>14.83%</td> </tr> <tr> <td>modelshoot v1</td> <td>30.00%</td> <td>24.44%</td> <td>24.37%</td> <td>2.56%</td> <td>2.05%</td> <td>3.48%</td> <td>22.91%</td> <td>13.48%</td> </tr> <tr> <td>elldreth v1</td> <td>12.64%</td> <td>10.30%</td> <td>10.23%</td> <td></td> <td></td> <td></td> <td>6.06%</td> <td>3.57%</td> </tr> <tr> <td>photoreal v2</td> <td></td> <td></td> <td>10.00%</td> <td>48.64%</td> <td>38.91%</td> <td>66.33%</td> <td>20.49%</td> <td>12.06%</td> </tr> <tr> <td>analogdiffusion v1</td> <td></td> <td>4.75%</td> <td>4.50%</td> <td></td> <td></td> <td></td> <td>1.75%</td> <td>1.03%</td> </tr> <tr> <td>openjourney v2</td> <td></td> <td>4.51%</td> <td>4.28%</td> <td></td> <td></td> <td>4.75%</td> <td>2.26%</td> <td>1.33%</td> </tr> <tr> <td>hassan1.4</td> <td>2.63%</td> <td>2.14%</td> <td>2.13%</td> <td></td> <td></td> <td></td> <td>1.26%</td> <td>0.74%</td> </tr> <tr> <td>f222</td> <td>2.23%</td> <td>1.82%</td> <td>1.81%</td> <td></td> <td></td> <td></td> <td>1.07%</td> <td>0.63%</td> </tr> <tr> <td>hasdx</td> <td></td> <td></td> <td></td> <td>20.00%</td> <td>16.00%</td> <td>4.07%</td> <td>5.01%</td> <td>2.95%</td> </tr> <tr> <td>moistmix</td> <td></td> <td></td> <td></td> <td>16.00%</td> <td>12.80%</td> <td>3.86%</td> <td>4.08%</td> <td>2.40%</td> </tr> <tr> <td>roboDiffusion v1</td> <td></td> <td>4.29%</td> <td></td> <td>12.80%</td> <td>10.24%</td> <td>3.67%</td> <td>4.41%</td> <td>2.60%</td> </tr> <tr> <td>RPG v3</td> <td></td> <td>5.00%</td> <td></td> <td></td> <td>20.00%</td> <td>4.29%</td> <td>4.29%</td> 
<td>2.52%</td> </tr> <tr> <td>anything&everything</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.51%</td> <td>0.56%</td> <td>0.33%</td> </tr> <tr> <td>dreamlikediff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>5.0%</td> <td>0.63%</td> <td>0.37%</td> </tr> <tr> <td>sci-fidiff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.10%</td> </tr> <tr> <td>synthwavepunk v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.26%</td> </tr> <tr> <td>mashupv2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>11.51%</td> </tr> <tr> <td>dreamshaper 252</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.04%</td> </tr> <tr> <td>comicdiff v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.25%</td> </tr> <tr> <td>artEros</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>15.00%</td> </tr> </table>
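Reading the percentages in the table above as per-component merge weights, a linear weighted merge of checkpoints can be sketched as below (a hypothetical state-dict form for illustration; the actual merge procedure used for Protogen is not documented here):

```python
def merge_checkpoints(weighted_models):
    # weighted_models: list of (weight, state_dict) pairs,
    # where weights are the table percentages expressed as fractions.
    merged = {}
    for weight, state in weighted_models:
        for name, value in state.items():
            merged[name] = merged.get(name, 0.0) + weight * value
    return merged
```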
75580006fc16f5b85dac904b6f11f4f3
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'protogen']
false
License By downloading you agree to the terms of these licenses <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">CreativeML Open RAIL-M</a> <a href="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/LICENSE.md">Dreamlike License</a> <a href="https://huggingface.co/coreco/seek.art_MEGA/blob/main/LICENSE.txt">Seek Art Mega License</a>
aebb0bc6198605d4af48a6c02cf1e45a
mit
['generated_from_trainer']
false
roberta-base_mnli_bc This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.2125 - Accuracy: 0.9584
ce5a261605e1f2bff65df8619f0a790c
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2015 | 1.0 | 16363 | 0.1820 | 0.9470 | | 0.1463 | 2.0 | 32726 | 0.1909 | 0.9559 | | 0.0768 | 3.0 | 49089 | 0.2117 | 0.9585 |
6e7e2350d1b5557bf370d2ff85cbc762
apache-2.0
[]
false
Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 indicating how semantically similar two sentences are.
caac9c8c4cc66f0fe09a0185e300ceed
apache-2.0
[]
false
Usage and Performance Pre-trained models can be used like this: ``` from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class.
2301e7bcbbdf6b3e0add56b8023846fd
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-marc This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.1698 - Mae: 0.6090
0ecc67868930b5507bfc29faf90998d8
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1662 | 1.0 | 333 | 1.2084 | 0.7068 | | 1.0122 | 2.0 | 666 | 1.1698 | 0.6090 |
c67abb113776801446ac628f7cc1e6dc
mit
[]
false
Toho-pixel on Stable Diffusion This is the `<toho-pixel>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<toho-pixel> 0](https://huggingface.co/sd-concepts-library/toho-pixel/resolve/main/concept_images/4.jpeg) ![<toho-pixel> 1](https://huggingface.co/sd-concepts-library/toho-pixel/resolve/main/concept_images/0.jpeg) ![<toho-pixel> 2](https://huggingface.co/sd-concepts-library/toho-pixel/resolve/main/concept_images/2.jpeg) ![<toho-pixel> 3](https://huggingface.co/sd-concepts-library/toho-pixel/resolve/main/concept_images/3.jpeg) ![<toho-pixel> 4](https://huggingface.co/sd-concepts-library/toho-pixel/resolve/main/concept_images/1.jpeg)
da7dbc6a0bde6df8eedca85e8a241b45
apache-2.0
['generated_from_trainer']
false
littledataset This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000
ac7330a477ce730f70a833065f91fe1c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 169 | 0.0001 | | No log | 2.0 | 338 | 0.0000 | | 0.0036 | 3.0 | 507 | 0.0001 | | 0.0036 | 4.0 | 676 | 0.0000 | | 0.0036 | 5.0 | 845 | 0.0000 |
11027173939671ff1460c3e8cc19a82a
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_xls-r_gender_male-5_female-5_s896 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
fdd14278da48235b5997eb8029afd296
other
['vision', 'image-segmentation']
false
Mask2Former Mask2Former model trained on COCO instance segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation ](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
5cd9317e6870479c9c0fa6d9d23f8230
other
['vision', 'image-segmentation']
false
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on COCO instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-coco-instance")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
52b4c27137b231d842b89ec4385b5389
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Persian V2 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
55c54a16d99c48b067c4dc7533f5f3ba
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer !pip install hazm ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import hazm import re import string import IPython.display as ipd _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "
a1f67c7db445e3cc9d23b2758d535b61
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ",
391772096f3e1bbe4024988176f59686
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
"ها": " ها", "ئ": "ی", "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ", "z": " زد ", "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2") 
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device) dataset = load_dataset("common_voice", "fa", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: عجم زنده کردم بدین پارسی predicted: عجم زنده کردم بدین پارسی --- reference: لباس هایم کی آماده خواهند شد predicted: لباس خایم کی آماده خواهند شد --- reference: با مهان همنشین شدم predicted: با مهان همنشین شدم --- reference: یکی از بهترین فیلم هایی بود که در این سال ها دیدم predicted: یکی از بهترین فیلمهایی بود که در این سالها دیدم --- reference: اون خیلی بد ماساژ میده predicted: اون خیلی بد ماساژ میده --- reference: هنوزم بزرگترین دستاورد دولت روحانی اینه که رییسی رییسجمهور نشد predicted: هنوزم بزرگترین دستآوردار دولت روانیاینه که ریسی ریسیومرو نشد --- reference: واسه بدنسازی آماده ای predicted: واسه بعدنسافی آماده ای --- reference: خدای من شماها سالمین predicted: خدای من شما ها سالمین --- reference: بهشون ثابت میشه که دروغ نگفتم predicted: بهشون ثابت میشه که دروغ مگفتم --- reference: آیا ممکن است یک پتو برای من بیاورید predicted: سف کمیتخ لظا --- reference: نزدیک جلو predicted: رزیک جلو --- reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد --- reference: وقتی نیاز است که یک چهره دوستانه بیابند predicted: وقتی نیاز است یک چهره دوستانه بیابند --- reference: ممکنه رادیواکتیوی چیزی باشه predicted: ممکنه به آدیوتیوی چیزی باشه --- reference: دهنتون رو ببندید predicted: دهن جن رو ببندید --- reference: پاشیم بریم قند و شکر و روغنمون رو 
بگیریم تا تموم نشده predicted: پاشین بریم قند و شکر و روغنمون رو بگیریم تا تموم نشده --- reference: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از ناپیکس بکنیم predicted: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از نایپکس بکنیم --- reference: لطفا کپی امضا شده قرارداد را بازگردانید predicted: لطفا کپی امضال شده قرار داد را باز گردانید --- reference: خیلی هم چیز مهمی نیست predicted: خیلی هم چیز مهمی نیست --- reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد --- ```
4fbeb63457186c93a67b212df9a94e83
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Persian (Farsi) test data of Common Voice. ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import hazm import re import string _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "
b8fa9293110f0346c4ff298dd22531e1
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
import re

import librosa
import numpy as np
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# `chars_to_ignore` and `_normalizer` are defined earlier in the original
# model card; the start of this excerpt (including the leading entries of
# the mapping below) is truncated.
chars_to_mapping = {
    "ها": " ها", "ئ": "ی",
    "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ",
    "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ",
    "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ",
    "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ",
    "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ",
    "z": " زد ",
    "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}


def multiple_replace(text, chars_to_mapping):
    pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
    return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))


def remove_special_characters(text, chars_to_ignore_regex):
    text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
    return text


def normalizer(batch, chars_to_ignore, chars_to_mapping):
    chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
    text = batch["sentence"].lower().strip()

    text = _normalizer.normalize(text)
    text = multiple_replace(text, chars_to_mapping)
    text = remove_special_characters(text, chars_to_ignore_regex)
    text = re.sub(" +", " ", text)
    text = text.strip() + " "

    batch["sentence"] = text
    return batch


def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)

    batch["speech"] = speech_array
    return batch


def predict(batch):
    features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    pred_ids = torch.argmax(logits, dim=-1)

    batch["predicted"] = processor.batch_decode(pred_ids)[0]
    return batch


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device)

dataset = load_dataset("common_voice", "fa", split="test")
dataset = dataset.map(
    normalizer,
    fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
    remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)

wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```

**Test Result:**

- WER: 31.92%
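The `multiple_replace` helper in the normalization code above compiles every mapping key into a single regex alternation, so all replacements happen in one left-to-right pass over the text. A toy demonstration on a Latin-script string (the mapping here is made up purely for illustration):

```python
import re

def multiple_replace(text, chars_to_mapping):
    # Build one alternation pattern from all keys; each match is swapped
    # via the lambda, so no replacement can clobber an earlier one.
    pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
    return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))

# Hypothetical mapping, just to show the mechanics:
mapping = {"a": " ey ", "&": " and "}
print(multiple_replace("a&b", mapping))  # -> " ey  and b"
```

Because keys are passed through `re.escape`, mapping entries that contain regex metacharacters (like `&` or `?`) are matched literally.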
3a6b1265945d8a87f5ec3a278a45bb69
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training

The Common Voice `train` and `validation` datasets were used for training. You can see the training stats [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_persian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Persian--Vmlldzo1NjY1NjU?accessToken=pspukt0liicopnwe93wo1ipetqk0gzkuv8669g00wc6hcesk1fh0rfkbd0h46unk). The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb).
cbe50885d0f69606e03100791e610c55
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-Breton

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Breton Common Voice dataset](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
7dfe979c8458b8ac51eaf6e0d2fdd53f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import re

import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "br", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-breton")

chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
```
57cda3928c87d3f887d00ded9c2715a8
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    batch["sentence"] = batch["sentence"].replace("ʼ", "'")
    batch["sentence"] = batch["sentence"].replace("’", "'")
    batch["sentence"] = batch["sentence"].replace('‘', "'")
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```

The above code leads to the following prediction for the first two samples:

```
Prediction: ["ne' ler ket don a-benn us netra pa vez zer nic'hed evel-si", 'an eil hag egile']
Reference: ['"n\'haller ket dont a-benn eus netra pa vezer nec\'het evel-se." ', 'an eil hag egile. ']
```
cb83d9a36a53d15f4aaa1bb54526cd36
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Breton test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model.to("cuda")

chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
```
2be03aede43125bf49fcc3465bb211c7
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    batch["sentence"] = batch["sentence"].replace("ʼ", "'")
    batch["sentence"] = batch["sentence"].replace("’", "'")
    batch["sentence"] = batch["sentence"].replace('‘', "'")
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
954d8d0a728302b921a359f8a3b7d528
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# Run the model over the test set batch by batch and collect predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 41.71 %
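WER, the metric reported above, is the word-level edit distance (substitutions + insertions + deletions) between prediction and reference, divided by the number of reference words. A minimal, dependency-free sketch of the computation (not the `datasets` implementation used in the card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("an eil hag egile", "an eil hag egile"))  # -> 0.0
print(wer("an eil hag egile", "an eil hat egile"))  # -> 0.25
```

Because insertions count as errors, WER can exceed 100% when the hypothesis is much longer than the reference.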
5521f7ed5ff64ba94099cf6cd47f0cfa
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:

- Loss: 0.9575
- Mae: 0.5488
2edf6cb8d9b297906ce9229865babefd
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1253        | 1.0   | 235  | 0.9960          | 0.5366 |
| 0.9708        | 2.0   | 470  | 0.9575          | 0.5488 |
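The Mae column is the mean absolute error between predicted and gold star ratings, a natural choice when the labels are ordinal (an off-by-one star counts less than an off-by-four prediction). A minimal sketch of the computation, with made-up example values:

```python
def mean_absolute_error(y_true, y_pred):
    # Average absolute difference between gold and predicted labels.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical 1-5 star ratings, for illustration only:
gold = [1, 3, 4, 5]
pred = [1, 2, 4, 3]
print(mean_absolute_error(gold, pred))  # -> 0.75
```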
da5f89d79af1d3e74095a7371cd43d1b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/gtr-t5-xl

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space. The model was specifically trained for the task of semantic search.

This model was converted from the TensorFlow model [gtr-xl-1](https://tfhub.dev/google/gtr/gtr-xl/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.

The model uses only the encoder from a T5-3B model. The weights are stored in FP16.
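Semantic search with such a model means embedding queries and documents into the same vector space and ranking documents by cosine similarity to the query. The mechanics can be sketched with made-up 4-dimensional vectors standing in for `model.encode(...)` output (a real setup would use the 768-dimensional embeddings from this model):

```python
import math

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Made-up embeddings, for illustration only:
query_emb = [0.9, 0.1, 0.0, 0.1]
doc_embs = {
    "doc_a": [0.8, 0.2, 0.1, 0.0],
    "doc_b": [0.0, 0.1, 0.9, 0.2],
}
ranked = sorted(doc_embs, key=lambda d: cosine_similarity(query_emb, doc_embs[d]), reverse=True)
print(ranked[0])  # -> doc_a
```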
c9fcdc7da8d325a18c4594734c71e32a
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/gtr-t5-xl')
embeddings = model.encode(sentences)
print(embeddings)
```

The model requires sentence-transformers version 2.2.0 or newer.
e0c57405ab18761664eaf84bd58efb88
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-xl)
3044bca41a5b11f03a8b77b9c555189a
mit
['roberta-base', 'roberta-base-epoch_71']
false
RoBERTa, Intermediate Checkpoint - Epoch 71

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to enable the study of the training dynamics of such models, and other possible use cases.

These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_71.
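With 84 checkpoints spanning the randomly initialized weights (epoch 0) through epoch 83, a training-dynamics study would typically iterate over all of them by suffix. A sketch of generating the checkpoint names (the `roberta-base-epoch_N` pattern mirrors this card's tags, but the exact hub repository ids are an assumption; prepend the actual namespace before loading with `from_pretrained`):

```python
# Enumerate the 84 intermediate-checkpoint names: epoch 0 is the random
# initialization, epochs 1..83 follow each training epoch.
checkpoints = [f"roberta-base-epoch_{n}" for n in range(84)]

print(len(checkpoints))  # -> 84
print(checkpoints[71])   # -> roberta-base-epoch_71
```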
4d99004b59e2444684c522fc0be95122