license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol-version2 This model is a fine-tuned version of [anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol](https://huggingface.co/anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0069 - Bleu: 28.1311 - Gen Len: 18.7412
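As a quick-start illustration (not part of the original card), a minimal inference sketch; the repo id below is assumed from the card's title and should be adjusted to the actual checkpoint location:
```python
from transformers import pipeline

# assumed repo id, following the card's naming; adjust to the actual checkpoint
model_id = "anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol-version2"
nl_to_fol = pipeline("text2text-generation", model=model_id)

# translate a natural-language statement into first-order logic
print(nl_to_fol("every man is mortal", max_length=64)[0]["generated_text"])
```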
b206f24b20025b376ff1c2fe0ce2a0d0
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP
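For readers reproducing this setup, the listed values map roughly onto the following `Seq2SeqTrainingArguments` (a sketch of an equivalent configuration, not the card's original training script):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="out",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon listed above are the optimizer defaults
    num_train_epochs=100,
    fp16=True,                   # "Native AMP" mixed-precision training
)
```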
0398187751e49a1eced2d297b57714d1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 22 | 0.0692 | 27.4908 | 18.7353 | | No log | 2.0 | 44 | 0.0631 | 27.554 | 18.7294 | | No log | 3.0 | 66 | 0.0533 | 27.6007 | 18.7294 | | No log | 4.0 | 88 | 0.0484 | 27.6446 | 18.7294 | | No log | 5.0 | 110 | 0.0439 | 27.6401 | 18.7294 | | No log | 6.0 | 132 | 0.0404 | 27.5117 | 18.7294 | | No log | 7.0 | 154 | 0.0389 | 27.6358 | 18.7294 | | No log | 8.0 | 176 | 0.0362 | 27.6358 | 18.7294 | | No log | 9.0 | 198 | 0.0339 | 27.5731 | 18.7294 | | No log | 10.0 | 220 | 0.0319 | 27.2326 | 18.6882 | | No log | 11.0 | 242 | 0.0298 | 27.2326 | 18.6882 | | No log | 12.0 | 264 | 0.0293 | 27.5498 | 18.7294 | | No log | 13.0 | 286 | 0.0276 | 27.6566 | 18.7294 | | No log | 14.0 | 308 | 0.0268 | 27.6566 | 18.7294 | | No log | 15.0 | 330 | 0.0251 | 27.6107 | 18.7294 | | No log | 16.0 | 352 | 0.0239 | 27.7096 | 18.7294 | | No log | 17.0 | 374 | 0.0228 | 27.6716 | 18.7294 | | No log | 18.0 | 396 | 0.0231 | 27.8083 | 18.7294 | | No log | 19.0 | 418 | 0.0218 | 27.4838 | 18.6882 | | No log | 20.0 | 440 | 0.0212 | 27.4712 | 18.6882 | | No log | 21.0 | 462 | 0.0197 | 27.8787 | 18.7353 | | No log | 22.0 | 484 | 0.0207 | 27.6899 | 18.6941 | | 0.1026 | 23.0 | 506 | 0.0186 | 27.6376 | 18.6941 | | 0.1026 | 24.0 | 528 | 0.0202 | 27.6672 | 18.6941 | | 0.1026 | 25.0 | 550 | 0.0174 | 28.0172 | 18.7412 | | 0.1026 | 26.0 | 572 | 0.0170 | 27.8714 | 18.7412 | | 0.1026 | 27.0 | 594 | 0.0164 | 27.7423 | 18.7412 | | 0.1026 | 28.0 | 616 | 0.0164 | 27.8278 | 18.7412 | | 0.1026 | 29.0 | 638 | 0.0163 | 27.8278 | 18.7412 | | 0.1026 | 30.0 | 660 | 0.0158 | 27.907 | 18.7412 | | 0.1026 | 31.0 | 682 | 0.0165 | 27.7752 | 18.7412 | | 0.1026 | 32.0 | 704 | 0.0147 | 27.8284 | 18.7412 | | 0.1026 | 33.0 | 726 | 0.0150 | 27.8862 | 18.7412 | | 0.1026 | 34.0 | 748 | 0.0148 | 27.8402 | 18.7412 | | 0.1026 | 35.0 | 770 | 0.0141 | 27.8353 | 18.7412 | | 0.1026 | 36.0 | 792 | 0.0142 | 27.858 | 18.7412 | | 0.1026 | 37.0 | 814 | 0.0143 | 27.858 | 18.7412 | | 0.1026 | 38.0 | 836 | 0.0158 | 27.8353 | 18.7412 | | 0.1026 | 39.0 | 858 | 0.0125 | 27.8913 | 18.7412 | | 0.1026 | 40.0 | 880 | 0.0121 | 27.9167 | 18.7412 | | 0.1026 | 41.0 | 902 | 0.0122 | 27.9569 | 18.7412 | | 0.1026 | 42.0 | 924 | 0.0126 | 27.9569 | 18.7412 | | 0.1026 | 43.0 | 946 | 0.0120 | 28.001 | 18.7412 | | 0.1026 | 44.0 | 968 | 0.0125 | 28.0079 | 18.7412 | | 0.1026 | 45.0 | 990 | 0.0115 | 28.0079 | 18.7412 | | 0.072 | 46.0 | 1012 | 0.0113 | 27.9851 | 18.7412 | | 0.072 | 47.0 | 1034 | 0.0113 | 28.0184 | 18.7412 | | 0.072 | 48.0 | 1056 | 0.0110 | 28.0184 | 18.7412 | | 0.072 | 49.0 | 1078 | 0.0108 | 28.0184 | 18.7412 | | 0.072 | 50.0 | 1100 | 0.0107 | 28.0184 | 18.7412 | | 0.072 | 51.0 | 1122 | 0.0101 | 28.0184 | 18.7412 | | 0.072 | 52.0 | 1144 | 0.0102 | 28.0184 | 18.7412 | | 0.072 | 53.0 | 1166 | 0.0099 | 28.0184 | 18.7412 | | 0.072 | 54.0 | 1188 | 0.0100 | 28.0184 | 18.7412 | | 0.072 | 55.0 | 1210 | 0.0102 | 28.0184 | 18.7412 | | 0.072 | 56.0 | 1232 | 0.0095 | 28.0184 | 18.7412 | | 0.072 | 57.0 | 1254 | 0.0098 | 28.0184 | 18.7412 | | 0.072 | 58.0 | 1276 | 0.0092 | 28.0184 | 18.7412 | | 0.072 | 59.0 | 1298 | 0.0090 | 28.0184 | 18.7412 | | 0.072 | 60.0 | 1320 | 0.0095 | 28.0184 | 18.7412 | | 0.072 | 61.0 | 1342 | 0.0092 | 27.9674 | 18.7412 | | 0.072 | 62.0 | 1364 | 0.0091 | 27.9419 | 18.7412 | | 0.072 | 63.0 | 1386 | 0.0100 | 27.9419 | 18.7412 | | 0.072 | 64.0 | 1408 | 0.0084 | 28.0752 | 18.7412 | | 0.072 | 65.0 | 1430 | 
0.0086 | 28.0192 | 18.7412 | | 0.072 | 66.0 | 1452 | 0.0084 | 28.0192 | 18.7412 | | 0.072 | 67.0 | 1474 | 0.0085 | 28.0192 | 18.7412 | | 0.072 | 68.0 | 1496 | 0.0087 | 28.0192 | 18.7412 | | 0.0575 | 69.0 | 1518 | 0.0084 | 28.0192 | 18.7412 | | 0.0575 | 70.0 | 1540 | 0.0080 | 28.0192 | 18.7412 | | 0.0575 | 71.0 | 1562 | 0.0082 | 28.0192 | 18.7412 | | 0.0575 | 72.0 | 1584 | 0.0080 | 28.0192 | 18.7412 | | 0.0575 | 73.0 | 1606 | 0.0075 | 28.0192 | 18.7412 | | 0.0575 | 74.0 | 1628 | 0.0079 | 28.0192 | 18.7412 | | 0.0575 | 75.0 | 1650 | 0.0078 | 28.0752 | 18.7412 | | 0.0575 | 76.0 | 1672 | 0.0076 | 28.1311 | 18.7412 | | 0.0575 | 77.0 | 1694 | 0.0073 | 28.1311 | 18.7412 | | 0.0575 | 78.0 | 1716 | 0.0074 | 28.1311 | 18.7412 | | 0.0575 | 79.0 | 1738 | 0.0072 | 28.1311 | 18.7412 | | 0.0575 | 80.0 | 1760 | 0.0078 | 28.1311 | 18.7412 | | 0.0575 | 81.0 | 1782 | 0.0077 | 28.1311 | 18.7412 | | 0.0575 | 82.0 | 1804 | 0.0071 | 28.1311 | 18.7412 | | 0.0575 | 83.0 | 1826 | 0.0072 | 28.1311 | 18.7412 | | 0.0575 | 84.0 | 1848 | 0.0075 | 28.1311 | 18.7412 | | 0.0575 | 85.0 | 1870 | 0.0071 | 28.1311 | 18.7412 | | 0.0575 | 86.0 | 1892 | 0.0070 | 28.1311 | 18.7412 | | 0.0575 | 87.0 | 1914 | 0.0069 | 28.1311 | 18.7412 | | 0.0575 | 88.0 | 1936 | 0.0069 | 28.1311 | 18.7412 | | 0.0575 | 89.0 | 1958 | 0.0069 | 28.1311 | 18.7412 | | 0.0575 | 90.0 | 1980 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 91.0 | 2002 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 92.0 | 2024 | 0.0070 | 28.1311 | 18.7412 | | 0.0509 | 93.0 | 2046 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 94.0 | 2068 | 0.0070 | 28.1311 | 18.7412 | | 0.0509 | 95.0 | 2090 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 96.0 | 2112 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 97.0 | 2134 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 98.0 | 2156 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 99.0 | 2178 | 0.0069 | 28.1311 | 18.7412 | | 0.0509 | 100.0 | 2200 | 0.0069 | 28.1311 | 18.7412 |
641b65acb84ed903726d051da3698ac1
apache-2.0
['automatic-speech-recognition', 'ru']
false
exp_w2v2t_ru_wav2vec2_s847 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
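A minimal transcription sketch with the HuggingSound tool mentioned above; the repo owner and audio path are assumptions for illustration:
```python
from huggingsound import SpeechRecognitionModel

# assumed repo id for this checkpoint (the exp_w2v2t_* models are published by jonatasgrosman)
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ru_wav2vec2_s847")
audio_paths = ["/path/to/sample.wav"]  # placeholder; input must be 16kHz audio

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```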
d5373b554e329a5893e8ff9575a7785a
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30
9d8b7db70c2aab673634ddde6978d2d6
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0862 - Accuracy: 0.9828
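A minimal sketch for trying keyword spotting with the `transformers` audio-classification pipeline; the repo id and audio path are placeholders:
```python
from transformers import pipeline

model_id = "wav2vec2-base-finetuned-ks"  # placeholder: use this checkpoint's full repo id
classifier = pipeline("audio-classification", model=model_id)

preds = classifier("/path/to/clip.wav")  # placeholder path; 16kHz mono audio
print(preds[0])  # top keyword label and score
```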
03aa3d161d8cdf85694525b9453fd33f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.668 | 1.0 | 399 | 0.5462 | 0.9588 | | 0.2728 | 2.0 | 798 | 0.1750 | 0.9766 | | 0.1846 | 3.0 | 1197 | 0.1166 | 0.9785 | | 0.1642 | 4.0 | 1596 | 0.0930 | 0.9813 | | 0.1522 | 5.0 | 1995 | 0.0862 | 0.9828 |
8556616a02d93b03bd95025960448734
mit
['ukrainian', 'english']
false
This is a variant of the [google/mt5-base](https://huggingface.co/google/mt5-base) model, in which only Ukrainian and 9% of the English words remain. This model has 252M parameters, 43% of the original size. Special thanks for the practical example and inspiration: [cointegrated](https://huggingface.co/cointegrated)
d4692647f9a44d262716ff2a273dc6a1
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-finetuned-sqac This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the sqac dataset. It achieves the following results on the evaluation set: - Loss: 1.2111
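A minimal usage sketch with the question-answering pipeline, assuming a Spanish QA setup (SQAC is a Spanish dataset); the repo id, question, and context are placeholders:
```python
from transformers import pipeline

model_id = "roberta-base-bne-finetuned-sqac"  # placeholder: use this checkpoint's full repo id
qa = pipeline("question-answering", model=model_id)

result = qa(question="¿Dónde vivo?", context="Me llamo Sara y vivo en Londres.")
print(result["answer"], result["score"])
```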
10423484e708f172a2191a1e5203aeb2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9971 | 1.0 | 1196 | 0.8646 | | 0.482 | 2.0 | 2392 | 0.9334 | | 0.1652 | 3.0 | 3588 | 1.2111 |
f54cc793fd99110f4a85ddcf8451e564
other
['vision', 'image-segmentation']
false
Mask2Former Mask2Former model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.
4c0cc0259a21a65a9c3ba255086d1456
other
['vision', 'image-segmentation']
false
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-panoptic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
```
d2347b4dd0343b9cb610611c220529b1
other
['vision', 'image-segmentation']
false
```python
# post-process the raw outputs into a panoptic segmentation map
# (this step is restored here; `processor`, `image`, and `outputs` come from the block above)
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]

# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
3357e3bf34df0ac93cfd85a67868a0a7
apache-2.0
['generated_from_keras_callback']
false
kasrahabib/distilbert-base-uncased-trained-on-open-and-closed-source This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0039 - Validation Loss: 0.2082 - Train Precision: 0.9374 - Train Recall: 0.9714 - Train F1: 0.9541 - Epoch: 9
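A hedged usage sketch: the card reports Keras training, so the TensorFlow weights are loaded explicitly via `framework="tf"`; the example sentence is illustrative only:
```python
from transformers import pipeline

# framework="tf" loads the Keras/TensorFlow weights reported by the card
clf = pipeline(
    "text-classification",
    model="kasrahabib/distilbert-base-uncased-trained-on-open-and-closed-source",
    framework="tf",
)
print(clf("The system shall encrypt all stored user data."))
```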
a5b156c6bf33747040c50859cd8b77cf
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:-----:| | 0.2472 | 0.1604 | 0.8967 | 0.9771 | 0.9352 | 0 | | 0.0924 | 0.1266 | 0.9330 | 0.9561 | 0.9444 | 1 | | 0.0439 | 0.1281 | 0.9543 | 0.9561 | 0.9552 | 2 | | 0.0258 | 0.2058 | 0.8995 | 0.9905 | 0.9428 | 3 | | 0.0136 | 0.1767 | 0.9418 | 0.9580 | 0.9499 | 4 | | 0.0134 | 0.2637 | 0.8927 | 0.9847 | 0.9365 | 5 | | 0.0074 | 0.2197 | 0.9144 | 0.9790 | 0.9456 | 6 | | 0.0049 | 0.2140 | 0.9355 | 0.9695 | 0.9522 | 7 | | 0.0058 | 0.2117 | 0.9360 | 0.9771 | 0.9561 | 8 | | 0.0039 | 0.2082 | 0.9374 | 0.9714 | 0.9541 | 9 |
0389965313454c25700738da6a7da0ef
mit
['generated_from_trainer']
false
ukrainian-qa This model is a fine-tuned version of [ukr-models/xlm-roberta-base-uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) on the [UA-SQuAD](https://github.com/fido-ai/ua-datasets/tree/main/ua_datasets/src/question_answering) dataset. Link to training scripts - [https://github.com/robinhad/ukrainian-qa](https://github.com/robinhad/ukrainian-qa) It achieves the following results on the evaluation set: - Loss: 1.4778
4fc8061befc5a51c25ceec72918d1037
mit
['generated_from_trainer']
false
How to use ```python from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering model_name = "robinhad/ukrainian-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) qa_model = pipeline("question-answering", model=model.to("cpu"), tokenizer=tokenizer) question = "Де ти живеш?" context = "Мене звати Сара і я живу у Лондоні" qa_model(question = question, context = context) ```
a02cc23f068c02f135db41a27f304cd9
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6
b35a6f4fde6d7771c945a672b37575d5
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4526 | 1.0 | 650 | 1.3631 | | 1.3317 | 2.0 | 1300 | 1.2229 | | 1.0693 | 3.0 | 1950 | 1.2184 | | 0.6851 | 4.0 | 2600 | 1.3171 | | 0.5594 | 5.0 | 3250 | 1.3893 | | 0.4954 | 6.0 | 3900 | 1.4778 |
02df506c62bbff325ab6cd77c6aca680
cc0-1.0
[]
false
I created this embedding for SD 2.x 768x768 models; it turns everything into your favorite Christmas classic Animagic stop-motion style, as popularized by Rudolph the Red-Nosed Reindeer and Santa Claus Is Comin' to Town, among several others produced by the same studio! The Unreleased Christmas Stop Motion Mario Kart Movie! ![messages_0 (9).png](https://s3.amazonaws.com/moonup/production/uploads/1671856643956-632177f5b8fc9e78c2ff68d9.png) Prompt: mario kart toy, (rnknbss16 :1.3), highly textured, figurine Negative prompt: cgi, 3d render, videogame Steps: 34, Sampler: Euler a, CFG scale: 7, Seed: 2737353293, Size: 768x768, Model: SD 2.0_Standard_v2-1_768-ema-pruned, Denoising strength: 0.79, Mask blur: 3, aesthetic_score: 4.9 The Upcoming Stop Action Pikachu Movie! ![05329-459369051-pikachu in the style of rnknbss16.png](https://s3.amazonaws.com/moonup/production/uploads/1671856736417-632177f5b8fc9e78c2ff68d9.png) Prompt: pikachu in the style of rnknbss16 Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 459369051, Size: 768x768, Model: SD 2.0_Standard_v2-1_768-ema-pruned, aesthetic_score: 5.2 ![05330-4076512951-pikachu in the style of rnknbss16-100.png](https://s3.amazonaws.com/moonup/production/uploads/1671856739194-632177f5b8fc9e78c2ff68d9.png) Prompt: pikachu in the style of rnknbss16-100 Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 4076512951, Size: 768x768, Model: SD 2.0_Standard_v2-1_768-ema-pruned, aesthetic_score: 5.2 Some 2022 Holiday Ads for the Latest Celebs! Donald Trump ![05356-1397465632-a close up of (donald trump_1.) in the style of (rnknbss16 _1.0).png](https://s3.amazonaws.com/moonup/production/uploads/1671856809340-632177f5b8fc9e78c2ff68d9.png) Prompt: a close up of (donald trump:1.) in the style of (rnknbss16 :1.0) Negative prompt: blurry, text, words Steps: 29, Sampler: Euler a, CFG scale: 7, Seed: 1397465632, Size: 768x768, Model: SD 2.0_Standard_v2-1_768-ema-pruned, aesthetic_score: 5.4 Morgan Freeman ![05372-1868403973-morgan freeman in the style of (rnknbss16 _1.0).png](https://s3.amazonaws.com/moonup/production/uploads/1671856831368-632177f5b8fc9e78c2ff68d9.png) Prompt: morgan freeman in the style of (rnknbss16 :1.0) Steps: 29, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 1868403973, Size: 768x768, Model: SD 2.0_Standard_v2-1_768-ema-pruned, aesthetic_score: 5.7 Barack Obama ![05801-3661737292-barack obama in the style of rnknbss16v2-775.png](https://s3.amazonaws.com/moonup/production/uploads/1671857079900-632177f5b8fc9e78c2ff68d9.png) Prompt: barack obama in the style of rnknbss16v2-775 Steps: 47, Sampler: Euler a, CFG scale: 7, Seed: 3661737292, Size: 768x768, Model: SD 2.0_Standard_v2-1_768-ema-pruned And Lastly, The Remake of a Lifetime, Hogwarts Castle From the New Harry Potter Series ![02340-2909664084-Hogwarts school of witchcraft and wizardry in the style of (rnknbss16 _1.0), highly detailed, intricate.png](https://s3.amazonaws.com/moonup/production/uploads/1671857189186-632177f5b8fc9e78c2ff68d9.png) Prompt: Hogwarts school of witchcraft and wizardry in the style of (rnknbss16 :1.0), highly detailed, intricate Negative prompt: blurry Steps: 60, Sampler: Euler a, CFG scale: 7, Seed: 2909664084, Size: 768x768, Model: SD 2.0_Standard_512-depth-ema, Denoising strength: 0.66, Mask blur: 3, aesthetic_score: 6.2 Notes on the use of these: I didn't really get a chance to fine-tune them as well as I would have liked, but I wanted to get them out there for people to enjoy, so I've included the best of what I have.
All of these were trained with 90-ish upscaled screen grabs from high-quality DVDs of just the 2 movies mentioned above. I did use some of the letters, postcards, and packages from the opening credits scenes in hopes of being able to reproduce those or something similar (I haven't tried), so you will probably want to include the usual "words, text, letters, logos, watermarks..." in your negative prompts to try to weed those out. I also included some of the limited 2d artwork found in those movies, again in hopes of being able to generate that style as well, but that hasn't seemed to affect much, except possibly when generating things that have a lot of 2d variations (i.e. comic book characters), so specifying 3d, or that you want a doll, model, or toy of the thing, might help a lot with prompting. Otherwise, just saying "thing in the style of rnknbss16" should do the trick! The Models: They're all 16 vectors. rnknbss16: pretty good, but it was trained too far and/or too fast and tends to make hybrid elf/Santa creatures out of everything, and it is hard to get it to do anything else, although if your concept is strong or present enough in the model it can do pretty well (i.e. Cinderella's castle, which is on EVERYTHING Disney). Models rnknbss16-100 through rnknbss16-150 do much better; however, these do less well with people and faces, and are better suited for things, creatures, animals, scenery, places, etc. rnknbss16v2: pretty sure this one is overtrained by a good deal, but you might have success with it. rnknbss16v2-750 and rnknbss16v2-775 are the sweet spot for people and characters with this v2 model; it also tends to get clearer outputs without looking as "fuzzy" or "blurry", and almost a similar quality to the VintageHelper embedding. Which brings me to mixing this with things: Using VintageHelper tends to enhance the "old school" vibes and film grain look as well as thematic props and other elements that may appear in the scene, and the PhotoHelper embedding tends to create more "clay" models out of things; with the Hogwarts castle it made a wide-angle clay diorama model of sorts, which was cool and unexpected (see below). ![05345-3448665914-Hogwarts castle in the style of (rnknbss16 _1.2), highly detailed, very textured, intricate, shallow depth of field, photohelper.png](https://s3.amazonaws.com/moonup/production/uploads/1671860648087-632177f5b8fc9e78c2ff68d9.png) Prompt: Hogwarts castle in the style of (rnknbss16 :1.2), highly detailed, very textured, intricate, shallow depth of field, photohelper Negative prompt: blurry, text, words Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 3448665914, Size: 768x768, Model: SD 2.0_Standard_v2-1_768-ema-pruned, aesthetic_score: 5.6
7eb5450e26b6874f4f9a595ab43555b1
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_xlsr-53_s948 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ce711b1bf5c474af8f32d86aed550bbd
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-home-8-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3789 - Accuracy: 0.3356
6bf856204f351e1e6e51b6f51fa630e6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 | | 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 | | 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 | | 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 | | 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
08b3b7185fe33546be8eea579e1f91f0
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Using dbwhitemane.ckpt ![image1 1](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/tmp45mql4vt.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/01019-2983591336-dbwhitemane%20standing%20at%20a%20rooftop%20looking%20over%20the%20city%2C%20night%2C%20cowboy%20shot%2C%20foggy%2C%20city%20lights%2Cdramatic%20lighting%2C%208k%2C%204k%2C%20(high.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/01068-2365801682-sbwhitemane%20taking%20a%20bath%2C%208k%2C%204k%2C%20(highres_1.1)%2C%20best%20quality%2C%20(masterpiece_1.3)%2C%20(red%20eyes_1.2)%2C%20blush%2C%20embarrassed.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/01095-1501953711-sbwhitemane%20leaning%20forward%2C%20princess%2C%201girl%2C%20solo%2Celf%20in%20forest%20%2C%20leather%20armor%2C%20large%20eyes%2C%20(ice%20green%20eyes_1.1)%2C%20lush%2C%20%20blond.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/01099-3504900055-leaning%20forward%2C%20princess%2C%201girl%2C%20solo%2C%20sbwhitemane%20%20in%20forest%20%2C%20leather%20armor%2C%20large%20eyes%2C%20lush.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/01103-1390776440-leaning%20forward%2C%20princess%2C%201girl%2C%20solo%2C%20sbwhitemane%20%20in%20forest%20%2C%20leather%20armor%2C%20large%20eyes%2C%20lush.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/05248-2547952708-whitemanedb%20in%20a%20forestns_l89cu.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/05253-2547952705-whitemanedb_in_a_forest28dbdxct.png) ![image2 2](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/05260-2547952708-whitemanedb_in_a_forest4ud2iio1.png) Clip skip comparison ![clip 1](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/resolve/main/concept_images/xy_grid-0005-3724517679.png) I have uploaded 3 models for now (more incoming for Whitemane): -[whitemanedb_step_2500.ckpt](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/blob/main/whitemanedb_step_2500.ckpt) -[whitemanedb_step_3500.ckpt](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/blob/main/whitemanedb_step_3500.ckpt) They are trained with 21 images and the trigger is "whitemanedb". These are my first attempts, and I didn't get the final file because I ran out of space on Drive :\ but the model seems to work just fine. The second model is [dbwhitemane.ckpt](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/blob/main/dbwhitemane.ckpt) This one has a total of 39 images used for training, which you can find [here](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/tree/main/dataset) **The model is based on AnythingV3 FP16 [38c1ebe3], so I would recommend using a VAE from NAI, Anything, or WaifuDiffusion** **Setting clip skip to 2 will also help because it is based on the NAI model**
7b052c6f09a237af9a67dd30ba8100fd
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Prompt examples This one is for the comparison on top > whitemanedb , 8k, 4k, (highres:1.1), best quality, (masterpiece:1.3), (red eyes:1.2), blush, embarrassed > Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy, > Steps: 45, Sampler: Euler a, CFG scale: 7, Seed: 772493513, Size: 512x512, Model hash: 313ad056, Eta: 0.07, Clip skip: 2 > whitemanedb taking a bath, 8k, 4k, (highres:1.1), best quality, (masterpiece:1.3), (red eyes:1.2), nsfw, nude, blush, nipples, > Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy, > Steps: 45, Sampler: Euler a, CFG scale: 7, Seed: 3450621385, Size: 512x512, Model hash: 313ad056, Eta: 0.07, Clip skip: 2 > whitemanedb in a forest > Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face > Steps: 35, Sampler: Euler a, CFG scale: 10.0, Seed: 2547952708, Size: 512x512, Model hash: 313ad056, Eta: 0.07, Clip skip: 2 > lying in the ground , princess, 1girl, solo, sbwhitemane in forest , leather armor, red eyes, lush > Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy, > Steps: 58, Sampler: Euler a, CFG scale: 7, Seed: 1390776440, Size: 512x512, Model hash: 8b1a4378, Clip skip: 2 > sbwhitemane leaning forward, princess, 1girl, solo,elf in forest , leather armor, large eyes, (ice green eyes:1.1), lush, blonde hair, realistic photo > Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly
drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy, > Steps: 45, Sampler: Euler a, CFG scale: 7, Seed: 1501953711, Size: 512x512, Model hash: 8b1a4378, Clip skip: 2 Enjoy! Any recommendations or help are welcome; this is my first model and a lot of things can probably be improved!
a742c9646621d071737de42459508bbe
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-base-uk-hu Neural machine translation model for translating from Ukrainian (uk) to Hungarian (hu). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ```
359b8a451666adefd39586950b7ed114
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info * Release: 2022-03-08 * source language(s): ukr * target language(s): hun * model: transformer-align * data: opusTCv20210807+pft ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+pft_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opusTCv20210807+pft_transformer-align_2022-03-08.zip) * more information released models: [OPUS-MT ukr-hun README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md)
12696b3b2f88ecb93fcdb7a36916c71a
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "Я тобі винний 1000 доларів.",
    "Я п'ю воду."
]

model_name = "pytorch-models/opus-mt-tc-base-uk-hu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
180b4280d23db9e9fd6ca387f2bf7496
cc-by-4.0
['translation', 'opus-mt-tc']
false
```
Vizet iszom.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-uk-hu")
print(pipe("Я тобі винний 1000 доларів."))
```
8ef632d9c5842765f7d7a50fe0c90074
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks * test set translations: [opusTCv20210807+pft_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opusTCv20210807+pft_transformer-align_2022-03-08.test.txt) * test set scores: [opusTCv20210807+pft_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opusTCv20210807+pft_transformer-align_2022-03-08.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU |
7564af4406d30bfd96b07a96f929964f
apache-2.0
['deep-narrow']
false
T5-Efficient-LARGE-EL12 (Deep-Narrow version) T5-Efficient-LARGE-EL12 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally [be] more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
d6961c1fddf90536966e427bb2fd1f7a
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-large-el12** - is of model type **Large** with the following variations: - **el** is **12** It has **586.69** million parameters and thus requires *ca.* **2346.78 MB** of memory in full precision (*fp32*) or **1173.39 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
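The memory figures follow from the parameter count at 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16; a quick arithmetic check (matching the card's numbers up to rounding):
```python
params_m = 586.69               # million parameters, as reported above
print(params_m * 4, "MB fp32")  # 4 bytes per parameter -> ~2346.8 MB
print(params_m * 2, "MB fp16")  # 2 bytes per parameter -> ~1173.4 MB
```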
f0c36b70b9b527ece8d38e194f265cf7
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_vp-nl_s158 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
45fc69f94dabfd99d34b0c486739dd97
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6247
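A minimal extractive-QA sketch showing how such a checkpoint is typically queried (the repo id, question, and context are placeholders, not from the card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "distilbert-base-uncased-finetuned-squad-1"  # placeholder: full repo id of this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What dataset was used?"
context = "The model was fine-tuned on the SQuAD dataset."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# take the most likely start/end token positions and decode the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```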
fb6ced7a09609d6a38460349d8841847
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9872 | 1.0 | 554 | 1.7933 | | 1.6189 | 2.0 | 1108 | 1.6159 | | 1.3125 | 3.0 | 1662 | 1.6247 |
9764db42db998ff9cfd79b2fcc218d67
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
test1 Dreambooth model trained by ukeeba with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
0ce2e73ad5ae6e6ff7feeba5eba8d7a9
apache-2.0
[]
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08 - lr_scheduler: cosine - lr_warmup_steps: 500 - ema_inv_gamma: 1.0 - ema_power: 0.75 - ema_max_decay: 0.9999 - mixed_precision: fp16
72712858905ab1d5e505197d0f721647
apache-2.0
['automatic-speech-recognition', 'pl']
false
exp_w2v2t_pl_xls-r_s287 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
772077fc21012a5c86beb99834b3a753
apache-2.0
['thai', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-char-upos](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-upos).
538b5260cb6255be677b0890ef0949f0
apache-2.0
['thai', 'token-classification', 'pos', 'dependency-parsing']
false
```python
# (tail of the UDgoeswith class's __call__ method; the beginning of the class is in the preceding chunk)
    u="# text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/roberta-base-thai-char-ud-goeswith")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds:
```python
from transformers import pipeline

nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-thai-char-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
23fc6e7505fda07de63c6de5f40d2f7f
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) Wav2Vec2 XLS-R 300M Cantonese (zh-HK) is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `zh-HK` subset of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH. All necessary scripts used for training can be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tensorboard) logged via Tensorboard.
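A short usage sketch with the `transformers` ASR pipeline, using the repo id visible in the links above; the audio path is a placeholder and input should be sampled at 16 kHz:
```python
from transformers import pipeline

# repo id taken from the "Files and versions" link above
asr = pipeline("automatic-speech-recognition", model="w11wo/wav2vec2-xls-r-300m-zh-HK-v2")
print(asr("/path/to/cantonese_16khz.wav")["text"])  # placeholder audio path
```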
7a4407881ba995806f90ca6cc6fbed9a
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
| Model | #params | Arch. | Training/Validation data (text) | | ------------------------------ | ------- | ----- | ------------------------------- | | `wav2vec2-xls-r-300m-zh-HK-v2` | 300M | XLS-R | `Common Voice zh-HK` Dataset |
1fade30c966eaf2e03c87c151c8acf91
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Evaluation Results The model achieves the following results on evaluation: | Dataset | Loss | CER | | -------------------------------- | ------ | ------ | | `Common Voice` | 0.8089 | 31.73% | | `Common Voice 7` | N/A | 23.11% | | `Common Voice 8` | N/A | 23.02% | | `Robust Speech Event - Dev Data` | N/A | 56.60% |
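CER above is the character error rate. As an aside, one way to reproduce such a score offline is the `jiwer` package (an illustrative sketch, not the card's evaluation script):
```python
import jiwer

# illustrative reference/hypothesis pair; not from the card's evaluation data
reference = "我哋去食飯"
hypothesis = "我哋去食反"
print(jiwer.cer(reference, hypothesis))  # 1 substitution over 5 characters -> 0.2
```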
53efcc67a471aaa180d1f5a312e567b7
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 0.0001 - `train_batch_size`: 8 - `eval_batch_size`: 8 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 32 - `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_steps`: 2000 - `num_epochs`: 100.0 - `mixed_precision_training`: Native AMP
d12b84bc51eb69ad9ded22eb52ff93ae
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | | :-----------: | :---: | :---: | :-------------: | :----: | :----: | | 69.8341 | 1.34 | 500 | 80.0722 | 1.0 | 1.0 | | 6.6418 | 2.68 | 1000 | 6.6346 | 1.0 | 1.0 | | 6.2419 | 4.02 | 1500 | 6.2909 | 1.0 | 1.0 | | 6.0813 | 5.36 | 2000 | 6.1150 | 1.0 | 1.0 | | 5.9677 | 6.7 | 2500 | 6.0301 | 1.1386 | 1.0028 | | 5.9296 | 8.04 | 3000 | 5.8975 | 1.2113 | 1.0058 | | 5.6434 | 9.38 | 3500 | 5.5404 | 2.1624 | 1.0171 | | 5.1974 | 10.72 | 4000 | 4.5440 | 2.1702 | 0.9366 | | 4.3601 | 12.06 | 4500 | 3.3839 | 2.2464 | 0.8998 | | 3.9321 | 13.4 | 5000 | 2.8785 | 2.3097 | 0.8400 | | 3.6462 | 14.74 | 5500 | 2.5108 | 1.9623 | 0.6663 | | 3.5156 | 16.09 | 6000 | 2.2790 | 1.6479 | 0.5706 | | 3.32 | 17.43 | 6500 | 2.1450 | 1.8337 | 0.6244 | | 3.1918 | 18.77 | 7000 | 1.8536 | 1.9394 | 0.6017 | | 3.1139 | 20.11 | 7500 | 1.7205 | 1.9112 | 0.5638 | | 2.8995 | 21.45 | 8000 | 1.5478 | 1.0624 | 0.3250 | | 2.7572 | 22.79 | 8500 | 1.4068 | 1.1412 | 0.3367 | | 2.6881 | 24.13 | 9000 | 1.3312 | 2.0100 | 0.5683 | | 2.5993 | 25.47 | 9500 | 1.2553 | 2.0039 | 0.6450 | | 2.5304 | 26.81 | 10000 | 1.2422 | 2.0394 | 0.5789 | | 2.4352 | 28.15 | 10500 | 1.1582 | 1.9970 | 0.5507 | | 2.3795 | 29.49 | 11000 | 1.1160 | 1.8255 | 0.4844 | | 2.3287 | 30.83 | 11500 | 1.0775 | 1.4123 | 0.3780 | | 2.2622 | 32.17 | 12000 | 1.0704 | 1.7445 | 0.4894 | | 2.2225 | 33.51 | 12500 | 1.0272 | 1.7237 | 0.5058 | | 2.1843 | 34.85 | 13000 | 0.9756 | 1.8042 | 0.5028 | | 2.1 | 36.19 | 13500 | 0.9527 | 1.8909 | 0.6055 | | 2.0741 | 37.53 | 14000 | 0.9418 | 1.9026 | 0.5880 | | 2.0179 | 38.87 | 14500 | 0.9363 | 1.7977 | 0.5246 | | 2.0615 | 40.21 | 15000 | 0.9635 | 1.8112 | 0.5599 | | 1.9448 | 41.55 | 15500 | 0.9249 | 1.7250 | 0.4914 | | 1.8966 | 42.89 | 16000 | 0.9023 | 1.5829 | 0.4319 | | 1.8662 | 44.24 | 16500 | 0.9002 | 1.4833 | 0.4230 | | 1.8136 | 45.58 | 17000 | 0.9076 | 1.1828 | 0.2987 | | 1.7908 | 46.92 | 17500 | 0.8774 | 1.5773 | 0.4258 | | 1.7354 | 48.26 | 18000 | 0.8727 | 1.5037 | 0.4024 | | 1.6739 | 49.6 | 18500 | 0.8636 | 1.1239 | 0.2789 | | 1.6457 | 50.94 | 19000 | 0.8516 | 1.2269 | 0.3104 | | 1.5847 | 52.28 | 19500 | 0.8399 | 1.3309 | 0.3360 | | 1.5971 | 53.62 | 20000 | 0.8441 | 1.3153 | 0.3335 | | 1.602 | 54.96 | 20500 | 0.8590 | 1.2932 | 0.3433 | | 1.5063 | 56.3 | 21000 | 0.8334 | 1.1312 | 0.2875 | | 1.4631 | 57.64 | 21500 | 0.8474 | 1.1698 | 0.2999 | | 1.4997 | 58.98 | 22000 | 0.8638 | 1.4279 | 0.3854 | | 1.4301 | 60.32 | 22500 | 0.8550 | 1.2737 | 0.3300 | | 1.3798 | 61.66 | 23000 | 0.8266 | 1.1802 | 0.2934 | | 1.3454 | 63.0 | 23500 | 0.8235 | 1.3816 | 0.3711 | | 1.3678 | 64.34 | 24000 | 0.8550 | 1.6427 | 0.5035 | | 1.3761 | 65.68 | 24500 | 0.8510 | 1.6709 | 0.4907 | | 1.2668 | 67.02 | 25000 | 0.8515 | 1.5842 | 0.4505 | | 1.2835 | 68.36 | 25500 | 0.8283 | 1.5353 | 0.4221 | | 1.2961 | 69.7 | 26000 | 0.8339 | 1.5743 | 0.4369 | | 1.2656 | 71.05 | 26500 | 0.8331 | 1.5331 | 0.4217 | | 1.2556 | 72.39 | 27000 | 0.8242 | 1.4708 | 0.4109 | | 1.2043 | 73.73 | 27500 | 0.8245 | 1.4469 | 0.4031 | | 1.2722 | 75.07 | 28000 | 0.8202 | 1.4924 | 0.4096 | | 1.202 | 76.41 | 28500 | 0.8290 | 1.3807 | 0.3719 | | 1.1679 | 77.75 | 29000 | 0.8195 | 1.4097 | 0.3749 | | 1.1967 | 79.09 | 29500 | 0.8059 | 1.2074 | 0.3077 | | 1.1241 | 80.43 | 30000 | 0.8137 | 1.2451 | 0.3270 | | 1.1414 | 81.77 | 30500 | 0.8117 | 1.2031 | 0.3121 | | 1.132 | 83.11 | 31000 | 0.8234 | 1.4266 | 0.3901 | | 1.0982 | 84.45 | 31500 | 0.8064 | 1.3712 | 0.3607 | | 1.0797 | 85.79 | 32000 | 0.8167 | 1.3356 | 0.3562 | | 1.0119 | 
87.13 | 32500 | 0.8215 | 1.2754 | 0.3268 | | 1.0216 | 88.47 | 33000 | 0.8163 | 1.2512 | 0.3184 | | 1.0375 | 89.81 | 33500 | 0.8137 | 1.2685 | 0.3290 | | 0.9794 | 91.15 | 34000 | 0.8220 | 1.2724 | 0.3255 | | 1.0207 | 92.49 | 34500 | 0.8165 | 1.2906 | 0.3361 | | 1.0169 | 93.83 | 35000 | 0.8153 | 1.2819 | 0.3305 | | 1.0127 | 95.17 | 35500 | 0.8187 | 1.2832 | 0.3252 | | 0.9978 | 96.51 | 36000 | 0.8111 | 1.2612 | 0.3210 | | 0.9923 | 97.85 | 36500 | 0.8076 | 1.2278 | 0.3122 | | 1.0451 | 99.2 | 37000 | 0.8086 | 1.2451 | 0.3156 |
39374d4d8bee50c0bdb75cf2c6b8cb46
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2149 - Accuracy: 0.9265 - F1: 0.9266
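A minimal usage sketch with the text-classification pipeline; the repo id and example sentence are placeholders, and the label shown is just one of the emotion dataset's classes:
```python
from transformers import pipeline

model_id = "distilbert-base-uncased-finetuned-emotion"  # placeholder: full repo id of this checkpoint
classifier = pipeline("text-classification", model=model_id)
print(classifier("I'm thrilled the training finally converged!"))  # e.g. [{'label': 'joy', 'score': ...}]
```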
09b92bfe5e5c1b2aacf3aa07360e6f2d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8307 | 1.0 | 250 | 0.3103 | 0.9065 | 0.9038 | | 0.2461 | 2.0 | 500 | 0.2149 | 0.9265 | 0.9266 |
0c887f65f6cc946754047932c0ac3cf0
cc
['text classification']
false
Model information: This model is the [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model that has been fine-tuned using radiology report texts from the MIMIC-III database. The task performed was text classification, in order to benchmark this model against a selection of other BERT variants for the classification of MIMIC-III radiology report texts into two classes. Labels of [0,1] were assigned: radiology reports in MIMIC-III linked to an ICD9 diagnosis code for lung cancer = 1, and a random sample of reports not linked to any type of cancer diagnosis code at all = 0.
8fde37ade488235a8316cbcd86333174
cc
['text classification']
false
Limitations: Note that the dataset and model may not be fully representative or suitable for all needs. It is recommended that the paper for the dataset and the base model card be reviewed before use: - [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf) - [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
dff997f5c33d90613f3cef0f0684ec1f
cc
['text classification']
false
How to use: Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-m3-lc")
model = AutoModel.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-m3-lc")
```
8329ccceb47d3823327dfa760980bfbf
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
fd19560ecd20ca133ecb130212867ee2
apache-2.0
[]
false
AIShell-1 and WenetSpeech test set results with modified-beam-search streaming decoding using epoch-14.pt | decode_chunk_len | AIShell-1 | TEST_NET | TEST_MEETING | |------------------|-----------|----------|--------------| | 32 | 3.19 | 9.58 | 9.51 | | 64 | 3.04 | 8.97 | 8.83 |
233934d1f7c65446b75906b1b85f8fab
apache-2.0
[]
false
Training and decoding commands
```
nohup ./pruned_transducer_stateless7_streaming/train.py \
  --world-size 8 --num-epochs 30 --start-epoch 1 \
  --feedforward-dims "1024,1024,1536,1536,1024" \
  --exp-dir pruned_transducer_stateless7_streaming/exp \
  --max-duration 360 > pruned_transducer_stateless7_streaming/exp/nohup.zipformer &

nohup ./pruned_transducer_stateless7_streaming/decode.py \
  --epoch 6 --avg 1 \
  --exp-dir ./pruned_transducer_stateless7_streaming/exp \
  --max-duration 600 --decode-chunk-len 32 \
  --decoding-method modified_beam_search \
  --beam-size 4 > nohup.zipformer.deocode &
```
6a8dbf1574edba65f17e9559f35deb52
apache-2.0
[]
false
Tips The k2-fsa version and parameters are: ``` {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.2', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'a74f59dba1863cd9386ba4d8815850421260eee7', 'k2-git-date': 'Fri Dec 2 08:32:22 2022', 'lhotse-version': '1.5.0.dev+git.8ce38fc.dirty', 'torch-version': '1.11.0+cu113', 'torch-cuda-available': True, 'torch-cuda-version': '11.3', 'python-version': '3.7', 'icefall-git-branch': 'master', 'icefall-git-sha1': '11b08db-dirty', 'icefall-git-date': 'Thu Jan 12 10:19:21 2023', 'icefall-path': '/opt/conda/lib/python3.7/site-packages', 'k2-path': '/opt/conda/lib/python3.7/site-packages/k2/__init__.py', 'lhotse-path': '/opt/conda/lib/python3.7/site-packages/lhotse/__init__.py', 'hostname': 'xxx', 'IP address': 'x.x.x.x'}, 'world_size': 8, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp'), 'lang_dir': 'data/lang_char_bpe', 'base_lr': 0.01, 'lr_batches': 5000, 'lr_epochs': 3.5, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': False, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,1536,1536,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 360, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'training_subset': '12k_hour', 'blank_id': 0, 'vocab_size': 6254} ```
839fbf2f442d294c8bf434bcd2ae1cde
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3185 - Accuracy: 0.8667 - F1: 0.8675
a7df9895fb129e65011045127514de5b
mit
[]
false
guttestreker on Stable Diffusion This is the `<guttestreker>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<guttestreker> 0](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/9.jpeg) ![<guttestreker> 1](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/10.jpeg) ![<guttestreker> 2](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/3.jpeg) ![<guttestreker> 3](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/1.jpeg) ![<guttestreker> 4](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/4.jpeg) ![<guttestreker> 5](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/8.jpeg) ![<guttestreker> 6](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/11.jpeg) ![<guttestreker> 7](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/6.jpeg) ![<guttestreker> 8](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/5.jpeg) ![<guttestreker> 9](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/0.jpeg) ![<guttestreker> 10](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/2.jpeg) ![<guttestreker> 11](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/7.jpeg) ![<guttestreker> 12](https://huggingface.co/sd-concepts-library/guttestreker/resolve/main/concept_images/12.jpeg)
7fa34036dc69067d108271043827d9ca
apache-2.0
['generated_from_keras_callback']
false
opus-mt-ar-en-finetunedTanzil-v5-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8101 - Validation Loss: 0.9477 - Train Bleu: 9.3241 - Train Gen Len: 88.73 - Train Rouge1: 56.4906 - Train Rouge2: 34.2668 - Train Rougel: 53.2279 - Train Rougelsum: 53.7836 - Epoch: 2
92380d030caa58449cb05658e8a1a243
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Bleu | Train Gen Len | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:------------:|:------------:|:------------:|:---------------:|:-----:| | 0.8735 | 0.9809 | 11.0863 | 78.68 | 56.4557 | 33.3673 | 53.4828 | 54.1197 | 0 | | 0.8408 | 0.9647 | 9.8543 | 88.955 | 57.3797 | 34.3539 | 53.8783 | 54.3714 | 1 | | 0.8101 | 0.9477 | 9.3241 | 88.73 | 56.4906 | 34.2668 | 53.2279 | 53.7836 | 2 |
bdf80f9cda83fffe58eac80070e76f70
apache-2.0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
false
wav2vec2-large-xls-r-1b-Indonesian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9550 - Wer: 0.4551 - Cer: 0.1643
31cbb507f002f1d61dc98a7a069b80c6
apache-2.0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 3.663 | 7.69 | 200 | 0.7898 | 0.6039 | 0.1848 | | 0.7424 | 15.38 | 400 | 1.0215 | 0.5615 | 0.1924 | | 0.4494 | 23.08 | 600 | 1.0901 | 0.5249 | 0.1932 | | 0.5075 | 30.77 | 800 | 1.1013 | 0.5079 | 0.1935 | | 0.4671 | 38.46 | 1000 | 1.1034 | 0.4916 | 0.1827 | | 0.1928 | 46.15 | 1200 | 0.9550 | 0.4551 | 0.1643 |
aada2564ff4d0ddbe01bdc47a26fe8dd
apache-2.0
['image-classification']
false
MindSpore Image Classification models with MNIST on the 🤗Hub!

This repository contains the model from [this notebook on image classification with MNIST dataset using LeNet architecture](https://gitee.com/mindspore/mindspore/blob/r1.2/model_zoo/official/cv/lenet/README.md).
7037282e890cec7e4c12f09da54d531d
apache-2.0
['image-classification']
false
LeNet

Description

LeNet-5, proposed by Yann LeCun and others in 1998 in the research paper Gradient-Based Learning Applied to Document Recognition, is one of the earliest convolutional neural network architectures. It was used for recognizing handwritten and machine-printed characters, and its popularity owes much to its simple and straightforward design: a multi-layer convolutional neural network for image classification.

![LeNet Architecture](./lenetarchitecture.jpeg)

[source](https://www.analyticsvidhya.com/blog/2021/03/the-architecture-of-lenet-5/)
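For orientation, here is a minimal sketch of the LeNet-5 layer structure in PyTorch (the linked notebook implements it in MindSpore; layer sizes assume 32x32 inputs, i.e. MNIST images padded from 28x28):

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5()
print(model(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 10])
```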
32bb0e9802001168568acbfa8a5852ca
mit
['generated_from_trainer']
false
Bio_ClinicalBERT_fold_6_ternary_v1

This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.7302
- F1: 0.8128
a276289f627ecfe15a4002d1ca4e69c1
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 292 | 0.5359 | 0.7833 |
| 0.5585 | 2.0 | 584 | 0.5376 | 0.8026 |
| 0.5585 | 3.0 | 876 | 0.6117 | 0.8038 |
| 0.2314 | 4.0 | 1168 | 0.8036 | 0.7974 |
| 0.2314 | 5.0 | 1460 | 0.9467 | 0.8179 |
| 0.1093 | 6.0 | 1752 | 1.2957 | 0.7923 |
| 0.0384 | 7.0 | 2044 | 1.3423 | 0.8026 |
| 0.0384 | 8.0 | 2336 | 1.2644 | 0.8218 |
| 0.021 | 9.0 | 2628 | 1.3093 | 0.8231 |
| 0.021 | 10.0 | 2920 | 1.3282 | 0.8179 |
| 0.0129 | 11.0 | 3212 | 1.3853 | 0.8295 |
| 0.0078 | 12.0 | 3504 | 1.4705 | 0.8154 |
| 0.0078 | 13.0 | 3796 | 1.5063 | 0.8167 |
| 0.0064 | 14.0 | 4088 | 1.5293 | 0.8179 |
| 0.0064 | 15.0 | 4380 | 1.6303 | 0.8128 |
| 0.0085 | 16.0 | 4672 | 1.5945 | 0.8115 |
| 0.0085 | 17.0 | 4964 | 1.6899 | 0.8103 |
| 0.0056 | 18.0 | 5256 | 1.6952 | 0.8064 |
| 0.0055 | 19.0 | 5548 | 1.7550 | 0.7936 |
| 0.0055 | 20.0 | 5840 | 1.6779 | 0.8141 |
| 0.003 | 21.0 | 6132 | 1.7064 | 0.8128 |
| 0.003 | 22.0 | 6424 | 1.7192 | 0.8154 |
| 0.0013 | 23.0 | 6716 | 1.8188 | 0.7974 |
| 0.0014 | 24.0 | 7008 | 1.7273 | 0.8128 |
| 0.0014 | 25.0 | 7300 | 1.7302 | 0.8128 |
d239ddd6d0cdbc98d3f8a96bb6b81329
apache-2.0
['generated_from_trainer']
false
t5-small-summarization

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set:
- Loss: 1.6477
18ba5cd6bd5cd8e65feeae1520761a69
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
7869c41c946b519187cc59fcf9fb6968
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/massive_general-roberta-large-v1-5-95

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
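As a quick usage sketch (not part of the original card), the model can be loaded with the `setfit` library; the example inputs below are illustrative:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("fathyshalab/massive_general-roberta-large-v1-5-95")

# returns one predicted class per input text
preds = model(["what time is it in london", "tell me a joke"])
print(preds)
```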
d733dd56f77faa597e0e51ebf3378e77
openrail++
['stable-diffusion', 'stable-diffusion-diffusers', 'text-guided-to-image-inpainting', 'endpoints-template']
false
Fork of [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)

> Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
> For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).

For more information about the model, license and limitations, check the original model card at [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting).

---

This repository implements a custom `handler` task for `text-guided-to-image-inpainting` for 🤗 Inference Endpoints. The code for the customized pipeline is in [handler.py](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/blob/main/handler.py). A [notebook](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/blob/main/create_handler.ipynb) showing how to create the `handler.py` is also included.

![thumbnail](Stable%20Diffusion%20Inference%20endpoints%20-%20inpainting.png)
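For orientation, here is a minimal sketch of the `EndpointHandler` interface that custom Inference Endpoints handlers implement (an illustrative assumption, not the actual contents of this repo's `handler.py`):

```python
import base64
from io import BytesIO

from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

class EndpointHandler:
    def __init__(self, path=""):
        # load the inpainting pipeline from the repository weights
        self.pipe = StableDiffusionInpaintPipeline.from_pretrained(path).to("cuda")

    def __call__(self, data):
        # payload fields follow the expected request payload shown below
        prompt = data.pop("inputs", "")
        image = Image.open(BytesIO(base64.b64decode(data["image"])))
        mask = Image.open(BytesIO(base64.b64decode(data["mask_image"])))
        return self.pipe(prompt=prompt, image=image, mask_image=mask).images[0]
```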
89fee600ceca7acf5958deaf2824ed67
openrail++
['stable-diffusion', 'stable-diffusion-diffusers', 'text-guided-to-image-inpainting', 'endpoints-template']
false
Expected request payload

```json
{
  "inputs": "A prompt used for image generation",
  "image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
  "mask_image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
}
```

Below is an example of how to run a request using Python and `requests`.
e012e4502025defbe30e495926c5af84
openrail++
['stable-diffusion', 'stable-diffusion-diffusers', 'text-guided-to-image-inpainting', 'endpoints-template']
false
Helper image utils

```python
import base64
from io import BytesIO

import requests as r
from PIL import Image

def encode_image(image_path):
    # read an image file and return it base64-encoded for the JSON payload
    with open(image_path, "rb") as i:
        b64 = base64.b64encode(i.read())
    return b64.decode("utf-8")

def predict(prompt, image, mask_image):
    image = encode_image(image)
    mask_image = encode_image(mask_image)
    # (continued in the next snippet: build the payload and POST it)
```
2bce5c8671ee63656ba125f8db3318ab
openrail++
['stable-diffusion', 'stable-diffusion-diffusers', 'text-guided-to-image-inpainting', 'endpoints-template']
false
```python
    # (continues `predict` from the previous snippet)
    # the payload keys follow the expected request payload above; the Accept
    # header is an assumption, but it is important to get an image back
    payload = {"inputs": prompt, "image": image, "mask_image": mask_image}
    headers = {"Accept": "image/png"}
    response = r.post(ENDPOINT_URL, headers=headers, json=payload)  # ENDPOINT_URL: your endpoint URL
    img = Image.open(BytesIO(response.content))
    return img

prediction = predict(
    prompt="Face of a bengal cat, high resolution, sitting on a park bench",
    image="dog.png",
    mask_image="mask_dog.png"
)
```

Expected output:

![sample](result.png)
ea357773f2d5297e86c1f0a3823cec73
bsd-3-clause
['microsoft/MiniLM-L12-H384-uncased']
false
test-minilm-finetuned-emotion fine-tuned model (uncased)

This model is a fine-tuned extension of the [Microsoft MiniLM distilled model](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased). It is the result of working through [Simple Training with the 🤗 Transformers Trainer](https://www.youtube.com/watch?v=u--UVvH-LIQ&t=198s) and Chapter 2, Text Classification, of [Natural Language Processing with Transformers](https://transformersbook.com/), Revised Color Edition, May 2022.

This model is uncased: it does not make a difference between english and English.
f133a3e4195bdda30d357a99bcf903e2
mit
[]
false
Anime Background style (v2) on Stable Diffusion

This is the `<anime-background-style-v2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<anime-background-style-v2> 0](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/5.jpeg)
![<anime-background-style-v2> 1](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/13.jpeg)
![<anime-background-style-v2> 2](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/9.jpeg)
![<anime-background-style-v2> 3](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/10.jpeg)
![<anime-background-style-v2> 4](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/6.jpeg)
![<anime-background-style-v2> 5](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/4.jpeg)
![<anime-background-style-v2> 6](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/1.jpeg)
![<anime-background-style-v2> 7](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/3.jpeg)
![<anime-background-style-v2> 8](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/12.jpeg)
![<anime-background-style-v2> 9](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/2.jpeg)
![<anime-background-style-v2> 10](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/0.jpeg)
![<anime-background-style-v2> 11](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/7.jpeg)
![<anime-background-style-v2> 12](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/8.jpeg)
![<anime-background-style-v2> 13](https://huggingface.co/sd-concepts-library/anime-background-style-v2/resolve/main/concept_images/11.jpeg)

Here are images generated with this style:

![the facade of a café in the style of <anime-background-style-v2>](https://i.imgur.com/EE89tm9.png)
![painting of a lush jungle in the style of <anime-background-style-v2>](https://i.imgur.com/peoQF5n.png)
![urban street with brownstones in the style of <anime-background-style-v2>](https://i.imgur.com/zuFgFP9.png)
![wide angle image of a castle made of ice in the style of <anime-background-style-v2>](https://i.imgur.com/uyopxyv.png)
82e20ae67fa60befeb57712f39ca0fb9
apache-2.0
['translation']
false
epo-rus

* source group: Esperanto
* target group: Russian
* OPUS readme: [epo-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.eval.txt)
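A minimal usage sketch with 🤗 Transformers, assuming the checkpoint is available under the usual `Helsinki-NLP/opus-mt-eo-ru` Hub id derived from the short pair above:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-eo-ru"  # assumed Hub id for this epo-rus model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Saluton, mondo!"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```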
9c1e7c187a55e912e43a57189fcdf9c0
apache-2.0
['translation']
false
System Info:

- hf_name: epo-rus
- source_languages: epo
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ru']
- src_constituents: {'epo'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: rus
- short_pair: eo-ru
- chrF2_score: 0.379
- bleu: 17.7
- brevity_penalty: 0.918
- ref_len: 71288.0
- src_name: Esperanto
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ru
- prefer_old: False
- long_pair: epo-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
2327bf088f4cc66c7ab000689700318d
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-convincingness-IBM

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6537
- Accuracy: 0.7511
43d40968ba4554c80208fa9b61c8dba9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 270 | 0.5707 | 0.7337 |
| 0.4673 | 2.0 | 540 | 0.6059 | 0.7221 |
| 0.4673 | 3.0 | 810 | 0.6537 | 0.7511 |
| 0.2218 | 4.0 | 1080 | 0.8485 | 0.7467 |
| 0.2218 | 5.0 | 1350 | 0.9221 | 0.7438 |
357230fe6933f7142be1413fd7891421
mit
['generated_from_trainer']
false
predict-perception-bert-cause-human

This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7139
- Rmse: 1.2259
- Rmse Cause::a Causata da un essere umano: 1.2259
- Mae: 1.0480
- Mae Cause::a Causata da un essere umano: 1.0480
- R2: 0.4563
- R2 Cause::a Causata da un essere umano: 0.4563
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3953
- Rsa: nan
6cdf131d9fb67eb0bf16802a167a4f68
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un essere umano | Mae | Mae Cause::a Causata da un essere umano | R2 | R2 Cause::a Causata da un essere umano | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------------------:|:------:|:---------------------------------------:|:------:|:--------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0874 | 1.0 | 15 | 1.2615 | 1.6296 | 1.6296 | 1.3836 | 1.3836 | 0.0393 | 0.0393 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.9577 | 2.0 | 30 | 1.1988 | 1.5886 | 1.5886 | 1.3017 | 1.3017 | 0.0870 | 0.0870 | 0.4783 | 0.0 | 0.5 | 0.3944 | nan |
| 0.8414 | 3.0 | 45 | 0.9870 | 1.4414 | 1.4414 | 1.1963 | 1.1963 | 0.2483 | 0.2483 | 0.3913 | 0.0 | 0.5 | 0.3048 | nan |
| 0.7291 | 4.0 | 60 | 0.9098 | 1.3839 | 1.3839 | 1.1297 | 1.1297 | 0.3071 | 0.3071 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.5949 | 5.0 | 75 | 0.9207 | 1.3921 | 1.3921 | 1.2079 | 1.2079 | 0.2988 | 0.2988 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.4938 | 6.0 | 90 | 0.8591 | 1.3448 | 1.3448 | 1.1842 | 1.1842 | 0.3458 | 0.3458 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.3611 | 7.0 | 105 | 0.8176 | 1.3119 | 1.3119 | 1.1454 | 1.1454 | 0.3774 | 0.3774 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.2663 | 8.0 | 120 | 0.6879 | 1.2034 | 1.2034 | 1.0300 | 1.0300 | 0.4761 | 0.4761 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.1833 | 9.0 | 135 | 0.7704 | 1.2735 | 1.2735 | 1.1031 | 1.1031 | 0.4133 | 0.4133 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.1704 | 10.0 | 150 | 0.7097 | 1.2222 | 1.2222 | 1.0382 | 1.0382 | 0.4596 | 0.4596 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1219 | 11.0 | 165 | 0.6872 | 1.2027 | 1.2027 | 1.0198 | 1.0198 | 0.4767 | 0.4767 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1011 | 12.0 | 180 | 0.7201 | 1.2312 | 1.2312 | 1.0466 | 1.0466 | 0.4516 | 0.4516 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0849 | 13.0 | 195 | 0.7267 | 1.2368 | 1.2368 | 1.0454 | 1.0454 | 0.4466 | 0.4466 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0818 | 14.0 | 210 | 0.7361 | 1.2448 | 1.2448 | 1.0565 | 1.0565 | 0.4394 | 0.4394 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0634 | 15.0 | 225 | 0.7158 | 1.2275 | 1.2275 | 1.0384 | 1.0384 | 0.4549 | 0.4549 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.065 | 16.0 | 240 | 0.7394 | 1.2475 | 1.2475 | 1.0659 | 1.0659 | 0.4369 | 0.4369 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0541 | 17.0 | 255 | 0.7642 | 1.2683 | 1.2683 | 1.0496 | 1.0496 | 0.4181 | 0.4181 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0577 | 18.0 | 270 | 0.7137 | 1.2257 | 1.2257 | 1.0303 | 1.0303 | 0.4565 | 0.4565 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0474 | 19.0 | 285 | 0.7393 | 1.2475 | 1.2475 | 1.0447 | 1.0447 | 0.4370 | 0.4370 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0494 | 20.0 | 300 | 0.7157 | 1.2274 | 1.2274 | 1.0453 | 1.0453 | 0.4550 | 0.4550 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0434 | 21.0 | 315 | 0.7248 | 1.2352 | 1.2352 | 1.0462 | 1.0462 | 0.4480 | 0.4480 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.049 | 22.0 | 330 | 0.7384 | 1.2467 | 1.2467 | 1.0613 | 1.0613 | 0.4377 | 0.4377 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0405 | 23.0 | 345 | 0.7420 | 1.2498 | 1.2498 | 1.0653 | 1.0653 | 0.4349 | 0.4349 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0398 | 24.0 | 360 | 0.7355 | 1.2442 | 1.2442 | 1.0620 | 1.0620 | 0.4399 | 0.4399 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0398 | 25.0 | 375 | 0.7570 | 1.2623 | 1.2623 | 1.0698 | 1.0698 | 0.4235 | 0.4235 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0345 | 26.0 | 390 | 0.7359 | 1.2446 | 1.2446 | 1.0610 | 1.0610 | 0.4396 | 0.4396 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0345 | 27.0 | 405 | 0.7417 | 1.2495 | 1.2495 | 1.0660 | 1.0660 | 0.4352 | 0.4352 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0386 | 28.0 | 420 | 0.7215 | 1.2323 | 1.2323 | 1.0514 | 1.0514 | 0.4506 | 0.4506 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0372 | 29.0 | 435 | 0.7140 | 1.2260 | 1.2260 | 1.0477 | 1.0477 | 0.4562 | 0.4562 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.0407 | 30.0 | 450 | 0.7139 | 1.2259 | 1.2259 | 1.0480 | 1.0480 | 0.4563 | 0.4563 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
fb98ac5f5bc6cd2226b26178f6bad167
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_mrpc_256

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set:
- Loss: 0.5199
- Accuracy: 0.3284
- F1: 0.0616
- Combined Score: 0.1950
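The combined score appears to be the unweighted mean of accuracy and F1: (0.3284 + 0.0616) / 2 = 0.1950.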
2e95d5a0ff049b7f39bd540a4e3774f0
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5375 | 1.0 | 15 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5305 | 2.0 | 30 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5294 | 3.0 | 45 | 0.5293 | 0.3162 | 0.0 | 0.1581 |
| 0.5283 | 4.0 | 60 | 0.5284 | 0.3162 | 0.0 | 0.1581 |
| 0.5258 | 5.0 | 75 | 0.5260 | 0.3162 | 0.0 | 0.1581 |
| 0.519 | 6.0 | 90 | 0.5199 | 0.3284 | 0.0616 | 0.1950 |
| 0.5036 | 7.0 | 105 | 0.5200 | 0.3848 | 0.2462 | 0.3155 |
| 0.4916 | 8.0 | 120 | 0.5226 | 0.4167 | 0.3239 | 0.3703 |
| 0.4725 | 9.0 | 135 | 0.5298 | 0.4289 | 0.3581 | 0.3935 |
| 0.4537 | 10.0 | 150 | 0.5333 | 0.6152 | 0.6736 | 0.6444 |
| 0.4382 | 11.0 | 165 | 0.5450 | 0.6201 | 0.6906 | 0.6554 |
10c03d03936dcb94a881282838f2a7d2
apache-2.0
['automatic-speech-recognition', 'fa']
false
exp_w2v2t_fa_vp-es_s533

Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
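As a quick usage sketch (not part of the original card), transcription with HuggingSound looks like this; the repo id assumes the usual `jonatasgrosman/` namespace for these checkpoints:

```python
from huggingsound import SpeechRecognitionModel

# assumed Hub id; these exp_w2v2t checkpoints are published under jonatasgrosman/
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_vp-es_s533")

# audio files must be sampled at 16 kHz
transcriptions = model.transcribe(["sample1.wav", "sample2.wav"])
print(transcriptions[0]["transcription"])
```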
01b11d57af3b8bd539963ae87c826109
apache-2.0
[]
false
Model Details

**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark. This model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).

- **Developed by:** Hugging Face
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** Apache 2.0
- **Related Models:** [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased)
- **Resources for more information:**
  - See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
  - See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure
bb01a35d6af767309b732ba88e739477
apache-2.0
[]
false
How to Get Started with the Model

Use the code below to get started with the model.

```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad')

>>> context = r"""
... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
... """

>>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
>>> print(
...     f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
... )

Answer: 'SQuAD dataset', score: 0.4704, start: 147, end: 160
```

Here is how to use this model in PyTorch:

```python
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
import torch

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad')
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

answer_start_index = torch.argmax(outputs.start_logits)
answer_end_index = torch.argmax(outputs.end_logits)

predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```

And in TensorFlow:

```python
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)

answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])

predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
387d04014953a79b98b6fc95ac65cce5
apache-2.0
[]
false
Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, so using the model to generate such content is out-of-scope for its abilities.
8a35e4484197f85d2534c0740b0e65db
apache-2.0
[]
false
Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:

```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad')

>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """

>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
...     f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
... )

Answer: 'Bob', score: 0.4183, start: 32, end: 35
```

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
bae36f3199d62b66433f216241f6d8f0
apache-2.0
[]
false
Training Data

The [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model card describes its training data as:

> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).
d9460fb2a68d4edfd0c8bf823abc7221
apache-2.0
[]
false
Evaluation

As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md):

> This model reaches an F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, the BERT bert-base-uncased version reaches an F1 score of 88.5).
f7b9030b85aea156c2c05238008fd354
apache-2.0
[]
false
Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details cover only the training of DistilBERT itself, not the fine-tuning on SQuAD.

- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
4f40b8eb3851f7f8f86352af08b834fe
apache-2.0
[]
false
Citation Information

```bibtex
@inproceedings{sanh2019distilbert,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
  booktitle={NeurIPS EMC^2 Workshop},
  year={2019}
}
```

APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
275cf2f1404b7118738312202b844152
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 60
- mixed_precision_training: Native AMP
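The total train batch size above follows from gradient accumulation: train_batch_size × gradient_accumulation_steps = 8 × 8 = 64.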
8a2959fdf5afa173259003ee01ac53f2
apache-2.0
['generated_from_trainer']
false
squad-bn-qgen-mt5-all-metric

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the squad_bn dataset. It achieves the following results on the evaluation set:
- Loss: 0.7273
- Rouge1 Precision: 35.8589
- Rouge1 Recall: 29.7041
- Rouge1 Fmeasure: 31.6373
- Rouge2 Precision: 15.4203
- Rouge2 Recall: 12.5155
- Rouge2 Fmeasure: 13.3978
- Rougel Precision: 34.4684
- Rougel Recall: 28.5887
- Rougel Fmeasure: 30.4627
- Rougelsum Precision: 34.4252
- Rougelsum Recall: 28.5362
- Rougelsum Fmeasure: 30.4053
- Sacrebleu: 6.4143
- Meteor: 0.1416
- Gen Len: 16.7199
5931befe97507fe142564a8b3a86d163
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
a575b7163f434ba156b6e6a6fa1d26b5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Sacrebleu | Meteor | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:---------:|:------:|:-------:|
| 0.8449 | 1.0 | 16396 | 0.7340 | 31.6476 | 26.8901 | 28.2871 | 13.621 | 11.3545 | 11.958 | 30.3276 | 25.7754 | 27.1048 | 30.3426 | 25.7489 | 27.0991 | 5.9655 | 0.1336 | 16.8685 |
| 0.7607 | 2.0 | 32792 | 0.7182 | 33.7173 | 28.6115 | 30.1049 | 14.8227 | 12.2059 | 12.9453 | 32.149 | 27.2036 | 28.6617 | 32.2479 | 27.2261 | 28.7272 | 6.6093 | 0.138 | 16.8522 |
| 0.7422 | 3.0 | 49188 | 0.7083 | 34.6128 | 29.0223 | 30.7248 | 14.9888 | 12.3092 | 13.1021 | 33.2507 | 27.8154 | 29.4599 | 33.2848 | 27.812 | 29.5064 | 6.2407 | 0.1416 | 16.5806 |
| 0.705 | 4.0 | 65584 | 0.7035 | 34.156 | 29.0012 | 30.546 | 14.72 | 12.0251 | 12.8161 | 32.7527 | 27.6511 | 29.1955 | 32.7692 | 27.6627 | 29.231 | 6.1784 | 0.1393 | 16.7793 |
| 0.6859 | 5.0 | 81980 | 0.7038 | 35.1405 | 29.6033 | 31.2614 | 15.5108 | 12.6414 | 13.5059 | 33.8335 | 28.4264 | 30.0745 | 33.8782 | 28.4349 | 30.0901 | 6.5896 | 0.144 | 16.6651 |
64cb5264985cac5059e791a2ca09ed62
apache-2.0
['automatic-speech-recognition', 'th']
false
exp_w2v2t_th_vp-nl_s253

Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
f0613f0eb009a8b7c0f51adf7c32093f
apache-2.0
['vision', 'image-classification']
false
Vision Transformer (base-sized model)

Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.

Disclaimer: The team releasing ViT did not write a model card for this model, so this model card has been written by the Hugging Face team.
26d732bc252983cdd38c023586947af7
apache-2.0
['vision', 'image-classification']
false
Model description

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.

Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.

By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
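To illustrate the feature-extraction use described above, here is a minimal sketch (not part of the original card) that reads out the [CLS] representation with `ViTModel`:

```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
model = ViTModel.from_pretrained('google/vit-base-patch16-224')  # encoder only, no classification head

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# the last hidden state of the [CLS] token serves as an image representation
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)  # torch.Size([1, 768]) for the base model
```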
a3675a9f52ed82c87cd8db7138a0c67b
apache-2.0
['vision', 'image-classification']
false
How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
```
27df18fe0fe9de88d4e04dc1ce06a21b
apache-2.0
['vision', 'image-classification']
false
```python
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html).
31eb653385aa365edbfe94a896bf2d5a
apache-2.0
['vision', 'image-classification']
false
Training data

The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
327fe480dd18f772b821e0b5a61f1f61