Columns:
license: string (2-30 chars)
tags: string (2-513 chars)
is_nc: bool (1 class)
readme_section: string (201-597k chars)
hash: string (32 chars)
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8403 | 6.94 | 500 | 1.1345 | 0.4657 | | 0.5795 | 13.88 | 1000 | 0.3579 | 0.1169 | | 0.3567 | 20.83 | 1500 | 0.3866 | 0.1174 | | 0.2717 | 27.77 | 2000 | 0.4219 | 0.1169 | | 0.2135 | 34.72 | 2500 | 0.4861 | 0.1199 | | 0.1664 | 41.66 | 3000 | 0.5490 | 0.1179 | | 0.1375 | 48.61 | 3500 | 0.5783 | 0.1178 |
70c22a37f62b1a6a9841f768ffee7a03
apache-2.0
['generated_from_trainer']
false
model_for_inca This model is a fine-tuned version of [marcus2000/finetuning-sentiment-model-3000-samples](https://huggingface.co/marcus2000/finetuning-sentiment-model-3000-samples) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3349 - F1: 0.9281
8b499bfdb61f6cb1f19c143a968c3807
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'uk']
false
Ukrainian STT model (with Language Model) 🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk ⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset. It achieves the following results on the evaluation set without the language model: - Loss: 0.1875 - Wer: 0.2033 - Cer: 0.0384
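As a usage sketch that is not part of the original card, the checkpoint named in the evaluation command later in this card (`Yehor/wav2vec2-xls-r-1b-uk-with-lm`) can be loaded with the standard `transformers` ASR pipeline; the audio file name below is hypothetical:

```python
from transformers import pipeline

# Transcribe a local Ukrainian recording with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="Yehor/wav2vec2-xls-r-1b-uk-with-lm")
print(asr("sample_uk.wav"))  # hypothetical 16 kHz audio file
```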
19e198c3ea41d3ec32470fccd1eec1b9
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'uk']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 160 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP
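A minimal sketch (not from the card) of how these values map onto `transformers.TrainingArguments`; the argument names are the standard `Trainer` ones and `output_dir` is hypothetical. Note that a per-device batch size of 8 with 20 gradient-accumulation steps reproduces the listed total train batch size of 160:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-uk",   # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=20,      # 8 * 20 = 160 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100.0,
    fp16=True,                           # "Native AMP" mixed precision
)
```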
1b3a5218475d1661be04d9ac543586e8
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'uk']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 | | 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 | | 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 | | 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 | | 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 | | 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 | | 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 | | 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 | | 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 | | 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 | | 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 | | 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
9c63db4139f91635be6ce972e3850593
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'uk']
false
Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python eval.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --dataset mozilla-foundation/common_voice_7_0 --config uk --split test ```
0cb3c74df6dd17dced676a8de054cfab
mit
[]
false
model by homanp This is the Stable Diffusion model fine-tuned on the Backpack concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks backpack** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/backpack/resolve/main/concept_images/1.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/backpack/resolve/main/concept_images/0.jpeg)
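A minimal inference sketch with `diffusers` (not from the original card), assuming the concept weights live in the `sd-dreambooth-library/backpack` repo that the image links above point to:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-tuned pipeline; repo id assumed from the concept-image links above.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/backpack", torch_dtype=torch.float16
).to("cuda")

# The instance prompt from the card triggers the learned concept.
image = pipe("a photo of sks backpack").images[0]
image.save("sks_backpack.png")
```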
cfc08e33624cfb86884da2cc6acb0a08
mit
['spacy', 'token-classification']
false
de_core_news_md German pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner. | Feature | Description | | --- | --- | | **Name** | `de_core_news_md` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` | | **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` | | **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) | | **Sources** | [TIGER Corpus](https://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html) (Brants, Sabine, Stefanie Dipper, Peter Eisenberg, Silvia Hansen, Esther König, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit)<br />[Tiger2Dep](https://www.ims.uni-stuttgart.de/forschung/ressourcen/werkzeuge/tiger2dep/) (Wolfgang Seeker)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) |
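A short usage sketch (not part of the card), assuming the package is installed, e.g. with `python -m spacy download de_core_news_md`; the example sentence is arbitrary:

```python
import spacy

# Load the packaged German pipeline and run it on a sample sentence.
nlp = spacy.load("de_core_news_md")
doc = nlp("Angela Merkel besuchte gestern Berlin.")

print([(tok.text, tok.pos_, tok.dep_) for tok in doc])   # tagger / parser output
print([(ent.text, ent.label_) for ent in doc.ents])      # NER output
```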
242ffda10096646f70b94771aa9fa5d5
mit
['spacy', 'token-classification']
false
Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.96 | | `TOKEN_P` | 99.92 | | `TOKEN_R` | 99.90 | | `TOKEN_F` | 99.91 | | `TAG_ACC` | 97.81 | | `POS_ACC` | 98.29 | | `MORPH_ACC` | 91.51 | | `MORPH_MICRO_P` | 95.69 | | `MORPH_MICRO_R` | 95.61 | | `MORPH_MICRO_F` | 95.65 | | `SENTS_P` | 95.41 | | `SENTS_R` | 96.22 | | `SENTS_F` | 95.08 | | `DEP_UAS` | 92.54 | | `DEP_LAS` | 90.57 | | `LEMMA_ACC` | 97.70 | | `ENTS_P` | 84.39 | | `ENTS_R` | 83.43 | | `ENTS_F` | 83.91 |
31c12d17b34a243bfe0f95d21a8edf90
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small dysarthric Dutch This model is a fine-tuned version of [qmeeus/whisper-small-nl](https://huggingface.co/qmeeus/whisper-small-nl) on the data/copas copas-full dataset. It achieves the following results on the evaluation set: - Loss: 0.4242 - Wer: 24.5560
9f72d1967c3b2b86ced1f7e733195946
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP
8470ae9c6d94be3444625e5cf732ab24
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.3363 | 2.02 | 500 | 0.3762 | 29.7934 | | 0.0945 | 5.02 | 1000 | 0.3418 | 27.6912 | | 0.0332 | 8.01 | 1500 | 0.3353 | 26.1689 | | 0.0147 | 11.01 | 2000 | 0.3476 | 26.1327 | | 0.0071 | 14.01 | 2500 | 0.3623 | 25.9333 | | 0.0034 | 17.01 | 3000 | 0.3789 | 25.2084 | | 0.0024 | 20.01 | 3500 | 0.3827 | 24.8641 | | 0.0026 | 23.01 | 4000 | 0.3877 | 25.3171 | | 0.0021 | 26.01 | 4500 | 0.3933 | 25.4259 | | 0.0014 | 29.01 | 5000 | 0.3941 | 25.0997 | | 0.0008 | 32.01 | 5500 | 0.4014 | 25.0997 | | 0.0004 | 35.01 | 6000 | 0.4035 | 24.8278 | | 0.0003 | 38.01 | 6500 | 0.4080 | 24.9184 | | 0.0003 | 41.01 | 7000 | 0.4120 | 24.8097 | | 0.0002 | 44.01 | 7500 | 0.4151 | 24.6104 | | 0.0002 | 47.01 | 8000 | 0.4176 | 24.3929 | | 0.0002 | 50.01 | 8500 | 0.4200 | 24.5198 | | 0.0001 | 53.0 | 9000 | 0.4230 | 24.5198 | | 0.0001 | 56.0 | 9500 | 0.4252 | 24.4291 | | 0.0001 | 59.0 | 10000 | 0.4242 | 24.5560 |
5e817ecf81124824a21e5c76293c6b22
apache-2.0
['image-classification', 'generated_from_trainer']
false
new_exper3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset. It achieves the following results on the evaluation set: - Loss: 0.3000 - Accuracy: 0.9298
25a35f4a25800c8da5e8969b2d0a083b
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Apex, opt level O1
f4dd97e57c78ba7ada893ae7dd613ccb
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.093 | 0.16 | 100 | 4.1045 | 0.1885 | | 3.5057 | 0.31 | 200 | 3.4448 | 0.3231 | | 2.9116 | 0.47 | 300 | 2.9483 | 0.4537 | | 2.561 | 0.63 | 400 | 2.5700 | 0.5258 | | 2.1611 | 0.78 | 500 | 2.1721 | 0.6145 | | 1.715 | 0.94 | 600 | 1.8255 | 0.6407 | | 1.2752 | 1.1 | 700 | 1.5340 | 0.7051 | | 1.2487 | 1.25 | 800 | 1.3533 | 0.7201 | | 1.0333 | 1.41 | 900 | 1.1474 | 0.7826 | | 0.8856 | 1.56 | 1000 | 1.0914 | 0.7645 | | 0.7512 | 1.72 | 1100 | 0.8893 | 0.8119 | | 0.747 | 1.88 | 1200 | 0.8370 | 0.8304 | | 0.5082 | 2.03 | 1300 | 0.7131 | 0.8566 | | 0.4449 | 2.19 | 1400 | 0.6573 | 0.8547 | | 0.2912 | 2.35 | 1500 | 0.6184 | 0.8597 | | 0.285 | 2.5 | 1600 | 0.5974 | 0.8570 | | 0.2267 | 2.66 | 1700 | 0.5621 | 0.8647 | | 0.2553 | 2.82 | 1800 | 0.5044 | 0.8816 | | 0.2029 | 2.97 | 1900 | 0.4342 | 0.8955 | | 0.1763 | 3.13 | 2000 | 0.4487 | 0.8905 | | 0.1418 | 3.29 | 2100 | 0.4173 | 0.9005 | | 0.0563 | 3.44 | 2200 | 0.3870 | 0.9048 | | 0.0579 | 3.6 | 2300 | 0.3849 | 0.9036 | | 0.166 | 3.76 | 2400 | 0.3933 | 0.9025 | | 0.11 | 3.91 | 2500 | 0.3918 | 0.9056 | | 0.0356 | 4.07 | 2600 | 0.3298 | 0.9202 | | 0.0513 | 4.23 | 2700 | 0.3371 | 0.9210 | | 0.0762 | 4.38 | 2800 | 0.3253 | 0.9225 | | 0.018 | 4.54 | 2900 | 0.3467 | 0.9148 | | 0.0263 | 4.69 | 3000 | 0.3544 | 0.9144 | | 0.0205 | 4.85 | 3100 | 0.3340 | 0.9221 | | 0.0237 | 5.01 | 3200 | 0.3353 | 0.9144 | | 0.013 | 5.16 | 3300 | 0.3218 | 0.9229 | | 0.0116 | 5.32 | 3400 | 0.3088 | 0.9291 | | 0.0119 | 5.48 | 3500 | 0.3047 | 0.9279 | | 0.0098 | 5.63 | 3600 | 0.3063 | 0.9283 | | 0.0086 | 5.79 | 3700 | 0.3074 | 0.9268 | | 0.0081 | 5.95 | 3800 | 0.3220 | 0.9237 | | 0.0078 | 6.1 | 3900 | 0.3064 | 0.9268 | | 0.0074 | 6.26 | 4000 | 0.3062 | 0.9279 | | 0.0068 | 6.42 | 4100 | 0.3051 | 0.9291 | | 0.006 | 6.57 | 4200 | 0.3000 | 0.9298 | | 0.0075 | 6.73 | 4300 | 0.3010 | 0.9310 | | 0.0057 | 6.89 | 4400 | 0.3037 | 0.9298 | | 0.0058 | 7.04 | 4500 | 0.3071 | 0.9279 | | 0.0075 | 7.2 | 4600 | 0.3075 | 0.9283 | | 0.0066 | 7.36 | 4700 | 0.3077 | 0.9295 | | 0.0056 | 7.51 | 4800 | 0.3084 | 0.9295 | | 0.0053 | 7.67 | 4900 | 0.3064 | 0.9310 | | 0.0057 | 7.82 | 5000 | 0.3068 | 0.9318 | | 0.0055 | 7.98 | 5100 | 0.3068 | 0.9318 |
013468799f0774e9c82e3b313c082c39
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-turkish-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4055 - Wer: 0.4800
c86acdf1845498ace23078893471b23a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0179 | 4.21 | 400 | 1.4935 | 1.0249 | | 0.7075 | 8.42 | 800 | 0.4546 | 0.6071 | | 0.3072 | 12.63 | 1200 | 0.3947 | 0.5401 | | 0.2145 | 16.84 | 1600 | 0.4049 | 0.5194 | | 0.1647 | 21.05 | 2000 | 0.4199 | 0.5003 | | 0.1338 | 25.26 | 2400 | 0.4144 | 0.4859 | | 0.116 | 29.47 | 2800 | 0.4055 | 0.4800 |
bed53643fdfc54a3037535599a6b330b
apache-2.0
['generated_from_trainer']
false
test-trainer-init This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6581 - Accuracy: 0.8603 - F1: 0.9042
2301c7d6bdf5fd2d0555b91bc4e03d78
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 459 | 0.3660 | 0.8505 | 0.8893 | | 0.5003 | 2.0 | 918 | 0.5355 | 0.8407 | 0.8922 | | 0.2654 | 3.0 | 1377 | 0.6581 | 0.8603 | 0.9042 |
74f349d222591fa2c1a795f34153ed6b
creativeml-openrail-m
['text-to-image']
false
Persona-5-Shigenori-Style Dreambooth model trained by Allenbv with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: 3200 Steps, 20% text encoder, 23 images "Shigenori Style" on your prompt ![descarga 0](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(12).png) ![descarga 1](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(10).png) ![descarga 2](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(3).png) ![descarga 3](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(4).png) ![descarga 4](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(5).png) ![descarga 5](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(8).png) ![descarga 6](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(10).png)
d638aee99d6fe4598a72bf681990c0a4
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4381100/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
463915fe4ac0b4e48d78902aea672752
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 1.7601 - Accuracy: 0.8532
a20cc764ff4a6416128a75d72c64e077
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
168f1c8ed96e78ffc29021586c70eb4d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 159 | 3.9593 | 0.6442 | | 4.0539 | 2.0 | 318 | 2.9237 | 0.7606 | | 4.0539 | 3.0 | 477 | 2.2412 | 0.8174 | | 2.3862 | 4.0 | 636 | 1.8768 | 0.8397 | | 2.3862 | 5.0 | 795 | 1.7601 | 0.8532 |
f1f653c5646d327319c15699a2d886c7
apache-2.0
['argumentation']
false
Generate the conclusion of an argument This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
0e672beb96a95d50a950a6bd923df6a0
apache-2.0
['argumentation']
false
Limitations and Biases The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
2796e59c1cb2979b0deb8a4245d00783
apache-2.0
['argumentation']
false
Acknowledgements This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
a0f7dcc429b984ea9a35c287ba2ce2b7
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_r-wav2vec2_s732 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
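A minimal transcription sketch with the HuggingSound tool mentioned above (not from the original card); the `jonatasgrosman/` namespace in the repo id and the audio file name are assumptions:

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint and transcribe one 16 kHz audio file.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_r-wav2vec2_s732")
transcriptions = model.transcribe(["audio_16khz.wav"])  # hypothetical local file
print(transcriptions[0]["transcription"])
```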
1679c353489f540b0fb0abbadcf54472
mit
[]
false
vb-mox on Stable Diffusion This is the `<vb-mox>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<vb-mox> 0](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/5.jpeg) ![<vb-mox> 1](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/6.jpeg) ![<vb-mox> 2](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/3.jpeg) ![<vb-mox> 3](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/0.jpeg) ![<vb-mox> 4](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/2.jpeg) ![<vb-mox> 5](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/7.jpeg) ![<vb-mox> 6](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/1.jpeg) ![<vb-mox> 7](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/4.jpeg)
16dd8255f1acf07a5c487bd8039f4fc4
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1644 - F1: 0.8617
5ad498c4813e8243ca3b51d2d7d6781c
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 | | 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 | | 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
dd202f0debf44b1abe916152454c3621
mit
['generated_from_trainer']
false
predict-perception-bert-focus-assassin This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2964 - Rmse: 0.8992 - Rmse Focus::a Sull'assassino: 0.8992 - Mae: 0.7331 - Mae Focus::a Sull'assassino: 0.7331 - R2: 0.6500 - R2 Focus::a Sull'assassino: 0.6500 - Cos: 0.7391 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.6131 - Rsa: nan
8f80f926f3b7ccef8360b845915fcab4
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sull'assassino | Mae | Mae Focus::a Sull'assassino | R2 | R2 Focus::a Sull'assassino | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------:|:------:|:---------------------------:|:-------:|:--------------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.0674 | 1.0 | 15 | 0.9851 | 1.6393 | 1.6393 | 1.5316 | 1.5316 | -0.1633 | -0.1633 | 0.1304 | 0.0 | 0.5 | 0.2457 | nan | | 1.0099 | 2.0 | 30 | 0.8921 | 1.5601 | 1.5601 | 1.4317 | 1.4317 | -0.0535 | -0.0535 | 0.5652 | 0.0 | 0.5 | 0.4734 | nan | | 0.9295 | 3.0 | 45 | 0.7345 | 1.4155 | 1.4155 | 1.3113 | 1.3113 | 0.1327 | 0.1327 | 0.5652 | 0.0 | 0.5 | 0.3596 | nan | | 0.8485 | 4.0 | 60 | 0.7282 | 1.4094 | 1.4094 | 1.2678 | 1.2678 | 0.1401 | 0.1401 | 0.7391 | 0.0 | 0.5 | 0.5367 | nan | | 0.7551 | 5.0 | 75 | 0.5966 | 1.2758 | 1.2758 | 1.1144 | 1.1144 | 0.2955 | 0.2955 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan | | 0.5563 | 6.0 | 90 | 0.4578 | 1.1175 | 1.1175 | 0.9105 | 0.9105 | 0.4594 | 0.4594 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan | | 0.4048 | 7.0 | 105 | 0.3539 | 0.9826 | 0.9826 | 0.7770 | 0.7770 | 0.5821 | 0.5821 | 0.6522 | 0.0 | 0.5 | 0.5522 | nan | | 0.3319 | 8.0 | 120 | 0.2938 | 0.8953 | 0.8953 | 0.7110 | 0.7110 | 0.6530 | 0.6530 | 0.6522 | 0.0 | 0.5 | 0.6021 | nan | | 0.2224 | 9.0 | 135 | 0.3455 | 0.9708 | 0.9708 | 0.7607 | 0.7607 | 0.5921 | 0.5921 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan | | 0.1794 | 10.0 | 150 | 0.2719 | 0.8612 | 0.8612 | 0.6768 | 0.6768 | 0.6790 | 0.6790 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.1553 | 11.0 | 165 | 0.2855 | 0.8826 | 0.8826 | 0.7053 | 0.7053 | 0.6628 | 0.6628 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.1008 | 12.0 | 180 | 0.3000 | 0.9046 | 0.9046 | 0.7255 | 0.7255 | 0.6458 | 0.6458 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan | | 0.1121 | 13.0 | 195 | 0.2817 | 0.8766 | 0.8766 | 0.7236 | 0.7236 | 0.6674 | 0.6674 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.08 | 14.0 | 210 | 0.3504 | 0.9777 | 0.9777 | 0.7631 | 0.7631 | 0.5863 | 0.5863 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0802 | 15.0 | 225 | 0.3031 | 0.9094 | 0.9094 | 0.7565 | 0.7565 | 0.6420 | 0.6420 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0685 | 16.0 | 240 | 0.3041 | 0.9109 | 0.9109 | 0.7409 | 0.7409 | 0.6408 | 0.6408 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0592 | 17.0 | 255 | 0.3496 | 0.9767 | 0.9767 | 0.7812 | 0.7812 | 0.5871 | 0.5871 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0625 | 18.0 | 270 | 0.3260 | 0.9430 | 0.9430 | 0.7757 | 0.7757 | 0.6151 | 0.6151 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0589 | 19.0 | 285 | 0.3118 | 0.9222 | 0.9222 | 0.7442 | 0.7442 | 0.6318 | 0.6318 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0518 | 20.0 | 300 | 0.3062 | 0.9140 | 0.9140 | 0.7459 | 0.7459 | 0.6384 | 0.6384 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0456 | 21.0 | 315 | 0.3200 | 0.9344 | 0.9344 | 0.7592 | 0.7592 | 0.6221 | 0.6221 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0477 | 22.0 | 330 | 0.3132 | 0.9244 | 0.9244 | 0.7532 | 0.7532 | 0.6301 | 0.6301 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0448 | 23.0 | 345 | 0.3006 | 0.9056 | 0.9056 | 0.7321 | 0.7321 | 0.6450 | 0.6450 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan | | 0.0494 | 24.0 | 360 | 0.2985 | 0.9024 | 0.9024 | 0.7463 | 0.7463 | 0.6475 | 0.6475 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0369 | 25.0 | 375 | 0.3039 | 0.9105 | 0.9105 | 0.7359 | 0.7359 | 0.6412 | 0.6412 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0456 | 26.0 | 390 | 0.2989 | 0.9030 | 0.9030 | 0.7210 | 0.7210 | 0.6471 | 0.6471 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.044 | 27.0 | 405 | 0.2997 | 0.9042 | 0.9042 | 0.7418 | 0.7418 | 0.6461 | 0.6461 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0352 | 28.0 | 420 | 0.2970 | 0.9001 | 0.9001 | 0.7346 | 0.7346 | 0.6493 | 0.6493 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0429 | 29.0 | 435 | 0.2970 | 0.9001 | 0.9001 | 0.7281 | 0.7281 | 0.6493 | 0.6493 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0378 | 30.0 | 450 | 0.2964 | 0.8992 | 0.8992 | 0.7331 | 0.7331 | 0.6500 | 0.6500 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
61648a5331166f4aa2a55b9d67eb4789
apache-2.0
['generated_from_trainer']
false
roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES This model is a fine-tuned version of [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) on the CRAFT dataset. It achieves the following results on the evaluation set: - Loss: 0.2043 - Precision: 0.8666 - Recall: 0.8614 - F1: 0.8639 - Accuracy: 0.9734
572fb6db7f2514e20d096463678ce6c7
apache-2.0
['generated_from_trainer']
false
Model description This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) Corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical. This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT translated CRAFT) were concatenated. To improve the F1 score, transfer learning was completed in two steps. Using [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) as a base model, I finetuned once more on the original CRAFT dataset in English. Biobert --> Augmented CRAFT --> CRAFT ES (MT translated)
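A tagging sketch (not from the card) using the `transformers` token-classification pipeline; the repo id assumes the same `StivenLancheros` namespace as the base model, and the example sentence is arbitrary:

```python
from transformers import pipeline

# Group sub-word predictions into whole entities for the six CRAFT classes.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES",
    aggregation_strategy="simple",
)
print(ner("El gen BRCA1 codifica una proteína supresora de tumores."))
```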
4e38e58eb93329953a4e8004c6f0163d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0088 | 1.0 | 1360 | 0.1793 | 0.8616 | 0.8487 | 0.8551 | 0.9721 | | 0.0046 | 2.0 | 2720 | 0.1925 | 0.8618 | 0.8426 | 0.8521 | 0.9713 | | 0.0032 | 3.0 | 4080 | 0.1926 | 0.8558 | 0.8630 | 0.8594 | 0.9725 | | 0.0011 | 4.0 | 5440 | 0.2043 | 0.8666 | 0.8614 | 0.8639 | 0.9734 |
0a31625c2948b6e7bbd1f5f7b70c8fa5
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
`Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4585546/ This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
03d43838635e613fe7be0ebd5d777b2e
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-wikisql This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2640 - Rouge2 Precision: 0.8471 - Rouge2 Recall: 0.3841 - Rouge2 Fmeasure: 0.5064
a6b5f55e98c729757fc49efba6ebd466
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
103178689c53b320978abcf28576831d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | No log | 1.0 | 11 | 2.7587 | 0.098 | 0.0305 | 0.045 | | No log | 2.0 | 22 | 2.0056 | 0.0969 | 0.0284 | 0.0422 | | No log | 3.0 | 33 | 1.4456 | 0.1046 | 0.0349 | 0.0503 | | No log | 4.0 | 44 | 1.0317 | 0.1054 | 0.0337 | 0.0482 | | No log | 5.0 | 55 | 0.7603 | 0.2749 | 0.1299 | 0.1724 | | No log | 6.0 | 66 | 0.5722 | 0.7115 | 0.352 | 0.4552 | | No log | 7.0 | 77 | 0.4751 | 0.6872 | 0.337 | 0.436 | | No log | 8.0 | 88 | 0.4253 | 0.7256 | 0.3439 | 0.4462 | | No log | 9.0 | 99 | 0.3805 | 0.7335 | 0.3204 | 0.4308 | | No log | 10.0 | 110 | 0.3562 | 0.7342 | 0.3239 | 0.433 | | No log | 11.0 | 121 | 0.3275 | 0.7906 | 0.355 | 0.471 | | No log | 12.0 | 132 | 0.3133 | 0.8382 | 0.3838 | 0.5061 | | No log | 13.0 | 143 | 0.2996 | 0.8409 | 0.3841 | 0.5062 | | No log | 14.0 | 154 | 0.2903 | 0.8304 | 0.3763 | 0.4978 | | No log | 15.0 | 165 | 0.2867 | 0.8409 | 0.3841 | 0.5062 | | No log | 16.0 | 176 | 0.2786 | 0.8409 | 0.3841 | 0.5062 | | No log | 17.0 | 187 | 0.2711 | 0.8409 | 0.3841 | 0.5062 | | No log | 18.0 | 198 | 0.2673 | 0.8409 | 0.3841 | 0.5062 | | No log | 19.0 | 209 | 0.2643 | 0.8471 | 0.3841 | 0.5064 | | No log | 20.0 | 220 | 0.2640 | 0.8471 | 0.3841 | 0.5064 |
993ba7fc5641c7e81deed382da81b8d9
apache-2.0
['automatic-speech-recognition', 'th']
false
exp_w2v2t_th_vp-100k_s497 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
7113214f396f7496a2a85f7bd6f48b93
apache-2.0
['translation']
false
opus-mt-en-tvl * source languages: en * target languages: tvl * OPUS readme: [en-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tvl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.eval.txt)
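A minimal translation sketch with the `transformers` Marian classes (not part of the original card); it assumes the converted checkpoint is published as `Helsinki-NLP/opus-mt-en-tvl`:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-tvl"  # assumed Hub repo id for this OPUS-MT model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# SentencePiece tokenization happens inside the tokenizer, matching the card's pre-processing.
batch = tokenizer(["How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```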
a33893c9c609f69bdff45c586c48caf0
apache-2.0
['generated_from_trainer']
false
vit-base-patch16-224-in21k This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1026 - Accuracy: 0.982
807c280f971e6076689c7bc1ffe2e09c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4
b2850832961164ca02f85d77823e8398
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.177 | 0.5 | 500 | 0.2100 | 0.9435 | | 0.1515 | 1.0 | 1000 | 0.0710 | 0.975 | | 0.0443 | 1.5 | 1500 | 0.2043 | 0.9535 | | 0.0625 | 2.0 | 2000 | 0.0898 | 0.9745 | | 0.0181 | 2.5 | 2500 | 0.0961 | 0.9805 | | 0.0091 | 3.0 | 3000 | 0.1049 | 0.982 | | 0.0016 | 3.5 | 3500 | 0.1066 | 0.981 | | 0.0015 | 4.0 | 4000 | 0.1026 | 0.982 |
2ad1b427253a6854ccd6581656ed7e63
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
282df92c3a5fe02031a8eaa98a5b59c1
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens') embeddings = model.encode(sentences) print(embeddings) ```
e59bbb1558269bacbb9e87ddfac86c37
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens') model = AutoModel.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens')
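A self-contained sketch of this plain-`transformers` path with the missing imports, plus the mean pooling over token embeddings that sentence-transformers `mean-tokens` models use; the pooling code is a standard pattern rather than text from this card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens')

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    output = model(**encoded)

# Mean-pool token embeddings, masking out padding positions.
mask = encoded['attention_mask'].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # (2, 768)
```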
b4f602635e7a9d94cf0ec5e4e9cc5ce1
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens)
1e72c1b6a6d19a6b02fe7761f513c935
apache-2.0
['generated_from_trainer', 'CV', 'ConvNeXT', 'satellite', 'EuroSAT']
false
ConvNeXT (tiny) fine-tuned on EuroSAT This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the [EuroSAT](https://github.com/phelber/eurosat) dataset. It achieves the following results on the evaluation set: - Loss: 0.0549 - Accuracy: 0.9805
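A minimal inference sketch (not from the card) with the `transformers` image-classification pipeline; the repo id and test image URL are taken from the links that appear in the next section:

```python
from transformers import pipeline

# Classify one of the test images linked below into the ten EuroSAT land-cover classes.
classifier = pipeline("image-classification", model="mrm8488/convnext-tiny-finetuned-eurosat")
preds = classifier("https://huggingface.co/mrm8488/convnext-tiny-finetuned-eurosat/resolve/main/test1.jpg")
print(preds)
```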
9e611f4bb453e5a5793bfcc89668c640
apache-2.0
['generated_from_trainer', 'CV', 'ConvNeXT', 'satellite', 'EuroSAT']
false
Drag and drop the following pics in the right widget to test the model ![image1](https://huggingface.co/mrm8488/convnext-tiny-finetuned-eurosat/resolve/main/test1.jpg) ![image2](https://huggingface.co/mrm8488/convnext-tiny-finetuned-eurosat/resolve/main/test2.jpg)
86283ed644286adda34258df2ac334ab
apache-2.0
['generated_from_trainer', 'CV', 'ConvNeXT', 'satellite', 'EuroSAT']
false
Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.
e203a68e17b4c738839df5204004d368
apache-2.0
['generated_from_trainer', 'CV', 'ConvNeXT', 'satellite', 'EuroSAT']
false
Dataset information **EuroSAT : Land Use and Land Cover Classification with Sentinel-2** In this study, we address the challenge of land use and land cover classification using Sentinel-2 satellite images. The Sentinel-2 satellite images are openly and freely accessible provided in the Earth observation program Copernicus. We present a novel dataset based on Sentinel-2 satellite images covering 13 spectral bands and consisting out of 10 classes with in total 27,000 labeled and geo-referenced images. We provide benchmarks for this novel dataset with its spectral bands using state-of-the-art deep Convolutional Neural Network (CNNs). With the proposed novel dataset, we achieved an overall classification accuracy of 98.57%. The resulting classification system opens a gate towards a number of Earth observation applications. We demonstrate how this classification system can be used for detecting land use and land cover changes and how it can assist in improving geographical maps.
963e7bad369db2c9afc5b240595b9c0b
apache-2.0
['generated_from_trainer', 'CV', 'ConvNeXT', 'satellite', 'EuroSAT']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 7171 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
ca89a5e4f2ebcffa8e76cde9b8374452
apache-2.0
['generated_from_trainer', 'CV', 'ConvNeXT', 'satellite', 'EuroSAT']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2082 | 1.0 | 718 | 0.1057 | 0.9654 | | 0.1598 | 2.0 | 1436 | 0.0712 | 0.9775 | | 0.1435 | 3.0 | 2154 | 0.0549 | 0.9805 |
5fc1fa10bb57d9f1b2b2c5e616dda6d3
cc-by-4.0
['question generation']
false
Model Card of `lmqg/mt5-base-itquad-qg` This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
d0164e1f8746275de753066515d5f580
cc-by-4.0
['question generation']
false
model prediction - With `lmqg` ```python from lmqg import TransformersQG model = TransformersQG(language="it", model="lmqg/mt5-base-itquad-qg") questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-base-itquad-qg") output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ```
cfb2f4fd9fcaf4e109aeb97280cf490c
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 81.16 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 23.29 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 15.37 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 10.72 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 7.7 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 18 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 57.11 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 22.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 87.93 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedF1Score (MoverScore) | 61.91 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (BERTScore) | 88.02 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (MoverScore) | 62.04 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (BERTScore) | 87.84 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (MoverScore) | 61.78 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-base-itquad-ae`](https://huggingface.co/lmqg/mt5-base-itquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.lmqg_mt5-base-itquad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 81.68 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedF1Score (MoverScore) | 55.83 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (BERTScore) | 81.25 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (MoverScore) | 55.68 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (BERTScore) | 82.16 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (MoverScore) | 56.01 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
266a377c9dcfec764b679b5d058a27d0
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 11 - batch: 4 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 16 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/trainer_config.json).
25a24289a616cb7ecdaf79bb05600c46
creativeml-openrail-m
['text-to-image']
false
Duskfall Ani Backgrounds Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk BgAniDusk (use that in your prompt)
340f2e5562d26f0c9e7a574959aa3c4a
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Tiny it 7 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 2.137834 - Wer: 97.566556
638441ce431d6d26238aee9883308037
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Model description This model is the OpenAI Whisper small transformer adapted for Italian audio-to-text transcription. As part of the hyperparameter tuning process, weight decay was set to 0.1, attention dropout, encoder dropout and decoder dropout were set to 0.1, the learning rate was set to 1e-6, and the number of decoder and encoder attention heads was set to 8; however, this did not improve performance on the evaluation set.
71c1b9e132ef41ed338f76b5d17fab2a
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP
8e0afd02eb7a3792b23eda98cece4853
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.7353 | 3.82 | 4000 | 2.1378 | 97.5666 |
8fc2911c34536f001d4ae583e00544d5
apache-2.0
['generated_from_trainer']
false
wav2vec2-hindi This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8814 - Wer: 1.0
6ef52a346cf421fac3e40fb5fc9b7050
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP
028c03a05ac100e8f11eadfdd7dc338c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 23.6834 | 6.25 | 100 | 13.5748 | 1.0 | | 8.2358 | 12.5 | 200 | 3.9834 | 1.0 | | 3.6953 | 18.75 | 300 | 3.7861 | 1.0 | | 3.4186 | 25.0 | 400 | 3.8232 | 1.0 | | 3.2462 | 31.25 | 500 | 3.4688 | 1.0 | | 2.8108 | 37.5 | 600 | 2.8814 | 1.0 |
2b032f122c3add849d91b28d57b6c23f
apache-2.0
[]
false
Funnel Transformer large model (B8-8-8 without decoder) Pretrained model on the English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
597c8423309139d0c47c72e6a8b7e69f
apache-2.0
[]
false
Model description Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. **Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if you need one input per initial token. You should use the `large` model in that case.
7640eee2766c17258f1243c72c0bb241
apache-2.0
[]
false
Intended uses & limitations You can use the raw model to extract a vector representation of a given text, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
33e756f0272020cf56947379f782a5b8
apache-2.0
[]
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import FunnelTokenizer, FunnelBaseModel tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base") model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import FunnelTokenizer, TFFunnelBaseModel tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base") model = TFFunnelBaseModel.from_pretrained("funnel-transformer/large-base") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
f777a3f7b2e6c1dcad10ef4f8e05e318
apache-2.0
[]
false
Training data The BERT model was pretrained on: - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers), - [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages, - [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data, - [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
7797c65b77d25d57a122d8a1f0fabdba
apache-2.0
[]
false
BibTeX entry and citation info ```bibtex @misc{dai2020funneltransformer, title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing}, author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le}, year={2020}, eprint={2006.03236}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
7b57d04fc06be175107602a9be024f02
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_no-pretraining_s399 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
86217cac8a0750abd39d93af87e01c01
apache-2.0
['generated_from_keras_callback']
false
transformers-qa This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.3199 - Validation Loss: 3.2826 - Train Rougel: tf.Tensor(0.3922559, shape=(), dtype=float32) - Epoch: 0
9f28e368f6580fc090f3c3358786bc38
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Rougel | Epoch | |:----------:|:---------------:|:---------------------------------------------:|:-----:| | 2.3199 | 3.2826 | tf.Tensor(0.3922559, shape=(), dtype=float32) | 0 |
3e7c5e9ed14b82a9cab80e3184ebc33e
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-DL8 (Deep-Narrow version) T5-Efficient-SMALL-DL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
fe3fb8d947163ac4b7b665cd37bf8bbc
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-small-dl8** - is of model type **Small** with the following variations: - **dl** is **8** It has **68.92** million parameters and thus requires *ca.* **275.66 MB** of memory in full precision (*fp32*) or **137.83 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
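The quoted memory figures follow directly from the parameter count at 4 bytes per parameter in fp32 and 2 bytes in fp16; a quick sanity check (the small gap to 275.66 MB comes from the rounded 68.92M figure):

```python
params = 68.92e6  # parameter count quoted above
print(f"fp32: {params * 4 / 1e6:.2f} MB")  # ~275.68 MB
print(f"fp16: {params * 2 / 1e6:.2f} MB")  # ~137.84 MB
```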
893844c5b94e47d3a0cc0e9429bb949f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [emotion](https://huggingface.co/datasets/emotion) dataset from the Hugging Face Hub. It achieves the following results on the evaluation set: - Loss: 0.2033 - Accuracy: 0.9275 - F1: 0.9273
83429eda879166967fb25118f9edcf8f
apache-2.0
['generated_from_trainer']
false
Model description This model is a copy of the model found in the book [Natural Language Processing with Transformers](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb).
5247d020961f0b23c6d545ec34d0322c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.806 | 1.0 | 250 | 0.2954 | 0.908 | 0.9062 | | 0.2361 | 2.0 | 500 | 0.2033 | 0.9275 | 0.9273 |
dfe5d97d10c0af88466587aceb28ed8b
cc-by-4.0
['spanish', 'roberta', 'bertin']
false
Readability ES Sentences for three classes Model based on the Roberta architecture finetuned on [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for readability assessment of Spanish texts.
7a34f7b0bf0eb83ad79eb1e062de4fe9
cc-by-4.0
['spanish', 'roberta', 'bertin']
false
Description and performance This version of the model was trained on a mix of datasets, using sentence-level granularity when possible. The model performs classification among three complexity levels: - Basic. - Intermediate. - Advanced. The relationship of these categories with the Common European Framework of Reference for Languages is described in [our report](https://wandb.ai/readability-es/readability-es/reports/Texts-Readability-Analysis-for-Spanish--VmlldzoxNzU2MDUx). This model achieves an F1 macro average score of 0.6951, measured on the validation set.
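A classification sketch (not part of the card) using the `transformers` pipeline; the repo id assumes the `hackathon-pln-es/readability-es-3class-sentences` naming used by the sibling models listed in the next section, and the example sentence is arbitrary:

```python
from transformers import pipeline

# Predict one of the three readability levels (basic / intermediate / advanced) for a sentence.
classifier = pipeline("text-classification", model="hackathon-pln-es/readability-es-3class-sentences")
print(classifier("La fotosíntesis convierte la luz solar en energía química."))
```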
37cc663438dba41a3e2fa81a8112cfd6
cc-by-4.0
['spanish', 'roberta', 'bertin']
false
Model variants - [`readability-es-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-sentences). Two classes, sentence-based dataset. - [`readability-es-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-paragraphs). Two classes, paragraph-based dataset. - `readability-es-3class-sentences` (this model). Three classes, sentence-based dataset. - [`readability-es-3class-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-3class-paragraphs). Three classes, paragraph-based dataset.
0da37dfcfe34b0d9fce81a976e4f0257
cc-by-4.0
['spanish', 'roberta', 'bertin']
false
Datasets
- [`readability-es-hackathon-pln-public`](https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public), composed of:
  * coh-metrix-esp corpus.
  * Various text resources scraped from websites.
- Other non-public datasets: newsela-es, simplext.
e524aea7f6f7c63f69a88f43c3cd2f75
cc-by-4.0
['spanish', 'roberta', 'bertin']
false
Biases and Limitations
- Due to the scarcity of data and the lack of a reliable gold test set, performance metrics are reported on the validation set.
- One of the datasets involved is the Spanish version of newsela, which is frequently used as a reference. However, it was created by translating previous datasets, and therefore it may contain somewhat unnatural phrases.
- Some of the datasets used cannot be publicly disseminated, making it more difficult to assess the existence of biases or mistakes.
- Language might be biased towards the Spanish dialect spoken in Spain. Other regional variants might be under-represented.
- No effort has been made to alleviate the shortcomings and biases described in the [original implementation of BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish).
2749936f3dd48cb97ba83c883c797dd5
cc-by-4.0
['spanish', 'roberta', 'bertin']
false
Authors
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
0a6122171297c108e1e3ca2deea618ed
cc
[]
false
Installation To install `image-classifier`, first [install and configure awesome-bash-cli](https://github.com/kamangir/awesome-bash-cli), then run:

```
abcli huggingface clone image-classifier
```

To see the list of `image-classifier` saved models, type in:

```
image_classifier list
```

You should see the following items:
1. [fashion-mnist](
72dc53626c552516c53391fb78c80715
cc
[]
false
fashion-mnist ![image](./saved_model/fashion-mnist/image_classifier/prediction/00000.jpg) `fashion-mnist` is an `image-classifier` trained on [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist). To retrain `fashion-mnist`, type in:

```
abcli select
fashion_mnist train
abcli upload
image_classifier list . browser=1,model=object
```

You should now see the structure of the network (left) and the [content of the model](https://github.com/kamangir/browser) (right).

| ![image](./abcli/assets/fashion_mnist_list.png) | ![image](./abcli/assets/fashion_mnist_browsed.png) |
|---|---|

You can save this model under a new name by typing in:

```
fashion_mnist save new_name_1
```

/ END
1a114dd240432c4bad5df54db296bf65
apache-2.0
['automatic-speech-recognition', 'th']
false
exp_w2v2t_th_xlsr-53_s711 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
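A minimal transcription sketch with the HuggingSound library mentioned above is shown below; the repo id and audio paths are assumptions, so replace them with the actual checkpoint location and your own 16 kHz recordings.

```python
# Minimal sketch, assuming the Hub id "jonatasgrosman/exp_w2v2t_th_xlsr-53_s711"
# and placeholder audio paths.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_xlsr-53_s711")
audio_paths = ["/path/to/file1.mp3", "/path/to/file2.wav"]  # placeholders

transcriptions = model.transcribe(audio_paths)
for result in transcriptions:
    print(result["transcription"])
```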
3cd348d9ecbe0cead2c781fcab720794
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-sst2-target-glue-stsb This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.9195
- Pearson: 0.8130
- Spearmanr: 0.8114
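The model name and the Pearson/Spearman metrics suggest a GLUE STS-B style regression head. Assuming that, and assuming the checkpoint is hosted as `muhtasham/tiny-mlm-glue-sst2-target-glue-stsb` (mirroring the base model's namespace), a minimal similarity-scoring sketch could look like this:

```python
# Minimal sketch, assuming a single-output regression head (STS-B style) and
# the Hub id "muhtasham/tiny-mlm-glue-sst2-target-glue-stsb".
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "muhtasham/tiny-mlm-glue-sst2-target-glue-stsb"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.",
                   "A person plays an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # STS-B scores range roughly 0-5
print(score)
```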
5c42a4aa483f13c8ea6fb8c72e780e2b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 2.7776 | 2.78 | 500 | 1.1238 | 0.7313 | 0.7669 |
| 0.932 | 5.56 | 1000 | 1.0628 | 0.7833 | 0.8086 |
| 0.737 | 8.33 | 1500 | 1.0050 | 0.8025 | 0.8208 |
| 0.6099 | 11.11 | 2000 | 0.8592 | 0.8165 | 0.8220 |
| 0.5164 | 13.89 | 2500 | 0.8875 | 0.8158 | 0.8181 |
| 0.4659 | 16.67 | 3000 | 0.9524 | 0.8155 | 0.8198 |
| 0.4114 | 19.44 | 3500 | 0.8872 | 0.8173 | 0.8174 |
| 0.3728 | 22.22 | 4000 | 0.9423 | 0.8163 | 0.8166 |
| 0.3396 | 25.0 | 4500 | 0.9953 | 0.8197 | 0.8202 |
| 0.321 | 27.78 | 5000 | 0.9409 | 0.8160 | 0.8160 |
| 0.3034 | 30.56 | 5500 | 0.9273 | 0.8142 | 0.8139 |
| 0.2811 | 33.33 | 6000 | 0.9195 | 0.8130 | 0.8114 |
f18d3ee416e777bc3f8dca8dc64b2749
apache-2.0
['generated_from_trainer']
false
my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set:
- Loss: 1.2335
- Accuracy: 0.985
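A minimal inference sketch follows; `my-user/my_awesome_food_model` is a placeholder repo id (the card does not say where the checkpoint is pushed), and the image path is only an example.

```python
# Minimal sketch with a placeholder repo id and image path.
from transformers import pipeline

classifier = pipeline("image-classification", model="my-user/my_awesome_food_model")
predictions = classifier("beignets.jpg")  # local path, URL, or PIL image
for pred in predictions:
    print(pred["label"], round(pred["score"], 3))
# Labels correspond to the food101 classes used during fine-tuning.
```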
16b7db5b19675530218d5e7da89f6a76
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
f2dc6e130e3ab2e35e06868e0dd5a166
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0523 | 1.0 | 50 | 1.9226 | 0.935 |
| 1.3718 | 2.0 | 100 | 1.3422 | 0.995 |
| 1.2298 | 3.0 | 150 | 1.2335 | 0.985 |
a63721fb8b103b42eb38ff300e1d6e89
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Tiny Bengali This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 bn dataset. It achieves the following results on the evaluation set:
- Loss: 0.2314
- Wer: 32.8977
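A minimal transcription sketch follows. The repo id is a placeholder for this fine-tuned checkpoint, and forcing the language/task via `generate_kwargs` requires a recent transformers release (older versions need `forced_decoder_ids` set through the processor instead).

```python
# Minimal sketch; "my-user/whisper-tiny-bn" is a placeholder repo id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="my-user/whisper-tiny-bn",
    generate_kwargs={"language": "bengali", "task": "transcribe"},
)
print(asr("sample_bn.wav")["text"])  # expects 16 kHz audio
```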
eb79e5d5617fc1907328bbcd6d04d31f
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3362 | 0.96 | 1000 | 0.3536 | 45.0860 |
| 0.2395 | 1.91 | 2000 | 0.2745 | 37.1714 |
| 0.205 | 2.87 | 3000 | 0.2485 | 34.7353 |
| 0.1795 | 3.83 | 4000 | 0.2352 | 33.2469 |
| 0.1578 | 4.78 | 5000 | 0.2314 | 32.8977 |
6361749b35d2d616faae11963bcc1c43
apache-2.0
['deep-narrow']
false
T5-Efficient-XL-NL8 (Deep-Narrow version) T5-Efficient-XL-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
d531038a8a883ad0a87b679546c3d887
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-xl-nl8** - is of model type **Xl** with the following variations:
- **nl** is **8**

It has **972.49** million parameters and thus requires *ca.* **3889.95 MB** of memory in full precision (*fp32*) or **1944.97 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
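As a quick sanity check of the memory figures quoted above, the numbers follow directly from the parameter count times bytes per parameter (weights only, ignoring optimizer states and activations); the small differences come from rounding the parameter count.

```python
# Back-of-the-envelope check: memory = parameters * bytes per parameter.
params = 972.49e6  # rounded parameter count from the card

print(f"fp32: ~{params * 4 / 1e6:.2f} MB")       # ~3889.96 MB (card: 3889.95 MB)
print(f"fp16/bf16: ~{params * 2 / 1e6:.2f} MB")  # ~1944.98 MB (card: 1944.97 MB)
```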
a3547dc24e1bb9310ebd4d3be6f7bb8a
apache-2.0
['generated_from_trainer']
false
opus-mt-ko-en-finetuned-ko-to-en-2780616 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.8435
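A minimal usage sketch; the repo id below is a placeholder for this fine-tuned checkpoint (the card does not state where it is hosted), so substitute the actual Hub id, or use the base `Helsinki-NLP/opus-mt-ko-en` model for a quick smoke test.

```python
# Minimal sketch with a placeholder repo id for the fine-tuned checkpoint.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="my-user/opus-mt-ko-en-finetuned-ko-to-en-2780616",  # placeholder
)
print(translator("안녕하세요, 만나서 반갑습니다.")[0]["translation_text"])
```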
eaa6aba8783687222d33fa1883da70b0