license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-custom-mail This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1905
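Since the evaluation metric is a mean cross-entropy loss, it can be read as a perplexity of roughly exp(3.1905) ≈ 24.3. A quick sketch of that conversion (the relation is standard; the number below is just the card's reported loss):

```python
import math

# Perplexity is the exponential of the mean cross-entropy loss.
eval_loss = 3.1905
perplexity = math.exp(eval_loss)
print(round(perplexity, 1))  # 24.3
```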
1aa348a36aeaaa24611bc2be0676f954
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 7 | 3.5915 | | No log | 2.0 | 14 | 3.4986 | | No log | 3.0 | 21 | 3.4418 | | No log | 4.0 | 28 | 3.3970 | | No log | 5.0 | 35 | 3.3569 | | No log | 6.0 | 42 | 3.3207 | | No log | 7.0 | 49 | 3.2972 | | No log | 8.0 | 56 | 3.2806 | | No log | 9.0 | 63 | 3.2620 | | No log | 10.0 | 70 | 3.2451 | | No log | 11.0 | 77 | 3.2302 | | No log | 12.0 | 84 | 3.2177 | | No log | 13.0 | 91 | 3.2083 | | No log | 14.0 | 98 | 3.2024 | | No log | 15.0 | 105 | 3.1984 | | No log | 16.0 | 112 | 3.1962 | | No log | 17.0 | 119 | 3.1938 | | No log | 18.0 | 126 | 3.1920 | | No log | 19.0 | 133 | 3.1913 | | No log | 20.0 | 140 | 3.1905 |
2a1f7e428b3229e1051a6b5350ba40be
apache-2.0
['generated_from_trainer']
false
distilbert-base-german-cased-finetuned-jl This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9427
d5d9982840b2f2d341557f7ff3a446a5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP
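These hyperparameters map fairly directly onto `transformers.TrainingArguments`; a hedged sketch of the equivalent configuration (the output directory name is a placeholder, and `fp16=True` stands in for "Native AMP"; the Adam betas/epsilon listed are the library defaults):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above; "out" is a hypothetical output dir.
args = TrainingArguments(
    output_dir="out",                 # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,                        # "Native AMP" mixed precision
)
```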
74ce7c5b05befd4943f54d9ac195b803
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | No log | 0.1 | 1000 | 1.5731 | | No log | 0.19 | 2000 | 1.4019 | | No log | 0.29 | 3000 | 1.3042 | | No log | 0.39 | 4000 | 1.2398 | | No log | 0.48 | 5000 | 1.1949 | | No log | 0.58 | 6000 | 1.1584 | | No log | 0.68 | 7000 | 1.1296 | | No log | 0.77 | 8000 | 1.1055 | | No log | 0.87 | 9000 | 1.0842 | | No log | 0.97 | 10000 | 1.0680 | | No log | 1.06 | 11000 | 1.0521 | | No log | 1.16 | 12000 | 1.0388 | | No log | 1.26 | 13000 | 1.0248 | | No log | 1.35 | 14000 | 1.0154 | | No log | 1.45 | 15000 | 1.0051 | | No log | 1.55 | 16000 | 0.9981 | | No log | 1.64 | 17000 | 0.9891 | | No log | 1.74 | 18000 | 0.9827 | | No log | 1.84 | 19000 | 0.9765 | | No log | 1.93 | 20000 | 0.9714 | | 1.2477 | 2.03 | 21000 | 0.9672 | | 1.2477 | 2.13 | 22000 | 0.9613 | | 1.2477 | 2.22 | 23000 | 0.9582 | | 1.2477 | 2.32 | 24000 | 0.9548 | | 1.2477 | 2.42 | 25000 | 0.9508 | | 1.2477 | 2.51 | 26000 | 0.9491 | | 1.2477 | 2.61 | 27000 | 0.9466 | | 1.2477 | 2.71 | 28000 | 0.9458 | | 1.2477 | 2.8 | 29000 | 0.9446 | | 1.2477 | 2.9 | 30000 | 0.9431 | | 1.2477 | 3.0 | 31000 | 0.9427 |
0956ef6035e72952670706d6df215dc1
apache-2.0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'whisper-event']
false
Fine-tuned whisper-large-v2 model for ASR in French This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2), trained on the mozilla-foundation/common_voice_11_0 fr dataset. When using the model, make sure that your speech input is sampled at 16 kHz. **This model also predicts casing and punctuation.**
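Because the model expects 16 kHz input, audio recorded at other rates has to be resampled first. A minimal linear-interpolation sketch in NumPy (illustrative only; in practice you would use `torchaudio` or `librosa`, and the function name here is made up):

```python
import numpy as np

def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Naive linear-interpolation resampling (for illustration, not production)."""
    duration = len(audio) / orig_sr
    n_out = int(round(duration * target_sr))
    t_in = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    t_out = np.linspace(0.0, duration, num=n_out, endpoint=False)
    return np.interp(t_out, t_in, audio)

one_second_44k = np.zeros(44_100)          # 1 s of silence at 44.1 kHz
resampled = resample_linear(one_second_44k, 44_100)
print(len(resampled))  # 16000
```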
09bc4bd95c668d72e7de3e842e267030
apache-2.0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'whisper-event']
false
Load model ```python from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor device = "cpu" model = AutoModelForSpeechSeq2Seq.from_pretrained("bofenghuang/whisper-large-v2-cv11-french-punct").to(device) processor = AutoProcessor.from_pretrained("bofenghuang/whisper-large-v2-cv11-french-punct", language="french", task="transcribe") ```
9c30df8c6ff29e0455fc4d7afea8e743
mit
['generated_from_trainer']
false
xlm-roberta-large-xnli-finetuned-mnli-SJP-v3 This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the swiss_judgment_prediction dataset. It achieves the following results on the evaluation set: - eval_loss: 5.4348 - eval_accuracy: 0.3352 - eval_runtime: 588.81 - eval_samples_per_second: 8.492 - eval_steps_per_second: 4.246 - epoch: 14.0 - step: 70
ec77f0e9df4c0e04c4d179ce830911b4
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
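The evaluation throughput figures reported for this run are internally consistent: samples per second divided by steps per second recovers the eval batch size of 2. A quick check:

```python
# Figures reported on the evaluation set, and the configured eval batch size.
eval_samples_per_second = 8.492
eval_steps_per_second = 4.246
eval_batch_size = 2

# Each eval step processes one batch, so the ratio equals the batch size.
print(round(eval_samples_per_second / eval_steps_per_second))  # 2
```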
e2638c37b2130f9cfbfadff19dc39c91
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-b0.02 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8126 - Bleu: 7.632 - Gen Len: 45.006
fab475c915875a3fbeeeb95214389f80
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ga-IE', 'robust-speech-event', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset. It achieves the following results on the evaluation set: - Loss: 0.8445 - Wer: 0.5585
48ece4e03933cdae460813436b0af0ac
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ga-IE', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60.0 - mixed_precision_training: Native AMP
bdd52b41e8aecad20e28fa1f5d36d8f6
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ga-IE', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.7135 | 31.24 | 500 | 0.9609 | 0.6926 |
395ed6a536149b10947b9d06b31ddf1e
apache-2.0
[]
false
Fine-tuned T5 small model for use as a frame semantic parser in the [Frame Semantic Transformer](https://github.com/chanind/frame-semantic-transformer) project. This model is trained on data from [FrameNet](https://framenet2.icsi.berkeley.edu/).
4e7fd7e4df052569f8d929e09fd0c3ce
apache-2.0
[]
false
Tasks This model is trained to perform 3 tasks related to semantic frame parsing: 1. Identify frame trigger locations in the text 2. Classify the frame given a trigger location 3. Extract frame elements in the sentence
ffb93ba8bb808fd00d747644db816178
apache-2.0
[]
false
Performance This model is trained and evaluated using the same train/dev/test splits from FrameNet 1.7 annotated corpora as used by [Open Sesame](https://github.com/swabhs/open-sesame). | Task | F1 Score (Dev) | F1 Score (Test) | | ---------------------- | -------------- | --------------- | | Trigger identification | 0.74 | 0.70 | | Frame Classification | 0.83 | 0.81 | | Argument Extraction | 0.68 | 0.70 |
4a2cb0bae6679a737f18e7c91757d409
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_mrpc_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.1267 - Accuracy: 0.9926 - F1: 0.9947 - Combined Score: 0.9936
dcf9044fec7c2192429e340ebfa3f283
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.3017 | 1.0 | 1959 | 0.2241 | 0.9608 | 0.9713 | 0.9661 | | 0.233 | 2.0 | 3918 | 0.2357 | 0.9828 | 0.9876 | 0.9852 | | 0.2241 | 3.0 | 5877 | 0.1908 | 0.9706 | 0.9786 | 0.9746 | | 0.2189 | 4.0 | 7836 | 0.1863 | 0.9755 | 0.9824 | 0.9789 | | 0.2149 | 5.0 | 9795 | 0.1868 | 0.9804 | 0.9858 | 0.9831 | | 0.211 | 6.0 | 11754 | 0.1735 | 0.9804 | 0.9859 | 0.9831 | | 0.2073 | 7.0 | 13713 | 0.1875 | 0.9828 | 0.9876 | 0.9852 | | 0.204 | 8.0 | 15672 | 0.1690 | 0.9853 | 0.9894 | 0.9873 | | 0.2014 | 9.0 | 17631 | 0.1597 | 0.9853 | 0.9893 | 0.9873 | | 0.1992 | 10.0 | 19590 | 0.1604 | 0.9877 | 0.9911 | 0.9894 | | 0.1975 | 11.0 | 21549 | 0.1563 | 0.9853 | 0.9894 | 0.9873 | | 0.1959 | 12.0 | 23508 | 0.1518 | 0.9853 | 0.9894 | 0.9873 | | 0.1948 | 13.0 | 25467 | 0.1429 | 0.9902 | 0.9929 | 0.9915 | | 0.1937 | 14.0 | 27426 | 0.1484 | 0.9853 | 0.9894 | 0.9873 | | 0.1928 | 15.0 | 29385 | 0.1527 | 0.9804 | 0.9856 | 0.9830 | | 0.1919 | 16.0 | 31344 | 0.1433 | 0.9926 | 0.9947 | 0.9936 | | 0.1913 | 17.0 | 33303 | 0.1445 | 0.9902 | 0.9929 | 0.9915 | | 0.1905 | 18.0 | 35262 | 0.1407 | 0.9926 | 0.9947 | 0.9936 | | 0.1899 | 19.0 | 37221 | 0.1402 | 0.9926 | 0.9947 | 0.9936 | | 0.1892 | 20.0 | 39180 | 0.1387 | 0.9926 | 0.9947 | 0.9936 | | 0.1887 | 21.0 | 41139 | 0.1384 | 0.9926 | 0.9947 | 0.9936 | | 0.1882 | 22.0 | 43098 | 0.1430 | 0.9951 | 0.9964 | 0.9958 | | 0.1877 | 23.0 | 45057 | 0.1384 | 0.9951 | 0.9964 | 0.9958 | | 0.1871 | 24.0 | 47016 | 0.1398 | 0.9951 | 0.9964 | 0.9958 | | 0.1867 | 25.0 | 48975 | 0.1336 | 0.9926 | 0.9947 | 0.9936 | | 0.1863 | 26.0 | 50934 | 0.1368 | 0.9951 | 0.9964 | 0.9958 | | 0.1859 | 27.0 | 52893 | 0.1337 | 0.9951 | 0.9964 | 0.9958 | | 0.1855 | 28.0 | 54852 | 0.1352 | 0.9926 | 0.9947 | 0.9936 | | 0.1851 | 29.0 | 56811 | 0.1314 | 0.9951 | 0.9964 | 0.9958 | | 0.1847 | 30.0 | 58770 | 0.1333 | 0.9951 | 0.9964 | 0.9958 | | 0.1844 | 31.0 | 60729 | 0.1368 | 0.9951 | 0.9964 | 0.9958 | | 0.184 | 32.0 | 62688 | 0.1310 | 0.9951 | 0.9964 | 0.9958 | | 0.1837 | 33.0 | 64647 | 0.1321 | 0.9951 | 0.9964 | 0.9958 | | 0.1834 | 34.0 | 66606 | 0.1302 | 0.9926 | 0.9947 | 0.9936 | | 0.183 | 35.0 | 68565 | 0.1320 | 0.9951 | 0.9964 | 0.9958 | | 0.1827 | 36.0 | 70524 | 0.1303 | 0.9951 | 0.9964 | 0.9958 | | 0.1825 | 37.0 | 72483 | 0.1273 | 0.9951 | 0.9964 | 0.9958 | | 0.1822 | 38.0 | 74442 | 0.1293 | 0.9951 | 0.9964 | 0.9958 | | 0.1819 | 39.0 | 76401 | 0.1296 | 0.9951 | 0.9964 | 0.9958 | | 0.1817 | 40.0 | 78360 | 0.1305 | 0.9926 | 0.9947 | 0.9936 | | 0.1814 | 41.0 | 80319 | 0.1267 | 0.9926 | 0.9947 | 0.9936 | | 0.1812 | 42.0 | 82278 | 0.1267 | 0.9951 | 0.9964 | 0.9958 | | 0.1809 | 43.0 | 84237 | 0.1278 | 0.9902 | 0.9929 | 0.9915 | | 0.1807 | 44.0 | 86196 | 0.1293 | 0.9951 | 0.9964 | 0.9958 | | 0.1805 | 45.0 | 88155 | 0.1269 | 0.9951 | 0.9964 | 0.9958 | | 0.1803 | 46.0 | 90114 | 0.1284 | 0.9951 | 0.9964 | 0.9958 |
391b39c1a3a4882dc3cdc2f5254201e1
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE-NL2 (Deep-Narrow version) T5-Efficient-BASE-NL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block.
0b5c7f3281ae19b678555ac0c42b8fd4
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-base-nl2** - is of model type **Base** with the following variations: - **nl** is **2** It has **57.72** million parameters and thus requires *ca.* **230.88 MB** of memory in full precision (*fp32*) or **115.44 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
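The memory figures follow directly from 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16, using 1 MB = 10^6 bytes as the card does:

```python
params = 57.72e6  # 57.72 million parameters

fp32_mb = params * 4 / 1e6  # 4 bytes per float32 parameter
fp16_mb = params * 2 / 1e6  # 2 bytes per float16/bfloat16 parameter
print(round(fp32_mb, 2), round(fp16_mb, 2))  # 230.88 115.44
```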
5b6c117cf15d2285d1e672bea560b36b
apache-2.0
[]
false
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**. The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions). **Note**: The model was fine-tuned on 90% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 20k steps and validated on the held-out 10% of the train split. Other community Checkpoints: [here](https://huggingface.co/models?search=ssm) Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910) Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
1c76578b2efd447a2261ff0981b36743
apache-2.0
[]
false
Results on Natural Questions - Test Set |Id | link | Exact Match | |---|---|---| |T5-large|https://huggingface.co/google/t5-large-ssm-nqo|29.0| |T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nqo|35.2| |T5-3b|https://huggingface.co/google/t5-3b-ssm-nqo|31.7| |**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-nqo**|**34.8**|
33d0d870d791d182fe7c7580246e848f
apache-2.0
[]
false
Usage The model can be used as follows for **closed book question answering**: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nqo") t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nqo") input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids gen_output = t5_qa_model.generate(input_ids)[0] print(t5_tok.decode(gen_output, skip_special_tokens=True)) ```
2c0f055c35b78e06d3b52307b8128bb4
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-EL4 (Deep-Narrow version) T5-Efficient-SMALL-EL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block.
ab9399765eb4c128e472cb0741f688d2
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-small-el4** - is of model type **Small** with the following variations: - **el** is **4** It has **54.23** million parameters and thus requires *ca.* **216.9 MB** of memory in full precision (*fp32*) or **108.45 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
628bc0af2690580ad0c2de31b5b7cb8b
creativeml-openrail-m
['text-to-image']
false
DuskfallAi Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk WARNING: This is trained largely on a small data set of our own art, with a focus on the fact that our art, and any Stable Diffusion/Midjourney outputs we included, are related to our Dissociative Identity Disorder. May actually retrain a larger data set later on. Trained using the MultiModel Dreambooth App, sitting on a Friday afternoon doing absolute squat. Please DO NOT re-upload the sample pictures that it was trained on, except in the instance you are inspired to use img2img, in which case we dutifully ask you to spam the community section with your outputs. DO NOT RESELL THIS MODEL, AS IT DOES HAVE A TON OF MY ART IN IT. You may: - Merge, use at will - SELL your generations - it's a STYLE after all! - Do credit when reuploading or merging if possible. - DO USE in any merged OR home-based model - cause that's what it's for! 
More information & output samples to all our models: [Civit AI -Duskfallcrew](https://civitai.com/user/duskfallcrew) lisdusk1 (use that on your prompt) lisdusk1 (use that on your prompt) ![lisdusk1 0](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk1_%281%29.jpg)![lisdusk1 1](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk1_%282%29.jpg)![lisdusk1 2](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk1_%283%29.jpg)![lisdusk1 3](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk1_%284%29.jpg)![lisdusk1 4](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk1_%285%29.jpg)![lisdusk1 5](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk1_%286%29.jpg) lisdusk2 (use that on your prompt) lisdusk2 (use that on your prompt) ![lisdusk2 108](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk2_%281%29.jpg)![lisdusk2 109](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk2_%282%29.jpg)![lisdusk2 110](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk2_%283%29.jpg)![lisdusk2 111](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk2_%284%29.jpg)![lisdusk2 112](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk2_%285%29.jpg)![lisdusk2 113](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk2_%286%29.jpg)![lisdusk2 114](https://huggingface.co/Duskfallcrew/duskfallai/resolve/main/concept_images/lisdusk2_%287%29.jpg)
a48e5a835d85d37b59745c3149e47bae
mit
[]
false
model by no3 This is the **waifu diffusion** model fine-tuned on Pistachio from [vibrant venture](https://store.steampowered.com/app/1264520), taught to **waifu diffusion** with Dreambooth. It can be used by modifying the `instance_prompt`: **sks ps** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
a733d10fba15b7fa1e3faab7aa942510
mit
[]
false
Note If the output isn't that good using the instance prompt, you can use a generic prompt like `a woman` or `a girl`; if that doesn't give you a good result, you can add `, green hair` before `a woman` or `a girl`. If you have issues or questions, feel free to visit the Community Tab and start a discussion about it. Here are the images used for training this concept: ![image 1](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/1.png) ![image 2](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/4.jpg) ![image 3](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/5.jpg) ![image 4](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/6.jpg) ![image 5](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/2.png) ![image 6](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/3.jpg) [and this](https://huggingface.co/no3/pistachio-wd-1.3-beta1/resolve/main/concept_images/7.jpg)
0de8600526f9af21710e7158d9142925
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - eval_loss: 1.1563 - eval_runtime: 141.535 - eval_samples_per_second: 76.193 - eval_steps_per_second: 4.762 - epoch: 1.0 - step: 5533
fa6ea8ea903ae7491a200c66aba9a2b7
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2251 - Accuracy: 0.9265 - F1: 0.9265
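This is a six-way emotion classifier, so a prediction is the argmax of a softmax over the class logits. A sketch with hypothetical logits (the values below are made up for illustration; the label order is the standard `emotion` dataset order):

```python
import math

labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]
logits = [1.2, 4.7, 0.3, -0.8, -1.1, -2.0]  # hypothetical model output

# Softmax turns logits into probabilities; argmax picks the predicted class.
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]
prediction = labels[probs.index(max(probs))]
print(prediction)  # joy
```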
db3f33701caaf76fa5c970821efd1553
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8432 | 1.0 | 250 | 0.3353 | 0.8975 | 0.8939 | | 0.2582 | 2.0 | 500 | 0.2251 | 0.9265 | 0.9265 |
12a8f6a5e0f869cc10d0b3b5f6272cff
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5079 - Wer: 0.3365
cca2085dfaba46e51357940f683827f3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.4933 | 1.0 | 500 | 1.7711 | 0.9978 | | 0.8658 | 2.01 | 1000 | 0.6262 | 0.5295 | | 0.4405 | 3.01 | 1500 | 0.4841 | 0.4845 | | 0.3062 | 4.02 | 2000 | 0.4897 | 0.4215 | | 0.233 | 5.02 | 2500 | 0.4326 | 0.4101 | | 0.1896 | 6.02 | 3000 | 0.4924 | 0.4078 | | 0.1589 | 7.03 | 3500 | 0.4430 | 0.3896 | | 0.1391 | 8.03 | 4000 | 0.4334 | 0.3889 | | 0.1216 | 9.04 | 4500 | 0.4691 | 0.3828 | | 0.1063 | 10.04 | 5000 | 0.4726 | 0.3705 | | 0.0992 | 11.04 | 5500 | 0.4333 | 0.3690 | | 0.0872 | 12.05 | 6000 | 0.4986 | 0.3771 | | 0.0829 | 13.05 | 6500 | 0.4903 | 0.3685 | | 0.0713 | 14.06 | 7000 | 0.5293 | 0.3655 | | 0.068 | 15.06 | 7500 | 0.5039 | 0.3612 | | 0.0621 | 16.06 | 8000 | 0.5314 | 0.3665 | | 0.0571 | 17.07 | 8500 | 0.5038 | 0.3572 | | 0.0585 | 18.07 | 9000 | 0.4718 | 0.3550 | | 0.0487 | 19.08 | 9500 | 0.5482 | 0.3626 | | 0.0459 | 20.08 | 10000 | 0.5239 | 0.3545 | | 0.0419 | 21.08 | 10500 | 0.5096 | 0.3473 | | 0.0362 | 22.09 | 11000 | 0.5222 | 0.3500 | | 0.0331 | 23.09 | 11500 | 0.5062 | 0.3489 | | 0.0352 | 24.1 | 12000 | 0.4913 | 0.3459 | | 0.0315 | 25.1 | 12500 | 0.4701 | 0.3412 | | 0.028 | 26.1 | 13000 | 0.5178 | 0.3402 | | 0.0255 | 27.11 | 13500 | 0.5168 | 0.3405 | | 0.0228 | 28.11 | 14000 | 0.5154 | 0.3368 | | 0.0232 | 29.12 | 14500 | 0.5079 | 0.3365 |
ecd156eb58c6f44feebe8cf739290a04
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/paraphrase-xlm-r-multilingual-v1 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
3d576031fc96a52c3a3344f724860b2a
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-xlm-r-multilingual-v1') embeddings = model.encode(sentences) print(embeddings) ```
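The embeddings returned by `model.encode` are typically compared with cosine similarity for clustering or semantic search. A toy sketch with 3-dimensional vectors standing in for the model's 768-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = (a · b) / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors; real embeddings from this model have 768 dimensions.
emb_a = [1.0, 2.0, 2.0]
emb_b = [2.0, 4.0, 4.0]   # same direction as emb_a
emb_c = [-2.0, 1.0, 0.0]  # orthogonal to emb_a

print(round(cosine_similarity(emb_a, emb_b), 3))  # 1.0
print(round(cosine_similarity(emb_a, emb_c), 3))  # 0.0
```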
45379a7034a02eb4e96ecb49d3790d4f
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-xlm-r-multilingual-v1') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-xlm-r-multilingual-v1') ```
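Loaded this way, the raw `AutoModel` outputs are token-level; sentence-transformers models then apply mean pooling over tokens, weighted by the attention mask, to get one sentence vector. A sketch with dummy tensors (shapes illustrative):

```python
import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings, counting only non-padding positions.
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# Dummy batch: 1 sentence, 3 token positions (last one padding), hidden size 4.
emb = torch.tensor([[[1.0, 2.0, 3.0, 4.0],
                     [3.0, 4.0, 5.0, 6.0],
                     [9.0, 9.0, 9.0, 9.0]]])
mask = torch.tensor([[1, 1, 0]])
print(mean_pooling(emb, mask))  # tensor([[2., 3., 4., 5.]])
```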
906d406ba654a3182847c5a872d19650
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-xlm-r-multilingual-v1)
117c2541476321135d3dde7ade6676dc
mit
['spacy', 'token-classification']
false
| Feature | Description | | --- | --- | | **Name** | `en_core_med7_trf` | | **Version** | `3.4.2.1` | | **spaCy** | `>=3.4.2,<3.5.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | `MIT` | | **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
12e238ab08eebf0eb1c0db2300203d31
mit
['spacy', 'token-classification']
false
Label Scheme <details> <summary>View label scheme (7 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` | </details>
affe870be0feb70d9058b5db5cf1b5b7
mit
['spacy', 'token-classification']
false
BibTeX entry and citation info ```bibtex @article{kormilitzin2021med7, title={Med7: A transferable clinical natural language processing model for electronic health records}, author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo}, journal={Artificial Intelligence in Medicine}, volume={118}, pages={102086}, year={2021}, publisher={Elsevier} } ```
674acc822e0d04528b544a8e3c855349
apache-2.0
['generated_from_trainer']
false
finetuning-tweeteval-hate-speech This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8397 - Accuracy: 0.0 - F1: 0.0
08528a2ad7780a600cce111b68385cbb
apache-2.0
['Quality Estimation', 'monotransquest', 'hter']
false
Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_cs-pharmaceutical", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ```
a7981e4c04d2615f75fdbac4400296cf
mit
['generated_from_trainer']
false
camembert-base-finetuned-Train_RAW20-dd This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2380 - Precision: 0.8661 - Recall: 0.8900 - F1: 0.8779 - Accuracy: 0.9209
003d7ee1501944a3391116ec6b8ee0bc
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.23 | 1.0 | 14269 | 0.2282 | 0.8446 | 0.8714 | 0.8578 | 0.9088 | | 0.1787 | 2.0 | 28538 | 0.2380 | 0.8661 | 0.8900 | 0.8779 | 0.9209 |
baefef5b0a8cddb8321b899015274bda
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_80k']
false
MultiBERTs, Intermediate Checkpoint - Seed 2, Step 80k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model seed 2, captured at step 80k.
35024b6c8c03116bc857423d0297d766
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_80k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_80k') model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_80k') model = BertModel.from_pretrained("google/multiberts-seed_2-step_80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
48114b8a4116c50abece8f5a5979f25a
mit
['bert', 'cloze', 'distractor', 'generation']
false
Model description This model is a Candidate Set Generator in **"CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model", Findings of EMNLP 2022**. Its inputs are a stem and an answer, and its output is a candidate set of distractors. It is fine-tuned on the [**CLOTH**](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset, based on the [**bert-base-uncased**](https://huggingface.co/bert-base-uncased) model. For more details, see our **paper** or [**GitHub**](https://github.com/AndyChiangSH/CDGP).
3b2afe63122d28b68bcea72ec2209bf4
mit
['bert', 'cloze', 'distractor', 'generation']
false
How to use? 1. Download the model with Hugging Face Transformers. ```python from transformers import BertTokenizer, BertForMaskedLM, pipeline tokenizer = BertTokenizer.from_pretrained("AndyChiang/cdgp-csg-bert-cloth") csg_model = BertForMaskedLM.from_pretrained("AndyChiang/cdgp-csg-bert-cloth") ``` 2. Create an unmasker. ```python unmasker = pipeline("fill-mask", tokenizer=tokenizer, model=csg_model, top_k=10) ``` 3. Use the unmasker to generate the candidate set of distractors. ```python sent = "I feel [MASK] now. [SEP] happy" cs = unmasker(sent) print(cs) ```
fdfb7648cc877f018762668a7f3fb1fb
mit
['bert', 'cloze', 'distractor', 'generation']
false
Training hyperparameters The following hyperparameters were used during training: - Pre-train language model: [bert-base-uncased](https://huggingface.co/bert-base-uncased) - Optimizer: adam - Learning rate: 0.0001 - Max length of input: 64 - Batch size: 64 - Epoch: 1 - Device: NVIDIA® Tesla T4 in Google Colab
7f9b07d099af76d7d09fc7f572d436bf
mit
['bert', 'cloze', 'distractor', 'generation']
false
Testing The evaluations of this model as a Candidate Set Generator in CDGP are as follows: | P@1 | F1@3 | F1@10 | MRR | NDCG@10 | | ----- | ----- | ----- | ----- | ------- | | 18.50 | 13.80 | 15.37 | 29.96 | 37.82 |
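The ranking metrics reported above (P@1, MRR) can be sketched in a few lines of plain Python. This is an illustrative implementation over a toy example, not the evaluation script used in the paper:

```python
def precision_at_1(ranked, gold):
    """P@1: fraction of examples whose top-ranked candidate is a gold distractor."""
    hits = sum(1 for cands, g in zip(ranked, gold) if cands[0] in g)
    return hits / len(ranked)

def mean_reciprocal_rank(ranked, gold):
    """MRR: average of 1/rank of the first gold distractor in each candidate list."""
    total = 0.0
    for cands, g in zip(ranked, gold):
        for rank, cand in enumerate(cands, start=1):
            if cand in g:
                total += 1.0 / rank
                break
    return total / len(ranked)

# Toy data: two cloze items with ranked candidate sets and gold distractor sets.
ranked = [["sad", "angry", "tired"], ["glad", "upset", "bored"]]
gold = [{"sad", "scared"}, {"upset"}]
print(precision_at_1(ranked, gold))      # first item hits at rank 1, second misses
print(mean_reciprocal_rank(ranked, gold))
```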
edea1126afdd48baf7bf2a0a0a059f0d
mit
['bert', 'cloze', 'distractor', 'generation']
false
Candidate Set Generator | Models | CLOTH | DGen | | ----------- | ----------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- | | **BERT** | [*cdgp-csg-bert-cloth*](https://huggingface.co/AndyChiang/cdgp-csg-bert-cloth) | [cdgp-csg-bert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bert-dgen) | | **SciBERT** | [cdgp-csg-scibert-cloth](https://huggingface.co/AndyChiang/cdgp-csg-scibert-cloth) | [cdgp-csg-scibert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-scibert-dgen) | | **RoBERTa** | [cdgp-csg-roberta-cloth](https://huggingface.co/AndyChiang/cdgp-csg-roberta-cloth) | [cdgp-csg-roberta-dgen](https://huggingface.co/AndyChiang/cdgp-csg-roberta-dgen) | | **BART** | [cdgp-csg-bart-cloth](https://huggingface.co/AndyChiang/cdgp-csg-bart-cloth) | [cdgp-csg-bart-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bart-dgen) |
cdeccaf6c47a648b0f981e1424d09607
creativeml-openrail-m
['text-to-image', 'stable-Diffusion', 'stable-diffusion-diffusers', 'diffusers', 'safetensors']
false
<p align="center"> <img src="https://s1.fileditch.ch/iMqyjOnUtxntHolBiNgT.png" width=35% height=35%> <p> <p align="center"> AniReal - A latent diffusion model fine-tuned to output High Quality Photorealistic anime illustrations! <img src="https://m1.afileditch.ch/uJoodjDNVWxDqhhQHeRH.png"> <p> ________
62c1f0e8a028221866a007ec5407f49b
creativeml-openrail-m
['text-to-image', 'stable-Diffusion', 'stable-diffusion-diffusers', 'diffusers', 'safetensors']
false
【 AniReal 】 Welcome to AniReal! A latent diffusion model trained and fine-tuned on **Photorealistic High Quality** anime illustrations using the **Danbooru** tagging dataset as well as **BLIP**. I made it so that it understands some natural text descriptions alongside Danbooru tags. It may not work as well that way, but give it a shot! The model itself is made to output generally anything with an anime art style: if you can think of it, you can prompt it! ________
9a9a8ccd88d8bd3047fbfa4e73184c6d
creativeml-openrail-m
['text-to-image', 'stable-Diffusion', 'stable-diffusion-diffusers', 'diffusers', 'safetensors']
false
This project would be impossible without - [Haisenberg](https://huggingface.co/haisenberguwu) - [Thiros](https://huggingface.co/thiros) - [Closertodeath](https://huggingface.co/closertodeath) <img src="https://s1.fileditch.ch/FjLFnEcKHAFpEEEAawMP.png"> Many thanks, Hosioka.
234da254f8807c3d417c5c199f3e8e62
creativeml-openrail-m
['text-to-image', 'stable-Diffusion', 'stable-diffusion-diffusers', 'diffusers', 'safetensors']
false
0bb03505150d9b4b39975a9da8589b40190e7078 ________
ba052021c71edb8879537478d7897bb6
creativeml-openrail-m
['text-to-image', 'stable-Diffusion', 'stable-diffusion-diffusers', 'diffusers', 'safetensors']
false
License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the **Model** to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the **Model** commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
d0ba5e5845639eeaa6ad7dc9b5776817
mit
['generated_from_keras_callback']
false
turkishReviews-ds-mini This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 9.1632 - Validation Loss: 9.2525 - Epoch: 2
607f5462ce6160c7c083086f0f44878c
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
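The serialized `WarmUp` + `PolynomialDecay` schedule above can be read as a plain function of the step count: linear warmup to the peak rate, then polynomial (here linear, since power=1.0) decay toward zero. The sketch below is illustrative only; it uses a made-up positive `decay_steps`, since the logged value of -896 is a generated artifact that Keras resolves against the total step count:

```python
def lr_at_step(step, peak_lr=5e-5, warmup_steps=1000, decay_steps=10000,
               end_lr=0.0, power=1.0):
    """Warmup followed by polynomial decay, mirroring the config above.

    decay_steps here is an illustrative placeholder, not the logged value.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps                # linear warmup
    progress = min((step - warmup_steps) / decay_steps, 1.0)
    return (peak_lr - end_lr) * (1.0 - progress) ** power + end_lr

print(lr_at_step(500))    # halfway through warmup
print(lr_at_step(1000))   # peak learning rate
print(lr_at_step(6000))   # halfway through the decay phase
```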
eedacee0263d6a953a3f62c92b3fcc74
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 10.2835 | 9.9707 | 0 | | 9.6408 | 9.6241 | 1 | | 9.1632 | 9.2525 | 2 |
cf50719f3280f565427a3c0100f77a5d
apache-2.0
['vision', 'deep-stereo', 'depth-estimation', 'Tensorflow2', 'Keras']
false
MADNet Keras MADNet is a deep stereo depth estimation model. Its key defining features are: 1. It has a lightweight architecture, which means it has low latency. 2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data. 3. It's a stereo depth model, which means it's capable of high accuracy. The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features: 1. Good optimization. 2. High-level Keras methods (.fit, .predict and .evaluate). 3. Little boilerplate code. 4. Decent support from external packages (like Weights and Biases). 5. Callbacks. The weights provided were trained on either the 2012 / 2015 KITTI stereo datasets or the FlyingThings-3D dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in Tensorflow 2 format. The TF1 weights help speed up fine-tuning, but it's recommended to use either synthetic.h5 (trained on FlyingThings-3D) or kitti.h5 (trained on the 2012 and 2015 KITTI stereo datasets). **Abstract**: Deep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g. real vs synthetic images, indoor vs outdoor, etc.). As it is unlikely to be able to gather enough samples to achieve effective training/tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance.
Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever realtime self-adaptive deep stereo system.
b038d4a2ca241194b53898de6734da28
apache-2.0
['vision', 'deep-stereo', 'depth-estimation', 'Tensorflow2', 'Keras']
false
Usage Instructions See the accompanying codes readme for details on how to perform training and inferencing with the model: [madnet-deep-stereo-with-keras](https://github.com/ChristianOrr/madnet-deep-stereo-with-keras).
23c04ae883f05ad74a5b50dec8833afd
apache-2.0
['vision', 'deep-stereo', 'depth-estimation', 'Tensorflow2', 'Keras']
false
TF1 Kitti and TF1 Synthetic Training details for the TF1 weights are available in the supplementary material (at the end) of this paper: [Real-time self-adaptive deep stereo](https://arxiv.org/abs/1810.05424)
2601127bff7c5f482c92846c163b9cff
apache-2.0
['vision', 'deep-stereo', 'depth-estimation', 'Tensorflow2', 'Keras']
false
Synthetic The synthetic model was finetuned using the tf1 synthetic weights. It was trained on the flyingthings-3d dataset with the following parameters: - Steps: 1.5 million - Learning Rate: 0.0001 - Decay Rate: 0.999 - Minimum Learning Rate Cap: 0.000001 - Batch Size: 1 - Optimizer: Adam - Image Height: 480 - Image Width: 640
6350416db0032debe26e6af0f9d35d44
apache-2.0
['vision', 'deep-stereo', 'depth-estimation', 'Tensorflow2', 'Keras']
false
Kitti The kitti model was finetuned using the synthetic weights. Tensorboard events file is available in the logs directory. It was trained on the 2012 and 2015 kitti stereo dataset with the following parameters: - Steps: 0.5 million - Learning Rate: 0.0001 - Decay Rate: 0.999 - Minimum Learning Rate Cap: 0.0000001 - Batch Size: 1 - Optimizer: Adam - Image Height: 480 - Image Width: 640
0b77d18040c86f2eb1c34a3d4ecc1266
apache-2.0
['vision', 'deep-stereo', 'depth-estimation', 'Tensorflow2', 'Keras']
false
BibTeX entry and citation info ```bibtex @InProceedings{Tonioni_2019_CVPR, author = {Tonioni, Alessio and Tosi, Fabio and Poggi, Matteo and Mattoccia, Stefano and Di Stefano, Luigi}, title = {Real-time self-adaptive deep stereo}, booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2019} } ``` ```bibtex @article{Poggi2021continual, author={Poggi, Matteo and Tonioni, Alessio and Tosi, Fabio and Mattoccia, Stefano and Di Stefano, Luigi}, title={Continual Adaptation for Deep Stereo}, journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, year={2021} } ``` ```bibtex @InProceedings{MIFDB16, author = "N. Mayer and E. Ilg and P. Hausser and P. Fischer and D. Cremers and A. Dosovitskiy and T. Brox", title = "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation", booktitle = "IEEE International Conference on Computer Vision and Pattern Recognition (CVPR)", year = "2016", note = "arXiv:1512.02134", url = "http://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16" } ``` ```bibtex @INPROCEEDINGS{Geiger2012CVPR, author = {Andreas Geiger and Philip Lenz and Raquel Urtasun}, title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}, booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2012} } ``` ```bibtex @INPROCEEDINGS{Menze2015CVPR, author = {Moritz Menze and Andreas Geiger}, title = {Object Scene Flow for Autonomous Vehicles}, booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2015} } ```
3d10f3114a26f676619dcc9922b2e9d8
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 40 | 4.5807 | | No log | 2.0 | 80 | 4.4023 | | No log | 3.0 | 120 | 4.3666 |
1f8ae353d16d4b624a123621a6b910a4
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
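Since the model expects 16 kHz input, a quick sanity check on a WAV file can be sketched with the standard library. This is illustrative only: `assert_16khz` is a hypothetical helper, not part of HuggingSound or Transformers.

```python
import os
import tempfile
import wave

def assert_16khz(path):
    """Raise if a WAV file is not sampled at 16 kHz, the rate this model expects."""
    with wave.open(path, "rb") as f:
        rate = f.getframerate()
    if rate != 16000:
        raise ValueError(f"expected 16000 Hz input, got {rate} Hz; resample first")
    return rate

# Demo: write a short silent 16 kHz mono WAV and validate it.
fd, path = tempfile.mkstemp(suffix=".wav")
os.close(fd)
with wave.open(path, "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                  # 16-bit samples
    f.setframerate(16000)
    f.writeframes(b"\x00\x00" * 160)   # 10 ms of silence
print(assert_16khz(path))
os.remove(path)
```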
f8a8182153954713f1a87ee8b51ef1e5
apache-2.0
['generated_from_keras_callback']
false
nlp-esg-scoring/bert-base-finetuned-cleaned-esg-plus This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7242 - Validation Loss: 2.5107 - Epoch: 9
7491a11c6608cf506d74a06f6164be92
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -146, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
3ed3a1d4c8cf5a6028ee8e086136602f
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.7185 | 2.5414 | 0 | | 2.7167 | 2.5223 | 1 | | 2.7161 | 2.5627 | 2 | | 2.7189 | 2.5305 | 3 | | 2.7248 | 2.5103 | 4 | | 2.7173 | 2.5095 | 5 | | 2.7272 | 2.5135 | 6 | | 2.7215 | 2.5447 | 7 | | 2.7247 | 2.5632 | 8 | | 2.7242 | 2.5107 | 9 |
ac5cbd89b3fe3a2e8bcec47303b8f7f2
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-work-2-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3586 - Accuracy: 0.3689
47cd3512bd0bb542386ebc96df30d1d8
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3211 - Accuracy: 0.8633 - F1: 0.8638
d3c592e07348f889720d20b2705c5be1
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6551 - Accuracy: 0.6633 - F1: 0.7248
2059e7fe253b11f798695c06e6cf291b
cc0-1.0
['kaggle', 'rembert', 'pytorch', 'question-answering']
false
<div align = "center"> <img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true"> </div> This dataset contains the [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights according to my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score:- | Huggingface Hub Link | Public LB Score | | :---: | :---: | | [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 | | [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
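The maxseq / docstride numbers in the checkpoint names control how long contexts are chunked for extractive QA. A hedged sketch of that sliding-window preprocessing follows, assuming the Hugging Face convention where the stride is the overlap between consecutive windows (real pipelines also reserve room for the question and special tokens):

```python
def sliding_windows(tokens, max_seq_len=400, doc_stride=128):
    """Chunk a long context into overlapping windows of at most max_seq_len tokens.

    doc_stride tokens are shared between consecutive windows, so each window
    advances by max_seq_len - doc_stride tokens.
    """
    step = max_seq_len - doc_stride
    windows = []
    start = 0
    while True:
        windows.append(tokens[start:start + max_seq_len])
        if start + max_seq_len >= len(tokens):
            break
        start += step
    return windows

context = list(range(1000))  # stand-in for a 1000-token context
chunks = sliding_windows(context)
print([len(c) for c in chunks])
```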
45b8913ff5870c38c59dbf44a073b612
apache-2.0
['bart', 'biobart', 'biomedical']
false
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf) V2 adopts a new biomedical vocab. ``` @misc{BioBART, title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model}, author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu}, year={2022}, eprint={2204.03905}, archivePrefix={arXiv} } ```
39e1a171bc516b5ff0b5eb102db14239
mit
['generated_from_trainer']
false
pubmedbert-fulltext-cord19 This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the pritamdeka/cord-19-fulltext dataset. It achieves the following results on the evaluation set: - Loss: 1.2667 - Accuracy: 0.7175
fd7dfbe8d2b181ba574c7913aa775754
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10000 - num_epochs: 3.0 - mixed_precision_training: Native AMP
692cd361a4c64d425f086f14a5d07afd
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.7985 | 0.27 | 5000 | 1.2710 | 0.7176 | | 1.7542 | 0.53 | 10000 | 1.3359 | 0.7070 | | 1.7462 | 0.8 | 15000 | 1.3489 | 0.7034 | | 1.8371 | 1.07 | 20000 | 1.4361 | 0.6891 | | 1.7102 | 1.33 | 25000 | 1.3502 | 0.7039 | | 1.6596 | 1.6 | 30000 | 1.3341 | 0.7065 | | 1.6265 | 1.87 | 35000 | 1.3228 | 0.7087 | | 1.605 | 2.13 | 40000 | 1.3079 | 0.7099 | | 1.5731 | 2.4 | 45000 | 1.2986 | 0.7121 | | 1.5602 | 2.67 | 50000 | 1.2929 | 0.7136 | | 1.5447 | 2.93 | 55000 | 1.2875 | 0.7143 |
4ff7a712e63c610d994ebb6800165b3b
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-qnli-target-glue-sst2 This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5008 - Accuracy: 0.8211
68dbab67afa5ea9b6940719ad38941f5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5757 | 0.24 | 500 | 0.4901 | 0.7775 | | 0.4436 | 0.48 | 1000 | 0.4673 | 0.7833 | | 0.3947 | 0.71 | 1500 | 0.4434 | 0.7970 | | 0.3751 | 0.95 | 2000 | 0.4601 | 0.7970 | | 0.3326 | 1.19 | 2500 | 0.4463 | 0.8005 | | 0.316 | 1.43 | 3000 | 0.4510 | 0.8005 | | 0.2981 | 1.66 | 3500 | 0.4367 | 0.8142 | | 0.2929 | 1.9 | 4000 | 0.4383 | 0.8108 | | 0.2746 | 2.14 | 4500 | 0.4873 | 0.8016 | | 0.256 | 2.38 | 5000 | 0.4395 | 0.8165 | | 0.246 | 2.61 | 5500 | 0.4444 | 0.8280 | | 0.2522 | 2.85 | 6000 | 0.4478 | 0.8245 | | 0.2371 | 3.09 | 6500 | 0.4556 | 0.8291 | | 0.2299 | 3.33 | 7000 | 0.4655 | 0.8326 | | 0.2143 | 3.56 | 7500 | 0.4581 | 0.8314 | | 0.2153 | 3.8 | 8000 | 0.4869 | 0.8291 | | 0.2134 | 4.04 | 8500 | 0.5008 | 0.8211 |
02c1467b53b79e88956004d82cbab465
cc-by-sa-4.0
[]
false
LegalBERT Tokenizer The **LegalBERT** tokenizer is a word-level byte-pair-encoding tokenizer with a vocabulary of 52k tokens (containing the most common words in legal documents), based on the [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) tokenizer. The tokenizer was trained on data provided by the **BRAZILIAN SUPREME FEDERAL TRIBUNAL**, under the terms of use: [LREC 2020](https://ailab.unb.br/victor/lrec2020). The tokenizer uses the `BertTokenizer` implementation from [transformers](https://github.com/huggingface/transformers). **NOTE**: The results of this project do not imply in any way the position of the BRAZILIAN SUPREME FEDERAL TRIBUNAL; all of them are the sole and exclusive responsibility of the author.
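As a rough illustration of how a byte-pair-encoding vocabulary is grown by repeatedly merging the most frequent adjacent symbol pair, here is a toy merge loop over a tiny corpus. This is a pedagogical sketch, not the actual 52k-token training run (which used the tribunal's corpus and the Hugging Face `tokenizers` library):

```python
from collections import Counter

def most_frequent_pair(corpus):
    """Count adjacent symbol pairs across all words (words are tuples of symbols)."""
    pairs = Counter()
    for word, freq in corpus.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(corpus, pair):
    """Replace every occurrence of `pair` with the concatenated merged symbol."""
    merged = {}
    for word, freq in corpus.items():
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: word -> frequency, words pre-split into characters.
corpus = {tuple("tribunal"): 5, tuple("federal"): 3, tuple("processo"): 2}
for _ in range(3):  # three merge steps instead of tens of thousands
    corpus = merge_pair(corpus, most_frequent_pair(corpus))
print(list(corpus))
```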
cd0c8c3f4f83651106d1d9a90aacb61f
cc-by-sa-4.0
[]
false
Tokenizer usage ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dominguesm/legal-bert-tokenizer") example = "De ordem, a Secretaria Judiciária do Supremo Tribunal Federal INTIMA a parte abaixo identificada, ou quem as suas vezes fizer, do inteiro teor do(a) despacho/decisão presente nos autos (art. 270 do Código de Processo Cívil e art 5º da Lei 11.419/2006)." tokens = tokenizer.tokenize(example) ```
a130e410c8953927ffb91059c4562e48
cc-by-sa-4.0
[]
false
Comparison of results **Original Text**: ```De ordem, a Secretaria Judiciária do Supremo Tribunal Federal INTIMA a parte abaixo identificada, ou quem as suas vezes fizer, do inteiro teor do(a) despacho/decisão presente nos autos (art. 270 do Código de Processo Cívil e art 5º da Lei 11.419/2006).``` | Tokenizer | Tokens | Num. Tokens | | --------- | ------ | ----------- | | BERTimbau | ```['De', 'ordem', ',', 'a', 'Secretaria', 'Judic', '
6b1ae6fed00ac6719681666ce766549f
cc-by-sa-4.0
[]
false
9', '/', '2006', ')', '.']``` | 66 | | LegalBERT | ```['De', 'ordem', ',', 'a', 'Secretaria', 'Judiciária', 'do', 'Supremo', 'Tribunal', 'Federal', 'INTIMA', 'a', 'parte', 'abaixo', 'identificada', ',', 'ou', 'quem', 'as', 'suas', 'vezes', 'fizer', ',', 'do', 'inteiro', 'teor', 'do', '(', 'a', ')', 'despacho', '/', 'decisão', 'presente', 'nos', 'autos', '(', 'art', '.', '270', 'do', 'Código', 'de', 'Processo', 'Cív', '
60e282175d0d84aa9952ccadcffd30e3
cc-by-sa-4.0
[]
false
Citation If you use this tokenizer, please cite: ``` @misc {maicon_domingues_2022, author = { {Maicon Domingues} }, title = { legal-bert-tokenizer (Revision d8e9d4a) }, year = 2022, url = { https://huggingface.co/dominguesm/legal-bert-tokenizer }, doi = { 10.57967/hf/0110 }, publisher = { Hugging Face } } ```
2b4590f582cac8488d3825fcc7683251
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small GL - Santiago Paramés-Estévez This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3179 - Wer: 15.2334
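The Wer figure above is the word error rate in percent: the word-level edit distance between reference and hypothesis, divided by the number of reference words. A minimal sketch follows (evaluation scripts typically use the `jiwer` or `evaluate` libraries instead):

```python
def wer(reference, hypothesis):
    """Word error rate in percent, via dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("boa tarde a todos", "boa tarde todos"))  # one deletion -> 25.0
```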
6f81530707c79548d49ed265b961ad24
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP
fb92dcfe0dec92458a6bc54f90fbc577
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0707 | 2.69 | 1000 | 0.2596 | 16.4915 | | 0.0063 | 5.38 | 2000 | 0.2952 | 15.8583 | | 0.0014 | 8.06 | 3000 | 0.3105 | 15.2624 | | 0.0011 | 10.75 | 4000 | 0.3179 | 15.2334 |
c777721c290eafae5857b28c262e1aa5
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-hindi_commonvoice This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 3.5947 - Wer: 1.0
be732c9372c134ea5fbfd8e3e2e1148c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30 - mixed_precision_training: Native AMP
546deac694ef8c63832d7287d0a92fe5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 24.0069 | 4.0 | 20 | 40.3956 | 1.0 | | 18.1097 | 8.0 | 40 | 15.3603 | 1.0 | | 7.1344 | 12.0 | 60 | 5.2695 | 1.0 | | 4.0032 | 16.0 | 80 | 3.7403 | 1.0 | | 3.4894 | 20.0 | 100 | 3.5724 | 1.0 | | 3.458 | 24.0 | 120 | 3.6164 | 1.0 | | 3.4412 | 28.0 | 140 | 3.5947 | 1.0 |
3066bb000cd5b75b1c0b68d48549665c
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
MultiBERTs Seed 2 Checkpoint 80k (uncased) Seed 2 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
fe04412a17596d49a9686d3d88cc3d42
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-80k') model = BertModel.from_pretrained("multiberts-seed-2-80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
27de633082c526c00c91244fb5f3b194
mit
[]
false
model by MrHidden This is the Stable Diffusion model fine-tuned on the mexican_concha concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks Mexican Concha**. You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/6.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/0.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/7.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/2.jpeg) ![image 4](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/3.jpeg) ![image 5](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/5.jpeg) ![image 6](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/4.jpeg) ![image 7](https://huggingface.co/sd-dreambooth-library/mexican-concha/resolve/main/concept_images/1.jpeg)
44d56e750186b107c485e49147ddb738
apache-2.0
['generated_from_trainer']
false
84rry-xlsr-53-arabic This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.0025 - Wer: 0.4977
0e0299c3198ead166986a75fafe7b8bf
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP
46f7ba23b20e3f01f6144f7dfc182422
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.4906 | 2.25 | 500 | 1.3179 | 0.8390 | | 0.8851 | 4.5 | 1000 | 0.7385 | 0.6221 | | 0.6884 | 6.76 | 1500 | 0.7005 | 0.5765 | | 0.5525 | 9.01 | 2000 | 0.6931 | 0.5610 | | 0.474 | 11.26 | 2500 | 0.7977 | 0.5560 | | 0.3976 | 13.51 | 3000 | 0.7750 | 0.5375 | | 0.343 | 15.76 | 3500 | 0.7553 | 0.5206 | | 0.2838 | 18.02 | 4000 | 0.8162 | 0.5099 | | 0.2369 | 20.27 | 4500 | 0.8574 | 0.5124 | | 0.2298 | 22.52 | 5000 | 0.8848 | 0.5057 | | 0.1727 | 24.77 | 5500 | 0.9193 | 0.5070 | | 0.1675 | 27.03 | 6000 | 0.9959 | 0.4988 | | 0.1457 | 29.28 | 6500 | 1.0025 | 0.4977 |
2b542f58c0d1b56f376f61a1f3721ecf
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 150 - eval_batch_size: 40 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 1200 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6.0
ba922b0cf37625351a6aa2c822039c9a
mit
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. The model was trained with 1000 images using the [DDPM](https://arxiv.org/abs/2006.11239) architecture. Images generated are of 64x64 pixel size. The model was trained for 50 epochs with a batch size of 64, using around 10 GB of GPU memory.
7ad8cb8bdc10689e9bcb1700f55aa856
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-0505-2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4277 - Accuracy: 0.9206 - F1: 0.9205
803b60dd32f1297991093c892802f1b4