Dataset columns:
- license — string (2–30 chars)
- tags — string (2–513 chars)
- is_nc — bool (1 class)
- readme_section — string (201–597k chars)
- hash — string (32 chars)
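Each row below is one record in that schema. As a minimal sketch of iterating such a dataset with the `datasets` library — the repo id `user/readme-sections` is a placeholder, not the real dataset name:

```python
from datasets import load_dataset

# "user/readme-sections" is a hypothetical dataset id; substitute the real one.
ds = load_dataset("user/readme-sections", split="train")

for row in ds.select(range(3)):
    # Each record carries a license, tag list, NC flag, README text, and hash.
    print(row["license"], row["tags"], row["is_nc"], row["hash"])
    print(row["readme_section"][:200])
```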
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2986 | 1.0 | 835 | 0.1939 | 0.8077 |
| 0.1547 | 2.0 | 1670 | 0.1813 | 0.8351 |
| 0.1003 | 3.0 | 2505 | 0.1757 | 0.8513 |
4ed9ad06d6bd0059fd5cbfbef4da8767
apache-2.0
['t5-small', 'text2text-generation', 'natural language generation', 'conversational system', 'task-oriented dialog']
false
t5-small-nlg-multiwoz21

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21). Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
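The card defers usage to ConvLab-3, but since the checkpoint is a plain T5 text2text model it can also be driven directly through `transformers`. A rough sketch — the repo id and the dialogue-act input format shown here are assumptions (ConvLab-3 defines the actual serialization):

```python
from transformers import pipeline

# "ConvLab/t5-small-nlg-multiwoz21" is an assumed repo id; the input string
# is illustrative — ConvLab-3 defines the real dialogue-act serialization.
nlg = pipeline("text2text-generation", model="ConvLab/t5-small-nlg-multiwoz21")
print(nlg("[inform][restaurant]([name][Pizza Hut City Centre])")[0]["generated_text"])
```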
07e4a46d12aea81d85b9c925fe3bcc6f
apache-2.0
['t5-small', 'text2text-generation', 'natural language generation', 'conversational system', 'task-oriented dialog']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 10.0
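Note that the total batch size above is derived, not set independently: total_train_batch_size = train_batch_size × gradient_accumulation_steps = 128 × 4 = 512.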
541df7bd03d2c2779c77a5119de7c4eb
apache-2.0
['translation']
false
opus-mt-en-bcl

* source languages: en
* target languages: bcl
* OPUS readme: [en-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bcl/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.zip)
* test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.test.txt)
* test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.eval.txt)
cd583e34e51139eea6ac082df95f1241
apache-2.0
['generated_from_trainer']
false
fnet-large-finetuned-rte

This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE RTE dataset. It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.6426
07b55c41c534262c86a98033e9074e20
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
a1fd78e73e9cf2e9906183b56b265afb
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7105 | 1.0 | 623 | 0.6887 | 0.5740 |
| 0.6714 | 2.0 | 1246 | 0.6742 | 0.6209 |
| 0.509 | 3.0 | 1869 | 0.7528 | 0.6426 |
bc2c6df6f24109051ba97b273d01134d
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
false
LoRA DreamBooth - a-photo-of-simbatheog

These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "simbatheog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

Test prompt: A photo of simbatheog in a bucket

![image_0](test_images/image_0.png)
![image_1](test_images/image_1.png)
![image_2](test_images/image_2.png)
![image_3](test_images/image_3.png)
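A minimal loading sketch with `diffusers`, assuming the LoRA weights live in a Hub repo — `user/a-photo-of-simbatheog` below is a hypothetical repo id, since the card does not state the owning namespace:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA attention weights on top of the base UNet.
# "user/a-photo-of-simbatheog" is a hypothetical repo id.
pipe.unet.load_attn_procs("user/a-photo-of-simbatheog")

image = pipe("A photo of simbatheog in a bucket", num_inference_steps=25).images[0]
image.save("simbatheog.png")
```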
330c543846bc24e602fa4d0a85a89f56
afl-3.0
[]
false
This model is used to detect **abusive speech** in **Code-Mixed Kannada**. It is fine-tuned from the MuRIL model on a Code-Mixed Kannada abusive speech dataset. The model was trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)

LABEL_0 :-> Normal

LABEL_1 :-> Abusive
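A quick inference sketch with the `transformers` pipeline — the repo id below is a placeholder, since the card does not name the checkpoint:

```python
from transformers import pipeline

# "user/kannada-abusive-muril" is a hypothetical repo id; substitute the real one.
clf = pipeline("text-classification", model="user/kannada-abusive-muril")

# LABEL_0 -> Normal, LABEL_1 -> Abusive
print(clf("idu ondu example sentence"))
```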
86a151d3a4fd11c812710604018a6f06
cc0-1.0
[]
false
![VntgCrm_example_grid.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670833927638-6334a32686c3fdcdc7adf4c0.jpeg) [![Become A Patreon](https://badgen.net/badge/become/a%20patron/F96854)](https://www.patreon.com/sebastiankamph)
bf2cdccc3c95eac26889959938852e88
cc0-1.0
[]
false
Vintage cream photo film

Based on SD 2.1 768x768

**Token word: vntgcrm style**

**Example prompt to start out with**

RAW candid cinema, woman portrait, vntgcrm style, 16mm, ((remarkable color)), (ultra realistic)

Negative: ugly, disfigured, deformed, too many hands, makeup, cartoon, render

**Support my work on Patreon for early access to model releases** https://www.patreon.com/sebastiankamph

**AI art, Stable Diffusion guides and tutorials on YouTube** https://www.youtube.com/@sebastiankamph

**Chat in our community Discord** https://discord.com/invite/dFB7zuXyFY

**Installation**

Download the .ckpt and the .yaml file and put them inside \stable-diffusion-webui\Models\Stable-diffusion\

https://huggingface.co/SebastianKamphYT/VintageCream/blob/main/VintageCream.ckpt
https://huggingface.co/SebastianKamphYT/VintageCream/blob/main/VintageCream.yaml
1778d3efdcb763e49d7b0e286ba793c2
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
7405d8cf1316f709a695c7bbc90ec8c5
creativeml-openrail-m
[]
false
model by no3

This is the waifu-diffusion v1.4 model fine-tuned on the kat concept, taught to waifu-diffusion v1.4 with DreamBooth. It can be used by modifying the `instance_prompt`: **sks_kaatt**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts).
31cdecbe4799e3fc71bb610a62a830f9
creativeml-openrail-m
[]
false
note

If you want to use it in a UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui), or any UI that uses .ckpt files, just download one or more of the files below for your convenience.

[katFl-wd-1.4-beta1.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/katFl-wd-1.4-beta1.ckpt) 5.16 GB

[katFl-wd-1.4-beta1-pruned.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/katFl-wd-1.4-beta1-pruned.ckpt) 2.58 GB — uses less storage space, but is untested as of yet

If you have issues or questions, feel free to visit the Community Tab and start a discussion about it.

Here are the images used for training this concept:

![image 1](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/1.png)
![image 2](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/2.png)
![image 3](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/3.png)
![image 4](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/1%20c.png)
![image 5](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/2%20c.png)
5fc97281c7e151dce8b2b1b88f6cc78b
apache-2.0
['generated_from_trainer']
false
finetuned_token_2e-05_16_02_2022-14_37_42

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
7d31e9850e81ec6e5274a5396b16e74c
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_data_aug_rte_256

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set:
- Loss: 3.0847
- Accuracy: 0.4874
9b0879343b1aadd71f2520e5c0a6100c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2703 | 1.0 | 1136 | 3.2768 | 0.4657 |
| 0.0555 | 2.0 | 2272 | 3.0847 | 0.4874 |
| 0.0253 | 3.0 | 3408 | 5.4968 | 0.5018 |
| 0.0149 | 4.0 | 4544 | 5.6020 | 0.4982 |
| 0.0104 | 5.0 | 5680 | 6.6683 | 0.5090 |
| 0.0082 | 6.0 | 6816 | 8.2220 | 0.5090 |
| 0.0062 | 7.0 | 7952 | 8.2179 | 0.5054 |
40353f9360f009a82458488c1c1c85a7
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-large-squadshifts-vanilla-amazon-qg`

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: amazon) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
467e9427bc302ed81d3c161eefc2a434
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (amazon)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
1a1a47fb3ccead1c96804dcda1903fef
cc-by-4.0
['question generation']
false
- With [`lmqg`](https://github.com/asahi417/lm-question-generation)

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="research-backup/bart-large-squadshifts-vanilla-amazon-qg")

# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/bart-large-squadshifts-vanilla-amazon-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
39a7de82eb8fb359c94e917520db21d1
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-amazon-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json)

| | Score | Type | Dataset |
|:-----------|--------:|:-------|:---------------------------------------------------------------------------|
| BERTScore | 92.3 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1 | 28.19 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2 | 18.89 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3 | 12.92 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4 | 9.1 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR | 23.04 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 62.81 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L | 27.85 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
0235c24aa2202f89b73ec833162e8c25
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: amazon
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-amazon-qg/raw/main/trainer_config.json).
a9a6a520477753decc731f545bdceaab
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 2.0
- training precision: Mixed Precision
5c33196b83d3427c843d59f34131176c
apache-2.0
['generated_from_keras_callback']
false
hsohn3/mayo-bert-uncased-wordlevel-block512-ep10

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.3171
- Epoch: 9
13fd01f42551b6bea197a58d9043142d
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
- mlm_probability: 0.15
- batch_size: 8
- epochs: 10
0cc737e18339582b3f772bfc5894b1d2
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 3.0885 | 0 |
| 2.8340 | 1 |
| 2.7975 | 2 |
| 2.6720 | 3 |
| 2.4868 | 4 |
| 2.1750 | 5 |
| 1.8143 | 6 |
| 1.0948 | 7 |
| 0.4915 | 8 |
| 0.3171 | 9 |
f9bd4cad995f6b6dd4294dbdb86005b6
mit
['ja', 'japanese', 'gpt', 'text-generation', 'lm', 'nlp']
false
How to use the model

*NOTE:* Use `T5Tokenizer` to initiate the tokenizer.

~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b")
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b")

if torch.cuda.is_available():
    model = model.to("cuda")

text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_length=100,
        min_length=100,
        do_sample=True,
        top_k=500,
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        bad_words_ids=[[tokenizer.unk_token_id]]
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
4d2e091900107060cc98f7a0e0651cb6
mit
['ja', 'japanese', 'gpt', 'text-generation', 'lm', 'nlp']
false
sample output:

~~~~
西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの
~~~~
aa35cb3813de768c3fbc64b5f491b501
mit
['ja', 'japanese', 'gpt', 'text-generation', 'lm', 'nlp']
false
Training

The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data.
6c2db2e55686e2a8ec34354d7f3b03ce
mit
['ja', 'japanese', 'gpt', 'text-generation', 'lm', 'nlp']
false
Tokenization

The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset of the training data using the official sentencepiece training script, and then augmented with emojis and symbols.
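A rough sketch of that first step using the official Python bindings — the corpus file, vocabulary size, and model type below are illustrative placeholders, not the settings rinna actually used:

```python
import sentencepiece as spm

# Train a sentencepiece model on a text corpus.
# "corpus.txt", vocab_size, and model_type are placeholder assumptions.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="japanese_gpt",
    vocab_size=32000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="japanese_gpt.model")
print(sp.encode("西田幾多郎は、", out_type=str))
```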
abaefdb35f76504a7ae8ff31bfa84b82
apache-2.0
['generated_from_trainer']
false
results

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.4578
- Precision: 0.0060
- Recall: 0.0286
- F1: 0.0099
- Accuracy: 0.4288
1570b1ba7de0efe29448b1bcfe388984
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 8 | 1.6449 | 0.0 | 0.0 | 0.0 | 0.3860 |
| No log | 2.0 | 16 | 1.5439 | 0.0014 | 0.0071 | 0.0023 | 0.4025 |
| No log | 3.0 | 24 | 1.4986 | 0.0068 | 0.0286 | 0.0110 | 0.4176 |
| No log | 4.0 | 32 | 1.4603 | 0.0033 | 0.0143 | 0.0054 | 0.4285 |
| No log | 5.0 | 40 | 1.4578 | 0.0060 | 0.0286 | 0.0099 | 0.4288 |
afe772310ee9315d9c6e61bac8259fca
apache-2.0
['deep-narrow']
false
T5-Efficient-TINY-NL8 (Deep-Narrow version)

T5-Efficient-TINY-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
cd4b7200656779ed825eea22e7d835ff
apache-2.0
['deep-narrow']
false
Details model architecture

This model checkpoint - **t5-efficient-tiny-nl8** - is of model type **Tiny** with the following variations:

- **nl** is **8**

It has **22.93** million parameters and thus requires *ca.* **91.74 MB** of memory in full precision (*fp32*) or **45.87 MB** of memory in half precision (*fp16* or *bf16*).

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
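As a quick sanity check on the memory figures above — each fp32 parameter takes 4 bytes and each fp16/bf16 parameter 2 bytes, so the totals follow directly from the parameter count (the small gap to the card's numbers comes from rounding 22.93M):

```python
params = 22.93e6  # parameter count (rounded)

print(f"fp32: {params * 4 / 1e6:.2f} MB")  # 4 bytes/param -> 91.72 MB
print(f"fp16: {params * 2 / 1e6:.2f} MB")  # 2 bytes/param -> 45.86 MB
```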
b0a3735b1e68812fa75434389545ca3b
cc-by-4.0
['automatic-speech-recognition', 'speech', 'Kinyarwanda', 'audio', 'CTC', 'Conformer', 'Transformer', 'NeMo', 'pytorch']
false
Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="PaulChimzy/stt_rw_conformer_transducer_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
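For a single file, a minimal Python sketch using NeMo's pretrained-model API — `sample.wav` is a placeholder for a local 16 kHz mono recording:

```python
import nemo.collections.asr as nemo_asr

# Load the checkpoint from the Hub; ASRModel resolves the concrete model class.
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="PaulChimzy/stt_rw_conformer_transducer_large"
)

# Transcribe a local audio file ("sample.wav" is a placeholder path).
print(asr_model.transcribe(["sample.wav"]))
```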
f06fa9b9412bbd1df37627417d31b633
cc-by-4.0
['automatic-speech-recognition', 'speech', 'Kinyarwanda', 'audio', 'CTC', 'Conformer', 'Transformer', 'NeMo', 'pytorch']
false
Limitations

<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>

Eg: Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
79fa14827ae5d7470ad70be27e7ab8a3
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-breton

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Breton using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
ffb5ddd2e9f2a7e9bbb043ee9bfd328f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "br", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
a33d85cb39cf5a4551ad426972a545d2
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
143306aa05ec3f0492d319ec2e51d580
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Breton test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "br", split="test")

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
d1a35ba4466093d83dcf08a822ea5ec3
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
b3cd3421d034f6616dcc6f1c0b87e6f7
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 46.49 %
25b089e7f3e1941b2811265e4f7de116
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
**Object-Taped-To-Wall-Diffusion**

This fine-tuned Stable Diffusion v1.5 model was trained for 2000 iterations with a batch size of 4, on a selection of photos of things taped to a wall. Training was performed using [ShivamShrirao/diffusers](https://github.com/ShivamShrirao/diffusers) with full precision, prior-preservation loss, the train-text-encoder feature, and the new [1.5 MSE VAE from Stability AI](https://huggingface.co/stabilityai/sd-vae-ft-mse).

A total of 2100 regularization / class images were used from [here](https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images). Regularization images were generated using the prompt "artwork style" with 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.

Use the tokens **ttw style** in your prompts for the effect. Note that the effect also appears to occur at a much weaker strength on prompts that steer the output towards specific artistic styles. This model will likely not perform well on taping objects that are not traditionally able to be taped to walls.

<div align="center">
<img src="https://huggingface.co/ProGamerGov/Object-Taped-To-Wall-Diffusion-V1/resolve/main/v1_size_512x512_t4x8.png">
</div>

* [Full Image](https://huggingface.co/ProGamerGov/Object-Taped-To-Wall-Diffusion-V1/resolve/main/v1_size_512x512_t4x8.png)

Example images were generated with the v1 2000 iteration model using DPM++ 2S a Karras:

```
ttw style, <object> taped to wall
```

This model was inspired by the 2019 art piece [*Comedian* by Italian artist Maurizio Cattelan](https://en.wikipedia.org/wiki/Comedian_(artwork\)), where a banana was duct taped to a wall.
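A minimal generation sketch with `diffusers`, assuming the repo id `ProGamerGov/Object-Taped-To-Wall-Diffusion-V1` (inferred from the image links above) and a banana as the illustrative `<object>`:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ProGamerGov/Object-Taped-To-Wall-Diffusion-V1", torch_dtype=torch.float16
).to("cuda")

# "ttw style" is the trained token; "banana" stands in for <object>.
image = pipe("ttw style, banana taped to wall", negative_prompt="text").images[0]
image.save("ttw_banana.png")
```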
51e7792fd538476766a925d77c8c3aba
apache-2.0
['generated_from_trainer']
false
whisper-small-ar

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.8342
- Wer: 82.3706
151939c55501c59223400a60bcf20e9c
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
ff5b107c5d8251303ad8771ae9eb9111
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6454 | 5.0 | 1000 | 1.8790 | 86.8695 |
| 0.0408 | 10.0 | 2000 | 2.4389 | 80.5579 |
| 0.0043 | 15.0 | 3000 | 2.7456 | 82.2767 |
| 0.002 | 20.0 | 4000 | 2.8342 | 82.3706 |
609344e9d2b0869b8f8ec21d7b98f1ad
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/gtr-t5-base

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space. The model was specifically trained for the task of semantic search.

This model was converted from the TensorFlow model [gtr-base-1](https://tfhub.dev/google/gtr/gtr-base/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.

The model uses only the encoder from a T5-base model. The weights are stored in FP16.
1ef987f8dbd9ce2950f02e887f0d56b8
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/gtr-t5-base')
embeddings = model.encode(sentences)
print(embeddings)
```

The model requires sentence-transformers version 2.2.0 or newer.
ca6618633304c4d0f49015f4028f1085
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-base)
0baa2dd50e9c653ffa32ea1871da77c2
apache-2.0
['generated_from_trainer']
false
bert-small-finer-longer

This model is a fine-tuned version of [muhtasham/bert-small-finer](https://huggingface.co/muhtasham/bert-small-finer) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.4264
3b4a300506ade3f8e549bd89cea061ef
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
d8130228621c2c2d94b2161a47294a7d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.49 | 500 | 1.6683 |
| 1.5941 | 0.97 | 1000 | 1.6569 |
| 1.5941 | 1.46 | 1500 | 1.6436 |
| 1.5605 | 1.94 | 2000 | 1.6173 |
| 1.5605 | 2.43 | 2500 | 1.6073 |
| 1.5297 | 2.91 | 3000 | 1.6001 |
| 1.5297 | 3.4 | 3500 | 1.5815 |
| 1.5022 | 3.89 | 4000 | 1.5756 |
| 1.5022 | 4.37 | 4500 | 1.5568 |
| 1.4753 | 4.86 | 5000 | 1.5458 |
| 1.4753 | 5.34 | 5500 | 1.5399 |
| 1.4537 | 5.83 | 6000 | 1.5273 |
| 1.4537 | 6.32 | 6500 | 1.5192 |
| 1.433 | 6.8 | 7000 | 1.5099 |
| 1.433 | 7.29 | 7500 | 1.5083 |
| 1.4169 | 7.77 | 8000 | 1.4957 |
| 1.4169 | 8.26 | 8500 | 1.4914 |
| 1.3982 | 8.75 | 9000 | 1.4859 |
| 1.3982 | 9.23 | 9500 | 1.4697 |
| 1.3877 | 9.72 | 10000 | 1.4711 |
| 1.3877 | 10.2 | 10500 | 1.4608 |
| 1.3729 | 10.69 | 11000 | 1.4583 |
| 1.3729 | 11.18 | 11500 | 1.4513 |
| 1.3627 | 11.66 | 12000 | 1.4498 |
| 1.3627 | 12.15 | 12500 | 1.4396 |
| 1.357 | 12.63 | 13000 | 1.4415 |
| 1.357 | 13.12 | 13500 | 1.4347 |
| 1.3484 | 13.61 | 14000 | 1.4316 |
| 1.3484 | 14.09 | 14500 | 1.4319 |
| 1.3442 | 14.58 | 15000 | 1.4268 |
| 1.3442 | 15.06 | 15500 | 1.4293 |
| 1.3387 | 15.55 | 16000 | 1.4217 |
| 1.3387 | 16.03 | 16500 | 1.4241 |
| 1.3358 | 16.52 | 17000 | 1.4250 |
| 1.3358 | 17.01 | 17500 | 1.4196 |
| 1.3344 | 17.49 | 18000 | 1.4193 |
| 1.3344 | 17.98 | 18500 | 1.4200 |
| 1.3274 | 18.46 | 19000 | 1.4250 |
| 1.3274 | 18.95 | 19500 | 1.4168 |
| 1.3348 | 19.44 | 20000 | 1.4164 |
| 1.3348 | 19.92 | 20500 | 1.4264 |
f2d56c4b78e8e67059c228476984f91b
apache-2.0
['image-classification', 'timm']
false
Model card for maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k

A timm-specific MaxxViT-V2 (w/ MLP Log-CPB, i.e. continuous log-coordinate relative position bias, motivated by Swin-V2) image classification model. Pretrained in `timm` on ImageNet-12k (an 11821-class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman. ImageNet-12k pretraining and ImageNet-1k fine-tuning were performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
16e07ef387d050b086aed1d33cfc899f
apache-2.0
['image-classification', 'timm']
false
Model Variants in [maxxvit.py](https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/maxxvit.py)

MaxxViT covers a number of related model architectures that share a common structure including:

- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm-specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm-specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention, leaving only ConvNeXt blocks and grid attention, with more width to compensate.

Aside from the major variants listed above, there are more subtle changes from model to model. Any model with the string `rw` in its name is a `timm`-specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations. All models with the string `tf` in their names exactly match Tensorflow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
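A small sketch for browsing these families from Python — `timm.list_models` accepts wildcard patterns, so the naming conventions above can be explored directly:

```python
import timm

# Enumerate MaxxViT-family checkpoints by name pattern.
print(timm.list_models("maxxvit*", pretrained=True))

# timm-specific ('rw') vs. TF-ported ('tf') MaxViT configs.
print(timm.list_models("maxvit*rw*"))
print(timm.list_models("maxvit*tf*"))
```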
4ec58fa1e25b841f7be3679158af66d2
apache-2.0
['image-classification', 'timm']
false
Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 116.1
  - GMACs: 24.2
  - Activations (M): 62.8
  - Image size: 224 x 224
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
  - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
82d40b2ca0a060a0c7f71c07d5a8e703
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = output.softmax(dim=1).topk(5, dim=1)
```
b0d1b13f2ab5a6453f18d30083a09514
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in the output
    print(o.shape)
```
6dcd6cf7d8c203d70a9bfbe1e6b71db1
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
bbcab2004fabc13fb5f0057020be0d50
apache-2.0
['image-classification', 'timm']
false
By Top-1

|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
24b26c10c551a267a7bbc60f4943ae18
apache-2.0
['image-classification', 'timm']
false
By Throughput (samples / sec)

|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
f7350c185e37a9384f05d4ffd11f3d29
apache-2.0
['image-classification', 'timm']
false
Citation

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```

```bibtex
@article{tu2022maxvit,
  title={MaxViT: Multi-Axis Vision Transformer},
  author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
  journal={ECCV},
  year={2022},
}
```

```bibtex
@article{dai2021coatnet,
  title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
  author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
  journal={arXiv preprint arXiv:2106.04803},
  year={2021}
}
```
1bae4a6e7491becb24b6fe58072355bd
apache-2.0
['pythae', 'reproducibility']
false
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`

```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_wrapped_poincare_vae")
```
3def247a415f137eb607eb5cbc3baa4f
apache-2.0
['pythae', 'reproducibility']
false
Reproducibility

This trained model reproduces the results of the official implementation of [1].

| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| PoincareVAE | MNIST | NLL (500 IS) | 101.66 (0.00) | 101.47 (0.01) |

[1] Mathieu, E., Le Lan, C., Maddison, C. J., Tomioka, R., & Teh, Y. W. (2019). Continuous hierarchical representations with Poincaré variational auto-encoders. Advances in Neural Information Processing Systems, 32.
dbc2433a1c7bf33739074b7ff3e31a94
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
DreamBooth model for the norweigen-fjords concept trained by StatsGary on the StatsGary/dreambooth-hackathon-images dataset

This is a Stable Diffusion model fine-tuned on the norweigen-fjords concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a viking on the fjords**

This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
73bcdc3913d1d9d3ac456a6b4d94d56d
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Lobster swimming in a Fjord

The below example uses a prompt similar to *lobster swimming in a fjord* to generate the output:

![lobster.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673018851087-63b83d10e60862785afef49f.jpeg)
506a2b94644318402faab3dce5216cf8
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Viking warrior in a Fjord

This represents a generated Viking warrior on or near a Fjord. The prompt used to generate is **prompt**=*a viking warrior on a fjord*:

![viking_on_fjord.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673019199954-627cebc6cecd686d4cd7411c.jpeg)
da8b581a7ecce700774b65cf04baba33
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
A yellow submarine (inspired by The Beatles)

Here, we see a yellow submarine inspired by the popular Beatles album. The prompt used to generate is **prompt**=*a beetles like yellow submarines on a fjord*:

![Beetles_submarine.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673019576047-627cebc6cecd686d4cd7411c.jpeg)
80f69ec60764403b5965449f9d18b24c
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
A cruise ship on a fjord

This is based on the **prompt**=*a cruise ship on a fjord*:

![6bd7a6b7-9716-478e-81ea-7f58b59707e8.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673271806453-627cebc6cecd686d4cd7411c.jpeg)
98add435e7dde382ddc1bd85fac7c390
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Taj Mahal on a Fjord

This generates landmarks near or on the fjord:

![68dd6b17-bb8c-45e7-bfe6-79442f633121.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674057958178-627cebc6cecd686d4cd7411c.jpeg)
8b0113ff090a95d23b880e77c6d9398a
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Watersports on a Fjord

This is an example of a kayaker on a fjord - generated using *prompt*="a kayaker on a fjord":

![1e730131-63c4-4095-9f36-61e8659c946a.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058117373-627cebc6cecd686d4cd7411c.jpeg)

What about a surfer on a fjord:

![surfer.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058620579-627cebc6cecd686d4cd7411c.jpeg)
9a37a540bd09363a1727734612740279
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Godzilla wading through a Fjord

This one is a generated image of Godzilla wading through a Fjord:

![45618490-f4d3-44e4-ac8b-a0375b983576.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058731220-627cebc6cecd686d4cd7411c.jpeg)
54a4947d7473fd2f91ef93b718681987
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
How about T-Rex

On the theme of Godzilla, what about T-Rex:

![eef051e5-267b-426e-97a1-fbd947185dba.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058942184-627cebc6cecd686d4cd7411c.jpeg)
33f48d2b1b37ca0d211bbac8136e1494
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Paintings on a Fjord

We could explore what a **Da Vinci** type painting would look like on a Fjord:

![davinci.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674664480840-627cebc6cecd686d4cd7411c.jpeg)
1b2f35907457ae9fe1e1f7404e544840
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Generating your own predictions

The following Python code will allow you to get up and running quickly; just replace the *prompt* field with your own text, wait for Hugging Face to compute, and you should have your own Stable Diffusion object generated against a backdrop of the fjords. Idyllic!

```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('StatsGary/norweigen-fjords-fjords')
image = pipeline(prompt='a viking on a fjord').images[0]
image
```
f714aa7f902cf761509d25419a5324c3
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape']
false
Supporting article(s)

I have written blog posts to explain this:
- Fjord stable diffusion model: https://hutsons-hacks.info/stable-diffusion-model-for-generating-images-of-fjords
- Stable diffusion application with Streamlit: https://hutsons-hacks.info/stable-diffusion-application-with-streamlit
422c27650e8fa3539e37328257ab4b4f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.9265
- F1: 0.9265
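A quick inference sketch with the `transformers` pipeline — the repo id below is a placeholder, since the card does not state the owning namespace:

```python
from transformers import pipeline

# "user/distilbert-base-uncased-finetuned-emotion" is a hypothetical repo id.
clf = pipeline("text-classification", model="user/distilbert-base-uncased-finetuned-emotion")
print(clf("I love this so much!"))
```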
0bf12517150c5d1f0dba9117933f1e5e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8401 | 1.0 | 250 | 0.3144 | 0.9085 | 0.9058 |
| 0.2524 | 2.0 | 500 | 0.2133 | 0.9265 | 0.9265 |
e4dd44016b3cf3fbda081d4ac96f00ff
mit
['generated_from_trainer']
false
bertdbmdzIhate

This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6880
- Accuracy: 0.726
- F1: 0.4170
a65aff55e04485c8627f185645d798b7
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab_2

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3801
- Wer: 0.3035
e1e2793af861e000cf5a623d5e3447b0
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7227 | 3.52 | 500 | 2.6961 | 1.0 |
| 1.1237 | 7.04 | 1000 | 0.6088 | 0.5315 |
| 0.4886 | 10.56 | 1500 | 0.4709 | 0.4353 |
| 0.3148 | 14.08 | 2000 | 0.4341 | 0.3942 |
| 0.2229 | 17.61 | 2500 | 0.4035 | 0.3616 |
| 0.1693 | 21.13 | 3000 | 0.3868 | 0.3289 |
| 0.1393 | 24.65 | 3500 | 0.3993 | 0.3135 |
| 0.118 | 28.17 | 4000 | 0.3801 | 0.3035 |
800fb9da01d570e2f06db69c60bd67ad
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-scratch This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 6.6235
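Because the evaluation loss of a masked language model is a mean cross-entropy in nats, it can be read directly as a perplexity; a quick sketch:

```python
import math

eval_loss = 6.6235                      # evaluation loss reported above
perplexity = math.exp(eval_loss)
print(f'Perplexity: {perplexity:.1f}')  # roughly 753
```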
6ffb26161b2b2ce186150249b4efd074
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.388 | 1.0 | 157 | 7.3651 |
| 6.9902 | 2.0 | 314 | 6.7300 |
| 6.659 | 3.0 | 471 | 6.6304 |
c309b1e8853ed40d3b4f4dd4117ea531
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.3623
- Accuracy: 0.903
- F1: 0.9003
3e15d5160a9436a5df32ed73c8de76d7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
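A sketch of how these settings might be expressed as `transformers.TrainingArguments`; the `output_dir` is illustrative, and the per-device batch sizes match only if training ran on a single device:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='distilbert-base-uncased-finetuned-emotion2',
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type='linear',
    num_train_epochs=2,
)
```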
a6579000e307eef4b701503c8b8707d7
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5960 | 0.8025 | 0.7750 |
| 0.7853 | 2.0 | 250 | 0.3623 | 0.903 | 0.9003 |
63c520407a1cef1948ab55907dccfc84
apache-2.0
['generated_from_trainer']
false
convnext-base-224_finetuned_on_ImageIn_annotations This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0749
- Precision: 0.9722
- Recall: 0.9811
- F1: 0.9765
- Accuracy: 0.9824
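A minimal inference sketch, assuming this checkpoint exposes an image-classification head; the Hub path and image file are placeholders:

```python
from PIL import Image
from transformers import pipeline

# Hypothetical Hub path -- replace with the real repository ID
classifier = pipeline(
    'image-classification',
    model='your-username/convnext-base-224_finetuned_on_ImageIn_annotations',
)

image = Image.open('example.png')
print(classifier(image))  # list of {'label': ..., 'score': ...} dicts
```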
97e11c7d07739b01290608e700f76cfb
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
7ef5ce248809cf8ad3b35a3d3adca18a
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 83 | 0.1368 | 0.9748 | 0.9632 | 0.9688 | 0.9772 |
| No log | 2.0 | 166 | 0.0734 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 3.0 | 249 | 0.0693 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 4.0 | 332 | 0.0698 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 5.0 | 415 | 0.0688 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 6.0 | 498 | 0.0690 | 0.9729 | 0.9751 | 0.9740 | 0.9807 |
| 0.0947 | 7.0 | 581 | 0.0666 | 0.9689 | 0.9800 | 0.9743 | 0.9807 |
| 0.0947 | 8.0 | 664 | 0.0642 | 0.9689 | 0.9800 | 0.9743 | 0.9807 |
| 0.0947 | 9.0 | 747 | 0.0790 | 0.9763 | 0.9763 | 0.9763 | 0.9824 |
| 0.0947 | 10.0 | 830 | 0.0813 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0947 | 11.0 | 913 | 0.0797 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0947 | 12.0 | 996 | 0.0791 | 0.9763 | 0.9763 | 0.9763 | 0.9824 |
| 0.0205 | 13.0 | 1079 | 0.0871 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0205 | 14.0 | 1162 | 0.0716 | 0.9722 | 0.9811 | 0.9765 | 0.9824 |
| 0.0205 | 15.0 | 1245 | 0.0746 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0205 | 16.0 | 1328 | 0.0917 | 0.9738 | 0.9692 | 0.9714 | 0.9789 |
| 0.0205 | 17.0 | 1411 | 0.0694 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0205 | 18.0 | 1494 | 0.0697 | 0.9768 | 0.9859 | 0.9812 | 0.9859 |
| 0.0166 | 19.0 | 1577 | 0.0689 | 0.9702 | 0.9835 | 0.9766 | 0.9824 |
| 0.0166 | 20.0 | 1660 | 0.0995 | 0.9738 | 0.9692 | 0.9714 | 0.9789 |
| 0.0166 | 21.0 | 1743 | 0.0847 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0166 | 22.0 | 1826 | 0.0843 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0166 | 23.0 | 1909 | 0.0869 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0166 | 24.0 | 1992 | 0.0762 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0125 | 25.0 | 2075 | 0.0778 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0125 | 26.0 | 2158 | 0.0834 | 0.9763 | 0.9763 | 0.9763 | 0.9824 |
| 0.0125 | 27.0 | 2241 | 0.0818 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0125 | 28.0 | 2324 | 0.0756 | 0.9684 | 0.9859 | 0.9768 | 0.9824 |
| 0.0125 | 29.0 | 2407 | 0.1150 | 0.9591 | 0.9824 | 0.9700 | 0.9772 |
| 0.0125 | 30.0 | 2490 | 0.0781 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0111 | 31.0 | 2573 | 0.0793 | 0.9716 | 0.9871 | 0.9790 | 0.9842 |
| 0.0111 | 32.0 | 2656 | 0.0713 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0111 | 33.0 | 2739 | 0.0802 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0111 | 34.0 | 2822 | 0.0636 | 0.9802 | 0.9870 | 0.9835 | 0.9877 |
| 0.0111 | 35.0 | 2905 | 0.0702 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0111 | 36.0 | 2988 | 0.0773 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0145 | 37.0 | 3071 | 0.0663 | 0.9781 | 0.9894 | 0.9836 | 0.9877 |
| 0.0145 | 38.0 | 3154 | 0.0721 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0145 | 39.0 | 3237 | 0.0708 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0145 | 40.0 | 3320 | 0.0729 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0145 | 41.0 | 3403 | 0.0760 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0145 | 42.0 | 3486 | 0.0771 | 0.9716 | 0.9871 | 0.9790 | 0.9842 |
| 0.0106 | 43.0 | 3569 | 0.0713 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0106 | 44.0 | 3652 | 0.0721 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0106 | 45.0 | 3735 | 0.0732 | 0.9768 | 0.9859 | 0.9812 | 0.9859 |
| 0.0106 | 46.0 | 3818 | 0.0783 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0106 | 47.0 | 3901 | 0.0770 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0106 | 48.0 | 3984 | 0.0744 | 0.9735 | 0.9847 | 0.9789 | 0.9842 |
| 0.0082 | 49.0 | 4067 | 0.0752 | 0.9722 | 0.9811 | 0.9765 | 0.9824 |
| 0.0082 | 50.0 | 4150 | 0.0749 | 0.9722 | 0.9811 | 0.9765 | 0.9824 |
db4017cdb27b4bce58492a43d97da171
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset. It achieves the following results on the evaluation set:
- Loss: 3.6744
- Rouge1: 13.2843
- Rouge2: 2.006
- Rougel: 10.6541
- Rougelsum: 12.0343
- Gen Len: 18.9984
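A minimal summarization sketch; the Hub path is a placeholder and the generation lengths are illustrative:

```python
from transformers import pipeline

# Hypothetical Hub path -- replace with the real repository ID
summarizer = pipeline(
    'summarization',
    model='your-username/t5-small-finetuned-xsum',
)

text = 'A long ELI5-style passage to condense goes here ...'
print(summarizer(text, max_length=60, min_length=10)[0]['summary_text'])
```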
f09cb3c682761f7e45b1cf208b240c10
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.8822 | 1.0 | 17040 | 3.6744 | 13.2843 | 2.006 | 10.6541 | 12.0343 | 18.9984 |
210ae00df6bd6f12de8b6378a51b0a37
apache-2.0
['generated_from_trainer']
false
![SGH logo.png](https://s3.amazonaws.com/moonup/production/uploads/1667143139655-631feef1124782a19eff4243.png) This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the SGH news articles and summaries dataset. It achieves the following results on the evaluation set:
- Loss: 1.9680
- Rouge1 Precision: 0.4404
- Rouge1 Recall: 0.5874
- Rouge1 Fmeasure: 0.4653
- Rouge2 Precision: 0.2673
- Rouge2 Recall: 0.3871
- Rouge2 Fmeasure: 0.2897
- Rougel Precision: 0.3059
- Rougel Recall: 0.4418
- Rougel Fmeasure: 0.3308
- Rougelsum Precision: 0.3059
- Rougelsum Recall: 0.4418
- Rougelsum Fmeasure: 0.3308
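A minimal inference sketch for an LED checkpoint like this one; the Hub path below is a placeholder, the input and generation lengths are illustrative, and LED needs a global attention mask with at least the first token marked as global:

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

# Hypothetical Hub path -- replace with the real repository ID
model_id = 'your-username/led-base-16384-sgh-news'
tokenizer = LEDTokenizer.from_pretrained(model_id)
model = LEDForConditionalGeneration.from_pretrained(model_id)

article = 'Full news article text, potentially thousands of tokens long ...'
inputs = tokenizer(article, return_tensors='pt', truncation=True, max_length=16384)

# LED expects global attention on at least one token; the convention is
# to mark the first token
global_attention_mask = torch.zeros_like(inputs['input_ids'])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs['input_ids'],
    attention_mask=inputs['attention_mask'],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```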
151f6ccc4a73904776484e42513e4404
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|
| 1.4834 | 0.43 | 10 | 1.7001 | 0.2304 | 0.6761 | 0.3152 | 0.1326 | 0.4034 | 0.1797 | 0.1495 | 0.4624 | 0.2069 | 0.1495 | 0.4624 | 0.2069 |
| 1.5011 | 0.87 | 20 | 1.6051 | 0.4301 | 0.5372 | 0.4087 | 0.2481 | 0.3439 | 0.245 | 0.2878 | 0.3928 | 0.2834 | 0.2878 | 0.3928 | 0.2834 |
| 0.9289 | 1.3 | 30 | 1.5501 | 0.431 | 0.597 | 0.4364 | 0.2653 | 0.393 | 0.2736 | 0.3007 | 0.4233 | 0.3037 | 0.3007 | 0.4233 | 0.3037 |
| 1.0895 | 1.74 | 40 | 1.5969 | 0.4661 | 0.5481 | 0.4486 | 0.2736 | 0.3439 | 0.2689 | 0.3318 | 0.4045 | 0.3221 | 0.3318 | 0.4045 | 0.3221 |
| 0.7785 | 2.17 | 50 | 1.5875 | 0.4527 | 0.5405 | 0.4209 | 0.2942 | 0.3634 | 0.272 | 0.3268 | 0.4047 | 0.3042 | 0.3268 | 0.4047 | 0.3042 |
| 0.635 | 2.61 | 60 | 1.6081 | 0.4142 | 0.5649 | 0.4172 | 0.242 | 0.3659 | 0.2549 | 0.2787 | 0.4156 | 0.2909 | 0.2787 | 0.4156 | 0.2909 |
| 0.514 | 3.04 | 70 | 1.6150 | 0.4431 | 0.5665 | 0.4569 | 0.2656 | 0.3754 | 0.2853 | 0.3252 | 0.441 | 0.3434 | 0.3252 | 0.441 | 0.3434 |
| 0.5617 | 3.48 | 80 | 1.6447 | 0.3956 | 0.6304 | 0.451 | 0.2353 | 0.425 | 0.2776 | 0.2883 | 0.4904 | 0.3332 | 0.2883 | 0.4904 | 0.3332 |
| 0.396 | 3.91 | 90 | 1.7423 | 0.4276 | 0.609 | 0.4506 | 0.2657 | 0.4142 | 0.2858 | 0.3091 | 0.4677 | 0.3316 | 0.3091 | 0.4677 | 0.3316 |
| 0.3427 | 4.35 | 100 | 1.7572 | 0.3877 | 0.5633 | 0.4169 | 0.216 | 0.3635 | 0.2468 | 0.2706 | 0.4314 | 0.3018 | 0.2706 | 0.4314 | 0.3018 |
| 0.3059 | 4.78 | 110 | 1.7705 | 0.4255 | 0.5524 | 0.4429 | 0.2495 | 0.3488 | 0.2671 | 0.3184 | 0.4275 | 0.3358 | 0.3184 | 0.4275 | 0.3358 |
| 0.2083 | 5.22 | 120 | 1.7840 | 0.4533 | 0.5896 | 0.4655 | 0.284 | 0.4142 | 0.308 | 0.3164 | 0.4442 | 0.3376 | 0.3164 | 0.4442 | 0.3376 |
| 0.2591 | 5.65 | 130 | 1.8396 | 0.4391 | 0.5315 | 0.4209 | 0.2768 | 0.3661 | 0.2707 | 0.3194 | 0.4124 | 0.3111 | 0.3194 | 0.4124 | 0.3111 |
| 0.2609 | 6.09 | 140 | 1.8220 | 0.4425 | 0.5712 | 0.4465 | 0.2642 | 0.3738 | 0.2727 | 0.3093 | 0.4349 | 0.3208 | 0.3093 | 0.4349 | 0.3208 |
| 0.1696 | 6.52 | 150 | 1.8916 | 0.475 | 0.5557 | 0.4686 | 0.2959 | 0.3783 | 0.3019 | 0.3409 | 0.4268 | 0.3442 | 0.3409 | 0.4268 | 0.3442 |
| 0.2683 | 6.96 | 160 | 1.8957 | 0.445 | 0.5918 | 0.4748 | 0.285 | 0.4021 | 0.3075 | 0.3249 | 0.4551 | 0.3522 | 0.3249 | 0.4551 | 0.3522 |
| 0.1259 | 7.39 | 170 | 1.9371 | 0.4473 | 0.5368 | 0.4664 | 0.2608 | 0.3355 | 0.282 | 0.3276 | 0.4071 | 0.3492 | 0.3276 | 0.4071 | 0.3492 |
| 0.1919 | 7.83 | 180 | 1.9521 | 0.4026 | 0.5528 | 0.438 | 0.2362 | 0.3427 | 0.2604 | 0.2751 | 0.3957 | 0.3042 | 0.2751 | 0.3957 | 0.3042 |
| 0.1279 | 8.26 | 190 | 1.9398 | 0.413 | 0.6053 | 0.4575 | 0.2511 | 0.403 | 0.2881 | 0.2662 | 0.4195 | 0.3027 | 0.2662 | 0.4195 | 0.3027 |
| 0.1176 | 8.7 | 200 | 1.9556 | 0.4363 | 0.565 | 0.4492 | 0.2591 | 0.3727 | 0.2806 | 0.3107 | 0.428 | 0.3289 | 0.3107 | 0.428 | 0.3289 |
| 0.1299 | 9.13 | 210 | 1.9642 | 0.4385 | 0.5728 | 0.4587 | 0.2687 | 0.3744 | 0.2888 | 0.3212 | 0.436 | 0.3404 | 0.3212 | 0.436 | 0.3404 |
| 0.1303 | 9.57 | 220 | 1.9649 | 0.43 | 0.5648 | 0.439 | 0.2605 | 0.3624 | 0.2691 | 0.2958 | 0.4135 | 0.3067 | 0.2958 | 0.4135 | 0.3067 |
| 0.1129 | 10.0 | 230 | 1.9680 | 0.4404 | 0.5874 | 0.4653 | 0.2673 | 0.3871 | 0.2897 | 0.3059 | 0.4418 | 0.3308 | 0.3059 | 0.4418 | 0.3308 |
e51201e1800b4a507006a962a8cc2dde
mit
['generated_from_trainer']
false
deberta-base-finetuned-squad1 This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 0.8037
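A minimal extractive question-answering sketch with a placeholder Hub path:

```python
from transformers import pipeline

# Hypothetical Hub path -- replace with the real repository ID
qa = pipeline(
    'question-answering',
    model='your-username/deberta-base-finetuned-squad1',
)

result = qa(
    question='What was the model fine-tuned on?',
    context='This model is a fine-tuned version of microsoft/deberta-base on the SQuAD dataset.',
)
print(result['answer'])  # e.g. 'the SQuAD dataset'
```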
d4d7bb60704b3bbc524e57df081c4254
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7928 | 1.0 | 7380 | 0.7810 |
| 0.5795 | 2.0 | 14760 | 0.8037 |
fa1b3e75f136a2a633d0be8d5fd3479b
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-en-es This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.8937
- Rouge1: 32.6939
- Rouge2: 11.794
- Rougel: 31.9982
- Rougelsum: 31.9902
- Gen Len: 15.7947
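A minimal generation sketch with a placeholder Hub path; the task prefix is an assumption, since the card does not state how inputs were formatted during fine-tuning:

```python
from transformers import pipeline

# Hypothetical Hub path -- replace with the real repository ID
translator = pipeline(
    'text2text-generation',
    model='your-username/t5-small-finetuned-en-es',
)

# T5 models are usually driven by a task prefix; the exact prefix is assumed here
print(translator('translate English to Spanish: The fjords are beautiful today.'))
```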
00cf71e599f21f2c6f9e6c195ae031e7
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.251 | 1.0 | 7061 | 1.8937 | 32.6939 | 11.794 | 31.9982 | 31.9902 | 15.7947 |
9e333da2a6a47f43e705d23f87dcc3ed
mit
[]
false
Introduction XDoc is a unified pre-trained model that handles different document formats within a single architecture. With only 36.7% of the parameters, XDoc achieves comparable or better performance on downstream tasks, which makes it cost-effective for real-world deployment. [XDoc: Unified Pre-training for Cross-Format Document Understanding](https://arxiv.org/abs/2210.02849) Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei, EMNLP 2022
88d53faedacbce1939a4e434ff2a89cb
mit
[]
false
Citation If you find XDoc helpful, please cite us:
```
@article{chen2022xdoc,
  title={XDoc: Unified Pre-training for Cross-Format Document Understanding},
  author={Chen, Jingye and Lv, Tengchao and Cui, Lei and Zhang, Cha and Wei, Furu},
  journal={arXiv preprint arXiv:2210.02849},
  year={2022}
}
```
278376f25378f09a250a5e31d7e91ae1
apache-2.0
['automatic-speech-recognition', 'fa']
false
exp_w2v2t_fa_vp-sv_s689 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
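A minimal transcription sketch with the HuggingSound tool mentioned above; the Hub path below is a placeholder for wherever this checkpoint is hosted:

```python
from huggingsound import SpeechRecognitionModel

# Hypothetical Hub path -- replace with the real repository ID
model = SpeechRecognitionModel('your-username/exp_w2v2t_fa_vp-sv_s689')

# Input audio should be sampled at 16 kHz, as noted above
audio_paths = ['/path/to/sample1.mp3', '/path/to/sample2.wav']
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]['transcription'])
```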
262c2850b42a8c3e0efe66c5ba59d3a4
mit
[]
false
Collage3 on Stable Diffusion This is the `<Collage3>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:
![<Collage3> 0](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/19.jpeg)
![<Collage3> 1](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/5.jpeg)
![<Collage3> 2](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/6.jpeg)
![<Collage3> 3](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/15.jpeg)
![<Collage3> 4](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/20.jpeg)
![<Collage3> 5](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/14.jpeg)
![<Collage3> 6](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/9.jpeg)
![<Collage3> 7](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/3.jpeg)
![<Collage3> 8](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/0.jpeg)
![<Collage3> 9](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/17.jpeg)
![<Collage3> 10](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/12.jpeg)
![<Collage3> 11](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/13.jpeg)
![<Collage3> 12](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/2.jpeg)
![<Collage3> 13](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/16.jpeg)
![<Collage3> 14](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/18.jpeg)
![<Collage3> 15](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/22.jpeg)
![<Collage3> 16](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/10.jpeg)
![<Collage3> 17](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/7.jpeg)
![<Collage3> 18](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/1.jpeg)
![<Collage3> 19](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/21.jpeg)
![<Collage3> 20](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/23.jpeg)
![<Collage3> 21](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/11.jpeg)
![<Collage3> 22](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/4.jpeg)
![<Collage3> 23](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/8.jpeg)
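Recent versions of `diffusers` can also load a textual-inversion embedding directly, without the notebooks; a minimal sketch, where the Stable Diffusion v1 base checkpoint is an assumption about what the concept was trained against:

```python
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (assumed v1-compatible base)
pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5')

# Attach the learned <Collage3> embedding from the concepts library
pipe.load_textual_inversion('sd-concepts-library/collage3')

# Use the concept token as a style in a prompt
image = pipe('a city skyline in the style of <Collage3>').images[0]
image.save('collage3_skyline.png')
```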
63b3bdd1cb7229281c6a779e0297d527