| repo_id (string) | author (string) | model_type (string) | files_per_repo (int64) | downloads_30d (int64) | library (string) | likes (int64) | pipeline (string) | pytorch (bool) | tensorflow (bool) | jax (bool) | license (string) | languages (string) | datasets (string) | co2 (string) | prs_count (int64) | prs_open (int64) | prs_merged (int64) | prs_closed (int64) | discussions_count (int64) | discussions_open (int64) | discussions_closed (int64) | tags (string) | has_model_index (bool) | has_metadata (bool) | has_text (bool) | text_length (int64) | is_nc (bool) | readme (string) | hash (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggan/fastgan-few-shot-shells | huggan | null | 8 | 0 | null | 0 | unconditional-image-generation | true | false | false | mit | null | ['huggan/few-shot-shells'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['huggan', 'gan', 'unconditional-image-generation'] | false | true | true | 1,870 | false |
# Generate shell image using FastGAN
## Model description
[FastGAN](https://arxiv.org/abs/2101.04775) is a Generative Adversarial Network (GAN) trained on a small number of high-fidelity images at minimal computing cost. Using a skip-layer channel-wise excitation module and a self-supervised discriminator ... | 8993f2a232d06c9aa203c0a39945cb2c |
MaryaAI/opus-mt-ar-en-finetunedTanzil-v7-ar-to-en | MaryaAI | marian | 9 | 1 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,061 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetunedTanzil-v7-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/He... | bd8112472fc5ed21208a03b4fc0d0ecf |
LeBenchmark/wav2vec2-FR-1K-large | LeBenchmark | wav2vec2 | 7 | 13 | transformers | 0 | feature-extraction | true | false | true | apache-2.0 | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['wav2vec2'] | false | true | true | 4,537 | false |
# LeBenchmark: wav2vec2 large model trained on 1K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 mode... | cd05bf36ee40f433ed296675d7630eb1 |
patrickvonplaten/lora_dreambooth_dog_example | patrickvonplaten | null | 38 | 0 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora'] | false | true | true | 401 | false |
# LoRA DreamBooth - https://huggingface.co/patrickvonplaten/dummy
These are LoRA adaptation weights for https://huggingface.co/patrickvonplaten/dummy. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
 on an unknown datas... | cd51f43805a001ef4df615f61de6a64c |
fathyshalab/domain_transfer_general-massive_general-roberta-large-v1-5-95 | fathyshalab | roberta | 14 | 2 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,512 | false |
# fathyshalab/domain_transfer_general-massive_general-roberta-large-v1-5-95
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https:... | 4e11cfb0d07ec4e9aff05ff89d720dac |
datauma/bert-finetuned-ner | datauma | bert | 12 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2... | 6c01284a7ea5138bcaa88eb232e8c448 |
jonatasgrosman/exp_w2v2t_nl_no-pretraining_s512 | jonatasgrosman | wav2vec2 | 10 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['nl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'nl'] | false | true | true | 414 | false | # exp_w2v2t_nl_no-pretraining_s512
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has be... | 147c1abb3e169ff160264d2b1c854694 |
sumedh/wav2vec2-large-xlsr-marathi | sumedh | wav2vec2 | 8 | 431 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['mr'] | ['openslr'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 4,503 | false |
# Wav2Vec2-Large-XLSR-53-Marathi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using the [Open SLR64](http://openslr.org/64/) dataset. When using this model, make sure that your speech input is sampled at 16kHz. This data contains only female voices but... | be9e5df5f2ff5b9d5099e922d26d35c8 |
sd-concepts-library/ralph-mcquarrie | sd-concepts-library | null | 10 | 0 | null | 2 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,196 | false | ### Ralph McQuarrie on Stable Diffusion
This is the `<ralph-mcquarrie>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook.... | d8606f4f8eb752ee0d14f66dbb48ce8a |
msavel-prnt/distilbert-base-uncased-finetuned-clinc | msavel-prnt | distilbert | 12 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,479 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | d271d08c5666802131d97cd82b97a0b5 |
jangmin/ddpm-butterflies-128 | jangmin | null | 13 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,229 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/hu... | 88c091e2f905629779401e7e8a0a9283 |
davanstrien/convnext_flyswot | davanstrien | convnext | 7 | 4 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['image_folder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,002 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext_flyswot
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext... | 4938ba555d68b94740ba01110f23985a |
jojoUla/bert-large-cased-sigir-support-refute-no-label-40 | jojoUla | bert | 14 | 0 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,247 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40
This model is a fine-tuned version of [bert-large-cased](https://huggingface.c... | 17cadf11407e3162738ec4dbeb566753 |
BumBelDumBel/ZORK-AI-TEST | BumBelDumBel | gpt2 | 16 | 2 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 899 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK-AI-TEST
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model descripti... | e6fc4fc1314487f7f1aa549c7ac90593 |
silviacamplani/distilbert-finetuned-dapt_tapt-lm-ai | silviacamplani | distilbert | 8 | 2 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,504 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-dapt_tapt-lm-ai
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert... | 133d5d8e861b6bb9fd2884df7dccb4a6 |
microsoft/cvt-13-384-22k | microsoft | cvt | 6 | 74 | transformers | 0 | image-classification | true | true | false | apache-2.0 | null | ['imagenet-1k'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['vision', 'image-classification'] | false | true | true | 1,301 | false |
# Convolutional Vision Transformer (CvT)
CvT-13 model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://gi... | 18954f7869fb70dd981a255eec3b9919 |
jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-10_sixties-0_s109 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 476 | false | # exp_w2v2r_es_xls-r_age_teens-10_sixties-0_s109
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure ... | 682297db3a1fe910a941726e64140a4e |
Helsinki-NLP/opus-mt-gem-gem | Helsinki-NLP | marian | 11 | 3,053 | transformers | 0 | translation | true | true | false | apache-2.0 | ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 17,101 | false |
### gem-gem
* source group: Germanic languages
* target group: Germanic languages
* OPUS readme: [gem-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz ... | 80aec5a67a1c3b25169199cda347fe5a |
Imene/vit-base-patch16-384-wi4 | Imene | vit | 7 | 2 | transformers | 0 | image-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 3,213 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Imene/vit-base-patch16-384-wi4
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-b... | 2f9ea97f8460d928feecdcb8fb7dc94a |
Rubens/Wav2Vec2-Large-XLSR-53-Portuguese | Rubens | wav2vec2 | 9 | 8 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['pt'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'speech', 'wav2vec2', 'pt', 'apache-2.0', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week', 'PyTorch'] | true | true | true | 3,424 | false |
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```p... | d9be71f1c963b931c0bb1554d50435df |
Wusul/aperturescience | Wusul | null | 16 | 14 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 422 | false | ### aperturescience Dreambooth model trained by Wusul with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-sta... | fd94334454bb3c4cd0728807a7917dfe |
saattrupdan/xlmr-base-texas-squad-es | saattrupdan | xlm-roberta | 12 | 7 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,457 | false |
# TExAS-SQuAD-es
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-es dataset.
It achieves the following results on the evaluation set:
- Exact match: xx.xx%
- F1-score: xx.xx%
## Training procedure
### Training hyperparameters
The following hyperp... | 9ae60a36aeb84e757a5bb8b9a665e73f |
sd-concepts-library/on-kawara | sd-concepts-library | null | 9 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,020 | false | ### On Kawara on Stable Diffusion
This is the `<on-kawara>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can als... | de5f6c287f76ec824ee5bf301f15a670 |
fathyshalab/massive_transport-roberta-large-v1-2-3 | fathyshalab | roberta | 14 | 2 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,466 | false |
# fathyshalab/massive_transport-roberta-large-v1-2-3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with c... | 6ee22d68bf4024eb93479d2ee3836999 |
Mustafa21/segformer-b0-scene-parse-150 | Mustafa21 | segformer | 6 | 0 | transformers | 0 | null | true | false | false | other | null | ['scene_parse_150'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 30,876 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the ... | 288aa60d70fdd062185ae8c555efbd4d |
jonatasgrosman/exp_w2v2t_zh-cn_wavlm_s677 | jonatasgrosman | wavlm | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['zh-CN'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'zh-CN'] | false | true | true | 445 | false | # exp_w2v2t_zh-cn_wavlm_s677
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampl... | 5439750b51e3811c49bcd642e6ad4656 |
emre/whisper-medium-turkish-2 | emre | whisper | 20 | 28 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['tr'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | true | true | true | 1,505 | false |
# Whisper Medium TR
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.211673
- Wer: 18.51
## Model description
This model is the openai whisper medium tr... | 4941728e82d546132dbb725a90c5aa37 |
ku-nlp/deberta-v2-large-japanese | ku-nlp | deberta-v2 | 8 | 1,306 | transformers | 2 | fill-mask | true | false | false | cc-by-sa-4.0 | ['ja'] | ['wikipedia', 'cc100', 'oscar'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deberta', 'deberta-v2', 'fill-mask'] | false | true | true | 3,307 | false |
# Model Card for Japanese DeBERTa V2 large
## Model description
This is a Japanese DeBERTa V2 large model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers ... | 335f0c2682f23692dd992ee0dc4b821e |
emre/wav2vec2-xls-r-300m-W2V2-XLSR-300M-YAKUT-SMALL | emre | wav2vec2 | 15 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sah'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 1,438 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-W2V2-XLSR-300M-YAKUT-SMALL
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://hugg... | e726ef725045d7551c70129c57162ebd |
Heerak/xlm-roberta-base-finetuned-panx-de-fr | Heerak | xlm-roberta | 10 | 4 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,321 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-robert... | bf6dad2e6000753733f8005ea6a57663 |
Apocalypse-19/Genshin-Landscape-Diffusion | Apocalypse-19 | null | 15 | 512 | diffusers | 48 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'dreambooth-hackathon', 'landscape'] | false | true | true | 1,993 | false |
# Dreambooth Model for Landscapes trained on images from Genshin Impact.
This is a Stable Diffusion model fine-tuned on the landscape concept with DreamBooth. It can be used by modifying the `instance_prompt`: **ggenshin landscape**
This model was created as part of the DreamBooth Hackathon 🔥.
## Description
Mode... | 4d16013c2eb2e22e9abcddb74cfa295e |
law-ai/InCaseLawBERT | law-ai | bert | 7 | 92 | transformers | 3 | fill-mask | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['legal'] | false | true | true | 3,882 | false | ### InCaseLawBERT
Model and tokenizer files for the InCaseLawBERT model from the paper [Pre-training Transformers on Indian Legal Text](https://arxiv.org/abs/2209.06049).
### Training Data
For building the pre-training corpus of Indian legal text, we collected a large corpus of case documents from the Indian Supreme... | 4d19afd13bfd8c3b4e0c468a54327a37 |
jonaskoenig/topic_classification_03 | jonaskoenig | bert | 8 | 1 | transformers | 0 | text-classification | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,093 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# topic_classification_03
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsof... | bd3a777cce4f7e6a771823db78ac6064 |
gokuls/tiny-bert-sst2-1_mobilebert-only-distillation | gokuls | bert | 13 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,612 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-1_mobilebert-only-distillation
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://... | 6fbb508378578ed24b3dafe5e41edb97 |
Helsinki-NLP/opus-mt-fi-ln | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-fi-ln
* source languages: fi
* target languages: ln
* OPUS readme: [fi-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](http... | 396db7d6ff69d597b8a1c606b6b6f8fd |
Helsinki-NLP/opus-mt-fi-ee | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-fi-ee
* source languages: fi
* target languages: ee
* OPUS readme: [fi-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](http... | e1d787cd6691874e76e499941d91a2af |
AT/distilroberta-base-finetuned-wikitext2 | AT | roberta | 27 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 942 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilr... | 5cfc4c6c19c854da97ee73613e56228b |
Positroniy/first_finetuning-sentiment-model-3000-samples | Positroniy | distilbert | 16 | 11 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,059 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingfac... | 0eaf0801166adec12c040b9fcbfa8580 |
nandysoham16/16-clustered_aug | nandysoham16 | distilbert | 8 | 0 | keras | 0 | null | false | true | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 5,151 | false |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
['To_Kill_a_Mockingbird', 'Dog', 'Pub', 'Paper', 'Brain', 'Wood', 'The_Times', 'Immunology', 'Animal', 'Beer', 'Emotion', 'Digestion... | c7a0d9b5a521d4925a7040e11df6245a |
glissa/finetuning-sentiment-model-3000-samples | glissa | distilbert | 13 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,053 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | d130d9c5b28411ff87da27b0381a4a80 |
lora-library/lora-dreambooth-sample-dog | lora-library | null | 41 | 0 | diffusers | 4 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers'] | false | true | true | 537 | false | # LoRA DreamBooth - lora-dreambooth-sample-dog
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "sksdog" using [DreamBooth](https://dreambooth.github.io/). You can find some example... | f43a0565e02a9224436f833d85d30a32 |
ticoAg/distilbert-base-uncased-finetuned-emotion | ticoAg | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,336 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | 7af05cf241e73e46e208e0fb24894733 |
jbreuch/bert-news-v3 | jbreuch | bert | 4 | 0 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,323 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-news-v3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset... | 6c1b5766a87bbce5959f2b037e48aaf9 |
levinlab/neuroscience-to-dev-bio-5 | levinlab | bart | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,028 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neuroscience-to-dev-bio-5
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large... | 8d001735632191a0154b1533d5be9f1f |
jbreuch/bert-news-cad-v3 | jbreuch | bert | 4 | 0 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,327 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-news-cad-v3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dat... | baefdf87dee1027e884bf8e85ed03d92 |
toeinriver/distilbert-base-uncased-finetuned-emotion | toeinriver | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | f4bd0210c6ee07b8ff5e9fd2cad341b3 |
Intel/t5-small-xsum-int8-dynamic | Intel | t5 | 8 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['mnli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['int8', 'Intel® Neural Compressor', 'neural-compressor', 'PostTrainingDynamic'] | false | true | true | 875 | false |
# INT8 T5 small finetuned on XSum
### Post-training dynamic quantization
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes... | 4536169941660b930801f373a56dcb68 |
KarelDO/roberta-base.CEBaB_confounding.observational.absa.5-class.seed_42 | KarelDO | roberta | 15 | 2 | transformers | 0 | null | true | false | false | mit | ['en'] | ['OpenTable'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,115 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base.CEBaB_confounding.observational.absa.5-class.seed_42
This model is a fine-tuned version of [roberta-base](https://h... | c4ebb55fe07d45e58ef646b60c1dbd12 |
0xAnders/pai-symbol-heywhale | 0xAnders | null | 17 | 92 | diffusers | 3 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'science'] | false | true | true | 955 | false |
# DreamBooth model for the pai concept trained by 0xAnders.
This is a Stable Diffusion model fine-tuned on the pai concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of pai symbol**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https:/... | a53e26b862a80ae2248c5cc0f188ae5b |
mqy/mt5-small-finetuned-6feb-5 | mqy | mt5 | 14 | 2 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,592 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-6feb-5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on ... | 5d8eed0c34fcf04a694353f3608c7cd6 |
Helsinki-NLP/opus-mt-bcl-es | Helsinki-NLP | marian | 10 | 28 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-bcl-es
* source languages: bcl
* target languages: es
* OPUS readme: [bcl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](... | 104b178a7f771b12c385d2230f6124b4 |
jonatasgrosman/exp_w2v2t_pt_r-wav2vec2_s732 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pt'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pt'] | false | true | true | 462 | false | # exp_w2v2t_pt_r-wav2vec2_s732
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your spee... | db4f80aa98abbe053c03878a0c635509 |
multimodalart/polisteps-768 | multimodalart | null | 39 | 10 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 2,877 | false | ### polisteps 768 Dreambooth model trained by multimodalart with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-768 base model
You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingf... | d9dfe3f8a4a52c088b09a42b66f9e87f
kzipa/ddpm-butterflies-128-retrain | kzipa | null | 11 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,243 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128-retrain
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://githu... | baddd77cffcacbd488ca022486828b2d |
dmiller1/distilbert-base-uncased-finetuned-emotion | dmiller1 | distilbert | 9 | 24 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,337 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | fe0f2349d4a574314dc49eb64bbc81c6 |
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat | andi611 | bert | 13 | 5 | transformers | 0 | question-answering | true | false | false | cc-by-4.0 | ['en'] | ['squad_v2', 'mit_movie'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,135 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
This model is a fine-tuned version of [deep... | 0afda7e81a0a29c5b4105222dca312ae |
evegarcianz/bert-finetuned-squad | evegarcianz | distilbert | 8 | 2 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,337 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# evegarcianz/bert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/... | 32cbbac567f309371a83add796b395d5 |
theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3 | theojolliffe | bart | 15 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | null | ['scientific_papers'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,565 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3
This model is a fine-tuned version of [theojolliffe/bart-large-cnn-pubmed1... | 4e7ef593feed87ff7a9fe81c11a0821c |
PereLluis13/wav2vec2-xls-r-300m-ca | PereLluis13 | wav2vec2 | 51 | 7 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ca'] | ['mozilla-foundation/common_voice_8_0', 'collectivat/tv3_parla', 'projecte-aina/parlament_parla'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'collectivat/tv3_parla', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'projecte-aina/parlament_parla', 'robust-speech-event'] | true | true | true | 6,205 | false |
# wav2vec2-xls-r-300m-ca
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datase... | 26fdd0fdd7aa560b47be955d1fc0bd94 |
Kovalev/opus-mt-en-ru-finetuned-en-to-ru-PSUR | Kovalev | marian | 13 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,911 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned-en-to-ru-PSUR
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/... | bc694e080bcd0abddcb27ec8e910c14b |
PaddlePaddle/uie-senta-nano | PaddlePaddle | ernie | 7 | 0 | paddlenlp | 0 | null | false | false | false | apache-2.0 | ['zh'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,997 | false | [](https://github.com/PaddlePaddle/PaddleNLP)
# PaddlePaddle/uie-senta-nano
Sentiment analysis has been a research hotspot in recent years, aiming at analyzing, processing, summarizing, and reasoning about emot... | 1446aa281929608ce67ea88c96109540
Geotrend/distilbert-base-th-cased | Geotrend | distilbert | 6 | 5 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['th'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,215 | false |
# distilbert-base-th-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accura... | 162864736e6454f7c4ffe8c9332bb721
it5/it5-small-formal-to-informal | it5 | t5 | 9 | 12 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['it'] | ['yahoo/xformal_it'] | {'emissions': '8g', 'source': 'Google Cloud Platform Carbon Footprint', 'training_type': 'fine-tuning', 'geographical_location': 'Eemshaven, Netherlands, Europe', 'hardware_used': '1 TPU v3-8 VM'} | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['italian', 'sequence-to-sequence', 'style-transfer', 'formality-style-transfer'] | true | true | true | 1,801 | false |
# IT5 Small for Formal-to-informal Style Transfer 🤗
This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to... | 2dfb2441977fb06b3e8bf50379ab3276 |
oliverguhr/fullstop-punctuation-multilingual-base | oliverguhr | xlm-roberta | 12 | 4,371 | transformers | 3 | token-classification | true | false | false | mit | ['en', 'de', 'fr', 'it', 'nl', 'multilingual'] | ['wmt/europarl'] | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['punctuation prediction', 'punctuation'] | false | true | true | 1,722 | false |
# Work in progress
## Classification report over all languages
```
precision recall f1-score support
0 0.99 0.99 0.99 47903344
. 0.94 0.95 0.95 2798780
, 0.85 0.84 0.85 3451618
? 0.88 0.85 ... | d919d4df91050d4bea80a02cb00255ee |
akadriu/wav2vec2-large-xlsr-53-Total2e-4_4 | akadriu | wav2vec2 | 13 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,354 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total2e-4_4
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.c... | d36ff36c66c87065a3da6127bd2b15d1 |
phamvanlinh143/bert-finetuned-ner | phamvanlinh143 | bert | 12 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2... | 8d3dd17993d2c2fe477089b236ce58c3 |
Cryonicus/Gemini_Anime | Cryonicus | null | 25 | 0 | null | 5 | null | false | false | false | openrail | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Angelcore', 'Devilcore', 'Dieselpunk', 'Steampunk', 'Clockpunk', 'Fantasy', 'Gothic Style', 'Nu-Gothic Style', 'Gothic Art', 'Dark Fantasy', 'Dark Art', 'Medieval', 'Modern', 'Futuristic', 'Cybernetic', 'Magic tech', 'Magic Circles', 'Cute Girls', 'Beautiful Women', 'Creepy Women', 'Creepy Girls', 'Evil', 'Wicked', '... | false | true | true | 2,476 | false | Gemini is a Dark Fantasy Anime-focused merge of several models using various combination methods to attempt to extract specific styles.
The first version of Gemini_Anime is intended for darker or more fantasy-based renders.\
This particular model leans more heavily on dark art, gothic art, and scene-based (action etc. rather tha... | b43f8c39dde55111849e030697851df7
sunnyujjawal/ToDo-app-Javascript | sunnyujjawal | null | 3 | 0 | null | 0 | null | false | false | false | cc | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,011 | false | To create a todo application with JavaScript, you will need to use HTML and CSS to build the user interface, and JavaScript to add functionality to the app.
Here is an outline of the steps you can follow to build a simple todo app:
Create an HTML page with a textarea element and a button element. The textarea will be... | 4135c2e705a346bf4fd8b41de779823f |
l3cube-pune/hindi-bert-v2 | l3cube-pune | bert | 8 | 13 | transformers | 1 | fill-mask | true | false | false | cc-by-4.0 | ['hi'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 620 | false |
## HindBERT
HindBERT is a Hindi BERT model. It is a multilingual BERT (google/muril-base-cased) model fine-tuned on publicly available Hindi monolingual datasets.
[project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [<a href='https:/... | 5e2cc7cc3d37ed6d83e93b02d09e7f1b |
ALM/whisper-cy-small-augmented | ALM | whisper | 20 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['cy'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,567 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Welsh - Robust
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-... | 3e40870296c5371346b008109ef17e06 |
yhchoi/distilbert-base-uncased-finetuned-emotion | yhchoi | distilbert | 14 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 931 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | ba98ffdf6b7818e82a31f49a51c21e36 |
delpart/distilbert-base-uncased-finetuned-ner | delpart | distilbert | 13 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,555 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | 85899df2cae2fee5cafbe4c704a25f12 |
HarrisDePerceptron/xls-r-300m-ur-cv8-hi | HarrisDePerceptron | wav2vec2 | 22 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ur'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer'] | true | true | true | 4,403 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3](https://huggingface.co/DrishtiSharma/wav2... | dad0c9f5633c2da6749f10c657200715 |
SenseTime/deformable-detr-single-scale | SenseTime | deformable_detr | 5 | 82 | transformers | 0 | object-detection | true | false | false | apache-2.0 | null | ['coco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['object-detection', 'vision'] | false | true | true | 4,111 | false |
# Deformable DETR model with ResNet-50 backbone, single scale
Deformable DEtection TRansformer (DETR), single scale model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.or... | 89dfdd102d7cdc7ae732c51202e62f25 |
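The deformable attention this card refers to replaces dense attention over every spatial position with bilinear sampling at a small number of learned offsets around a reference point. A minimal pure-Python sketch of that sampling step follows; the feature map, offsets, and attention weights here are invented for illustration (in the real model the offsets and weights are predicted per query, per head):

```python
# Sketch of the core sampling step in deformable attention: each query
# gathers values at a few fractional locations (reference + learned offsets)
# via bilinear interpolation, then mixes them with attention weights.

def bilinear_sample(feature, y, x):
    """Sample a 2D feature map (list of lists) at a fractional (y, x)."""
    h, w = len(feature), len(feature[0])
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feature[y0][x0]
            + (1 - dy) * dx * feature[y0][x1]
            + dy * (1 - dx) * feature[y1][x0]
            + dy * dx * feature[y1][x1])

def deformable_attend(feature, reference, offsets, weights):
    """Weighted sum of samples at reference + each learned offset."""
    ry, rx = reference
    return sum(w * bilinear_sample(feature, ry + oy, rx + ox)
               for (oy, ox), w in zip(offsets, weights))

feature = [[float(r * 4 + c) for c in range(4)] for r in range(4)]  # 4x4 map
out = deformable_attend(feature, reference=(1.5, 1.5),
                        offsets=[(0.0, 0.0), (-0.5, 0.5), (1.0, -1.0)],
                        weights=[0.5, 0.3, 0.2])
```

Because only a handful of points are sampled per query instead of all H×W positions, this is the mechanism that lets Deformable DETR converge faster and handle multi-scale features cheaply.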
steveabecassis/t5-small-finetuned-xsum | steveabecassis | t5 | 10 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,930 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.... | cf139b805a54409b5d80de39fb16c521 |
eatdianatoday/yiwu | eatdianatoday | clip | 15 | 5 | diffusers | 2 | text-to-image | true | false | false | unknown | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 3,314 | false |
# Novelai-Diffusion
Novelai-Diffusion is a latent diffusion model that can create best-quality anime images.
This is the diffusers version of the model, provided to make Novelai-Diffusion easier for everyone to use.
# Gradio & Colab Demo
There is a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Dif... | e27aa54a70a9c18bc3930383d0fe5747 |
Yuri/xlm-roberta-base-finetuned-marc | Yuri | xlm-roberta | 12 | 5 | transformers | 0 | text-classification | true | false | false | mit | null | ['amazon_reviews_multi'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,271 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base... | 4fd3c2af3f328d288ff4f4129608f7e7 |
ali2066/correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29 | ali2066 | distilbert | 13 | 10 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,808 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
This model is a fine-tuned version of [distilbert-base-uncased-fi... | 837bde510c771533d2f6d2ec62504054 |
masapasa/wav2vec2-large-xls-r-300m-turkish-colab | masapasa | wav2vec2 | 11 | 8 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,080 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface... | 8961eb0741f2e8809598fe722e1bdd09 |
XLab/rst-word-sense-disambiguation-11b | XLab | t5 | 6 | 4 | transformers | 2 | text2text-generation | true | false | false | afl-3.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 11,216 | false | <p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai... | 011e3b78c4659607e22bb58330bea7bc |
paulopirozelli/modelo-teste | paulopirozelli | bert | 8 | 12 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['yelp_review_full'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,104 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelo-teste
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_... | c12e709527a636d6ccf1be56a45ac9f7 |
sd-concepts-library/jozef-tominc2 | sd-concepts-library | null | 10 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,166 | false | ### jozef-tominc2 on Stable Diffusion
This is the `<jozef-tominc>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You ... | 6496e740ec03847c7a69d7241c541731 |
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7 | LegolasTheElf | wav2vec2 | 10 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,543 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/... | 3ca15973d83a100f40d37142166c3dbc |
javilonso/Mex_Rbta_Opinion_Augmented_Attraction | javilonso | roberta | 9 | 4 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,477 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/Mex_Rbta_Opinion_Augmented_Attraction
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://hugging... | afc1c4c6036986cd6756feca64018fba |
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab | Bharathdamu | wav2vec2 | 14 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,103 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.c... | 9ffa86ae4d71a623e238987704b11c09 |
drscotthawley/wav2vec2-base-timit-demo-google-colab | drscotthawley | wav2vec2 | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/face... | 431a96fba4911a89efa7017febfba311 |
cardiffnlp/twitter-roberta-base-dec2020 | cardiffnlp | roberta | 9 | 6 | transformers | 0 | fill-mask | true | false | false | mit | ['en'] | ['twitter-api'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['timelms', 'twitter'] | false | true | true | 4,648 | false |
# Twitter December 2020 (RoBERTa-base, 107M)
This is a RoBERTa-base model trained on 107.06M tweets until the end of December 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interfa... | ffb08eb3dd2d83eeb39d07edde564c4e |
galverse/Galverse8888_V01 | galverse | null | 15 | 4 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 437 | false | ### Galverse-Diffusion-wf-8888 Dreambooth model trained by jarvissan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheL... | 859dbeff1b70b3fb6a6da0476c0ff24b |
tiennvcs/bert-large-uncased-finetuned-infovqa | tiennvcs | bert | 16 | 25 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 3,770 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-infovqa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-larg... | 167ced283318f178ac39614ef936a796 |
UpperLeftSide/tombartek | UpperLeftSide | null | 8 | 0 | null | 1 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion'] | false | true | true | 768 | false | Prompt: painting in the style tombartek

 on th... | 62a8428536acd7d39d0845558a2f8847 |
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_stsb_256 | gokuls | mobilebert | 17 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,959 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_data_aug_stsb_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggin... | 3498b2bd09229ad0add2ab990f719e15 |
spacy/es_core_news_md | spacy | null | 28 | 81 | spacy | 0 | token-classification | false | false | false | gpl-3.0 | ['es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 29,912 | false | ### Details: https://spacy.io/models/es#es_core_news_md
Spanish pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `es_core_news_md` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Defa... | d9b9b1a3d36ad8b1128ba65e2793c14a |
abdalrahmanshahrour/AraBART-summ | abdalrahmanshahrour | mbart | 10 | 9 | transformers | 2 | summarization | true | false | false | apache-2.0 | ['ar'] | ['xlsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'AraBERT', 'BERT', 'BERT2BERT', 'MSA', 'Arabic Text Summarization', 'Arabic News Title Generation', 'Arabic Paraphrasing', 'Summarization', 'generated_from_trainer', 'Transformers', 'PyTorch'] | true | true | true | 1,047 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraBART-summ
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training... | b5e3771592048246c2138ef9d59f8808 |
nandysoham16/Materialism-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,861 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Materialism-clustered
This model is a fine-tuned version of [nandysoham16/7-clustered_aug](https://huggingface.co/nandyso... | a6d09605779aa43289e41bbd4559e620 |
jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s886 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'de'] | false | true | true | 481 | false | # exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s886
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make ... | 9b7167b5f295b1b58323ccffeeb02452 |
Helsinki-NLP/opus-mt-cs-de | Helsinki-NLP | marian | 10 | 43 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,099 | false |
### opus-mt-cs-de
* source languages: cs
* target languages: de
* OPUS readme: [cs-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](http... | 1904c57fc1df6961b6cbc40487ae2963 |