| repo_id (string) | author (string) | model_type (string) | files_per_repo (int64) | downloads_30d (int64) | library (string) | likes (int64) | pipeline (string) | pytorch (bool) | tensorflow (bool) | jax (bool) | license (string) | languages (string) | datasets (string) | co2 (string) | prs_count (int64) | prs_open (int64) | prs_merged (int64) | prs_closed (int64) | discussions_count (int64) | discussions_open (int64) | discussions_closed (int64) | tags (string) | has_model_index (bool) | has_metadata (bool) | has_text (bool) | text_length (int64) | is_nc (bool) | readme (string) | hash (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
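The header above fixes the column order, so a data row can be parsed back into named fields. A minimal Python sketch, assuming a single-line, pipe-delimited row whose cells contain no literal `|` (README cells in this dump can violate that, so a real parser would need escaping):

```python
# Parse one pipe-delimited row of the dump into a dict keyed by column name.
# Assumes the row fits on one line and its cells contain no literal "|".
COLUMNS = [
    "repo_id", "author", "model_type", "files_per_repo", "downloads_30d",
    "library", "likes", "pipeline", "pytorch", "tensorflow", "jax",
    "license", "languages", "datasets", "co2",
    "prs_count", "prs_open", "prs_merged", "prs_closed",
    "discussions_count", "discussions_open", "discussions_closed",
    "tags", "has_model_index", "has_metadata", "has_text",
    "text_length", "is_nc", "readme", "hash",
]

def parse_row(line: str) -> dict:
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    if len(cells) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} cells, got {len(cells)}")
    return dict(zip(COLUMNS, cells))
```

Typed columns (the `int64` and `bool` fields) would still need conversion after parsing; this sketch keeps every cell as a string.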
xenon3134-mc/empty-eyes-LoRAs | xenon3134-mc | null | 7 | 0 | null | 9 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,993 | false | # LoRAs
When using these LoRAs, you may get better results by redrawing only the face or eyes with inpainting.
Alternatively, reduce the weight of the LoRA.
- [utsurome_v3.safetensors](#utsurome_v3.safetensors)
- base model: [7th_anime_3.1_Cg](https://huggingface.co/syaimu/7th_test)
- [yorime.safetensors](#yorime.safetens... | a73b06f87900a81db85c46eb6aeb07a2 |
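The weight reduction the card above recommends is usually expressed with Automatic1111's `<lora:filename:weight>` extra-networks prompt syntax; the prompt text and the 0.6 weight below are illustrative, not from the card:

```text
masterpiece, best quality, 1girl, empty eyes, <lora:utsurome_v3:0.6>
```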
jonatasgrosman/exp_w2v2t_et_wavlm_s455 | jonatasgrosman | wavlm | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['et'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'et'] | false | true | true | 439 | false | # exp_w2v2t_et_wavlm_s455
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at ... | 577518e679ef3361d11c55efc4be1bab |
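Several ASR cards in this dump require speech input sampled at 16 kHz. A minimal resampling sketch using linear interpolation with NumPy (real pipelines typically use `librosa` or `torchaudio`); the 44.1 kHz source rate and 440 Hz test tone are illustrative assumptions:

```python
import numpy as np

def resample_linear(audio: np.ndarray, sr_in: int, sr_out: int) -> np.ndarray:
    """Naive linear-interpolation resampler (fine for a sketch, not production)."""
    n_out = int(round(len(audio) * sr_out / sr_in))
    t_in = np.arange(len(audio)) / sr_in
    t_out = np.arange(n_out) / sr_out
    return np.interp(t_out, t_in, audio)

# One second of a 440 Hz tone at 44.1 kHz, resampled to the 16 kHz these models expect.
sr_in, sr_out = 44_100, 16_000
tone = np.sin(2 * np.pi * 440 * np.arange(sr_in) / sr_in)
resampled = resample_linear(tone, sr_in, sr_out)
```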
Farshid/bert-large-uncased-financial-phrasebank-allagree2 | Farshid | bert | 12 | 41 | transformers | 1 | text-classification | true | false | false | apache-2.0 | null | ['financial_phrasebank'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,566 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-financial-phrasebank-allagree2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface... | 7f3db03f717667b4a06a9d835502482b |
yuhuizhang/finetuned_gpt2-medium_sst2_negation0.05_pretrainedFalse | yuhuizhang | gpt2 | 11 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,268 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-medium_sst2_negation0.05_pretrainedFalse
This model is a fine-tuned version of [gpt2-medium](https://huggingface.... | de34378d1bca2d6e58204630714eca86 |
vesteinn/ScandiBERT | vesteinn | xlm-roberta | 8 | 348 | transformers | 2 | fill-mask | true | false | false | agpl-3.0 | ['is', 'da', 'sv', False, 'fo'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'icelandic', 'norwegian', 'faroese', 'danish', 'swedish', 'masked-lm', 'pytorch'] | false | true | true | 1,020 | false |
# ScandiBERT
Note: The model has been updated on 2022-09-27.
The model was trained on the data shown in the table below. The batch size was 8.8k, and the model was trained for 72 epochs on 24 V100 cards for about two weeks.
| Language | Data | Size |
|-----------|----------------------... | ae3aca82cbb9e76b9bd0f527321bea64 |
emendes3/cancer_diffusion_model_glioma | emendes3 | null | 88 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['glioma'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,217 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# cancer_diffusion_model_glioma
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://gith... | 302560cd9c4794c1795346a312901928 |
raquelsmv/clasificador-rotten_tomatoes | raquelsmv | electra | 10 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['classification', 'generated_from_trainer'] | true | true | true | 1,361 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rotten_tomatoes
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/go... | 1c9e659706eada209c08e23636d1bd4c |
vishalpc6191/mt5-small-finetuned-amazon-en-es | vishalpc6191 | mt5 | 9 | 2 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,407 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vishalpc6191/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/m... | 7baaaad53100d2a39a57da532198d1c5 |
dbmdz/bert-base-turkish-uncased | dbmdz | bert | 8 | 3,572 | transformers | 8 | null | true | true | true | mit | ['tr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,846 | false |
# 🤗 + 📚 dbmdz Turkish BERT model
In this repository, the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources an uncased model for Turkish 🎉
# 🇹🇷 BERTurk
BERTurk is a community-driven uncased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from t... | 86b41d0e43bab61f1134b4da3381230c |
Anjoe/kant-gpt2-large | Anjoe | gpt2 | 15 | 68 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,294 | false |
# kant-gpt2-large
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large). It was trained on the "Akademie Ausgabe" of the works of Immanuel Kant.
It achieves the following results on the evaluation set:
- Loss: 3.4257
## Model description
A large version of gpt2... | d9b24709a66f7e0f672124b6831d64ee |
jonatasgrosman/exp_w2v2t_it_no-pretraining_s764 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['it'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'it'] | false | true | true | 414 | false | # exp_w2v2t_it_no-pretraining_s764
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has be... | 2c6e6c1d28bfc02e5cd1440fab8224a8 |
rifkiaputri/mt5-base-id-finetune-unans-qg | rifkiaputri | mt5 | 11 | 4 | transformers | 0 | text2text-generation | true | false | false | mit | ['id'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['mt5', 'question-generation'] | false | true | true | 647 | false |
# mt5-base for Indonesian Unanswerable Question Generation (cased)
[mT5-base](https://huggingface.co/google/mt5-base) model fine-tuned on machine-translated SQuAD 2.0 dataset for generating unanswerable questions in Indonesian. Please refer to [this paper](https://arxiv.org/abs/2210.13778) for more details on the mod... | bfc777d057230c4c28e4707f2269856f |
cptanalatriste/request-for-help | cptanalatriste | bert | 8 | 2 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 3,710 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cptanalatriste/request-for-help
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on... | 0bfb2b9f81893827e4a1308414f46c80 |
paola-md/recipe-lr1e05-wd0.01-bs32 | paola-md | roberta | 6 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,701 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.01-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-... | b0b8bb9b247e8730df1d014d467df7a9 |
Helsinki-NLP/opus-mt-ro-fr | Helsinki-NLP | marian | 10 | 50 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 | false |
### opus-mt-ro-fr
* source languages: ro
* target languages: fr
* OPUS readme: [ro-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](http... | ae28012306a531b408181c6f4f1e377e |
sd-concepts-library/rikiboy-art | sd-concepts-library | null | 9 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,038 | false | ### Rikiboy Art on Stable Diffusion
This is the `<Rikiboy-Art>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can... | a212652b4cbffa8cb56e66890228c699 |
bullmount/hseBert-it-cased | bullmount | bert | 29 | 107 | transformers | 2 | fill-mask | true | false | false | mit | ['it'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 572 | false |
# hseBERT
**hseBert-it-cased** is a BERT model obtained by MLM adaptive-tuning [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on texts of Italian regulation (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81, Codice dell'Ambiente - D.lgs. 3 aprile 2006, ... | 670027901d16c4b40651409ca3bdc583 |
anas-awadalla/t5-base-few-shot-k-256-finetuned-squad-seed-2 | anas-awadalla | t5 | 17 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 957 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-256-finetuned-squad-seed-2
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co... | 78915974ab47122cc366e6b41f5c17d1 |
philosucker/xlm-roberta-base-finetuned-panx-de-fr | philosucker | xlm-roberta | 10 | 2 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,356 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-robert... | bd33efff88109a2e72f500be0116a833 |
google/t5-efficient-small-el32 | google | t5 | 12 | 7 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,256 | false |
# T5-Efficient-SMALL-EL32 (Deep-Narrow version)
T5-Efficient-SMALL-EL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoin... | 31b77b1e8ef401ee76944c9903e16f0f |
grullborg/ChonkyLotus | grullborg | null | 3 | 0 | null | 1 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image', 'lora'] | false | true | true | 1,703 | false |
# ChonkyLotus Character LoRA
## Usage
To use this LoRA, download the file and drop it into the "\stable-diffusion-webui\models\Lora" folder.
To use it in a prompt, refer to the extra networks panel in your Automatic1111 webui.
I highly recommend using it at around 0.8 strength for the best r... | d2482aac4bda1f12a6d1e0e25358cbb9 |
yaakov/test-distilbert-to-cola | yaakov | distilbert | 13 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,556 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-distilbert-to-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-u... | 6386dc4bd28046ecf0efdde77ff219e9 |
W4nkel/microsoftTurkishTrain | W4nkel | bert | 8 | 1 | transformers | 0 | text-classification | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,617 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# W4nkel/microsoftTurkishTrain
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/mic... | 403fb1681ef8e767bc093f0c18300c6d |
skr1125/xlm-roberta-base-finetuned-panx-en | skr1125 | xlm-roberta | 10 | 4 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-b... | a48498f983bd85458e1a52aadb209557 |
DunnBC22/mbart-large-50-English_German_Translation | DunnBC22 | mbart | 10 | 8 | transformers | 1 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,138 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-English_German_Translation
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co... | c7a4313aa4427cde02b287b6bf6f7f29 |
jmparejaz/QA-finetuned-distilbert-TFv3 | jmparejaz | distilbert | 8 | 9 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,867 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jmparejaz/QA-finetuned-distilbert-TFv3
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert... | eb72b9645cc6a863dbec1fcf3da38ee0 |
nyaaaaa/bert-finetuned-ner | nyaaaaa | bert | 12 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2... | 7995ffd0d5ac1d397faa4706cc540ab1 |
google/multiberts-seed_1-step_400k | google | bert | 8 | 12 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_400k'] | false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 400k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with differen... | 521236409e9152e02b5bc0a9f6afd82a |
aapot/wav2vec2-large-xlsr-53-finnish | aapot | wav2vec2 | 9 | 12 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['fi'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 4,155 | false |
# NOTE: this is an old model and should not be used anymore! There are much better, newer models available at our organization hub: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) and [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finn... | 5765248b7bdaeb507b7bd939ae42778f |
bko/bert-base-uncased-finetuned-swag | bko | bert | 12 | 2 | transformers | 0 | multiple-choice | true | false | false | apache-2.0 | null | ['swag'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,338 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-unca... | 85f81d9ab02f0521eecf173e903d1769 |
Helsinki-NLP/opus-mt-en-ig | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 802 | false |
### opus-mt-en-ig
* source languages: en
* target languages: ig
* OPUS readme: [en-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](http... | 4d844fdd1f01224b5f3e5ada62e9f24f |
google/multiberts-seed_2-step_1500k | google | bert | 8 | 12 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1500k'] | false | true | true | 3,527 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1500k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with differe... | 2724ac814c9c9fe5f7a66059403cbf56 |
tomekkorbak/nostalgic_jones | tomekkorbak | gpt2 | 137 | 7 | transformers | 0 | null | true | false | false | mit | ['en'] | ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile... | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 8,803 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nostalgic_jones
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pi... | f244e2b4314b2c6d39d65d04093322b4 |
Manirathinam21/DistilBert_SMSSpam_classifier | Manirathinam21 | distilbert | 8 | 4 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,808 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Manirathinam21/DistilBert_SMSSpam_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | 0d12dd3c194d43e4a0367c92be0b1613 |
ThePioneer/MoeSharpV1 | ThePioneer | null | 29 | 10 | diffusers | 1 | text-to-image | false | false | false | other | ['ja', 'en'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['art'] | false | true | true | 14,715 | false | <style>
code {
  white-space: pre-wrap !important;
word-break: break-word;
}
</style>
# モデル説明 (model explanation)
- [MoeDiffusionPlusPlus](https://huggingface.co/ThePioneer/MoeDiffusionPlusPlus/blob/main/MoeDiffusion%2B%2B_V2.ckpt) 0.7 : [DreamShaper 3.3 (full)](https://civitai.com/models/4384/dreamshaper) ... | a99bcf7a9d5e8d04cd77459ab05fe453 |
carlosabadia/hasbulla | carlosabadia | null | 17 | 204 | diffusers | 68 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard'] | false | true | true | 2,309 | false |
# DreamBooth model for the hasbulla concept trained by carlosabadia on the carlosabadia/hasbulla dataset.
This is a Stable Diffusion model fine-tuned on the hasbulla concept with DreamBooth. It can be used by modifying the `instance_prompt`: **hasbulla person**
This model was created as part of the DreamBooth Hackat... | 88776ef2d4bb6ef0808b6938fe0b2ce7 |
ApoTro/slovak-t5-small | ApoTro | t5 | 9 | 29 | transformers | 0 | text2text-generation | true | false | true | mit | ['sk'] | ['oscar'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,210 | false |
# SlovakT5-small
This model was trained using slightly adapted code from [run_t5_mlm_flax.py](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling).
If you want to know about training details or evaluation results, see [SlovakT5_report.pdf](https://huggingface.co/ApoTro/slovak-t5-small/r... | 5b0f80b619aecd11dddaae77db223eb5 |
parambharat/whisper-base-ta | parambharat | whisper | 13 | 12 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ta'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 2,109 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Ta - Bharat Ramanathan
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/wh... | 764531c8b564fbe41f650fa761b559d0 |
eugenesiow/edsr-base | eugenesiow | EDSR | 8 | 4,882 | transformers | 1 | null | false | false | false | apache-2.0 | null | ['eugenesiow/Div2k', 'eugenesiow/Set5', 'eugenesiow/Set14', 'eugenesiow/BSD100', 'eugenesiow/Urban100'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['super-image', 'image-super-resolution'] | false | true | true | 8,262 | false | # Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)
EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](h... | 53d7f5d622894acded172c718b0b6f51 |
joniponi/facility-classifier | joniponi | distilbert | 12 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,603 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facility-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncas... | 428d37d351c58728e9268774ec65edc4 |
yoshitomo-matsubara/bert-base-uncased-stsb_from_bert-large-uncased-stsb | yoshitomo-matsubara | bert | 9 | 17 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['stsb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'stsb', 'glue', 'kd', 'torchdistill'] | false | true | true | 704 | false |
`bert-base-uncased` fine-tuned on STS-B dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb... | dcd9758f47304cf39911de0ec6bb9d2e |
sayakpaul/kerascv_sd_tflite | sayakpaul | null | 5 | 0 | null | 0 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 653 | false |
This repository hosts the TFLite models for the [KerasCV Stable Diffusion model](https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion). The model can be broken into three parts:
* Text encoder
* Image decoder
* Denoiser
For each model, there is an equivalent TFLite model in this reposi... | 6ecb7dd733489290095cb19032df88c1 |
facebook/s2t-medium-librispeech-asr | facebook | speech_to_text | 11 | 779 | transformers | 4 | automatic-speech-recognition | true | true | false | mit | ['en'] | ['librispeech_asr'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition'] | false | true | true | 4,856 | false |
# S2T-MEDIUM-LIBRISPEECH-ASR
`s2t-medium-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/... | babb935be4d8f31e7ec42a135915cd34 |
kejian/final-cond-10-0.1 | kejian | gpt2 | 25 | 1 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['kejian/codeparrot-train-more-filter-3.3b-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,892 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-cond-10-0.1
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
#... | 2a8f66d354bd9edbe3fd06afd3c1c672 |
gchhablani/wav2vec2-large-xlsr-or | gchhablani | wav2vec2 | 10 | 9 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['or'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 3,388 | false | # Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used ... | d86b51ead33d7b88fcc951ddd827524e |
jkang/espnet2_an4_transformer | jkang | null | 33 | 0 | espnet | 0 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['en'] | ['an4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 6,823 | false |
## ESPnet2 ASR model
### `jkang/espnet2_an4_transformer`
This model was trained by jaekookang using an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that a... | 2988b13867b68f522e10f536530e0a9e |
MaggieXM/deberta-base-finetuned-squad | MaggieXM | deberta | 17 | 7 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,096 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deb... | b730802fbad100acc18fe2d2ed5a2bf5 |
deprem-ml/deprem-loodos-bert-base-uncased | deprem-ml | bert | 49 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 5 | 2 | 3 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,237 | false | ### Deprem NER Training Results
```
              precision    recall  f1-score   support

           0       0.85      0.91      0.88       734
           1       0.77      0.84      0.80       207
           2       0.71      0.88      0.79       130
           3       0.68      0.76      0.72        94
4... | 912faa1e151a110af53f9c8a99fa7349 |
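The per-class precision, recall, and F1 values in a report like the one above derive directly from confusion counts. A small self-contained sketch; the counts below are toy values, not the Deprem NER numbers:

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from per-class confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 8 true positives, 2 false positives, 2 false negatives.
p, r, f = prf1(tp=8, fp=2, fn=2)  # ≈ (0.8, 0.8, 0.8)
```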
aniltrkkn/wav2vec2-large-xlsr-53-turkish | aniltrkkn | wav2vec2 | 9 | 8 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['tr'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | true | true | 3,989 | false |
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can... | c48e03a48b9f47b41e5709bed5e16e2c |
mayank-soni/mt5-small-finetuned-amazon-en-es | mayank-soni | mt5 | 8 | 1 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,652 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mayank-soni/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt... | e8dae5aea0fc017225900299b9ae3461 |
anhcanvasasia/bert-large-japanese-wikipedia-ud-head-finetuned-inquiry | anhcanvasasia | bert | 12 | 32 | transformers | 0 | question-answering | true | false | false | cc-by-sa-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,348 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-japanese-wikipedia-ud-head-finetuned-squad
This model is a fine-tuned version of [KoichiYasuoka/bert-large-japanese-w... | 45809175894be1aa686050c72cfbd694 |
varadhbhatnagar/fc-claim-det-DBART | varadhbhatnagar | bart | 9 | 10 | transformers | 0 | summarization | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 6,263 | false |
# Model Card for Pegasus for Claim Summarization
<!-- Provide a quick summary of what the model is/does. -->
This model can be used to summarize noisy claims on social media into clean and concise claims which can be used for downstream tasks in a fact-checking pipeline.
# Model Details
This is the fine-tuned D BA... | 1ecf1826db6eaa54c5c6f0ee64f1572d |
sentence-transformers/all-distilroberta-v1 | sentence-transformers | roberta | 16 | 237,355 | sentence-transformers | 4 | sentence-similarity | true | false | false | apache-2.0 | ['en'] | ['s2orc', 'flax-sentence-embeddings/stackexchange_xml', 'MS Marco', 'gooaq', 'yahoo_answers_topics', 'code_search_net', 'search_qa', 'eli5', 'snli', 'multi_nli', 'wikihow', 'natural_questions', 'trivia_qa', 'embedding-data/sentence-compression', 'embedding-data/flickr30k-captions', 'embedding-data/altlex', 'embedding-d... | null | 1 | 0 | 1 | 0 | 1 | 1 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity'] | false | true | true | 9,710 | false |
# all-distilroberta-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transforme... | 592093979250553fc1aa5c19206c6984 |
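The card above describes dense 768-dimensional embeddings used for clustering or semantic search. A minimal, self-contained sketch of the retrieval step — toy 3-d vectors stand in for real model outputs, since producing actual embeddings would require the sentence-transformers library and a model download:

```python
import numpy as np

def cosine_search(query_emb: np.ndarray, corpus_embs: np.ndarray) -> int:
    """Return the index of the corpus embedding most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    return int(np.argmax(c @ q))

# Toy 3-d embeddings standing in for the model's 768-d outputs.
corpus = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.7, 0.7, 0.0]])
query = np.array([0.9, 0.1, 0.0])
print(cosine_search(query, corpus))  # 0
```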
mideind/IceBERT-ic3 | mideind | roberta | 8 | 9 | transformers | 0 | fill-mask | true | false | false | agpl-3.0 | ['is'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'icelandic', 'masked-lm', 'pytorch'] | false | true | true | 1,541 | false |
# IceBERT-ic3
This model was trained with fairseq using the RoBERTa-base architecture. It is one of many models we have trained for Icelandic, see the paper referenced below for further details. The training data used is shown in the table below.
| Dataset | Size | Tok... | 962845b527613a01f187bbd1395efc2d |
HugoSchtr/yolov5_datacat | HugoSchtr | null | 5 | 0 | null | 0 | null | true | false | false | cc-by-4.0 | null | ['datacatalogue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['yolov5', 'yolo', 'digital humanities', 'object detection', 'computer-vision', 'document layout analysis', 'pytorch'] | false | true | true | 1,569 | false |
# What's YOLOv5
YOLOv5 is an open-source object detection model released by [Ultralytics](https://ultralytics.com/) on [GitHub](https://github.com/ultralytics/yolov5).
# DataCatalogue (or DataCat)
[DataCatalogue](https://github.com/DataCatalogue) is a research project jointly led by Inria, the Bibliothèque nationa... | 054d45bef447846515b92fc3e210a750 |
sd-concepts-library/floral | sd-concepts-library | null | 4 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,073 | false | ### Floral-orchid on Stable Diffusion
This is the `<floral-orchid>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You... | 493b396ad4ec2f4e9bff64c85a05c322 |
anishchada12/distilgpt2-finetuned-PanoAI2 | anishchada12 | gpt2 | 12 | 3 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,235 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-PanoAI2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None d... | 0f8797237ba750da34fff737770eb78e |
speechbrain/sepformer-wham16k-enhancement | speechbrain | null | 9 | 356 | speechbrain | 4 | audio-to-audio | true | false | false | apache-2.0 | ['en'] | ['WHAM!'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio-to-audio', 'Speech Enhancement', 'WHAM!', 'SepFormer', 'Transformer', 'pytorch', 'speechbrain'] | false | true | true | 3,405 | false |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on WHAM! for speech enhancement (16k sampling frequency)
This repository provides all the... | 6c742ab512590dc03d09899bc7779518 |
d2niraj555/mt5-eng2nep | d2niraj555 | mt5 | 17 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['ne', 'en'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['English to Nepali Translator', 'MT5 Fine Tuned', 'Nepali Translator Dataset'] | true | true | true | 1,928 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-eng2nep
## Model description
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base... | de03330cb954cbc6db0386b4c1332bd6 |
Evelyn18/distilbert-base-uncased-prueba2 | Evelyn18 | distilbert | 13 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['becasv2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,528 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-prueba2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilber... | 4eeefa0c0554a85dc0bd8ae13c61181c |
AdwayK/hugging_face_biobert_MLMA | AdwayK | bert | 8 | 12 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,713 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AdwayK/hugging_face_biobert_MLMA
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) o... | 983a36466539619240592c81fa02d48b |
nickapch/distilbert-base-uncased-finetuned-emotion1 | nickapch | distilbert | 9 | 6 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 933 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.c... | 4ebd68a4f1a7e0fc0d2926644974d7b9 |
botisan-ai/mt5-translate-zh-yue | botisan-ai | mt5 | 12 | 80 | transformers | 3 | text2text-generation | true | false | false | apache-2.0 | ['zh', 'yue'] | ['x-tech/cantonese-mandarin-translations'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,510 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on dataset [x-tech/canton... | b44d93fb4e344590a407499fc75e930c |
evolvingstuff/bert-base-cased-wikitext2 | evolvingstuff | bert | 9 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,248 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the... | f9d9096563aa57a36939c9ebdcd0eca5 |
uygarkurt/distilbert-base-uncased-finetuned-emotion | uygarkurt | distilbert | 24 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,342 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | 0810ebb3bc342cb352da7f800db4dbb4 |
habiba/egy-slang-model | habiba | wav2vec2 | 11 | 11 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# egy-slang-model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2... | d84324d59b42bb260b215d7defd18655 |
papsebestyen/hubert-base-cc-finetuned-forum | papsebestyen | bert | 13 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-cc-finetuned-forum
This model is a fine-tuned version of [SZTAKI-HLT/hubert-base-cc](https://huggingface.co/SZTAKI-H... | 29ea1067d237c252f03c9c558afd38aa |
DLL888/bert-base-uncased-squad | DLL888 | bert | 10 | 5 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,284 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DLL888/bert-base-uncased-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on ... | 6345ef7509410a61f33c37c0675d7633 |
Migueluao123/roberta-base-bne-finetuned-amazon_reviews_multi | Migueluao123 | roberta | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,287 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggin... | 3a1cea70a27c1468d17286a1087c4bb9 |
TransQuest/monotransquest-hter-en_de-it-smt | TransQuest | xlm-roberta | 8 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en-de'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['Quality Estimation', 'monotransquest', 'hter'] | false | true | true | 5,312 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commer... | 01e66d1cd684b846599712b25d69eb2c |
team-nave/xlm-roberta-base-finetuned-panx-all | team-nave | xlm-roberta | 10 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-... | 26a98b60fd50334e078cb276c1bc2caa |
Asma-Kehila/finetuning-sentiment-model-3000-samples | Asma-Kehila | distilbert | 21 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,040 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | bdac55f2aa4821cbd9a34b6f05bb58b7 |
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-8_england-2_s875 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 497 | false | # exp_w2v2r_en_vp-100k_accent_us-8_england-2_s875
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using t... | ab1a2c1be76315adee2d113d3a6ba928 |
eshanck/apm1 | eshanck | distilbert | 10 | 34 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,451 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# apm1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unkno... | adee7efd73828432000efc84f0471f09 |
ClueAI/PromptCLUE-base-v1-5-paddle | ClueAI | t5 | 7 | 0 | paddlenlp | 1 | text2text-generation | false | false | false | creativeml-openrail-m | ['zh'] | null | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | [] | false | true | true | 5,668 | false |
<a href="https://colab.research.google.com/drive/1hlSMYEq3pyX-fwTSqIOT1um80kU1yOJF?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
PromptCLUE: a zero-shot learning model for all Chinese-language tasks
This model was obtained by adapting the PromptCLUE-base-v1-5 model to PaddleNLP. PromptCLUE-base-V1-5 is further trained on top of PromptCLUE-base (+50% training steps), on more tasks (+50% tasks) and more task types, trai... | 05000609013bd17cd0cb58a8dac35271
sd-concepts-library/stretch-re1-robot | sd-concepts-library | null | 10 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,162 | false | ### Stretch RE1 Robot on Stable Diffusion
This is the `<stretch>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You c... | 599fe988e8bdabcc68e93fc971c9d1a5 |
SkyR/albert-base-ours-run-1 | SkyR | albert | 9 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,064 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-ours-run-1
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unkno... | 9ad11947d207811879acf459142188a6 |
Likang/distilbert-base-uncased-finetuned-cola | Likang | distilbert | 13 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | fdff7037440b69c1cde7bf39ee65579d |
KarelDO/gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_43 | KarelDO | gpt2 | 15 | 2 | transformers | 0 | null | true | false | false | mit | ['en'] | ['OpenTable'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,106 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_43
This model is a fine-tuned version of [gpt2](https://hu... | 80c488cf815e4b0ca937bc321b21f4f4 |
marcolatella/hate_trained_42 | marcolatella | distilbert | 10 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['tweet_eval'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,396 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_trained_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) ... | 30c091c97d064cdf24977b84d0fe8b8c |
chrishistewandb/hugging-face | chrishistewandb | distilbert | 10 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 902 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hugging-face
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on ... | 2c567524cbb19b9b8e2e9a29d592d2e5 |
Akash7897/distilbert-base-uncased-finetuned-cola | Akash7897 | distilbert | 18 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,572 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | aa3dc6c29c8b029f7939fa60af40ce8f |
gokuls/mobilebert_sa_GLUE_Experiment_qnli_128 | gokuls | mobilebert | 17 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,583 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_qnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/... | 90571f0a737f19567af84d3282681a56 |
xkang/distilbert-base-uncased-finetuned-imdb-whole-word-masking | xkang | distilbert | 14 | 11 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,342 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-whole-word-masking
This model is a fine-tuned version of [distilbert-base-uncased](https:... | 8a361448f45eb63322096e0965ed0635 |
lmz/rust-stable-diffusion-v2-1 | lmz | null | 6 | 0 | null | 4 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'rust'] | false | true | true | 3,168 | false |
This repository hosts weights for a Rust based version of Stable Diffusion.
These weights have been directly adapted from the
[stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
weights, they can be used with the
[diffusers-rs](https://github.com/LaurentMazare/diffusers-rs) crat... | c3c8eb08375adef9c9476f1652ef4471 |
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest | AykeeSalazar | vit | 35 | 11 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,397 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vc-bantai-vit-withoutAMBI-adunest
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.... | cc509d5b27534b5f5bdb9d6df7e35021 |
jonatasgrosman/exp_w2v2t_it_unispeech_s714 | jonatasgrosman | unispeech | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['it'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'it'] | false | true | true | 469 | false | # exp_w2v2t_it_unispeech_s714
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that yo... | 73df4b565cc7af8a67496d1f86ffe622 |
Helsinki-NLP/opus-mt-bg-uk | Helsinki-NLP | marian | 11 | 17 | transformers | 0 | translation | true | true | false | apache-2.0 | ['bg', 'uk'] | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,018 | false |
### bul-ukr
* source group: Bulgarian
* target group: Ukrainian
* OPUS readme: [bul-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md)
* model: transformer-align
* source language(s): bul
* target language(s): ukr
* model: transformer-align
* pre-processing: normalizatio... | 053a571a498e10fe2ab34df287faacbd |
spacy/de_dep_news_trf | spacy | null | 27 | 8 | spacy | 0 | token-classification | false | false | false | mit | ['de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 31,053 | false | ### Details: https://spacy.io/models/de#de_dep_news_trf
German transformer pipeline (bert-base-german-cased). Components: transformer, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer).
| Feature | Description |
| --- | --- |
| **Name** | `de_dep_news_trf` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3... | a1d5265fca207bddbc7164de1d5d9b09 |
asdc/roberta-base-biomedical-clinical-es-finetuned-ner | asdc | roberta | 19 | 15 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,900 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-c... | 6c2373661fadf338a936c7ef9f40c876 |
kabachuha/elynia-diffusion | kabachuha | null | 5 | 0 | null | 5 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,129 | false |
## Description
Elynia Diffusion is a latent text-to-image diffusion model based on the original CompVis Stable Diffusion v1.4 and then fine-tuned on the main character of 'Battle for Wesnoth' add-ons using Dreambooth. This model has been created to explore the possibilities and limitations of Dreambooth training and ... | c9de3555171955bb43a3af8a48f46a8a |
doctorderp/Invicible | doctorderp | null | 3 | 0 | null | 18 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,147 | false | Preview Images
https://imgur.com/a/CnIPfrQ
IMPORTANT INSTRUCTIONS!
This model was trained on the SD 1.5 base version, but it also works with 1.4, as both share the same CLIP encoder.
Installation instructions.
Simply place the invisible.pt file inside the \stable-diffusion-webui\models\hypernetworks folder. Load the ... | 04a21de2e7e672733995e3f3a2f0c63a |
jonatasgrosman/exp_w2v2t_th_vp-es_s26 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['th'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'th'] | false | true | true | 468 | false | # exp_w2v2t_th_vp-es_s26
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that you... | 771994ff40483e93b92a059636034584 |
facebook/regnet-y-040 | facebook | regnet | 6 | 523 | transformers | 0 | image-classification | true | true | false | apache-2.0 | null | ['imagenet-1k'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['vision', 'image-classification'] | false | true | true | 1,895 | false |
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this... | d289aff619327662355608c80d58486c |
navteca/multi-qa-mpnet-base-cos-v1 | navteca | mpnet | 7 | 10 | sentence-transformers | 0 | sentence-similarity | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['feature-extraction', 'sentence-similarity', 'sentence-transformers'] | false | true | true | 4,637 | false |
# Multi QA MPNet base model for Semantic Search
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources.
This model uses [... | adff08d9acb3fa90e7c6e3d3246a6498 |
thkkvui/xlm-roberta-base-finetuned-panx-de | thkkvui | xlm-roberta | 10 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,325 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-b... | 18cd7e009c50c7857e36f00455a45915 |
google/t5-efficient-xxl-nl4 | google | t5 | 12 | 15 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,247 | false |
# T5-Efficient-XXL-NL4 (Deep-Narrow version)
T5-Efficient-XXL-NL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and ... | de67e1ad654135c1bd6f0cf86fe27fd3 |
ozcangundes/mt5-multitask-qa-qg-turkish | ozcangundes | mt5 | 10 | 439 | transformers | 1 | question-answering | true | false | true | apache-2.0 | ['tr'] | ['TQUAD'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'question-generation', 'multitask-model'] | false | true | true | 3,472 | false |
# mT5-small based Turkish Multitask (Answer Extraction, Question Generation and Question Answering) System
[Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [Turkish Question Answering dataset](https://github.com/okanvk/Turkish-Reading-Comprehension-Question-Answeri... | a9f99b2121e4294297b0cda274ed323e |
Helsinki-NLP/opus-mt-pap-de | Helsinki-NLP | marian | 10 | 9 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-pap-de
* source languages: pap
* target languages: de
* OPUS readme: [pap-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](... | 7251b1ebafedad390ac3e18717f84c52 |
OpenMatch/ance-tele_triviaqa_psg-encoder | OpenMatch | bert | 7 | 2 | transformers | 0 | feature-extraction | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 906 | false |
This model is the **passage** encoder of ANCE-Tele trained on TriviaQA, described in the EMNLP 2022 paper ["Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives"](https://arxiv.org/pdf/2210.17167.pdf). The associated GitHub repository is available at https://github.com/OpenMatch/ANCE... | d2ee91843e3b17c4e1e4a2b143da9184 |