Each row describes one model repository. In the source schema, `repo_id`, `author`, `model_type`, `library`, `pipeline`, `license`, `languages`, `datasets`, `co2`, `tags`, `readme`, and `hash` are string columns (nullable where marked ⌀ in the viewer); `files_per_repo`, `downloads_30d`, `likes`, the `prs_*` and `discussions_*` counters, and `text_length` are int64; `pytorch`, `tensorflow`, `jax`, `has_model_index`, `has_metadata`, `has_text`, and `is_nc` are booleans.

| repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| lmqg/mt5-base-dequad-qg | lmqg | mt5 | 20 | 189 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['de'] | ['lmqg/qg_dequad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question generation'] | true | true | true | 6,519 | false | # Model Card of `lmqg/mt5-base-dequad-qg` This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question generation task on the [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-gener... | 4de004e2d0863e5beafbb7d59a3a40dd |
| Mirelle/t5-small-finetuned-ro-to-en | Mirelle | t5 | 12 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['wmt16'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,570 | false | <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-ro-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 datas... | 65b8db644725c27cb897b9fe58105171 |
| google/multiberts-seed_0-step_60k | google | bert | 8 | 15 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_60k'] | false | true | true | 3,515 | false | # MultiBERTs, Intermediate Checkpoint - Seed 0, Step 60k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different... | 91c2d4e2078ff630fa229149d0079a86 |
| jonfd/electra-base-igc-is | jonfd | electra | 7 | 2 | transformers | 0 | null | true | false | false | cc-by-4.0 | ['is'] | ['igc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 607 | false | # Icelandic ELECTRA-Base This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105. # Acknowledgments This research was supported with Cloud TPUs fro... | 60559a876ff24f32f3df16dfa8098623 |
| kingery/hyc-06-512-sd15-2e-6-1500-man-ddim | kingery | null | 24 | 5 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,590 | false | ### hyc-06-512-sd15-2e-6-1500-man-ddim on Stable Diffusion via Dreambooth #### model by kingery This is the Stable Diffusion model fine-tuned on the hyc-06-512-sd15-2e-6-1500-man-ddim concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of yangguangkechuang man... | c5bc3c056f2003e62d438f0ac7ee4d71 |
| hassnain/wav2vec2-base-timit-demo-colab3 | hassnain | wav2vec2 | 16 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,462 | false | <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/w... | 7aadba30b8a9bb4f9dbe3869edbbc0a1 |
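Rows with this schema can be queried like any flat table. The sketch below, which is illustrative and not part of the original dataset tooling (the inline sample data and the column subset are assumptions), shows one way to load pipe-delimited rows such as those above into pandas and filter them:

```python
# Minimal sketch: load a few pipe-delimited rows (subset of columns)
# into pandas and query them. The sample values are taken from the
# table above; the column subset chosen here is an assumption.
import io

import pandas as pd

columns = ["repo_id", "author", "model_type", "downloads_30d", "likes", "library"]
raw = io.StringIO(
    "lmqg/mt5-base-dequad-qg|lmqg|mt5|189|0|transformers\n"
    "jonfd/electra-base-igc-is|jonfd|electra|2|0|transformers\n"
)
df = pd.read_csv(raw, sep="|", names=columns)

# Example query: models served via the `transformers` library,
# sorted by 30-day downloads in descending order.
popular = df[df["library"] == "transformers"].sort_values(
    "downloads_30d", ascending=False
)
print(popular[["repo_id", "downloads_30d"]].to_string(index=False))
```

The same pattern extends to the full 30-column schema; boolean flags such as `pytorch` or `is_nc` parse naturally once mapped from the literal `true`/`false` strings.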