| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) | embedding (list) |
|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-it-en | Helsinki-NLP | 2023-08-16T11:58:49Z | 347,083 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-it-en
* source languages: it
* target languages: en
* OPUS readme: [it-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
-0.30154159665107727,
-0.3538741171360016,
0.23018285632133484,
0.4775233268737793,
-0.532994270324707,
-0.3029305338859558,
-0.48083698749542236,
0.023819101974368095,
0.011831932701170444,
0.4315045177936554,
-0.6908758282661438,
-0.6955005526542664,
-0.6355893015861511,
0.18773448467254... |
busecarik/berturk-sunlp-ner-turkish | busecarik | 2023-01-09T19:38:54Z | 346,584 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"tr",
"dataset:SUNLP-NER-Twitter",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-08-26T12:34:48Z | ---
language: tr
datasets:
- SUNLP-NER-Twitter
---
# berturk-sunlp-ner-turkish
## Introduction
[berturk-sunlp-ner-turkish] is a NER model that was fine-tuned from the BERTurk-cased model on the SUNLP-NER-Twitter dataset.
## Training data
The model was trained on the SUNLP-NER-Twitter dataset (5000 tweets). The datas... | [
-0.4709576964378357,
-0.6603236198425293,
0.07867318391799927,
0.20458786189556122,
-0.40226495265960693,
-0.05418102443218231,
-0.25420597195625305,
-0.6522764563560486,
0.5319838523864746,
0.3229658603668213,
-0.5786323547363281,
-0.6135159730911255,
-0.7297863960266113,
0.25316026806831... |
peft-internal-testing/tiny-clip-text-2 | peft-internal-testing | 2023-09-20T16:26:50Z | 343,424 | 0 | transformers | [
"transformers",
"pytorch",
"clip_text_model",
"endpoints_compatible",
"region:us"
] | null | 2023-09-20T16:25:32Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
facebook/dinov2-large | facebook | 2023-09-06T11:23:50Z | 340,153 | 11 | transformers | [
"transformers",
"pytorch",
"safetensors",
"dinov2",
"feature-extraction",
"dino",
"vision",
"arxiv:2304.07193",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-07-17T16:47:01Z | ---
license: apache-2.0
tags:
- dino
- vision
---
# Vision Transformer (large-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al.... | [
-0.503858208656311,
-0.41586336493492126,
0.1123390644788742,
-0.10851157456636429,
-0.4783145487308502,
-0.048147644847631454,
0.08340232819318771,
-0.4200822412967682,
0.2744600772857666,
0.5083916783332825,
-0.4499063789844513,
-0.23159167170524597,
-0.6893734931945801,
-0.1780444830656... |
t5-large | null | 2023-04-06T13:42:27Z | 340,054 | 115 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxi... | translation | 2022-03-02T23:29:04Z | ---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
---
# Model Card for T5 Large

<div style="border: 2px solid orange; border-radius:10px; padding:0px 10px; width: fit-content;">
* This model was previously named **"PubMedBERT (abstracts + full text)"**.... | [
-0.21585315465927124,
-0.5420801639556885,
0.5537952184677124,
0.11075347661972046,
-0.37942370772361755,
0.08830364793539047,
-0.2372446358203888,
-0.51375412940979,
0.257464736700058,
0.28810450434684753,
-0.4106403589248657,
-0.6094641089439392,
-0.7220058441162109,
0.3102344870567322,
... |
gpt2-xl | null | 2023-10-23T13:09:53Z | 333,878 | 219 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: en
license: mit
---
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environm... | [
-0.23501858115196228,
-0.7340529561042786,
0.30944904685020447,
-0.015281861647963524,
-0.28827789425849915,
-0.42630910873413086,
-0.37092119455337524,
-0.5977503061294556,
-0.38285595178604126,
0.4311160743236542,
-0.26973050832748413,
-0.22954387962818146,
-0.7291529178619385,
-0.110975... |
sentence-transformers/all-distilroberta-v1 | sentence-transformers | 2022-07-11T21:04:19Z | 332,241 | 14 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"rust",
"roberta",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- MS Marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- nat... | [
-0.32286006212234497,
-0.8583473563194275,
0.3013629913330078,
0.17643126845359802,
-0.11528848856687546,
-0.24185596406459808,
-0.2472035139799118,
-0.2254004329442978,
0.3476235866546631,
0.16865696012973785,
-0.4411233961582184,
-0.5739397406578064,
-0.6701619625091553,
0.14779153466224... |
microsoft/deberta-v3-base | microsoft | 2022-09-22T12:34:19Z | 331,303 | 128 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"deberta-v2",
"deberta",
"deberta-v3",
"fill-mask",
"en",
"arxiv:2006.03654",
"arxiv:2111.09543",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
tags:
- deberta
- deberta-v3
- fill-mask
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT... | [
-0.4211574196815491,
-0.6925390958786011,
0.22727900743484497,
0.4078376889228821,
-0.24408270418643951,
0.18189267814159393,
-0.13321079313755035,
-0.5217856168746948,
0.3471117317676544,
-0.013874631375074387,
-0.40347832441329956,
-0.5373973250389099,
-0.9435217976570129,
-0.08218596130... |
baichuan-inc/Baichuan-13B-Base | baichuan-inc | 2023-07-19T03:37:12Z | 323,368 | 169 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2104.09864",
"arxiv:2108.12409",
"arxiv:2009.03300",
"has_space",
"region:us"
] | text-generation | 2023-07-08T16:55:46Z | ---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# Baichuan-13B-Base
<!-- Provide a quick summary of what the model is/does. -->
## 介绍
Baichuan-13B-Base为Baichuan-13B系列模型中的预训练版本,经过对齐后的模型可见[Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)。
[Baichuan-13B](https://githu... | [
-0.3977890610694885,
-0.7103672027587891,
0.08895663172006607,
0.5931674242019653,
-0.38752707839012146,
-0.35259413719177246,
-0.24732816219329834,
-0.48791760206222534,
0.2052077054977417,
0.30629679560661316,
-0.4593929350376129,
-0.6131119728088379,
-0.551624596118927,
-0.0305685941129... |
allenai/scibert_scivocab_uncased | allenai | 2022-10-03T22:06:12Z | 323,067 | 81 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"en",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
---
# SciBERT
This is the pretrained model presented in [SciBERT: A Pretrained Language Model for Scientific Text](https://www.aclweb.org/anthology/D19-1371/), which is a BERT model trained on scientific text.
The training corpus was papers taken from [Semantic Scholar](https://www.semanticscholar.or... | [
-0.02370210736989975,
-0.23963333666324615,
0.442303866147995,
0.2800910174846649,
-0.48369255661964417,
0.14989227056503296,
-0.23614685237407684,
-0.34799546003341675,
0.39306265115737915,
0.31978026032447815,
-0.40424057841300964,
-0.5679024457931519,
-0.5820777416229248,
0.192669689655... |
hustvl/yolos-tiny | hustvl | 2023-06-05T11:57:44Z | 318,699 | 154 | transformers | [
"transformers",
"pytorch",
"safetensors",
"yolos",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2106.00666",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | object-detection | 2022-04-26T09:28:47Z | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- s... | [
-0.6748453378677368,
-0.5885289907455444,
0.10013791173696518,
-0.20098567008972168,
-0.2960749864578247,
-0.2889569401741028,
-0.007500054780393839,
-0.7953598499298096,
0.14650605618953705,
0.4597609341144562,
-0.4952044188976288,
-0.44069337844848633,
-0.5632848143577576,
0.279638856649... |
sentence-transformers/distiluse-base-multilingual-cased-v2 | sentence-transformers | 2023-11-02T09:41:26Z | 317,241 | 97 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"ar",
"bg",
"ca",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"gu",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
... | sentence-similarity | 2022-03-02T23:29:05Z | ---
language:
- multilingual
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- ko
- ku
- lt
- lv
- mk
- mn
- mr
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- th
- tr
- uk
- ur
- vi
language_bcp47:
- fr-ca
- pt-br
- ... | [
-0.2247537225484848,
-0.8036711812019348,
0.36996039748191833,
0.45779985189437866,
-0.3317817151546478,
-0.2460796982049942,
-0.2612016797065735,
0.08905141055583954,
0.19247795641422272,
0.3784251809120178,
-0.5565471649169922,
-0.4800286591053009,
-0.6727561950683594,
0.1701829433441162... |
stabilityai/stable-diffusion-2 | stabilityai | 2023-07-05T16:19:01Z | 316,545 | 1,673 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"arxiv:2202.00512",
"arxiv:2112.10752",
"arxiv:1910.09700",
"license:openrail++",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-23T11:54:34Z | ---
license: openrail++
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion v2 Model Card
This model card focuses on the model associated with the Stable Diffusion v2 model, available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2` model is resumed from [stable-diffusion... | [
-0.36517083644866943,
-0.79401034116745,
0.3239303231239319,
0.11584324389696121,
-0.20737335085868835,
-0.3734061121940613,
0.0708913803100586,
-0.39867112040519714,
-0.1219097226858139,
0.36739879846572876,
-0.366839736700058,
-0.3336407542228699,
-0.7103841304779053,
-0.1063878312706947... |
laion/CLIP-ViT-bigG-14-laion2B-39B-b160k | laion | 2023-04-18T18:35:30Z | 313,408 | 141 | open_clip | [
"open_clip",
"pytorch",
"clip",
"zero-shot-image-classification",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | 2023-01-23T07:12:35Z | ---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model Card for CLIP ViT-bigG/14 - LAION-2B
#... | [
-0.2830107808113098,
-0.5380174517631531,
0.21612557768821716,
0.05961059778928757,
-0.3818960189819336,
-0.4723406732082367,
-0.28220298886299133,
-0.6622700095176697,
-0.005809947848320007,
0.44110816717147827,
-0.361144483089447,
-0.559183657169342,
-0.6229318380355835,
-0.0719100683927... |
facebook/blenderbot-400M-distill | facebook | 2023-03-30T16:12:30Z | 312,267 | 304 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"blenderbot",
"text2text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"arxiv:2004.13637",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | conversational | 2022-03-02T23:29:05Z | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot]( https://arxiv.org/abs/2004.13637)
+ [Original PARLAI Code](https://parl.ai/projects/recipe... | [
-0.33370596170425415,
-0.7645142078399658,
0.3294665813446045,
0.31663066148757935,
0.30595511198043823,
-0.08419536054134369,
-0.4075551927089691,
-0.20130565762519836,
-0.008311532437801361,
0.6358470320701599,
-0.25735631585121155,
-0.24586662650108337,
-0.7238476872444153,
-0.354398548... |
gpt2-medium | null | 2023-06-30T02:23:32Z | 311,497 | 92 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: en
license: mit
---
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
... | [
-0.15819847583770752,
-0.7585635781288147,
0.2973330616950989,
0.09222531318664551,
-0.23518820106983185,
-0.36733174324035645,
-0.38461384177207947,
-0.5325049757957458,
-0.2682662308216095,
0.45440590381622314,
-0.33017218112945557,
-0.23084889352321625,
-0.6647229790687561,
-0.021034510... |
speechbrain/spkrec-ecapa-voxceleb | speechbrain | 2022-06-26T23:15:06Z | 309,494 | 100 | speechbrain | [
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"en",
"dataset:voxceleb",
"arxiv:2106.04624",
"license:apache-2.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA
- TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- examp... | [
-0.4750549793243408,
-0.9279873371124268,
0.05488090217113495,
0.11012265086174011,
-0.20285537838935852,
-0.20332252979278564,
-0.5441562533378601,
-0.29208365082740784,
0.4262755811214447,
0.21693125367164612,
-0.4296494126319885,
-0.6925245523452759,
-0.5884914398193359,
0.0231282431632... |
TurkuNLP/sbert-cased-finnish-paraphrase | TurkuNLP | 2021-11-29T08:43:26Z | 307,219 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"fi",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
language:
- fi
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- text: "Minusta täällä on ihana asua!"
---
# Cased Finnish Sentence BERT model
Finnish Sentence BERT trained from FinBERT. A demo on retrieving the most similar senten... | [
-0.17021265625953674,
-0.7382681369781494,
0.42749595642089844,
0.42456305027008057,
-0.5051300525665283,
-0.44077882170677185,
-0.08041699975728989,
-0.01935681700706482,
0.36102551221847534,
0.5445750951766968,
-0.44529274106025696,
-0.43633371591567993,
-0.6251233220100403,
0.2076228410... |
deepset/bert-base-cased-squad2 | deepset | 2023-05-05T07:00:52Z | 305,980 | 18 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/bert-base-cased-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: e... | [
-0.1764392852783203,
-0.3432127833366394,
0.17561663687229156,
0.2303844392299652,
-0.19539864361286163,
0.18206433951854706,
0.4249970018863678,
-0.10329295694828033,
0.021554622799158096,
0.7530311346054077,
-1.204522728919983,
0.00034101406345143914,
-0.4427310526371002,
-0.433505386114... |
intfloat/e5-small-v2 | intfloat | 2023-08-16T02:50:15Z | 304,124 | 45 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"bert",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | 2023-05-19T06:45:35Z | ---
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-small-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
re... | [
-0.12574732303619385,
-0.6991908550262451,
0.24159446358680725,
0.1607864946126938,
-0.2529694437980652,
-0.48071080446243286,
0.01860949583351612,
-0.41347625851631165,
0.09015463292598724,
0.2796018421649933,
-0.467468798160553,
-0.5951641798019409,
-0.9766666293144226,
0.228475108742713... |
pyannote/voice-activity-detection | pyannote | 2023-08-30T15:43:42Z | 304,075 | 110 | pyannote-audio | [
"pyannote-audio",
"pyannote",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"voice-activity-detection",
"automatic-speech-recognition",
"dataset:ami",
"dataset:dihard",
"dataset:voxconverse",
"license:mit",
"has_space",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- voice-activity-detection
- automatic-speech-recognition
datasets:
- ami
- dihard
- voxconverse
license: mit
extra_gated_prompt: "The collected information will help acquire a better knowledge of pyannote.audio userbase ... | [
-0.2086612433195114,
-0.4517973065376282,
0.27553075551986694,
0.5059627890586853,
-0.17213329672813416,
-0.09346852451562881,
-0.40257909893989563,
-0.6835025548934937,
0.5474710464477539,
0.5943823456764221,
-0.4611557424068451,
-0.5853223204612732,
-0.0949467271566391,
-0.29032972455024... |
deepset/tinyroberta-squad2 | deepset | 2023-09-27T11:51:22Z | 303,702 | 70 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:1909.10351",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/tinyroberta-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact... | [
-0.4122050106525421,
-0.595669150352478,
0.443416565656662,
0.04117059335112572,
-0.05173494294285774,
0.16601713001728058,
-0.20492640137672424,
-0.41144177317619324,
0.2624568045139313,
0.222785085439682,
-0.842913031578064,
-0.6105228066444397,
-0.38272902369499207,
0.025424210354685783... |
madhurjindal/autonlp-Gibberish-Detector-492513457 | madhurjindal | 2023-11-22T19:52:09Z | 300,846 | 27 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:madhurjindal/autonlp-data-Gibberish-Detector",
"co2_eq_emissions",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: [autonlp]
language: en
widget:
- text: "I love Machine Learning!"
datasets:
- madhurjindal/autonlp-data-Gibberish-Detector
co2_eq_emissions: 5.527544460835904
---
# Problem Description
The ability to process and understand user input is crucial for various applications, such as chatbots or downstream tasks. ... | [
-0.40375348925590515,
-0.9768760204315186,
0.0033303285017609596,
0.24188265204429626,
-0.15142251551151276,
0.09728570282459259,
-0.03476859629154205,
-0.5747662782669067,
0.3669281601905823,
0.22770904004573822,
-0.22228765487670898,
-0.51356041431427,
-0.9110183715820312,
0.062540970742... |
Helsinki-NLP/opus-mt-tc-big-en-pt | Helsinki-NLP | 2023-10-10T10:20:34Z | 300,586 | 17 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"tc",
"big",
"en",
"pt",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | translation | 2022-04-13T14:49:04Z | ---
language:
- en
- pt
- pt_br
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-pt
results:
- task:
name: Translation eng-por
type: translation
args: eng-por
dataset:
name: flores101-devtest
type: flores_101
args: eng por devtest
... | [
-0.3330439329147339,
-0.6136744022369385,
0.25713974237442017,
0.3284514248371124,
-0.48048070073127747,
-0.25572583079338074,
-0.56629878282547,
-0.3299106955528259,
0.17060373723506927,
0.3884783387184143,
-0.46238166093826294,
-0.6828002333641052,
-0.6741193532943726,
0.3853898942470550... |
runwayml/stable-diffusion-inpainting | runwayml | 2023-07-05T01:09:17Z | 299,577 | 1,365 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"has_space",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] | text-to-image | 2022-10-17T02:48:32Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
library_name: diffusers
extra_gated_prompt: |-
One more step before getting this model.
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying ri... | [
-0.3728431165218353,
-0.7606770396232605,
0.3867223262786865,
0.41485705971717834,
-0.1364966183900833,
-0.06636982411146164,
0.12865085899829865,
-0.34935855865478516,
0.08399518579244614,
0.4209074079990387,
-0.39109429717063904,
-0.3791137933731079,
-0.567057192325592,
-0.12002099305391... |
cross-encoder/ms-marco-TinyBERT-L-2-v2 | cross-encoder | 2021-08-05T08:39:45Z | 297,735 | 9 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch).... | [
-0.4464038610458374,
-0.602876603603363,
0.34621721506118774,
0.16133926808834076,
-0.17513102293014526,
0.14829601347446442,
-0.18506364524364471,
-0.5315775871276855,
0.3474007248878479,
0.35316556692123413,
-0.5687869191169739,
-0.7053859233856201,
-0.800533652305603,
0.0421725995838642... |
stabilityai/stable-diffusion-x4-upscaler | stabilityai | 2023-07-05T16:19:13Z | 296,851 | 520 | diffusers | [
"diffusers",
"stable-diffusion",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"has_space",
"diffusers:StableDiffusionUpscalePipeline",
"region:us"
] | null | 2022-11-23T17:42:04Z | ---
license: openrail++
tags:
- stable-diffusion
inference: false
---
# Stable Diffusion x4 upscaler model card
This model card focuses on the model associated with the Stable Diffusion Upscaler, available [here](https://github.com/Stability-AI/stablediffusion).
This model is trained for 1.25M steps on a 10M subset of... | [
-0.45395249128341675,
-0.7622145414352417,
0.29738950729370117,
0.1185518130660057,
-0.1764773726463318,
-0.3083048462867737,
0.020208371803164482,
-0.44637367129325867,
-0.09166790544986725,
0.3522205948829651,
-0.3678286075592041,
-0.37857720255851746,
-0.6489000916481018,
-0.10028079152... |
microsoft/trocr-base-handwritten | microsoft | 2023-01-26T12:56:57Z | 295,485 | 112 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"trocr",
"image-to-text",
"arxiv:2109.10282",
"endpoints_compatible",
"has_space",
"region:us"
] | image-to-text | 2022-03-02T23:29:05Z | ---
tags:
- trocr
- image-to-text
widget:
- src: https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg
example_title: Note 1
- src: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSoolxi9yWGAT5SLZShv8vVd0bz47UWRzQC19fDTeE8GmGv_Rn-PCF1pP1rrUx8kOjA4gg&usqp=CAU
example_title: Note 2
- src: https://encrypted-tbn0.... | [
-0.19652219116687775,
-0.3729129135608673,
0.12126913666725159,
-0.37605783343315125,
-0.3793249726295471,
0.003786081913858652,
-0.003608815371990204,
-0.8655459880828857,
0.10815340280532837,
0.6764953136444092,
-0.3753763735294342,
-0.4760826528072357,
-0.673095166683197,
0.135452896356... |
google/flan-t5-xl | google | 2023-11-28T09:14:33Z | 294,421 | 369 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dat... | text2text-generation | 2022-10-21T15:43:52Z | ---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversatio... | [
-0.4469079375267029,
-0.5761972665786743,
0.30898094177246094,
0.011384683661162853,
-0.10341539233922958,
-0.11937547475099564,
-0.41139623522758484,
-0.6318620443344116,
-0.14215734601020813,
0.09950324892997742,
-0.5122485756874084,
-0.5050493478775024,
-0.6618294715881348,
0.0671908482... |
sshleifer/tiny-gpt2 | sshleifer | 2021-05-23T12:55:11Z | 293,520 | 17 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
google/flan-t5-large | google | 2023-07-17T12:49:05Z | 291,334 | 316 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dat... | text2text-generation | 2022-10-21T10:07:08Z | ---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversatio... | [
-0.4610769748687744,
-0.5820000767707825,
0.3105415105819702,
0.0020028266590088606,
-0.09287659823894501,
-0.1524735689163208,
-0.4470739960670471,
-0.6550846099853516,
-0.13306500017642975,
0.13006164133548737,
-0.497055321931839,
-0.5158846378326416,
-0.6641572713851929,
0.0730594918131... |
microsoft/codebert-base | microsoft | 2022-02-11T19:59:44Z | 288,967 | 140 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"roberta",
"feature-extraction",
"arxiv:2002.08155",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ## CodeBERT-base
Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).
### Training Data
The model is trained on bi-modal data (documents & code) of [CodeSearchNet](https://github.com/github/CodeSearchNet)
### Training Objective
This model is i... | [
-0.15000185370445251,
-0.18983039259910583,
0.17599600553512573,
0.4042523503303528,
0.028797578066587448,
0.1198720932006836,
-0.3866422176361084,
-0.08078131079673767,
0.15767498314380646,
0.5232177972793579,
-0.49142998456954956,
-0.7690262198448181,
-0.5970048308372498,
-0.100646667182... |
microsoft/deberta-xlarge-mnli | microsoft | 2022-06-27T15:47:33Z | 287,831 | 15 | transformers | [
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"deberta-v1",
"deberta-mnli",
"en",
"arxiv:2006.03654",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) impro... | [
-0.4928748607635498,
-0.6684625148773193,
0.2888447046279907,
0.5022820234298706,
-0.18837030231952667,
0.20616041123867035,
0.011030166409909725,
-0.6990737915039062,
0.3044406771659851,
0.18768364191055298,
-0.8944010138511658,
-0.36391058564186096,
-0.9852786064147949,
-0.08067969977855... |
xlnet-base-cased | null | 2023-01-24T14:50:31Z | 285,675 | 49 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"xlnet",
"text-generation",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1906.08237",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: en
license: mit
datasets:
- bookcorpus
- wikipedia
---
# XLNet (base-sized model)
XLNet model pre-trained on English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in... | [
-0.37620019912719727,
-0.692796528339386,
0.23369093239307404,
0.05447549745440483,
-0.1598220318555832,
-0.13080111145973206,
-0.23221758008003235,
-0.4414084851741791,
0.2855101525783539,
0.3757573068141937,
-0.3945465683937073,
-0.3788275420665741,
-0.6007722020149231,
0.076696224510669... |
prithivida/parrot_adequacy_model | prithivida | 2022-05-27T02:47:22Z | 282,753 | 6 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-05-27T02:04:37Z | ---
license: apache-2.0
---
Parrot
THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER
1. What is Parrot?
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model. Please refer to the GitHub page or The mo... | [
-0.10157536715269089,
-1.072129249572754,
-0.000429558742325753,
0.48505765199661255,
-0.19428329169750214,
-0.3085114061832428,
0.2563242018222809,
-0.3527683615684509,
0.07239455729722977,
0.7536970973014832,
-0.5993884801864624,
0.27334335446357727,
-0.030462969094514847,
0.271001815795... |
google/electra-small-discriminator | google | 2021-04-29T15:24:16Z | 282,491 | 20 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"electra",
"pretraining",
"en",
"arxiv:1406.2661",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks usi... | [
-0.4730367958545685,
-0.49488940834999084,
0.15989743173122406,
0.17168261110782623,
-0.24812017381191254,
0.32072868943214417,
-0.2580536901950836,
-0.17946647107601166,
0.4031728208065033,
0.4429510831832886,
-0.34970441460609436,
-0.200064554810524,
-0.5095455646514893,
0.41112461686134... |
timm/convnext_small.fb_in22k | timm | 2023-03-31T22:34:14Z | 282,245 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-13T07:13:23Z | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for convnext_small.fb_in22k
A ConvNeXt image classification model. Pretrained on ImageNet-22k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model ... | [
-0.922447144985199,
-0.45697784423828125,
-0.040288522839546204,
0.5219123363494873,
-0.4341062307357788,
-0.19902922213077545,
-0.18124353885650635,
-0.48874399065971375,
0.9088695049285889,
0.23801866173744202,
-0.6007756590843201,
-0.5729562640190125,
-0.6978760361671448,
-0.03712354227... |
stabilityai/stable-diffusion-2-base | stabilityai | 2023-07-05T16:19:03Z | 282,105 | 306 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-23T17:41:31Z | ---
license: openrail++
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion v2-base Model Card
This model card focuses on the model associated with the Stable Diffusion v2-base model, available [here](https://github.com/Stability-AI/stablediffusion).
The model is trained from scratch 550k steps at resolut... | [
-0.3623008728027344,
-0.7732398509979248,
0.2768454849720001,
0.12266045063734055,
-0.2590923011302948,
-0.356234073638916,
0.07617881894111633,
-0.4324091076850891,
-0.15257491171360016,
0.41229236125946045,
-0.3376127779483795,
-0.40003079175949097,
-0.7150945663452148,
-0.15283840894699... |
flair/ner-english-ontonotes-large | flair | 2021-05-08T15:35:21Z | 280,041 | 72 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:ontonotes",
"arxiv:2011.06993",
"has_space",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "On September 1st George won 1 dollar while watching Game of Thrones."
---
## English NER in Flair (Ontonotes large model)
This is the large 18-class NER model for English that ships with [Flair](https:... | [
-0.3452865481376648,
-0.5214062333106995,
0.15153083205223083,
0.08882220089435577,
-0.13863860070705414,
-0.14179398119449615,
-0.21003173291683197,
-0.4156215190887451,
0.5845874547958374,
0.44301342964172363,
-0.41096460819244385,
-0.5644538402557373,
-0.5816996097564697,
0.281567275524... |
cardiffnlp/twitter-roberta-base-emotion | cardiffnlp | 2023-05-28T05:08:00Z | 278,272 | 36 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # Twitter-roBERTa-base for Emotion Recognition
This is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://g... | [
-0.03895709663629532,
-0.47568413615226746,
0.08824215084314346,
0.43991726636886597,
-0.13052432239055634,
0.20854316651821136,
-0.39654698967933655,
-0.1435495764017105,
0.2721105217933655,
-0.06032973527908325,
-0.39173340797424316,
-0.8324195146560669,
-0.8298298120498657,
0.1015545502... |
philschmid/bart-large-cnn-samsum | philschmid | 2022-12-23T19:48:57Z | 277,839 | 211 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"sagemaker",
"summarization",
"en",
"dataset:samsum",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: en
license: mit
tags:
- sagemaker
- bart
- summarization
datasets:
- samsum
widget:
- text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\
Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\
\ ok.\nJeff: and how can I get started? \nJeff: w... | [
-0.7882456183433533,
-0.6947547793388367,
0.11773502826690674,
0.23530510067939758,
-0.21321414411067963,
-0.009269225411117077,
-0.32475629448890686,
-0.45803749561309814,
0.3432733416557312,
0.43343862891197205,
-0.9516642689704895,
-0.5467766523361206,
-0.8267200589179993,
-0.2527081668... |
cointegrated/rubert-tiny2 | cointegrated | 2023-10-14T21:23:32Z | 277,516 | 54 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"russian",
"fill-mask",
"pretraining",
"embeddings",
"masked-lm",
"tiny",
"feature-extraction",
"sentence-similarity",
"transformers",
"ru",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
language:
- ru
pipeline_tag: sentence-similarity
tags:
- russian
- fill-mask
- pretraining
- embeddings
- masked-lm
- tiny
- feature-extraction
- sentence-similarity
- sentence-transformers
- transformers
license: mit
widget:
- text: Миниатюрная модель для [MASK] разных задач.
---
This is an updated version of [coi... | [
-0.1272447407245636,
-0.7402242422103882,
0.22640077769756317,
-0.00823963899165392,
-0.39813926815986633,
-0.1333993822336197,
-0.6702576279640198,
-0.20323266088962555,
0.32521602511405945,
0.38524898886680603,
-0.3882121443748474,
-0.18846458196640015,
-0.5432763695716858,
-0.0660661980... |
hfl/chinese-bert-wwm-ext | hfl | 2021-05-19T19:06:39Z | 277,357 | 125 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"arxiv:1906.08101",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
---
## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cu... | [
-0.4177989959716797,
-0.7425766587257385,
0.3220056891441345,
0.5764369368553162,
-0.4150307774543762,
-0.12495538592338562,
-0.586872935295105,
-0.7361405491828918,
0.35627368092536926,
0.4685896635055542,
-0.4704820513725281,
-0.4724794328212738,
-0.5826440453529358,
-0.00765018863603472... |
NousResearch/Llama-2-7b-chat-hf | NousResearch | 2023-07-18T20:57:56Z | 276,404 | 61 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-18T19:45:53Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license te... | [
-0.22563326358795166,
-0.7182027697563171,
0.3827623128890991,
0.1981249302625656,
-0.3884676694869995,
0.2319033443927765,
-0.04514666646718979,
-0.76518714427948,
0.06008138880133629,
0.32113802433013916,
-0.7158095836639404,
-0.587417721748352,
-0.6827578544616699,
0.08643511682748795,
... |
amunchet/rorshark-vit-base | amunchet | 2023-11-18T20:58:42Z | 275,882 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-11-18T20:49:21Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: rorshark-vit-base
results:
- task:
name: Image Classification
type: image-classification
dataset:
... | [
-0.28639546036720276,
-0.5920270085334778,
-0.109117791056633,
0.017209164798259735,
-0.4818778932094574,
-0.3650229275226593,
-0.03680683672428131,
-0.25972336530685425,
0.12359341233968735,
0.31905320286750793,
-0.7540500164031982,
-0.6500799059867859,
-0.7591380476951599,
-0.25495061278... |
flair/ner-english | flair | 2021-03-02T22:11:28Z | 275,715 | 22 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2003",
"has_space",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2003
widget:
- text: "George Washington went to Washington"
---
## English NER in Flair (default model)
This is the standard 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Sco... | [
-0.40985098481178284,
-0.612025797367096,
0.14851021766662598,
0.07254189252853394,
-0.10924401134252548,
-0.11063447594642639,
-0.26752769947052,
-0.41027069091796875,
0.5524793863296509,
0.23583592474460602,
-0.5032674074172974,
-0.5700594782829285,
-0.42729830741882324,
0.38735187053680... |
bigscience/T0_3B | bigscience | 2022-06-21T01:31:56Z | 275,632 | 91 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2110.08207",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match again... | [
-0.36170729994773865,
-0.7779497504234314,
0.31381168961524963,
0.12433524429798126,
-0.11486009508371353,
-0.08650419116020203,
-0.16151200234889984,
-0.34350624680519104,
-0.08413995802402496,
0.34273168444633484,
-0.46281084418296814,
-0.5965882539749146,
-0.5986722707748413,
0.29370146... |
ai-forever/sbert_large_nlu_ru | ai-forever | 2023-10-28T10:40:17Z | 274,992 | 32 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"PyTorch",
"Transformers",
"ru",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- PyTorch
- Transformers
---
# BERT large model (uncased) for Sentence Embeddings in Russian language.
The model is described [in this article](https://habr.com/ru/company/sberdevices/blog/527576/)
For better quality, use mean token embeddings.
## Usage (HuggingFace Models Repository)
You ... | [
-0.17058870196342468,
-0.7625309228897095,
0.3420388996601105,
0.4523608982563019,
-0.38548797369003296,
-0.13509470224380493,
-0.3086884617805481,
-0.24789994955062866,
0.3660627603530884,
0.2854122519493103,
-0.6430864334106445,
-0.6334342956542969,
-0.6685304045677185,
-0.13969457149505... |
valhalla/t5-base-e2e-qg | valhalla | 2021-06-23T14:40:07Z | 274,519 | 24 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-generation",
"dataset:squad",
"arxiv:1910.10683",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Python is a programming language. It is developed by Guido Van Rossum and released in 1991. </s>"
license: mit
---
## T5 for question-generation
This is [t5-base](https://arxiv.org/abs/1910.10683) model trained for end-to-end question generation task. ... | [
-0.4713422358036041,
-1.007559895515442,
0.3735017478466034,
0.09874578565359116,
-0.016646908596158028,
-0.1386893391609192,
0.09869945794343948,
-0.2246360033750534,
-0.117984339594841,
0.5948060154914856,
-0.7067289352416992,
-0.32856351137161255,
-0.2272094190120697,
0.3219171166419983... |
facebook/mbart-large-50-many-to-many-mmt | facebook | 2023-09-28T16:42:59Z | 269,202 | 94 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"mbart",
"text2text-generation",
"mbart-50",
"translation",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro... | translation | 2022-03-02T23:29:05Z | ---
language:
- multilingual
- ar
- cs
- de
- en
- es
- et
- fi
- fr
- gu
- hi
- it
- ja
- kk
- ko
- lt
- lv
- my
- ne
- nl
- ro
- ru
- si
- tr
- vi
- zh
- af
- az
- bn
- fa
- he
- hr
- id
- ka
- km
- mk
- ml
- mn
- mr
- pl
- ps
- pt
- sv
- sw
- ta
- te
- th
- tl
- uk
- ur
- xh
- gl
- sl
tags:
- mbart-50
pipeline_tag: ... | [
-0.5913527607917786,
-0.4904816150665283,
0.1176021620631218,
0.4099632203578949,
-0.2473485916852951,
0.08088479936122894,
-0.3542409837245941,
-0.31501588225364685,
0.2392563670873642,
0.19321143627166748,
-0.5929279327392578,
-0.6367784142494202,
-0.6725703477859497,
0.2521763741970062,... |
blanchefort/rubert-base-cased-sentiment | blanchefort | 2023-04-06T04:06:36Z | 266,823 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"text-classification",
"sentiment",
"ru",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment
- text-classification
---
# RuBERT for Sentiment Analysis
Short Russian texts sentiment classification
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on aggregated corpus of 351.797 texts.
... | [
-0.15465384721755981,
-0.681480884552002,
0.1173025518655777,
0.39597779512405396,
-0.6530241966247559,
0.16166651248931885,
-0.30807313323020935,
-0.20094875991344452,
0.38825732469558716,
0.04733294993638992,
-0.46542900800704956,
-0.8271676898002625,
-0.6370717287063599,
0.0805893838405... |
THUDM/chatglm2-6b-int4 | THUDM | 2023-10-09T08:23:08Z | 265,674 | 208 | transformers | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"arxiv:2103.10360",
"arxiv:2210.02414",
"arxiv:1911.02150",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2023-06-25T12:46:22Z | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2-6B
<p align="center">
💻 <a href="https://github.com/THUDM/ChatGLM2-6B" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22... | [
-0.46847212314605713,
-0.8587514758110046,
0.09360215812921524,
0.35752400755882263,
-0.3978419899940491,
-0.015743158757686615,
-0.3019861578941345,
-0.5763663649559021,
0.09849712997674942,
0.18048332631587982,
-0.5700622200965881,
-0.613480269908905,
-0.5467686057090759,
-0.236686304211... |
sshleifer/tiny-marian-en-de | sshleifer | 2020-06-25T02:27:15Z | 261,346 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
prithivida/parrot_paraphraser_on_T5 | prithivida | 2021-05-18T07:53:27Z | 254,724 | 121 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # Parrot
## 1. What is Parrot?
Parrot is a paraphrase based utterance augmentation framework purpose built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model. For more details on the library and usage please refer to the [github page](https://github.com/PrithivirajDamodar... | [
-0.368630588054657,
-1.138657808303833,
0.3362707197666168,
0.4000568091869354,
-0.31408554315567017,
-0.3262268900871277,
-0.031129861250519753,
-0.33240756392478943,
0.16960342228412628,
0.5181246399879456,
-0.17107798159122467,
-0.12093489617109299,
-0.30947431921958923,
0.3509892821311... |
oliverguhr/fullstop-punctuation-multilang-large | oliverguhr | 2023-11-16T09:35:35Z | 252,593 | 95 | transformers | [
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"xlm-roberta",
"token-classification",
"punctuation prediction",
"punctuation",
"en",
"de",
"fr",
"it",
"multilingual",
"dataset:wmt/europarl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"r... | token-classification | 2022-03-02T23:29:05Z | ---
language:
- en
- de
- fr
- it
- multilingual
tags:
- punctuation prediction
- punctuation
datasets: wmt/europarl
license: mit
widget:
- text: "Ho sentito che ti sei laureata il che mi fa molto piacere"
example_title: "Italian"
- text: "Tous les matins vers quatre heures mon père ouvrait la porte de ma chambre"
... | [
-0.1880018711090088,
-0.870668351650238,
0.5598078370094299,
0.5800278186798096,
-0.18031466007232666,
0.1964372843503952,
-0.5507814288139343,
-0.42199423909187317,
0.202569380402565,
0.3295525312423706,
-0.5270503759384155,
-0.916211724281311,
-0.5846936702728271,
0.6202841401100159,
0... |
kk08/CryptoBERT | kk08 | 2023-09-12T06:37:34Z | 250,327 | 11 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"crypto",
"sentiment",
"analysis",
"en",
"base_model:ProsusAI/finbert",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2023-04-13T17:52:32Z | ---
language:
- en
tags:
- generated_from_trainer
- crypto
- sentiment
- analysis
pipeline_tag: text-classification
base_model: ProsusAI/finbert
model-index:
- name: CryptoBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should pro... | [
-0.509645938873291,
-0.5405246019363403,
0.009664746932685375,
0.0574946254491806,
-0.3918880522251129,
-0.08946644514799118,
-0.12794649600982666,
-0.37833040952682495,
0.318204790353775,
0.39880189299583435,
-0.6277275085449219,
-0.8024358749389648,
-0.7708226442337036,
-0.11653351783752... |
huggyllama/llama-7b | huggyllama | 2023-04-07T15:50:47Z | 250,202 | 226 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-03T23:16:48Z | ---
license: other
---
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4... | [
-0.03279222548007965,
-0.07185257226228714,
0.48988500237464905,
0.722726583480835,
-0.7117372751235962,
-0.08364154398441315,
0.4580734968185425,
-0.3675478994846344,
0.4558961093425751,
0.9305098056793213,
-0.6471413373947144,
-0.41874513030052185,
-0.954289972782135,
0.2523752450942993,... |
facebook/dinov2-small | facebook | 2023-09-06T11:24:10Z | 249,084 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"dinov2",
"feature-extraction",
"dino",
"vision",
"arxiv:2304.07193",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-07-31T16:53:09Z | ---
license: apache-2.0
tags:
- dino
- vision
---
# Vision Transformer (small-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al.... | [
-0.5005313158035278,
-0.41063547134399414,
0.10173531621694565,
-0.138998344540596,
-0.4837265908718109,
-0.06250375509262085,
0.10567750036716461,
-0.4102470874786377,
0.27670717239379883,
0.4889487028121948,
-0.4758565425872803,
-0.21568997204303741,
-0.6738555431365967,
-0.1710104942321... |
ckpt/sd15 | ckpt | 2023-07-05T16:18:39Z | 248,915 | 1 | diffusers | [
"diffusers",
"license:openrail",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2022-10-21T03:51:16Z | ---
license: openrail
---
| [
-0.12853388488292694,
-0.18616782128810883,
0.6529127359390259,
0.4943625330924988,
-0.19319313764572144,
0.23607465624809265,
0.36071982979774475,
0.05056332051753998,
0.5793652534484863,
0.740013837814331,
-0.6508103013038635,
-0.2378396987915039,
-0.710224986076355,
-0.04782581701874733... |
nlpaueb/legal-bert-base-uncased | nlpaueb | 2022-04-28T14:42:50Z | 248,331 | 88 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"legal",
"fill-mask",
"en",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png
tags:
- legal
widget:
- text: "The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of police."
---
# LEGAL-BERT: The Mupp... | [
-0.278313010931015,
-0.5673346519470215,
0.45116475224494934,
0.0625097006559372,
-0.4226571321487427,
-0.19297315180301666,
-0.11129426211118698,
-0.621921956539154,
0.44562840461730957,
0.6566071510314941,
-0.23454952239990234,
-0.539972186088562,
-0.5319437980651855,
-0.1226803287863731... |
jackaduma/SecBERT | jackaduma | 2023-06-26T05:54:48Z | 246,839 | 21 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"exbert",
"security",
"cybersecurity",
"cyber security",
"threat hunting",
"threat intelligence",
"en",
"dataset:APTnotes",
"dataset:Stucco-Data",
"dataset:CASIE",
"license:apache-2.0",
"autotrain_compatible",
"endpoint... | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/jackaduma
tags:
- exbert
- security
- cybersecurity
- cyber security
- threat hunting
- threat intelligence
license: apache-2.0
datasets:
- APTnotes
- Stucco-Data
- CASIE
---
# SecBERT
This is the pretrained model presented in [SecBERT: A Pretrained Language Model for C... | [
-0.40570294857025146,
-0.7357547879219055,
0.2840985059738159,
-0.04719579964876175,
-0.369606614112854,
0.24503475427627563,
-0.12173525989055634,
-0.6604324579238892,
0.35989460349082947,
0.6413383483886719,
-0.6478796601295471,
-0.5982702970504761,
-0.6239611506462097,
-0.11176122725009... |
cross-encoder/ms-marco-MiniLM-L-4-v2 | cross-encoder | 2021-08-05T08:39:32Z | 244,490 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch).... | [
-0.4464038610458374,
-0.602876603603363,
0.34621721506118774,
0.16133926808834076,
-0.17513102293014526,
0.14829601347446442,
-0.18506364524364471,
-0.5315775871276855,
0.3474007248878479,
0.35316556692123413,
-0.5687869191169739,
-0.7053859233856201,
-0.800533652305603,
0.0421725995838642... |
google/t5-v1_1-xl | google | 2023-01-24T16:52:38Z | 242,025 | 15 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2002.05202",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- c4
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1
## Version 1.1
[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the f... | [
-0.2746664583683014,
-0.34255164861679077,
0.3779716491699219,
0.20183885097503662,
-0.19887511432170868,
0.13384810090065002,
-0.22841110825538635,
-0.6767667531967163,
-0.15885072946548462,
0.42784780263900757,
-0.6730371117591858,
-0.5576782822608948,
-0.894743025302887,
0.1923360675573... |
lambdalabs/sd-image-variations-diffusers | lambdalabs | 2023-02-08T15:10:13Z | 239,599 | 300 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"image-to-image",
"dataset:ChristophSchuhmann/improved_aesthetics_6plus",
"license:creativeml-openrail-m",
"has_space",
"diffusers:StableDiffusionImageVariationPipeline",
"region:us"
] | image-to-image | 2022-09-09T14:53:35Z | ---
thumbnail: "https://repository-images.githubusercontent.com/523487884/fdb03a69-8353-4387-b5fc-0d85f888a63f"
datasets:
- ChristophSchuhmann/improved_aesthetics_6plus
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- image-to-image
---
# Stable Diffusion Image Variations Model Ca... | [
-0.49949759244918823,
-0.8010143041610718,
0.1781340390443802,
0.13475586473941803,
-0.3514064848423004,
-0.2697363495826721,
-0.010866411961615086,
-0.4861367642879486,
0.08869737386703491,
0.3328617215156555,
-0.45151588320732117,
-0.45907309651374817,
-0.7574523687362671,
-0.12069124728... |
sentence-transformers/paraphrase-MiniLM-L6-v2 | sentence-transformers | 2022-06-15T18:39:43Z | 239,404 | 56 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dens... | [
-0.21672263741493225,
-0.7220319509506226,
0.4021681249141693,
0.20594340562820435,
-0.37869298458099365,
-0.46314093470573425,
-0.11258243024349213,
0.02763775922358036,
0.12921775877475739,
0.44833245873451233,
-0.5586857795715332,
-0.29187124967575073,
-0.5826951861381531,
0.13562308251... |
mrm8488/longformer-base-4096-finetuned-squadv2 | mrm8488 | 2022-12-05T13:36:25Z | 238,712 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"longformer",
"question-answering",
"QA",
"long context",
"Q&A",
"en",
"dataset:squad_v2",
"arxiv:2004.05150",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags:
- QA
- long context
- Q&A
datasets:
- squad_v2
model-index:
- name: mrm8488/longformer-base-4096-finetuned-squadv2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validati... | [
-0.46381497383117676,
-0.6658297777175903,
0.09233322739601135,
0.568914532661438,
-0.04363630712032318,
0.0116110322996974,
-0.30902406573295593,
-0.45094072818756104,
0.4340131878852844,
0.36900946497917175,
-0.9865384697914124,
-0.3577090799808502,
-0.6189082860946655,
0.299217760562896... |
MIT/ast-finetuned-audioset-10-10-0.4593 | MIT | 2023-09-06T14:49:15Z | 237,475 | 81 | transformers | [
"transformers",
"pytorch",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"arxiv:2104.01778",
"license:bsd-3-clause",
"endpoints_compatible",
"has_space",
"region:us"
] | audio-classification | 2022-11-14T18:41:48Z | ---
license: bsd-3-clause
tags:
- audio-classification
---
# Audio Spectrogram Transformer (fine-tuned on AudioSet)
Audio Spectrogram Transformer (AST) model fine-tuned on AudioSet. It was introduced in the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Gong et al. and first released... | [
-0.7788687944412231,
-0.22866615653038025,
0.10212727636098862,
0.1252254992723465,
-0.32091352343559265,
0.062356408685445786,
-0.22839370369911194,
-0.6871361136436462,
0.4269195795059204,
0.5312556624412537,
-0.8323712348937988,
-0.47967278957366943,
-0.6342211961746216,
-0.127616524696... |
xlm-roberta-large-finetuned-conll03-english | null | 2023-11-28T09:51:38Z | 235,492 | 79 | transformers | [
"transformers",
"pytorch",
"rust",
"onnx",
"safetensors",
"xlm-roberta",
"token-classification",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr... | token-classification | 2022-03-02T23:29:04Z | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
... | [
-0.3676998019218445,
-0.5534974336624146,
0.17838282883167267,
0.08581897616386414,
-0.1455267071723938,
-0.19337265193462372,
-0.3972790539264679,
-0.5021405220031738,
0.15394221246242523,
0.46527594327926636,
-0.4163527190685272,
-0.5968206524848938,
-0.7832571268081665,
0.11448957771062... |
google/flan-t5-xxl | google | 2023-07-27T11:42:14Z | 232,665 | 994 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dat... | text2text-generation | 2022-10-21T15:54:59Z | ---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversatio... | [
-0.5141759514808655,
-0.5739010572433472,
0.28543275594711304,
0.021856755018234253,
-0.058133967220783234,
-0.16920867562294006,
-0.4264768958091736,
-0.6200794577598572,
-0.11808659136295319,
0.08945875614881516,
-0.5360116958618164,
-0.4168476462364197,
-0.6586325168609619,
0.0204132571... |
facebook/vit-mae-base | facebook | 2023-06-13T19:42:42Z | 231,853 | 14 | transformers | [
"transformers",
"pytorch",
"tf",
"vit_mae",
"pretraining",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (base-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaimin... | [
-0.6072143316268921,
-0.46939709782600403,
-0.048354629427194595,
0.15414616465568542,
-0.2800132930278778,
-0.06105101853609085,
0.09822873771190643,
-0.5088661313056946,
0.4841447174549103,
0.4320984482765198,
-0.5992250442504883,
-0.265250563621521,
-0.8571335673332214,
-0.1135821565985... |
rinna/japanese-clip-vit-b-16 | rinna | 2023-09-09T02:15:59Z | 230,513 | 15 | transformers | [
"transformers",
"pytorch",
"safetensors",
"clip",
"zero-shot-image-classification",
"feature-extraction",
"ja",
"japanese",
"vision",
"arxiv:2103.00020",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-04-27T07:52:33Z | ---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: apache-2.0
tags:
- feature-extraction
- ja
- japanese
- clip
- vision
---
# rinna/japanese-clip-vit-b-16

This is a Japanese [CLIP (Contrastive Language-Image Pre-Training)](http... | [
-0.3538474440574646,
-0.8184272646903992,
0.2249954342842102,
0.3122679889202118,
-0.5894460082054138,
-0.02764124795794487,
-0.19806431233882904,
-0.38541460037231445,
0.39704272150993347,
0.44119536876678467,
-0.5646711587905884,
-0.5715044140815735,
-0.7152783870697021,
0.06856339424848... |
finiteautomata/beto-sentiment-analysis | finiteautomata | 2023-02-25T14:23:57Z | 229,332 | 21 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"sentiment-analysis",
"es",
"arxiv:2106.09462",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- es
tags:
- sentiment-analysis
---
# Sentiment Analysis in Spanish
## beto-sentiment-analysis
**NOTE: this model will be removed soon -- use [pysentimiento/robertuito-sentiment-analysis](https://huggingface.co/pysentimiento/robertuito-sentiment-analysis) instead**
Repository: [https://github.com/... | [
-0.2638602554798126,
-0.49319547414779663,
0.26733002066612244,
1.030317783355713,
-0.5684672594070435,
0.07010827213525772,
-0.37590721249580383,
-0.39691057801246643,
0.5964926481246948,
0.034621261060237885,
-0.4992242753505707,
-0.8764703273773193,
-0.6149522662162781,
0.10460997372865... |
facebook/dpr-ctx_encoder-single-nq-base | facebook | 2022-12-21T15:16:53Z | 228,955 | 16 | transformers | [
"transformers",
"pytorch",
"tf",
"dpr",
"en",
"dataset:nq_open",
"arxiv:2004.04906",
"arxiv:1702.08734",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
license: cc-by-nc-4.0
tags:
- dpr
datasets:
- nq_open
inference: false
---
# `dpr-ctx_encoder-single-nq-base`
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limit... | [
-0.5775387287139893,
-0.9347667694091797,
0.32217979431152344,
0.16041550040245056,
-0.1471717357635498,
-0.05240190774202347,
-0.1320933997631073,
-0.28024062514305115,
0.031894221901893616,
0.40841642022132874,
-0.6775912642478943,
-0.4171791672706604,
-0.46398746967315674,
0.28152427077... |
madebyollin/sdxl-vae-fp16-fix | madebyollin | 2023-09-25T14:55:46Z | 227,558 | 243 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:mit",
"has_space",
"diffusers:AutoencoderKL",
"region:us"
] | null | 2023-07-11T04:03:50Z | ---
license: mit
tags:
- stable-diffusion
- stable-diffusion-diffusers
inference: false
---
# SDXL-VAE-FP16-Fix
SDXL-VAE-FP16-Fix is the [SDXL VAE](https://huggingface.co/stabilityai/sdxl-vae)*, but modified to run in fp16 precision without generating NaNs.
| VAE | Decoding in `float32` / `bfloat16`... | [
-0.5465339422225952,
-0.3628389537334442,
0.39015746116638184,
0.4068374037742615,
-0.216382697224617,
-0.1397242695093155,
-0.011988040059804916,
-0.13334406912326813,
0.4964955449104309,
0.476259708404541,
-0.5727225542068481,
-0.47788330912590027,
-0.612947404384613,
-0.0056890621781349... |
lewtun/tiny-random-mt5 | lewtun | 2022-09-15T15:04:49Z | 226,779 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | 2022-09-15T15:03:33Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
Hello-SimpleAI/chatgpt-detector-roberta | Hello-SimpleAI | 2023-01-19T11:03:04Z | 226,345 | 37 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"chatgpt",
"en",
"dataset:Hello-SimpleAI/HC3",
"arxiv:2301.07597",
"doi:10.57967/hf/1203",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2023-01-18T16:38:53Z | ---
datasets:
- Hello-SimpleAI/HC3
language:
- en
pipeline_tag: text-classification
tags:
- chatgpt
---
# Model Card for `Hello-SimpleAI/chatgpt-detector-roberta`
This model is trained on **a mix of full texts and split sentences** of `answer`s from [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-Simpl... | [
-0.46604153513908386,
-0.7575124502182007,
0.44017279148101807,
-0.1044602021574974,
-0.32476580142974854,
-0.3908401429653168,
-0.20694944262504578,
-0.3102174699306488,
-0.02076668106019497,
0.35341402888298035,
-0.6566519141197205,
-0.38597699999809265,
-0.7319904565811157,
-0.111659817... |
setu4993/LEALLA-small | setu4993 | 2023-10-19T06:22:00Z | 225,772 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence_embedding",
"multilingual",
"google",
"sentence-similarity",
"lealla",
"labse",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"bo",
"bs",
"ca",
"ceb",
"co",
"cs",
"c... | sentence-similarity | 2023-05-21T08:17:47Z | ---
pipeline_tag: sentence-similarity
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bo
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- hr
- ht
-... | [
-0.044353730976581573,
-0.8877517580986023,
0.5966794490814209,
0.16939452290534973,
-0.07647154480218887,
-0.22834943234920502,
-0.5289684534072876,
-0.2592085003852844,
0.2882457375526428,
-0.006238623522222042,
-0.3826368451118469,
-0.554742693901062,
-0.5787931084632874,
0.037791546434... |
finiteautomata/bertweet-base-sentiment-analysis | finiteautomata | 2023-02-17T02:17:31Z | 225,680 | 87 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"sentiment-analysis",
"en",
"arxiv:2106.09462",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- sentiment-analysis
---
# Sentiment Analysis in English
## bertweet-sentiment-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with SemEval 2017 corpus (around ~40k tweets). Base model is [BERTweet... | [
-0.15303368866443634,
-0.5744877457618713,
0.27282950282096863,
0.9624878168106079,
-0.6759429574012756,
0.013552662916481495,
-0.39962875843048096,
-0.24965102970600128,
0.45349767804145813,
-0.0024470360949635506,
-0.3403446078300476,
-0.8790870308876038,
-0.6800591945648193,
-0.00482833... |
Helsinki-NLP/opus-mt-ar-en | Helsinki-NLP | 2023-08-16T11:25:35Z | 225,461 | 20 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"ar",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ar-en
* source languages: ar
* target languages: en
* OPUS readme: [ar-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
-0.31014105677604675,
-0.4446490406990051,
0.2264184206724167,
0.38155999779701233,
-0.49311569333076477,
-0.3914905786514282,
-0.38162267208099365,
-0.10355963557958603,
-0.008968794718384743,
0.5162110328674316,
-0.6190001964569092,
-0.6538227200508118,
-0.7253750562667847,
0.23900547623... |
mrm8488/bert-spanish-cased-finetuned-ner | mrm8488 | 2021-05-20T00:35:25Z | 225,044 | 19 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"es",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language: es
thumbnail: https://i.imgur.com/jgBdimh.png
---
# Spanish BERT (BETO) + NER
This model is a version of the Spanish BERT cased [(BETO)](https://github.com/dccuchile/beto), fine-tuned on [NER-C](https://www.kaggle.com/nltkdata/conll-corpora) for the **NER** downstream task.
## Details of the downstream task... | [
-0.4460482895374298,
-0.5736901760101318,
0.08550769835710526,
0.5433961749076843,
-0.18223769962787628,
0.12845765054225922,
-0.48437169194221497,
-0.5495510697364807,
0.6458991169929504,
0.3587822914123535,
-0.7945590019226074,
-0.7581899166107178,
-0.8274465203285217,
0.2618066370487213... |
flair/ner-english-ontonotes | flair | 2023-04-07T09:23:02Z | 224,275 | 16 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:ontonotes",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "On September 1st George Washington won 1 dollar."
---
## English NER in Flair (Ontonotes default model)
This is the 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/fl... | [
-0.31342071294784546,
-0.5913069248199463,
0.11864541471004486,
0.1771598607301712,
-0.1655953973531723,
-0.11890819668769836,
-0.20995426177978516,
-0.4346923232078552,
0.7105306386947632,
0.3443385362625122,
-0.4069838523864746,
-0.5726970434188843,
-0.5363482236862183,
0.325304388999938... |
jbetker/wav2vec2-large-robust-ft-libritts-voxpopuli | jbetker | 2022-02-25T19:07:57Z | 223,547 | 7 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | This checkpoint is a wav2vec2-large model that is useful for generating transcriptions with punctuation. It is intended for use in building transcriptions for TTS models, where punctuation is very important for prosody.
This model was created by fine-tuning the `facebook/wav2vec2-large-robust-ft-libri-960h` checkpoint... | [
-0.09183622151613235,
-0.8696128129959106,
0.6671246290206909,
0.3418934941291809,
-0.4113052487373352,
-0.0029179034754633904,
-0.05544443055987358,
-0.4761694371700287,
0.11917222291231155,
0.5174658894538879,
-0.670107364654541,
-0.6070524454116821,
-0.4045546054840088,
-0.1134659275412... |
Lykon/DreamShaper | Lykon | 2023-08-01T15:02:43Z | 223,369 | 792 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"anime",
"en",
"doi:10.57967/hf/0453",
"license:other",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-01-12T09:14:06Z | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- anime
inference: false
---
# Dream Shaper
## Official Repository
Read more about this model here: https://civitai.com/models/4384/dreamshaper
Also please support by giving 5 stars an... | [
-0.17902104556560516,
-0.06115304306149483,
0.3463464677333832,
0.37118417024612427,
-0.3133675754070282,
0.08583256602287292,
0.26471084356307983,
-0.35531359910964966,
0.6737615466117859,
0.8562721014022827,
-0.8937750458717346,
-0.23105871677398682,
-0.43728333711624146,
-0.051628459244... |
HuggingFaceM4/siglip-so400m-14-384 | HuggingFaceM4 | 2023-10-20T12:35:52Z | 222,708 | 2 | transformers | [
"transformers",
"pytorch",
"siglip",
"feature-extraction",
"custom_code",
"region:us"
] | feature-extraction | 2023-10-17T12:10:20Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
stabilityai/stable-diffusion-2-inpainting | stabilityai | 2023-07-05T16:19:10Z | 217,970 | 374 | diffusers | [
"diffusers",
"stable-diffusion",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"has_space",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] | null | 2022-11-23T17:41:55Z | ---
license: openrail++
tags:
- stable-diffusion
inference: false
---
# Stable Diffusion v2 Model Card
This model card focuses on the model associated with Stable Diffusion v2, available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-inpainting` model is resumed from [stable-dif... | [
-0.450370728969574,
-0.7834217548370361,
0.3374694883823395,
0.2897995412349701,
-0.17101143300533295,
-0.1844548135995865,
0.06947880983352661,
-0.46474793553352356,
0.06761258095502853,
0.4036063253879547,
-0.42686912417411804,
-0.32712018489837646,
-0.5678645372390747,
-0.10247651487588... |
valentinafeve/yolos-fashionpedia | valentinafeve | 2023-03-10T13:11:26Z | 217,623 | 42 | transformers | [
"transformers",
"pytorch",
"yolos",
"object-detection",
"YOLOS",
"Object detection",
"en",
"dataset:detection-datasets/fashionpedia",
"endpoints_compatible",
"has_space",
"region:us"
] | object-detection | 2022-11-17T16:04:03Z | ---
datasets:
- detection-datasets/fashionpedia
language:
- en
pipeline_tag: object-detection
tags:
- YOLOS
- Object detection
---
This is a fine-tuned object detection model for fashion.
For more details of the implementation you can check the source code [here](https://github.com/valntinaf/fine_tunning_YOLOS_for_f... | [
-0.6475762128829956,
-0.818427562713623,
0.15255849063396454,
-0.14163292944431305,
-0.5499687790870667,
-0.06342677026987076,
0.2724381983280182,
-0.5440884828567505,
0.30753540992736816,
0.47729241847991943,
-0.951789379119873,
-0.9453959465026855,
-0.4087992310523987,
0.0996115133166313... |
DeepPavlov/rubert-base-cased | DeepPavlov | 2021-11-23T08:03:04Z | 216,793 | 46 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"arxiv:1905.07213",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
language:
- ru
---
# rubert-base-cased
RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT-base as an initialization for ... | [
-0.330594539642334,
-0.557574987411499,
0.01627473346889019,
0.16922582685947418,
-0.4432569146156311,
0.1686265766620636,
-0.7621351480484009,
-0.08854412287473679,
0.20369945466518402,
0.29741808772087097,
-0.6960573196411133,
-0.431662917137146,
-0.6526588797569275,
-0.11071902513504028... |
google/canine-c | google | 2022-08-08T13:44:46Z | 215,886 | 8 | transformers | [
"transformers",
"pytorch",
"canine",
"feature-extraction",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi... | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lm
- mk... | [
-0.5241236686706543,
-0.7141175270080566,
0.05178764462471008,
0.3939940333366394,
-0.17912426590919495,
0.052829790860414505,
-0.3400464653968811,
-0.5336041450500488,
0.1911952942609787,
0.22498413920402527,
-0.6266404986381531,
-0.478036493062973,
-0.37303492426872253,
0.132714211940765... |
fusing/karlo-image-variations-diffusers | fusing | 2022-12-21T02:27:02Z | 215,010 | 0 | diffusers | [
"diffusers",
"has_space",
"diffusers:UnCLIPImageVariationPipeline",
"region:us"
] | null | 2022-12-21T02:13:09Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
WinKawaks/vit-tiny-patch16-224 | WinKawaks | 2023-03-30T14:56:06Z | 214,837 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"vision",
"dataset:imagenet",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://... | [
-0.17637445032596588,
-0.547081470489502,
0.46291807293891907,
0.04028879106044769,
-0.2653295695781708,
-0.2902849018573761,
0.006528464145958424,
-0.5112335681915283,
0.38212451338768005,
0.30687135457992554,
-0.7568330764770508,
-0.4086489677429199,
-0.6417765617370605,
-0.1414546072483... |
facebook/wav2vec2-base | facebook | 2021-12-28T12:44:31Z | 211,065 | 38 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Base
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech... | [
-0.0600566528737545,
-0.532793402671814,
0.05064170062541962,
-0.0649271309375763,
-0.32846829295158386,
-0.09859603643417358,
-0.277445524930954,
-0.6956556439399719,
-0.23656772077083588,
0.14015813171863556,
-0.5433253049850464,
-0.4247603416442871,
-0.6299493312835693,
-0.3526416122913... |
openai/whisper-small | openai | 2023-09-08T13:08:05Z | 209,410 | 106 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"... | automatic-speech-recognition | 2022-09-26T06:51:27Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
... | [
-0.19668921828269958,
-0.5901548266410828,
0.10750962793827057,
0.46743056178092957,
-0.12570390105247498,
-0.13874252140522003,
-0.3303639888763428,
-0.5488724112510681,
0.2129666805267334,
0.47566312551498413,
-0.8519209027290344,
-0.5951669812202454,
-0.7871646285057068,
-0.005301500204... |
distilbert-base-cased-distilled-squad | null | 2023-04-12T12:06:44Z | 209,375 | 149 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
language: en
license: apache-2.0
datasets:
- squad
metrics:
- squad
model-index:
- name: distilbert-base-cased-distilled-squad
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metr... | [
-0.35096123814582825,
-0.8619423508644104,
0.23672522604465485,
0.1442592591047287,
-0.12075002491474152,
0.19107450544834137,
-0.14562568068504333,
-0.21442386507987976,
-0.0514499805867672,
0.15491603314876556,
-0.7960163354873657,
-0.29770809412002563,
-0.7210885286331177,
0.08499596267... |
stabilityai/stable-diffusion-2-depth | stabilityai | 2023-07-05T16:19:06Z | 208,100 | 355 | diffusers | [
"diffusers",
"stable-diffusion",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"has_space",
"diffusers:StableDiffusionDepth2ImgPipeline",
"region:us"
] | null | 2022-11-23T17:41:46Z | ---
license: openrail++
tags:
- stable-diffusion
inference: false
---
# Stable Diffusion v2 Model Card
This model card focuses on the model associated with the Stable Diffusion v2 model, available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-depth` model is resumed from [stable-di... | [
-0.4188733994960785,
-0.8288089632987976,
0.3146495819091797,
0.196977898478508,
-0.2108154296875,
-0.3187848627567291,
0.08413808047771454,
-0.4326992630958557,
-0.05139628052711487,
0.37123972177505493,
-0.361415833234787,
-0.3728540241718292,
-0.6882968544960022,
-0.13993892073631287,
... |
Meina/MeinaMix_V10 | Meina | 2023-05-25T11:22:20Z | 207,516 | 28 | diffusers | [
"diffusers",
"art",
"anime",
"stable diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-24T04:44:20Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- stable diffusion
---
MeinaMix's objective is to make it possible to produce good art with little prompting.
For examples and prompts, please check out: https://civitai.com/models/7240/meinamix
I have a discord s... | [
-0.7832015752792358,
-0.5385299324989319,
0.7348135113716125,
0.3701297342777252,
-0.5252428650856018,
-0.34477710723876953,
0.07591038942337036,
-0.6524826884269714,
0.42745909094810486,
0.41525137424468994,
-0.6755115985870361,
-0.680155336856842,
-0.4237144887447357,
0.18681369721889496... |
codellama/CodeLlama-13b-Instruct-hf | codellama | 2023-10-27T18:11:57Z | 206,979 | 87 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-24T16:33:54Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 13 instruct-tuned version in the Hugging Face Tr... | [
-0.39383187890052795,
-0.6477416753768921,
0.3027331233024597,
0.5650025606155396,
-0.24211017787456512,
0.16880476474761963,
-0.08753236383199692,
-0.6547418832778931,
0.2540854215621948,
0.5282954573631287,
-0.41824227571487427,
-0.5732213258743286,
-0.6063459515571594,
0.334913522005081... |
MCG-NJU/videomae-base-finetuned-kinetics | MCG-NJU | 2023-04-22T11:30:54Z | 202,083 | 14 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"vision",
"arxiv:2203.12602",
"arxiv:2111.06377",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | video-classification | 2022-07-08T15:01:34Z | ---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---
# VideoMAE (base-sized model, fine-tuned on Kinetics-400)
VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper [VideoMAE: Masked Autoencoders are Dat... | [
-0.45991599559783936,
-0.23943272233009338,
0.11834558099508286,
-0.21724779903888702,
-0.38011661171913147,
0.020677272230386734,
0.11127777397632599,
0.008039037697017193,
0.31351613998413086,
0.40585070848464966,
-0.5348342657089233,
-0.4288516044616699,
-0.9720766544342041,
-0.30829334... |
Jean-Baptiste/camembert-ner | Jean-Baptiste | 2023-06-01T01:32:51Z | 199,093 | 82 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language: fr
datasets:
- Jean-Baptiste/wikiner_fr
widget:
- text: "Je m'appelle jean-baptiste et je vis à montréal"
- text: "george washington est allé à washington"
license: mit
---
# camembert-ner: model fine-tuned from camemBERT for NER task.
## Introduction
[camembert-ner] is a NER model that was fine-tuned ... | [
-0.40280118584632874,
-0.824504554271698,
0.3139019012451172,
0.08511029183864594,
-0.35734349489212036,
-0.04813624545931816,
-0.2419602870941162,
-0.2450299710035324,
0.5909332036972046,
0.4368961751461029,
-0.5803562998771667,
-1.0024274587631226,
-0.8909282088279724,
0.1972110718488693... |