| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | embedding |
|---|---|---|---|---|---|---|---|---|---|---|
| jonatasgrosman/wav2vec2-large-xlsr-53-english | jonatasgrosman | 2023-03-25T10:56:55Z | 74,653,058 | 317 | transformers | transformers, pytorch, jax, safetensors, wav2vec2, automatic-speech-recognition, audio, en, hf-asr-leaderboard, mozilla-foundation/common_voice_6_0, robust-speech-event, speech, xlsr-fine-tuning-week, dataset:common_voice, dataset:mozilla-foundation/common_vo… | automatic-speech-recognition | 2022-03-02T23:29:05Z | language: en; datasets: common_voice, mozilla-foundation/common_voice_6_0; metrics: wer, cer; tags: audio, automatic-speech-recognition, en, hf-asr-leaderboard, robust-speech-event, speech, xlsr-fine-tuning-week; license: apache-2.0; model-index: XLSR Wav2Vec2 … | [-0.2986782491207123, -0.6198410987854004, 0.15314045548439026, …] |
| bert-base-uncased | null | 2023-06-30T01:42:19Z | 55,618,661 | 1,219 | transformers | transformers, pytorch, tf, jax, rust, coreml, onnx, safetensors, bert, fill-mask, exbert, en, dataset:bookcorpus, dataset:wikipedia, arxiv:1810.04805, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: apache-2.0; datasets: bookcorpus, wikipedia; # BERT base model (uncased): Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](http… | [-0.13803230226039886, -0.6192845106124878, 0.1602371782064438, …] |
| openai/clip-vit-large-patch14 | openai | 2023-09-15T15:49:35Z | 31,237,964 | 734 | transformers | transformers, pytorch, tf, jax, safetensors, clip, zero-shot-image-classification, vision, arxiv:2103.00020, arxiv:1908.04913, endpoints_compatible, has_space, region:us | zero-shot-image-classification | 2022-03-02T23:29:05Z | tags: vision; widget: src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png, candidate_labels: playing music, playing sports, example_title: Cat & Dog; # Model Card: CLIP: Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found … | [-0.5028195381164551, -0.5711244940757751, 0.16508552432060242, …] |
| distilbert-base-uncased-finetuned-sst-2-english | null | 2023-10-26T16:14:11Z | 30,654,240 | 355 | transformers | transformers, pytorch, tf, rust, onnx, safetensors, distilbert, text-classification, en, dataset:sst2, dataset:glue, arxiv:1910.01108, doi:10.57967/hf/0181, license:apache-2.0, model-index, endpoints_compatible, has_space, region:us | text-classification | 2022-03-02T23:29:04Z | language: en; license: apache-2.0; datasets: sst2, glue; model-index: name: distilbert-base-uncased-finetuned-sst-2-english; results: task: text-classification, dataset: glue (config: sst2, split: validation), metrics: … | [-0.39455512166023254, -0.7664994597434998, 0.17849937081336975, …] |
| gpt2 | null | 2023-06-30T02:19:43Z | 21,850,807 | 1,516 | transformers | transformers, pytorch, tf, jax, tflite, rust, onnx, safetensors, gpt2, text-generation, exbert, en, doi:10.57967/hf/0039, license:mit, endpoints_compatible, has_space, text-generation-inference, region:us | text-generation | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: mit; # GPT-2: Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large. Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better… | [-0.26800060272216797, -0.7213711738586426, 0.30211585760116577, …] |
| timm/mobilenetv3_large_100.ra_in1k | timm | 2023-04-27T22:49:21Z | 15,846,252 | 17 | timm | timm, pytorch, safetensors, image-classification, dataset:imagenet-1k, arxiv:2110.00476, arxiv:1905.02244, license:apache-2.0, has_space, region:us | image-classification | 2022-12-16T05:38:07Z | tags: image-classification, timm; library_name: timm; license: apache-2.0; datasets: imagenet-1k; # Model card for mobilenetv3_large_100.ra_in1k: A MobileNet-v3 image classification model. Trained on ImageNet-1k in `timm` using recipe template described below. Recipe details: RandAugment `RA` recipe. Inspi… | [-0.42348533868789673, -0.28704380989074707, -0.060248397290706635, …] |
| distilgpt2 | null | 2023-04-29T12:24:21Z | 14,735,848 | 275 | transformers | transformers, pytorch, tf, jax, tflite, rust, coreml, safetensors, gpt2, text-generation, exbert, en, dataset:openwebtext, arxiv:1910.01108, arxiv:2201.08542, arxiv:2203.12574, arxiv:1910.09700, arxiv:1503.02531, license:apache-2.0, model-… | text-generation | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: apache-2.0; datasets: openwebtext; model-index: name: distilgpt2; results: task: text-generation, dataset: wikitext (WikiText-103), metrics: perplexity … | [-0.14997319877147675, -0.7801599502563477, 0.3517022430896759, …] |
| roberta-base | null | 2023-03-06T15:14:53Z | 14,521,862 | 247 | transformers | transformers, pytorch, tf, jax, rust, safetensors, roberta, fill-mask, exbert, en, dataset:bookcorpus, dataset:wikipedia, arxiv:1907.11692, arxiv:1806.02847, license:mit, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: mit; datasets: bookcorpus, wikipedia; # RoBERTa base model: Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com… | [-0.15874525904655457, -0.7864440083503723, 0.2143075317144394, …] |
| stabilityai/stable-diffusion-xl-base-1.0 | stabilityai | 2023-10-30T16:03:47Z | 10,649,877 | 3,677 | diffusers | diffusers, onnx, text-to-image, stable-diffusion, arxiv:2307.01952, arxiv:2211.01324, arxiv:2108.01073, arxiv:2112.10752, license:openrail++, endpoints_compatible, has_space, diffusers:StableDiffusionXLPipeline, region:us | text-to-image | 2023-07-25T13:25:51Z | license: openrail++; tags: text-to-image, stable-diffusion; # SD-XL 1.0-base Model Card:  ## Model: [SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion: In a first step, the bas… | [-0.39037227630615234, -0.807611346244812, 0.49455687403678894, …] |
| distilbert-base-uncased | null | 2023-08-18T14:59:41Z | 10,387,599 | 308 | transformers | transformers, pytorch, tf, jax, rust, safetensors, distilbert, fill-mask, exbert, en, dataset:bookcorpus, dataset:wikipedia, arxiv:1910.01108, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: apache-2.0; datasets: bookcorpus, wikipedia; # DistilBERT base model (uncased): This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the disti… | [-0.05776984989643097, -0.6613970994949341, 0.2539806365966797, …] |
| xlm-roberta-base | null | 2023-04-07T12:46:17Z | 10,334,322 | 423 | transformers | transformers, pytorch, tf, jax, onnx, safetensors, xlm-roberta, fill-mask, exbert, multilingual, af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi… | fill-mask | 2022-03-02T23:29:04Z | tags: exbert; language: multilingual, af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fr, fy, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lo, lt, lv, mg, mk, ml, mn, … | [-0.441704124212265, -0.7515607476234436, 0.2002364993095398, …] |
| stabilityai/stable-diffusion-xl-refiner-1.0 | stabilityai | 2023-09-25T13:42:56Z | 9,964,151 | 1,117 | diffusers | diffusers, stable-diffusion, image-to-image, arxiv:2307.01952, arxiv:2211.01324, arxiv:2108.01073, arxiv:2112.10752, license:openrail++, has_space, diffusers:StableDiffusionXLImg2ImgPipeline, region:us | image-to-image | 2023-07-26T07:38:01Z | license: openrail++; tags: stable-diffusion, image-to-image; # SD-XL 1.0-refiner Model Card:  ## Model: [SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion: In a first step, the… | [-0.5029902458190918, -0.7840523719787598, 0.45905566215515137, …] |
| sentence-transformers/all-mpnet-base-v2 | sentence-transformers | 2023-11-02T09:35:52Z | 9,291,921 | 492 | sentence-transformers | sentence-transformers, pytorch, mpnet, feature-extraction, sentence-similarity, en, dataset:s2orc, dataset:flax-sentence-embeddings/stackexchange_xml, dataset:ms_marco, dataset:gooaq, dataset:yahoo_answers_topics, dataset:code_search_net, dataset:search_qa, datas… | sentence-similarity | 2022-03-02T23:29:05Z | pipeline_tag: sentence-similarity; tags: sentence-transformers, feature-extraction, sentence-similarity; language: en; license: apache-2.0; datasets: s2orc, flax-sentence-embeddings/stackexchange_xml, ms_marco, gooaq, yahoo_answers_topics, code_search_net, search_qa, eli5, snli, multi_nli, wikihow, nat… | [-0.3521747589111328, -0.7242094874382019, 0.3295871913433075, …] |
| openai/clip-vit-base-patch32 | openai | 2022-10-04T09:42:04Z | 8,407,998 | 275 | transformers | transformers, pytorch, tf, jax, clip, zero-shot-image-classification, vision, arxiv:2103.00020, arxiv:1908.04913, endpoints_compatible, has_space, region:us | zero-shot-image-classification | 2022-03-02T23:29:05Z | tags: vision; widget: src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png, candidate_labels: playing music, playing sports, example_title: Cat & Dog; # Model Card: CLIP: Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found … | [-0.49621811509132385, -0.5704834461212158, 0.16633814573287964, …] |
| facebook/bart-large-cnn | facebook | 2023-11-28T09:50:47Z | 7,509,246 | 687 | transformers | transformers, pytorch, tf, jax, rust, safetensors, bart, text2text-generation, summarization, en, dataset:cnn_dailymail, arxiv:1910.13461, license:mit, model-index, autotrain_compatible, endpoints_compatible, has_space, region:us | summarization | 2022-03-02T23:29:05Z | language: en; tags: summarization; license: mit; thumbnail: https://huggingface.co/front/thumbnails/facebook.png; datasets: cnn_dailymail; model-index: name: facebook/bart-large-cnn; results: task: summarization, dataset: cnn_dailymail (cnn_dai… | [-0.42362090945243835, -0.6604821681976318, 0.3420267105102539, …] |
| microsoft/resnet-50 | microsoft | 2023-03-10T17:35:03Z | 7,442,731 | 174 | transformers | transformers, pytorch, tf, jax, resnet, image-classification, vision, dataset:imagenet-1k, arxiv:1512.03385, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us | image-classification | 2022-03-16T15:42:43Z | license: apache-2.0; tags: vision, image-classification; datasets: imagenet-1k; # ResNet-50 v1.5: ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al. Disclaimer: The team … | [-0.6228454113006592, -0.13954868912696838, -0.21387724578380585, …] |
| runwayml/stable-diffusion-v1-5 | runwayml | 2023-08-23T21:14:19Z | 7,261,639 | 9,670 | diffusers | diffusers, stable-diffusion, stable-diffusion-diffusers, text-to-image, arxiv:2207.12598, arxiv:2112.10752, arxiv:2103.00020, arxiv:2205.11487, arxiv:1910.09700, license:creativeml-openrail-m, endpoints_compatible, has_space, diffusers:StableDiffusionPipeline, re… | text-to-image | 2022-10-19T23:38:35Z | license: creativeml-openrail-m; tags: stable-diffusion, stable-diffusion-diffusers, text-to-image; inference: true; extra_gated_prompt: This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. … | [-0.37994083762168884, -0.9187833070755005, 0.4418250322341919, …] |
| tiiuae/falcon-7b-instruct | tiiuae | 2023-09-29T14:32:23Z | 7,034,797 | 755 | transformers | transformers, pytorch, coreml, falcon, text-generation, custom_code, en, dataset:tiiuae/falcon-refinedweb, arxiv:2205.14135, arxiv:1911.02150, arxiv:2005.14165, arxiv:2104.09864, arxiv:2306.01116, license:apache-2.0, endpoints_compatible, has_space, t… | text-generation | 2023-04-25T06:21:01Z | datasets: tiiuae/falcon-refinedweb; language: en; inference: true; widget: "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?" (Abu Dhabi Trip), "What's the Everett interpretation of quantum mechanics?" (Q/A: Quantum & Answers), "Giv… | [-0.4791226387023926, -0.9742989540100098, 0.07581397891044617, …] |
| cl-tohoku/bert-base-japanese | cl-tohoku | 2021-09-23T13:45:36Z | 7,006,975 | 16 | transformers | transformers, pytorch, tf, jax, bert, fill-mask, ja, dataset:wikipedia, license:cc-by-sa-4.0, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:05Z | language: ja; license: cc-by-sa-4.0; datasets: wikipedia; widget: 東北大学で[MASK]の研究をしています。; # BERT base Japanese (IPA dictionary): This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level to… | [-0.48709484934806824, -0.7303788065910339, 0.3274160325527191, …] |
| roberta-large | null | 2023-03-22T09:25:01Z | 6,875,857 | 136 | transformers | transformers, pytorch, tf, jax, onnx, safetensors, roberta, fill-mask, exbert, en, dataset:bookcorpus, dataset:wikipedia, arxiv:1907.11692, arxiv:1806.02847, license:mit, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: mit; datasets: bookcorpus, wikipedia; # RoBERTa large model: Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](htt… | [-0.17982277274131775, -0.7766240239143372, 0.24953117966651917, …] |
| cardiffnlp/twitter-roberta-base-irony | cardiffnlp | 2023-08-02T00:36:09Z | 6,796,271 | 14 | transformers | transformers, pytorch, tf, jax, roberta, text-classification, en, dataset:tweet_eval, arxiv:2010.12421, endpoints_compatible, has_space, region:us | text-classification | 2022-03-02T23:29:05Z | datasets: tweet_eval; language: en; # Twitter-roBERTa-base for Irony Detection: This is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark. This model has integrated into the [TweetNLP Python library](https://github.com/cardiffnlp/tweetnlp/). Paper: … | [0.033425141125917435, -0.6753321886062622, 0.2983172833919525, …] |
| mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis | mrm8488 | 2023-03-16T20:03:13Z | 6,550,039 | 123 | transformers | transformers, pytorch, tensorboard, safetensors, roberta, text-classification, generated_from_trainer, financial, stocks, sentiment, dataset:financial_phrasebank, license:apache-2.0, model-index, endpoints_compatible, has_space, region:us | text-classification | 2022-03-02T23:29:05Z | license: apache-2.0; tags: generated_from_trainer, financial, stocks, sentiment; widget: "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."; datasets: financial_phrasebank; metrics: accuracy; model-index: name: distilRoberta-financial-sentiment; results: task: Tex… | [-0.4302907884120941, -0.6286759972572327, -0.014874963089823723, …] |
| lxyuan/distilbert-base-multilingual-cased-sentiments-student | lxyuan | 2023-06-24T04:09:07Z | 6,548,691 | 56 | transformers | transformers, pytorch, safetensors, distilbert, text-classification, sentiment-analysis, zero-shot-distillation, distillation, zero-shot-classification, debarta-v3, en, ar, de, es, fr, ja, zh, id, hi, it, ms, pt, dataset:tyqian… | text-classification | 2023-05-05T16:22:55Z | license: apache-2.0; tags: sentiment-analysis, text-classification, zero-shot-distillation, distillation, zero-shot-classification, debarta-v3; model-index: name: distilbert-base-multilingual-cased-sentiments-student, results: []; datasets: tyqiangz/multilingual-sentiments; language: en, ar, de, es, f… | [-0.405286967754364, -0.7423002123832703, 0.23364600539207458, …] |
| SamLowe/roberta-base-go_emotions | SamLowe | 2023-10-04T10:00:58Z | 6,502,514 | 191 | transformers | transformers, pytorch, safetensors, roberta, text-classification, emotions, multi-class-classification, multi-label-classification, en, dataset:go_emotions, license:mit, endpoints_compatible, has_space, region:us | text-classification | 2022-09-15T13:04:21Z | language: en; tags: text-classification, pytorch, roberta, emotions, multi-class-classification, multi-label-classification; datasets: go_emotions; license: mit; widget: I am not having a great day.; #### Overview: Model trained from [roberta-base](https://huggingface.co/roberta-base) on the [go_em… | [-0.5685542821884155, -0.5224471092224121, 0.16980227828025818, …] |
| marieke93/MiniLM-evidence-types | marieke93 | 2022-06-11T13:32:27Z | 6,416,609 | 5 | transformers | transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, license:mit, endpoints_compatible, region:us | text-classification | 2022-06-07T14:19:25Z | license: mit; tags: generated_from_trainer; metrics: accuracy; model-index: name: MiniLM-evidence-types, results: []; <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # … | [-0.6040719747543335, -0.5923892855644226, 0.28548353910446167, …] |
| microsoft/layoutlmv3-base | microsoft | 2023-04-12T12:49:21Z | 6,021,905 | 209 | transformers | transformers, pytorch, tf, onnx, layoutlmv3, en, arxiv:2204.08387, license:cc-by-nc-sa-4.0, endpoints_compatible, has_space, region:us | null | 2022-04-18T06:53:05Z | language: en; license: cc-by-nc-sa-4.0; # LayoutLMv3: [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) \| [GitHub](https://aka.ms/layoutlmv3). ## Model description: LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The sim… | [-0.3843768835067749, -0.37922823429107666, 0.4191456437110901, …] |
| stabilityai/stable-diffusion-2-1 | stabilityai | 2023-07-05T16:19:17Z | 5,511,350 | 3,346 | diffusers | diffusers, stable-diffusion, text-to-image, arxiv:2112.10752, arxiv:2202.00512, arxiv:1910.09700, license:openrail++, endpoints_compatible, has_space, diffusers:StableDiffusionPipeline, region:us | text-to-image | 2022-12-06T17:24:51Z | license: openrail++; tags: stable-diffusion, text-to-image; pinned: true; # Stable Diffusion v2-1 Model Card: This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-1` model is fi… | [-0.3695681691169739, -0.8252284526824951, 0.3427104353904724, …] |
| sentence-transformers/all-MiniLM-L6-v2 | sentence-transformers | 2022-11-07T08:44:33Z | 5,487,445 | 1,150 | sentence-transformers | sentence-transformers, pytorch, tf, rust, bert, feature-extraction, sentence-similarity, en, dataset:s2orc, dataset:flax-sentence-embeddings/stackexchange_xml, dataset:ms_marco, dataset:gooaq, dataset:yahoo_answers_topics, dataset:code_search_net, dataset:sea… | sentence-similarity | 2022-03-02T23:29:05Z | pipeline_tag: sentence-similarity; tags: sentence-transformers, feature-extraction, sentence-similarity; language: en; license: apache-2.0; datasets: s2orc, flax-sentence-embeddings/stackexchange_xml, ms_marco, gooaq, yahoo_answers_topics, code_search_net, search_qa, eli5, snli, multi_nli, wikihow, nat… | [-0.3425259590148926, -0.8404478430747986, 0.319079726934433, …] |
| distilbert-base-multilingual-cased | null | 2023-04-06T13:40:24Z | 5,421,444 | 78 | transformers | transformers, pytorch, tf, onnx, safetensors, distilbert, fill-mask, multilingual, af, sq, ar, an, hy, ast, az, ba, eu, bar, be, bn, inc, bs, br, bg, my, ca, ceb, ce, zh, cv, hr, cs, da,… | fill-mask | 2022-03-02T23:29:04Z | language: multilingual, af, sq, ar, an, hy, ast, az, ba, eu, bar, be, bn, inc, bs, br, bg, my, ca, ceb, ce, zh, cv, hr, cs, da, nl, en, et, fi, fr, gl, ka, de, el, gu, ht, he, hi, hu, is, io, id, ga, it, ja, jv, kn, kk, ky, ko, la, lv, lt, roa, nds, lm, mk… | [-0.38927263021469116, -0.7309663891792297, 0.2625051438808441, …] |
| bert-base-cased | null | 2022-11-16T15:18:28Z | 5,326,802 | 162 | transformers | transformers, pytorch, tf, jax, safetensors, bert, fill-mask, exbert, en, dataset:bookcorpus, dataset:wikipedia, arxiv:1810.04805, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: apache-2.0; datasets: bookcorpus, wikipedia; # BERT base model (cased): Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https:… | [-0.10652753710746765, -0.6248297691345215, 0.2197728008031845, …] |
| microsoft/deberta-base | microsoft | 2022-09-26T08:50:43Z | 5,050,985 | 57 | transformers | transformers, pytorch, tf, rust, deberta, deberta-v1, fill-mask, en, arxiv:2006.03654, license:mit, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:05Z | language: en; tags: deberta-v1, fill-mask; thumbnail: https://huggingface.co/front/thumbnails/microsoft.png; license: mit; ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention: [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced m… | [-0.3490139842033386, -0.6190047860145569, 0.24900153279304504, …] |
| cardiffnlp/twitter-roberta-base-sentiment | cardiffnlp | 2023-01-20T09:52:13Z | 4,806,206 | 225 | transformers | transformers, pytorch, tf, jax, roberta, text-classification, en, dataset:tweet_eval, arxiv:2010.12421, endpoints_compatible, has_space, region:us | text-classification | 2022-03-02T23:29:05Z | datasets: tweet_eval; language: en; # Twitter-roBERTa-base for Sentiment Analysis: This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see [XLM-T](https://huggingface.co/car… | [-0.05480519309639931, -0.6883974671363831, 0.11295696347951889, …] |
| jonatasgrosman/wav2vec2-large-xlsr-53-portuguese | jonatasgrosman | 2022-12-14T01:59:47Z | 4,751,160 | 18 | transformers | transformers, pytorch, jax, wav2vec2, automatic-speech-recognition, audio, hf-asr-leaderboard, mozilla-foundation/common_voice_6_0, pt, robust-speech-event, speech, xlsr-fine-tuning-week, dataset:common_voice, dataset:mozilla-foundation/common_voice_6_0, lice… | automatic-speech-recognition | 2022-03-02T23:29:05Z | language: pt; license: apache-2.0; datasets: common_voice, mozilla-foundation/common_voice_6_0; metrics: wer, cer; tags: audio, automatic-speech-recognition, hf-asr-leaderboard, mozilla-foundation/common_voice_6_0, pt, robust-speech-event, speech, xlsr-fine-tuning-week; model-index: XLSR Wav2Vec2 … | [-0.4338238835334778, -0.6374335885047913, 0.13567115366458893, …] |
| pyannote/segmentation | pyannote | 2023-10-04T18:52:36Z | 4,256,827 | 328 | pyannote-audio | pyannote-audio, pytorch, pyannote, pyannote-audio-model, audio, voice, speech, speaker, speaker-segmentation, voice-activity-detection, overlapped-speech-detection, resegmentation, arxiv:2104.04045, license:mit, has_space, region:us | voice-activity-detection | 2022-03-02T23:29:05Z | tags: pyannote, pyannote-audio, pyannote-audio-model, audio, voice, speech, speaker, speaker-segmentation, voice-activity-detection, overlapped-speech-detection, resegmentation; license: mit; inference: false; extra_gated_prompt: "The collected information will help acquire a better knowledge of pyannote.a… | [-0.6616150736808777, -0.6818209290504456, 0.34610089659690857, …] |
| bert-base-multilingual-cased | null | 2022-11-16T23:22:54Z | 4,090,354 | 251 | transformers | transformers, pytorch, tf, jax, safetensors, bert, fill-mask, multilingual, af, sq, ar, an, hy, ast, az, ba, eu, bar, be, bn, inc, bs, br, bg, my, ca, ceb, ce, zh, cv, hr, cs, da, nl… | fill-mask | 2022-03-02T23:29:04Z | language: multilingual, af, sq, ar, an, hy, ast, az, ba, eu, bar, be, bn, inc, bs, br, bg, my, ca, ceb, ce, zh, cv, hr, cs, da, nl, en, et, fi, fr, gl, ka, de, el, gu, ht, he, hi, hu, is, io, id, ga, it, ja, jv, kn, kk, ky, ko, la, lv, lt, roa, nds, lm, mk… | [-0.3475418984889984, -0.7952669262886047, 0.16184143722057343, …] |
| trpakov/vit-face-expression | trpakov | 2022-11-09T12:56:19Z | 4,056,771 | 4 | transformers | transformers, pytorch, vit, image-classification, autotrain_compatible, endpoints_compatible, has_space, region:us | image-classification | 2022-11-09T12:50:30Z | Entry not found | [-0.3227650225162506, -0.22568431496620178, 0.862226128578186, …] |
| pyannote/speaker-diarization | pyannote | 2023-10-04T18:53:17Z | 3,790,498 | 553 | pyannote-audio | pyannote-audio, pyannote, pyannote-audio-pipeline, audio, voice, speech, speaker, speaker-diarization, speaker-change-detection, voice-activity-detection, overlapped-speech-detection, automatic-speech-recognition, dataset:ami, dataset:dihard, dataset:voxconve… | automatic-speech-recognition | 2022-03-02T23:29:05Z | tags: pyannote, pyannote-audio, pyannote-audio-pipeline, audio, voice, speech, speaker, speaker-diarization, speaker-change-detection, voice-activity-detection, overlapped-speech-detection, automatic-speech-recognition; datasets: ami, dihard, voxconverse, aishell, repere, voxceleb; license: mit; e… | [-0.6493508815765381, -0.7479451298713684, 0.10091114044189453, …] |
| pyannote/segmentation-3.0 | pyannote | 2023-10-04T18:53:59Z | 3,600,431 | 34 | pyannote-audio | pyannote-audio, pytorch, pyannote, pyannote-audio-model, audio, voice, speech, speaker, speaker-diarization, speaker-change-detection, speaker-segmentation, voice-activity-detection, overlapped-speech-detection, resegmentation, license:mit, has_space, … | voice-activity-detection | 2023-09-22T12:03:10Z | tags: pyannote, pyannote-audio, pyannote-audio-model, audio, voice, speech, speaker, speaker-diarization, speaker-change-detection, speaker-segmentation, voice-activity-detection, overlapped-speech-detection, resegmentation; license: mit; inference: false; extra_gated_prompt: "The collected information w… | [-0.3095853328704834, -0.6219159364700317, 0.20473948121070862, …] |
| pyannote/speaker-diarization-3.0 | pyannote | 2023-10-04T18:54:33Z | 3,399,557 | 118 | pyannote-audio | pyannote-audio, pyannote, pyannote-audio-pipeline, audio, voice, speech, speaker, speaker-diarization, speaker-change-detection, voice-activity-detection, overlapped-speech-detection, automatic-speech-recognition, arxiv:2111.14448, arxiv:2012.01477, license:m… | automatic-speech-recognition | 2023-09-22T13:40:36Z | tags: pyannote, pyannote-audio, pyannote-audio-pipeline, audio, voice, speech, speaker, speaker-diarization, speaker-change-detection, voice-activity-detection, overlapped-speech-detection, automatic-speech-recognition; license: mit; extra_gated_prompt: "The collected information will help acquire a bet… | [-0.6659835577011108, -0.7951610088348389, 0.11587908864021301, …] |
| camembert-base | null | 2023-05-30T14:36:19Z | 3,380,891 | 45 | transformers | transformers, pytorch, tf, safetensors, camembert, fill-mask, fr, dataset:oscar, arxiv:1911.03894, license:mit, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: fr; license: mit; datasets: oscar; # CamemBERT: a Tasty French Language Model: ## Table of Contents: [Model Details](#model-details), [Uses](#uses), [Risks, Limitations and Biases](#risks-limitations-and-biases), [Training](#training), [Evaluation](#evaluation), [Citation Information](#citation-… | [-0.2070927619934082, -0.7769386768341064, 0.266449898481369, …] |
| google/electra-base-discriminator | google | 2021-04-30T07:33:10Z | 3,343,622 | 38 | transformers | transformers, pytorch, tf, jax, rust, electra, pretraining, en, arxiv:1406.2661, license:apache-2.0, endpoints_compatible, has_space, region:us | null | 2022-03-02T23:29:05Z | language: en; thumbnail: https://huggingface.co/front/thumbnails/google.png; license: apache-2.0; ## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators: **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks usi… | [-0.46786731481552124, -0.5035800933837891, 0.15870589017868042, …] |
| allenai/longformer-base-4096 | allenai | 2023-04-05T18:24:00Z | 3,326,607 | 113 | transformers | transformers, pytorch, tf, rust, longformer, en, arxiv:2004.05150, license:apache-2.0, endpoints_compatible, has_space, region:us | null | 2022-03-02T23:29:05Z | language: en; license: apache-2.0; # longformer-base-4096: [Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. `longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint and pretrained for MLM on long documents. It supports sequences of length up to 4,0… | [-0.2357880175113678, -0.5452734231948853, 0.5648034811019897, …] |
| distilroberta-base | null | 2022-11-16T23:22:40Z | 3,272,076 | 88 | transformers | transformers, pytorch, tf, jax, rust, safetensors, roberta, fill-mask, exbert, en, dataset:openwebtext, arxiv:1910.01108, arxiv:1910.09700, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: en; tags: exbert; license: apache-2.0; datasets: openwebtext; # Model Card for DistilRoBERTa base: # Table of Contents: 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluat… | [-0.23263071477413177, -0.7745702266693115, 0.24639993906021118, …] |
| facebook/mbart-large-50 | facebook | 2023-03-28T08:28:50Z | 3,260,064 | 93 | transformers | transformers, pytorch, tf, mbart, text2text-generation, mbart-50, multilingual, ar, cs, de, en, es, et, fi, fr, gu, hi, it, ja, kk, ko, lt, lv, my, ne, nl, ro, ru, si, tr, vi, zh, af, … | text2text-generation | 2022-03-02T23:29:05Z | language: multilingual, ar, cs, de, en, es, et, fi, fr, gu, hi, it, ja, kk, ko, lt, lv, my, ne, nl, ro, ru, si, tr, vi, zh, af, az, bn, fa, he, hr, id, ka, km, mk, ml, mn, mr, pl, ps, pt, sv, sw, ta, te, th, tl, uk, ur, xh, gl, sl; license: mit; tags: mbart-50… | [-0.474958211183548, -0.5132153630256653, 0.06597525626420975, …] |
| albert-base-v2 | null | 2023-05-30T07:52:10Z | 3,217,580 | 73 | transformers | transformers, pytorch, tf, jax, rust, safetensors, albert, fill-mask, en, dataset:bookcorpus, dataset:wikipedia, arxiv:1909.11942, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: en; license: apache-2.0; datasets: bookcorpus, wikipedia; # ALBERT Base v2: Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-rese… | [-0.09252004325389862, -0.507871150970459, 0.19068437814712524, …] |
| timm/resnet50.a1_in1k | timm | 2023-04-05T18:08:16Z | 3,143,968 | 9 | timm | timm, pytorch, safetensors, image-classification, arxiv:2110.00476, arxiv:1512.03385, license:apache-2.0, has_space, region:us | image-classification | 2023-04-05T18:07:45Z | tags: image-classification, timm; library_tag: timm; license: apache-2.0; # Model card for resnet50.a1_in1k: A ResNet-B image classification model. This model features: ReLU activations, single layer 7x7 convolution with pooling, 1x1 convolution shortcut downsample. Trained on ImageNet-1k in `timm` usin… | [-0.8923001289367676, -0.21555957198143005, 0.02489861659705639, …] |
| stabilityai/StableBeluga-7B | stabilityai | 2023-08-29T20:21:36Z | 3,092,291 | 124 | transformers | transformers, safetensors, llama, text-generation, en, dataset:conceptofmind/cot_submix_original, dataset:conceptofmind/flan2021_submix_original, dataset:conceptofmind/t0_submix_original, dataset:conceptofmind/niv2_submix_original, arxiv:2307.09288, arxiv:2306.02707, end… | text-generation | 2023-07-27T02:01:15Z | datasets: conceptofmind/cot_submix_original, conceptofmind/flan2021_submix_original, conceptofmind/t0_submix_original, conceptofmind/niv2_submix_original; language: en; pipeline_tag: text-generation; # Stable Beluga 7B: Use [Stable Chat (Research Preview)](https://chat.stability.ai/chat) to test Stability A… | [-0.4614315330982208, -1.0016154050827026, 0.05207406356930733, …] |
| facebook/bart-large-mnli | facebook | 2023-09-05T14:49:34Z | 2,900,740 | 754 | transformers | transformers, pytorch, jax, rust, safetensors, bart, text-classification, zero-shot-classification, dataset:multi_nli, arxiv:1910.13461, arxiv:1909.00161, license:mit, endpoints_compatible, has_space, region:us | zero-shot-classification | 2022-03-02T23:29:05Z | license: mit; thumbnail: https://huggingface.co/front/thumbnails/facebook.png; pipeline_tag: zero-shot-classification; datasets: multi_nli; # bart-large-mnli: This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/da… | [-0.37607020139694214, -0.5773875117301941, 0.3298405706882477, …] |
| nateraw/vit-age-classifier | nateraw | 2023-09-19T15:53:10Z | 2,820,991 | 62 | transformers | transformers, pytorch, vit, image-classification, dataset:fairface, doi:10.57967/hf/1259, autotrain_compatible, endpoints_compatible, has_space, region:us | image-classification | 2022-03-02T23:29:05Z | tags: image-classification, pytorch; datasets: fairface; A vision transformer finetuned to classify the age of a given person's face. Python example: import requests; from PIL import Image; from io import BytesIO; from transformers import ViTFeatureExtractor, ViTForImageClassification; # Get example image fr… | [-0.4082125127315521, -0.3418203294277191, 0.26698926091194153, …] |
| t5-small | null | 2023-06-30T02:31:26Z | 2,629,831 | 174 | transformers | transformers, pytorch, tf, jax, rust, onnx, safetensors, t5, text2text-generation, summarization, translation, en, fr, ro, de, multilingual, dataset:c4, arxiv:1805.12471, arxiv:1708.00055, arxiv:1704.05426, arxiv:1606.05250, arxiv:… | translation | 2022-03-02T23:29:04Z | language: en, fr, ro, de, multilingual; license: apache-2.0; tags: summarization, translation; datasets: c4; # Model Card for T5 Small: . This is one of the smaller … | [-0.43266937136650085, -0.569463849067688, 0.46697044372558594, …] |
| openai/clip-vit-base-patch16 | openai | 2022-10-04T09:42:28Z | 2,581,161 | 46 | transformers | transformers, pytorch, jax, clip, zero-shot-image-classification, vision, arxiv:2103.00020, arxiv:1908.04913, endpoints_compatible, has_space, region:us | zero-shot-image-classification | 2022-03-02T23:29:05Z | tags: vision; widget: src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png, candidate_labels: playing music, playing sports, example_title: Cat & Dog; # Model Card: CLIP: Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [h… | [-0.49865058064460754, -0.5712653994560242, 0.1651592254638672, …] |
| Intel/dpt-large | Intel | 2023-11-13T16:32:34Z | 2,375,031 | 115 | transformers | transformers, pytorch, dpt, depth-estimation, vision, arxiv:2103.13413, license:apache-2.0, model-index, endpoints_compatible, has_space, region:us | depth-estimation | 2022-03-02T23:29:05Z | license: apache-2.0; tags: vision, depth-estimation; widget: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg (Tiger), https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg (Teapot), https://huggingface.co/datasets/m… | [-0.7237185835838318, -0.6006385087966919, 0.17728391289710999, …] |
| nlpconnect/vit-gpt2-image-captioning | nlpconnect | 2023-02-27T15:00:09Z | 2,338,112 | 608 | transformers | transformers, pytorch, vision-encoder-decoder, image-to-text, image-captioning, doi:10.57967/hf/0222, license:apache-2.0, endpoints_compatible, has_space, region:us | image-to-text | 2022-03-02T23:29:05Z | tags: image-to-text, image-captioning; license: apache-2.0; widget: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg (Savanna), https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg (Football Match), https:… | [-0.2498638778924942, -0.4316709339618683, 0.13126741349697113, …] |
| bert-base-chinese | null | 2023-03-21T17:15:55Z | 2,265,659 | 630 | transformers | transformers, pytorch, tf, jax, safetensors, bert, fill-mask, zh, arxiv:1810.04805, autotrain_compatible, endpoints_compatible, has_space, region:us | fill-mask | 2022-03-02T23:29:04Z | language: zh; # Bert-base-chinese: ## Table of Contents: [Model Details](#model-details), [Uses](#uses), [Risks, Limitations and Biases](#risks-limitations-and-biases), [Training](#training), [Evaluation](#evaluation), [How to Get Started With the Model](#how-to-get-started-with-the-model). ## Model Deta… | [-0.20674894750118256, -0.7278012037277222, 0.009171775542199612, …] |
| cmarkea/distilcamembert-base-ner | cmarkea | 2023-08-01T10:05:12Z | 2,168,725 | 16 | transformers | transformers, pytorch, tf, onnx, safetensors, camembert, token-classification, fr, dataset:Jean-Baptiste/wikiner_fr, license:mit, autotrain_compatible, endpoints_compatible, region:us | token-classification | 2022-03-02T23:29:05Z | language: fr; license: mit; datasets: Jean-Baptiste/wikiner_fr; widget: "Boulanger, habitant à Boulanger et travaillant dans le magasin Boulanger situé dans la ville de Boulanger. Boulanger a écrit le livre éponyme Boulanger édité par la maison d'édition Boulanger.", "Quentin Jerome Tarantino naît le … | [-0.566847562789917, -0.6617075204849243, 0.3628891706466675, …] |
| t5-base | null | 2023-04-06T13:42:36Z | 2,122,157 | 381 | transformers | transformers, pytorch, tf, jax, rust, safetensors, t5, text2text-generation, summarization, translation, en, fr, ro, de, dataset:c4, arxiv:1805.12471, arxiv:1708.00055, arxiv:1704.05426, arxiv:1606.05250, arxiv:1808.09121, arxiv:1810.1… | translation | 2022-03-02T23:29:04Z | language: en, fr, ro, de; datasets: c4; tags: summarization, translation; license: apache-2.0; # Model Card for T5 Base:  pko-t5 는 한국어 전용 데이터로 학습한 [t5 v1.1 모델](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md)입니다. 한국어를 tokenize 하기 위해서 senten… | [-0.603765606880188, -0.26935088634490967, 0.3677661120891571, …] |
| CompVis/stable-diffusion-safety-checker | CompVis | 2022-11-25T17:21:38Z | 2,017,585 | 80 | transformers | transformers, pytorch, clip, arxiv:2103.00020, arxiv:1910.09700, endpoints_compatible, has_space, region:us | null | 2022-08-22T10:22:34Z | tags: clip; # Model Card for stable-diffusion-safety-checker: # Model Details: ## Model Description: More information needed. Developed by: More information needed. Shared by [Optional]: CompVis. Model type: Image Identification. Language(s) (NLP): More information needed. License… | [-0.4237004220485687, -0.7081676125526428, 0.2338799089193344, …] |
| timm/resnet18.a1_in1k | timm | 2023-04-05T18:03:00Z | 1,995,085 | 4 | timm | timm, pytorch, safetensors, image-classification, arxiv:2110.00476, arxiv:1512.03385, license:apache-2.0, region:us | image-classification | 2023-04-05T18:02:50Z | tags: image-classification, timm; library_tag: timm; license: apache-2.0; # Model card for resnet18.a1_in1k: A ResNet-B image classification model. This model features: ReLU activations, single layer 7x7 convolution with pooling, 1x1 convolution shortcut downsample. Trained on ImageNet-1k in `timm` usin… | [-0.8981196284294128, -0.23639924824237823, 0.025744814425706863, …] |
| nlptown/bert-base-multilingual-uncased-sentiment | nlptown | 2023-07-27T18:14:29Z | 1,963,455 | 208 | transformers | transformers, pytorch, tf, jax, bert, text-classification, en, nl, de, fr, it, es, license:mit, endpoints_compatible, has_space, region:us | text-classification | 2022-03-02T23:29:05Z | language: en, nl, de, fr, it, es; license: mit; # bert-base-multilingual-uncased-sentiment: This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish, and Italian. It predicts the sentiment of the review as… | [-0.6470620036125183, -0.5991786122322083, 0.24653929471969604, …] |
| facebook/contriever | facebook | 2022-01-19T17:23:28Z | 1,930,363 | 33 | transformers | transformers, pytorch, bert, arxiv:2112.09118, endpoints_compatible, has_space, region:us | null | 2022-03-02T23:29:05Z | This model has been trained without supervision following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available here https://github.com/facebookresearch/contriever. ## Usage (HuggingFace Tr… | [-0.13535311818122864, -0.6261586546897888, 0.32140347361564636, …] |
| YituTech/conv-bert-base | YituTech | 2021-02-24T11:26:14Z | 1,875,685 | 5 | transformers | transformers, pytorch, tf, convbert, feature-extraction, endpoints_compatible, has_space, region:us | feature-extraction | 2022-03-02T23:29:05Z | Entry not found | [-0.3227650225162506, -0.22568431496620178, 0.862226128578186, …] |
| google/fnet-base | google | 2021-10-31T07:33:21Z | 1,855,293 | 13 | transformers | transformers, pytorch, rust, fnet, pretraining, en, dataset:c4, arxiv:2105.03824, license:apache-2.0, endpoints_compatible, region:us | null | 2022-03-02T23:29:05Z | language: en; tags: fnet; license: apache-2.0; datasets: c4; # FNet base model: Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in [this paper](https://arxiv.org/abs/2105.03824) and first released in [this repository](… | [-0.4886886179447174, -0.8374145030975342, -0.0520111545920372, …] |
-0.3271211087703705,
-0.6824376583099365,
0.5576075911521912,
0.1503290981054306,
-0.6475221514701843,
-0.23556941747665405,
-0.5770143270492554,
-0.01530788466334... |
openai/clip-vit-large-patch14-336 | openai | 2022-10-04T09:41:39Z | 1,841,237 | 66 | transformers | [
"transformers",
"pytorch",
"tf",
"clip",
"zero-shot-image-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-image-classification | 2022-04-22T14:57:43Z | ---
tags:
- generated_from_keras_callback
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
model-index:
- name: clip-vit-large-patch14-336
results: []
---
<!-- This model card has been gener... | [
-0.5086750388145447,
-0.5448575615882874,
0.43512871861457825,
0.0329984575510025,
-0.5834593176841736,
-0.41330331563949585,
0.0019670112524181604,
-0.31098926067352295,
0.1490759551525116,
0.4989697337150574,
-0.6617173552513123,
-0.4851478934288025,
-0.8483626246452332,
-0.2391672879457... |
google/vit-base-patch16-224 | google | 2023-09-05T15:27:12Z | 1,828,301 | 384 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"vit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teap... | [
-0.597277045249939,
-0.1520426720380783,
-0.014423793181777,
-0.0765932947397232,
-0.3712384104728699,
-0.15664531290531158,
-0.05828433111310005,
-0.5923717617988586,
0.15120747685432434,
0.47804486751556396,
-0.3025684654712677,
-0.24577678740024567,
-0.7233230471611023,
-0.0668680444359... |
nvidia/speakerverification_en_titanet_large | nvidia | 2023-11-14T16:58:18Z | 1,790,918 | 37 | nemo | [
"nemo",
"speaker",
"speech",
"audio",
"speaker-verification",
"speaker-recognition",
"speaker-diarization",
"titanet",
"NeMo",
"pytorch",
"en",
"dataset:VOXCELEB-1",
"dataset:VOXCELEB-2",
"dataset:FISHER",
"dataset:switchboard",
"dataset:librispeech_asr",
"dataset:SRE",
"license:cc... | null | 2022-07-15T00:26:00Z | ---
language:
- en
library_name: nemo
datasets:
- VOXCELEB-1
- VOXCELEB-2
- FISHER
- switchboard
- librispeech_asr
- SRE
thumbnail: null
tags:
- speaker
- speech
- audio
- speaker-verification
- speaker-recognition
- speaker-diarization
- titanet
- NeMo
- pytorch
license: cc-by-4.0
widget:
- src: https://huggingface.co... | [
-0.5861972570419312,
-0.8936721682548523,
0.06860203295946121,
-0.06277909129858017,
-0.1476210355758667,
-0.17160621285438538,
-0.24887864291667938,
-0.3966623544692993,
0.22572335600852966,
0.2711067497730255,
-0.4686437249183655,
-0.47664156556129456,
-0.5080291032791138,
-0.06838804483... |
ckiplab/bert-base-chinese-ner | ckiplab | 2022-05-10T03:28:12Z | 1,774,062 | 61 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segment... | [
-0.3094051480293274,
-0.37639573216438293,
0.01640866883099079,
0.7887858748435974,
-0.41035550832748413,
0.05451396480202675,
-0.19907128810882568,
-0.26700690388679504,
-0.0395166277885437,
0.4669400155544281,
-0.37522950768470764,
-0.30250468850135803,
-0.6235068440437317,
0.02518985979... |
facebook/esm2_t12_35M_UR50D | facebook | 2023-03-21T15:04:57Z | 1,763,510 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"esm",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | 2022-09-27T14:30:05Z | ---
license: mit
widget:
- text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
---
## ESM-2
ESM-2 is a state-of-the-art protein model trained on a masked language modelling objective. It is suitable for fine-tuning on a wide range of tasks that take protein sequences as input. F... | [
-0.43286916613578796,
-0.5948188304901123,
0.3451700806617737,
0.25143834948539734,
-0.21601180732250214,
0.07263188809156418,
0.14439348876476288,
-0.5148124098777771,
0.26199525594711304,
0.41369765996932983,
-0.823645830154419,
-0.5298078060150146,
-0.9321447610855103,
0.082856029272079... |
vinai/bertweet-base | vinai | 2022-10-22T08:52:39Z | 1,763,160 | 21 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | # <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets
BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure.... | [
-0.5186978578567505,
-0.6306571364402771,
0.29735836386680603,
0.2943166196346283,
-0.4795588552951813,
0.030664963647723198,
-0.3726380467414856,
-0.5504613518714905,
0.5431251525878906,
0.15793870389461517,
-0.5621097683906555,
-0.6429836750030518,
-0.6499027609825134,
-0.199736759066581... |
ProsusAI/finbert | ProsusAI | 2023-05-23T12:43:35Z | 1,720,184 | 391 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"arxiv:1908.10063",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
widget:
- text: "Stocks rallied and the British pound gained."
---
FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financi... | [
-0.632326602935791,
-0.6054813861846924,
0.10096311569213867,
0.258531391620636,
-0.5443767309188843,
0.07644841074943542,
-0.11298477649688721,
-0.3709738254547119,
0.3733147978782654,
0.8145055770874023,
-0.7346611618995667,
-0.78035569190979,
-0.44957247376441956,
-0.13567088544368744,
... |
cardiffnlp/twitter-roberta-base-sentiment-latest | cardiffnlp | 2023-05-28T05:45:10Z | 1,632,960 | 268 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-15T01:21:58Z | ---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The ... | [
-0.17200317978858948,
-0.7293568849563599,
0.2638394236564636,
0.4035067856311798,
-0.2594057023525238,
0.23109057545661926,
-0.3279559314250946,
-0.3370872437953949,
0.23672634363174438,
0.01346618216484785,
-0.5867047905921936,
-0.8321694135665894,
-0.6822740435600281,
0.0473787114024162... |
Kyle1668/boss-sentiment-t5-large | Kyle1668 | 2023-08-09T17:50:47Z | 1,613,295 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2023-08-08T16:33:42Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
indobenchmark/indobert-base-p1 | indobenchmark | 2021-05-19T20:22:23Z | 1,483,227 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"has_space",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT Base Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a ma... | [
-0.40831658244132996,
-0.5132914185523987,
0.10795985162258148,
0.5591232776641846,
-0.5080158710479736,
-0.3135017454624176,
-0.5402799248695374,
-0.35649001598358154,
0.2509886920452118,
0.544610321521759,
-0.39392533898353577,
-0.43498244881629944,
-0.7309859395027161,
0.335668534040451... |
davidkim205/komt-mistral-7b-v1 | davidkim205 | 2023-10-24T04:41:07Z | 1,450,870 | 5 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"finetuned",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-24T01:58:52Z | ---
language:
- en
- ko
pipeline_tag: text-generation
tags:
- finetuned
---
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large languag... | [
-0.5289522409439087,
-0.6946714520454407,
0.3030605912208557,
0.3562195301055908,
-0.39383119344711304,
0.0818999707698822,
-0.03390306234359741,
-0.2856239974498749,
0.3462279736995697,
0.2994280457496643,
-0.5225958824157715,
-0.60051429271698,
-0.6821227073669434,
0.049837708473205566,
... |
rsvp-ai/bertserini-bert-base-squad | rsvp-ai | 2022-06-23T14:13:40Z | 1,427,127 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
timm/efficientnet_b0.ra_in1k | timm | 2023-04-27T21:09:50Z | 1,421,528 | 2 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1905.11946",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-12T23:52:52Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientnet_b0.ra_in1k
A EfficientNet image classification model. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA` recipe. Inspired by... | [
-0.39059004187583923,
-0.5088057518005371,
-0.11751069128513336,
0.06030486896634102,
-0.21740694344043732,
-0.47933852672576904,
-0.3147055506706238,
-0.3762834966182709,
0.2926810085773468,
0.43902572989463806,
-0.45833879709243774,
-0.5898841619491577,
-0.7615395188331604,
-0.1091977655... |
alexandrainst/scandi-nli-large | alexandrainst | 2023-09-20T11:55:47Z | 1,355,403 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"da",
"no",
"nb",
"sv",
"dataset:strombergnlp/danfever",
"dataset:KBLab/overlim",
"dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7",
"license:apache-2.0",
"endpoints_compatible",
"ha... | zero-shot-classification | 2022-11-28T07:05:27Z | ---
pipeline_tag: zero-shot-classification
language:
- da
- 'no'
- nb
- sv
license: apache-2.0
datasets:
- strombergnlp/danfever
- KBLab/overlim
- MoritzLaurer/multilingual-NLI-26lang-2mil7
widget:
- example_title: Danish
text: >-
Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke
finder d... | [
-0.6841660737991333,
-0.41172879934310913,
0.20555807650089264,
0.3014991581439972,
-0.29256460070610046,
-0.1302579641342163,
-0.2678139805793762,
-0.6229797005653381,
0.8457328081130981,
0.008774171583354473,
-0.6788859367370605,
-0.8692259192466736,
-0.5754921436309814,
0.27649191021919... |
Kyle1668/boss-toxicity-t5-large | Kyle1668 | 2023-09-23T00:03:08Z | 1,286,682 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2023-08-08T16:40:42Z | Entry not found | [
-0.3227650225162506,
-0.22568431496620178,
0.862226128578186,
0.43461495637893677,
-0.5282987952232361,
0.7012965679168701,
0.7915717363357544,
0.07618638128042221,
0.7746025919914246,
0.2563219666481018,
-0.7852817177772522,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429,
... |
microsoft/table-transformer-detection | microsoft | 2023-09-06T14:49:09Z | 1,278,488 | 98 | transformers | [
"transformers",
"pytorch",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:2110.00061",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | object-detection | 2022-10-14T09:14:13Z | ---
license: mit
widget:
- src: https://www.invoicesimple.com/wp-content/uploads/2018/06/Sample-Invoice-printable.png
example_title: Invoice
---
# Table Transformer (fine-tuned for Table Detection)
Table Transformer (DETR) model trained on PubTables1M. It was introduced in the paper [PubTables-1M: Towards Comprehe... | [
-0.5323225259780884,
-0.5682713389396667,
0.2637466490268707,
-0.3004693388938904,
-0.3316004276275635,
-0.1775510460138321,
0.45144280791282654,
-0.422410249710083,
0.011674889363348484,
0.6638921499252319,
-0.7020613551139832,
-0.46261557936668396,
-0.6199028491973877,
0.1560775935649871... |
distilbert-base-uncased-distilled-squad | null | 2023-04-06T13:40:56Z | 1,269,799 | 73 | transformers | [
"transformers",
"pytorch",
"tf",
"tflite",
"coreml",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
language: en
datasets:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also know... | [
-0.35065045952796936,
-0.8725157976150513,
0.22245916724205017,
0.14677977561950684,
-0.10524468123912811,
0.19723621010780334,
-0.19225279986858368,
-0.28156936168670654,
-0.06112495809793472,
0.1462583988904953,
-0.7896097898483276,
-0.27438434958457947,
-0.7389736771583557,
0.1122565716... |
shibing624/text2vec-base-chinese | shibing624 | 2023-08-28T08:58:03Z | 1,250,320 | 494 | transformers | [
"transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"text2vec",
"sentence-similarity",
"zh",
"dataset:shibing624/nli_zh",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
datasets:
- shibing624/nli_zh
language:
- zh
metrics:
- spearmanr
library_name: transformers
---
# shibing624/text2vec-base-chinese
This is a CoSENT(Cosine Sentence) model: shibing624/tex... | [
-0.09042555093765259,
-0.786501944065094,
0.31972551345825195,
0.4172061085700989,
-0.31662964820861816,
-0.47093871235847473,
-0.2835773229598999,
-0.18274708092212677,
0.09688406437635422,
0.41482487320899963,
-0.42954006791114807,
-0.5778605937957764,
-0.575556755065918,
0.1114528924226... |
bigscience/bloom-560m | bigscience | 2023-09-26T09:16:49Z | 1,241,498 | 285 | transformers | [
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",... | text-generation | 2022-05-19T11:51:24Z | ---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-gener... | [
-0.2479514479637146,
-0.5836013555526733,
0.4350180923938751,
0.27048131823539734,
-0.12169936299324036,
-0.24183975160121918,
-0.5098379850387573,
-0.5694324970245361,
0.07920429110527039,
0.5146728157997131,
-0.444816529750824,
-0.6907485127449036,
-0.6505861282348633,
0.0470875389873981... |
csebuetnlp/banglabert | csebuetnlp | 2022-12-23T18:49:36Z | 1,207,089 | 12 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"bn",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- bn
licenses:
- cc-by-nc-sa-4.0
---
# BanglaBERT
This repository contains the pretrained discriminator checkpoint of the model **BanglaBERT**. This is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) discriminator model pretrained with the Replaced Token Detection (RTD) objective. Finetuned mode... | [
-0.45374587178230286,
-0.769209086894989,
-0.07506414502859116,
0.41757792234420776,
-0.2560822665691376,
0.05141713097691536,
-0.4727419912815094,
-0.42056113481521606,
0.19191104173660278,
0.14498668909072876,
-0.4285649061203003,
-0.5767320990562439,
-0.5864207148551941,
0.2183058112859... |
laion/CLIP-ViT-B-32-laion2B-s34B-b79K | laion | 2023-04-18T06:49:43Z | 1,204,033 | 49 | open_clip | [
"open_clip",
"pytorch",
"clip",
"zero-shot-image-classification",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | 2022-09-14T22:49:28Z | ---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
pipeline_tag: zero-shot-image-classification
---
# Model Card for CLIP ViT-B/32 - LAION-2B
# Table of Contents
1. [Mo... | [
-0.2775880992412567,
-0.5689245462417603,
0.17562779784202576,
0.06699049472808838,
-0.3972177505493164,
-0.4110003411769867,
-0.16550928354263306,
-0.6360536217689514,
0.023332374170422554,
0.41557443141937256,
-0.4149303734302521,
-0.567352831363678,
-0.5902985334396362,
-0.1487317234277... |
jonatasgrosman/wav2vec2-large-xlsr-53-russian | jonatasgrosman | 2022-12-14T01:58:43Z | 1,184,530 | 33 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"ru",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: ru
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- ru
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 ... | [
-0.49102237820625305,
-0.5980230569839478,
0.20731595158576965,
0.1410287618637085,
-0.3018975555896759,
-0.11786732822656631,
-0.24717485904693604,
-0.4116494953632355,
0.30818966031074524,
0.15552197396755219,
-0.5913642048835754,
-0.541431725025177,
-0.4035887122154236,
-0.1142936795949... |
laion/CLIP-ViT-H-14-laion2B-s32B-b79K | laion | 2023-04-18T17:45:56Z | 1,182,059 | 193 | open_clip | [
"open_clip",
"pytorch",
"clip",
"zero-shot-image-classification",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | 2022-09-14T22:52:28Z | ---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model Card for CLIP ViT-H/14 - LAION-2B
# T... | [
-0.2912047803401947,
-0.5678616166114807,
0.20812733471393585,
0.02528180554509163,
-0.3768022656440735,
-0.43453097343444824,
-0.19491781294345856,
-0.6510435938835144,
-0.007727808319032192,
0.43786609172821045,
-0.4258979856967926,
-0.5864419937133789,
-0.5891714096069336,
-0.0757661759... |
martin-ha/toxic-comment-model | martin-ha | 2022-05-06T02:24:31Z | 1,172,236 | 32 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: en
---
## Model description
This model is a fine-tuned version of the [DistilBERT model](https://huggingface.co/transformers/model_doc/distilbert.html) to classify toxic comments.
## How to use
You can use the model with the following code.
```python
from transformers import AutoModelForSequenceClass... | [
-0.3922950327396393,
-0.4740905165672302,
0.18749642372131348,
0.1335316151380539,
-0.15836480259895325,
-0.10166865587234497,
0.03602038696408272,
-0.40231478214263916,
0.01824433170258999,
0.23490272462368011,
-0.5330604910850525,
-0.7474910020828247,
-0.9154424667358398,
0.1708005219697... |
tiiuae/falcon-40b-instruct | tiiuae | 2023-09-29T14:32:27Z | 1,158,762 | 1,120 | transformers | [
"transformers",
"pytorch",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generati... | text-generation | 2023-05-25T10:14:36Z | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# ✨ Falcon-40B-Instruct
**Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of ... | [
-0.5245065689086914,
-0.9391388893127441,
0.10132857412099838,
0.37688905000686646,
-0.0656701847910881,
-0.07384902983903885,
-0.19400599598884583,
-0.521453320980072,
0.1596158891916275,
0.3435472548007965,
-0.5734294056892395,
-0.44747111201286316,
-0.668843686580658,
-0.046887185424566... |
cardiffnlp/twitter-xlm-roberta-base-sentiment | cardiffnlp | 2023-07-19T20:41:38Z | 1,128,530 | 152 | transformers | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"multilingual",
"arxiv:2104.12250",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: multilingual
widget:
- text: "🤗"
- text: "T'estimo! ❤️"
- text: "I love you!"
- text: "I hate you 🤮"
- text: "Mahal kita!"
- text: "사랑해!"
- text: "난 너가 싫어"
- text: "😍😍😍"
---
# twitter-XLM-roBERTa-base for Sentiment Analysis
This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and ... | [
-0.21140699088573456,
-0.6133162379264832,
0.19658339023590088,
0.40007078647613525,
-0.20551961660385132,
0.25525936484336853,
-0.427446573972702,
-0.2090844064950943,
0.23859182000160217,
0.13396018743515015,
-0.5945858359336853,
-0.9647396802902222,
-0.721899688243866,
0.167344674468040... |
microsoft/layoutlmv2-base-uncased | microsoft | 2022-09-16T03:40:56Z | 1,115,792 | 39 | transformers | [
"transformers",
"pytorch",
"layoutlmv2",
"en",
"arxiv:2012.14740",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
license: cc-by-nc-sa-4.0
---
# LayoutLMv2
**Multimodal (text + layout/format + image) pre-training for document AI**
The documentation of this model in the Transformers library can be found [here](https://huggingface.co/docs/transformers/model_doc/layoutlmv2).
[Microsoft Document AI](https://www.mi... | [
-0.24813884496688843,
-0.5359470844268799,
0.5049474835395813,
0.2781062722206116,
-0.1691926121711731,
0.019840208813548088,
0.13748787343502045,
-0.2951689064502716,
-0.18741048872470856,
0.3231786787509918,
-0.770416796207428,
-0.47540482878685,
-0.6762189269065857,
-0.26158127188682556... |
CIDAS/clipseg-rd64-refined | CIDAS | 2023-01-04T11:56:08Z | 1,115,254 | 64 | transformers | [
"transformers",
"pytorch",
"clipseg",
"vision",
"image-segmentation",
"arxiv:2112.10003",
"license:apache-2.0",
"has_space",
"region:us"
] | image-segmentation | 2022-11-01T14:25:57Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
inference: false
---
# CLIPSeg model
CLIPSeg model with reduce dimension 64, refined (using a more complex convolution). It was introduced in the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Lüddecke et al. an... | [
-0.8298375606536865,
-0.5458164215087891,
0.6076344847679138,
0.10645419359207153,
-0.5790215730667114,
-0.39590778946876526,
0.25823232531547546,
-0.3526495099067688,
0.025415541604161263,
0.38337811827659607,
-0.9556897878646851,
-0.623732328414917,
-0.7718104720115662,
0.014494373463094... |
facebook/wav2vec2-base-960h | facebook | 2022-11-14T21:37:23Z | 1,103,109 | 175 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.... | [
-0.198020800948143,
-0.6546816229820251,
0.16972270607948303,
0.16908106207847595,
-0.17613594233989716,
-0.15908141434192657,
-0.4845995604991913,
-0.5395611524581909,
-0.05200781673192978,
0.15351442992687225,
-0.6113046407699585,
-0.5992768406867981,
-0.5833753347396851,
-0.409840762615... |
google/mt5-base | google | 2023-01-24T16:37:25Z | 1,068,223 | 115 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"mt5",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",... | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
-... | [
-0.5287659168243408,
-0.17020736634731293,
0.29125913977622986,
0.411001056432724,
-0.2939440906047821,
0.36009055376052856,
-0.38297703862190247,
-0.4471434950828552,
0.17177559435367584,
0.3612290918827057,
-0.7018294930458069,
-0.8553211092948914,
-0.9299465417861938,
0.7416996955871582... |
neuralmind/bert-base-portuguese-cased | neuralmind | 2022-06-14T14:37:09Z | 997,572 | 102 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"pt",
"dataset:brWaC",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: pt
license: mit
tags:
- bert
- pytorch
datasets:
- brWaC
---
# BERTimbau Base (aka "bert-base-portuguese-cased")

## Introduction
BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performance... | [
-0.18304821848869324,
-0.4717995524406433,
0.10968445241451263,
0.5005441308021545,
-0.513286828994751,
-0.008465152233839035,
-0.16737939417362213,
0.012110767886042595,
0.5818616151809692,
0.3750711977481842,
-0.4980522394180298,
-0.6814043521881104,
-0.7359874844551086,
-0.0541285052895... |
facebook/bart-base | facebook | 2022-11-16T23:23:10Z | 992,862 | 113 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bart",
"feature-extraction",
"en",
"arxiv:1910.13461",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language: en
---
# BART (base-sized model)
BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first... | [
-0.5895666480064392,
-1.041480541229248,
0.16506583988666534,
0.2032175362110138,
-0.3447984755039215,
-0.006318152416497469,
-0.23662404716014862,
-0.3935691714286804,
0.3891613483428955,
0.42154982686042786,
-0.5027112364768982,
-0.3990526497364044,
-0.49174830317497253,
0.33777844905853... |
deepset/roberta-base-squad2 | deepset | 2023-09-26T11:36:30Z | 977,030 | 498 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/roberta-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exac... | [
-0.409990131855011,
-0.6528233289718628,
0.41005268692970276,
0.0615079328417778,
-0.04213952273130417,
0.10646788775920868,
-0.10608834028244019,
-0.3803134262561798,
0.30146458745002747,
0.29229697585105896,
-0.8476130962371826,
-0.6700934767723083,
-0.27746444940567017,
0.08169548958539... |
dslim/bert-base-NER-uncased | dslim | 2023-05-09T16:37:36Z | 974,454 | 22 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: mit
--- | [
-0.12853388488292694,
-0.18616782128810883,
0.6529127359390259,
0.4943625330924988,
-0.19319313764572144,
0.23607465624809265,
0.36071982979774475,
0.05056332051753998,
0.5793652534484863,
0.740013837814331,
-0.6508103013038635,
-0.2378396987915039,
-0.710224986076355,
-0.04782581701874733... |