| modelId (string, 4–111 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 5–30 chars, nullable) | author (string, 2–34 chars, nullable) | config (null) | securityStatus (null) | id (string, 4–111 chars) | likes (int64, 0–9.53k) | downloads (int64, 2–73.6M) | library_name (string, 2–84 chars, nullable) | created (timestamp[us]) | card (string, 101–901k chars) | card_len (int64, 101–901k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/wav2vec2-large-xlsr-53-english | 2023-03-25T10:56:55.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_vo... | automatic-speech-recognition | jonatasgrosman | null | null | jonatasgrosman/wav2vec2-large-xlsr-53-english | 299 | 73,582,776 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- en
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 ... | 5,327 | [
[
-0.0232391357421875,
-0.048248291015625,
0.011932373046875,
0.0168914794921875,
-0.0070037841796875,
-0.018402099609375,
-0.0272369384765625,
-0.052276611328125,
0.0106964111328125,
0.0249176025390625,
-0.05072021484375,
-0.0323486328125,
-0.031036376953125,
... |
timm/mobilenetv3_large_100.ra_in1k | 2023-04-27T22:49:21.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1905.02244",
"license:apache-2.0",
"region:us",
"has_space"
] | image-classification | timm | null | null | timm/mobilenetv3_large_100.ra_in1k | 9 | 61,880,982 | timm | 2022-12-16T05:38:07 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mobilenetv3_large_100.ra_in1k
A MobileNet-v3 image classification model. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA` recipe. Inspi... | 4,793 | [
[
-0.0309295654296875,
-0.0209808349609375,
-0.004390716552734375,
0.006359100341796875,
-0.0230865478515625,
-0.0299835205078125,
-0.00531005859375,
-0.0260467529296875,
0.0279998779296875,
0.035003662109375,
-0.02734375,
-0.054229736328125,
-0.0435791015625,
... |
bert-base-uncased | 2023-06-30T01:42:19.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"coreml",
"onnx",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | bert-base-uncased | 1,182 | 52,250,055 | transformers | 2022-03-02T23:29:04 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](http... | 10,517 | [
[
-0.010284423828125,
-0.046142578125,
0.0119476318359375,
0.023162841796875,
-0.0394287109375,
0.0003082752227783203,
-0.00923919677734375,
-0.0169677734375,
0.033599853515625,
0.041656494140625,
-0.04144287109375,
-0.03338623046875,
-0.0570068359375,
0.01056... |
distilbert-base-uncased-finetuned-sst-2-english | 2023-10-26T16:14:11.000Z | [
"transformers",
"pytorch",
"tf",
"rust",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"arxiv:1910.01108",
"doi:10.57967/hf/0181",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | null | null | null | distilbert-base-uncased-finetuned-sst-2-english | 331 | 41,670,892 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- sst2
- glue
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
... | 10,458 | [
[
-0.0304412841796875,
-0.05908203125,
0.0137481689453125,
0.012725830078125,
-0.032501220703125,
-0.0002455711364746094,
-0.01410675048828125,
-0.0252838134765625,
0.007808685302734375,
0.032745361328125,
-0.04638671875,
-0.04730224609375,
-0.0693359375,
-0.0... |
openai/clip-vit-large-patch14 | 2023-09-15T15:49:35.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"clip",
"zero-shot-image-classification",
"vision",
"arxiv:2103.00020",
"arxiv:1908.04913",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-image-classification | openai | null | null | openai/clip-vit-large-patch14 | 676 | 26,212,915 | transformers | 2022-03-02T23:29:05 | ---
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# Model Card: CLIP
Disclaimer: The model card is taken and modified from the official CLIP repository; it can be found ... | 7,935 | [
[
-0.039031982421875,
-0.0443115234375,
0.0128173828125,
-0.0023288726806640625,
-0.01251983642578125,
-0.019561767578125,
0.001708984375,
-0.054962158203125,
0.0099334716796875,
0.0298919677734375,
-0.0217132568359375,
-0.03155517578125,
-0.048919677734375,
0... |
gpt2 | 2023-06-30T02:19:43.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"doi:10.57967/hf/0039",
"license:mit",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | null | null | null | gpt2 | 1,471 | 23,269,709 | transformers | 2022-03-02T23:29:04 | ---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better... | 8,090 | [
[
-0.0205841064453125,
-0.055419921875,
0.0232086181640625,
-0.0022525787353515625,
-0.019683837890625,
-0.0235137939453125,
-0.030242919921875,
-0.03985595703125,
-0.00772857666015625,
0.023651123046875,
-0.0361328125,
-0.0206756591796875,
-0.055755615234375,
... |
tiiuae/falcon-7b-instruct | 2023-09-29T14:32:23.000Z | [
"transformers",
"pytorch",
"coreml",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"t... | text-generation | tiiuae | null | null | tiiuae/falcon-7b-instruct | 710 | 15,487,847 | transformers | 2023-04-25T06:21:01 | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
widget:
- text: "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?"
example_title: "Abu Dhabi Trip"
- text: "What's the Everett interpretation of quantum mechanics?"
example_title: "Q/A: Quantum & Answers"
- text: "Giv... | 9,798 | [
[
-0.035675048828125,
-0.07257080078125,
0.005641937255859375,
0.02783203125,
-0.00731658935546875,
-0.007244110107421875,
-0.00921630859375,
-0.034698486328125,
0.01654052734375,
0.0285797119140625,
-0.0340576171875,
-0.036224365234375,
-0.056793212890625,
0.... |
xlm-roberta-base | 2023-04-07T12:46:17.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"xlm-roberta",
"fill-mask",
"exbert",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi... | fill-mask | null | null | null | xlm-roberta-base | 406 | 12,048,443 | transformers | 2022-03-02T23:29:04 | ---
tags:
- exbert
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
... | 5,238 | [
[
-0.03326416015625,
-0.056610107421875,
0.01509857177734375,
0.005535125732421875,
-0.015625,
-0.0003008842468261719,
-0.0286407470703125,
-0.029022216796875,
0.01404571533203125,
0.044036865234375,
-0.033782958984375,
-0.04351806640625,
-0.05340576171875,
0.... |
distilbert-base-uncased | 2023-08-18T14:59:41.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"distilbert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | distilbert-base-uncased | 292 | 11,014,465 | transformers | 2022-03-02T23:29:04 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the disti... | 8,577 | [
[
-0.004299163818359375,
-0.049346923828125,
0.018951416015625,
0.0210113525390625,
-0.041534423828125,
0.003765106201171875,
-0.0017490386962890625,
-0.00771331787109375,
0.0274810791015625,
0.02911376953125,
-0.0396728515625,
-0.032623291015625,
-0.0697021484375... |
sentence-transformers/all-mpnet-base-v2 | 2023-11-02T09:35:52.000Z | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"datas... | sentence-similarity | sentence-transformers | null | null | sentence-transformers/all-mpnet-base-v2 | 452 | 10,816,338 | sentence-transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- nat... | 10,571 | [
[
-0.0270233154296875,
-0.0555419921875,
0.0252685546875,
0.01505279541015625,
-0.00969696044921875,
-0.0243377685546875,
-0.0179595947265625,
-0.014678955078125,
0.02606201171875,
0.0147552490234375,
-0.03167724609375,
-0.0374755859375,
-0.05572509765625,
0.0... |
stabilityai/stable-diffusion-xl-base-1.0 | 2023-10-30T16:03:47.000Z | [
"diffusers",
"onnx",
"text-to-image",
"stable-diffusion",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | stabilityai | null | null | stabilityai/stable-diffusion-xl-base-1.0 | 3,405 | 9,097,397 | diffusers | 2023-07-25T13:25:51 | ---
license: openrail++
tags:
- text-to-image
- stable-diffusion
---
# SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the bas... | 8,661 | [
[
-0.0302581787109375,
-0.0626220703125,
0.038360595703125,
0.01006317138671875,
-0.00792694091796875,
-0.02288818359375,
-0.01019287109375,
-0.00579071044921875,
0.00980377197265625,
0.031585693359375,
-0.021881103515625,
-0.0384521484375,
-0.045867919921875,
... |
cardiffnlp/twitter-roberta-base-sentiment | 2023-01-20T09:52:13.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2010.12421",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/twitter-roberta-base-sentiment | 219 | 8,846,250 | transformers | 2022-03-02T23:29:05 | ---
datasets:
- tweet_eval
language:
- en
---
# Twitter-roBERTa-base for Sentiment Analysis
This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see [XLM-T](https://huggingface.co/car... | 3,717 | [
[
-0.004131317138671875,
-0.05206298828125,
0.00853729248046875,
0.0300140380859375,
-0.01299285888671875,
0.01293182373046875,
-0.032379150390625,
-0.01396942138671875,
0.02685546875,
0.00331878662109375,
-0.0279998779296875,
-0.0701904296875,
-0.0518798828125,
... |
openai/clip-vit-base-patch32 | 2022-10-04T09:42:04.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"clip",
"zero-shot-image-classification",
"vision",
"arxiv:2103.00020",
"arxiv:1908.04913",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-image-classification | openai | null | null | openai/clip-vit-base-patch32 | 262 | 8,682,996 | transformers | 2022-03-02T23:29:05 | ---
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# Model Card: CLIP
Disclaimer: The model card is taken and modified from the official CLIP repository; it can be found ... | 7,930 | [
[
-0.03851318359375,
-0.044281005859375,
0.012908935546875,
-0.0017910003662109375,
-0.01258087158203125,
-0.0192108154296875,
0.00249481201171875,
-0.055084228515625,
0.009490966796875,
0.0294189453125,
-0.0214080810546875,
-0.03143310546875,
-0.048919677734375,
... |
stabilityai/stable-diffusion-xl-refiner-1.0 | 2023-09-25T13:42:56.000Z | [
"diffusers",
"stable-diffusion",
"image-to-image",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"has_space",
"diffusers:StableDiffusionXLImg2ImgPipeline",
"region:us"
] | image-to-image | stabilityai | null | null | stabilityai/stable-diffusion-xl-refiner-1.0 | 1,032 | 8,432,740 | diffusers | 2023-07-26T07:38:01 | ---
license: openrail++
tags:
- stable-diffusion
- image-to-image
---
# SD-XL 1.0-refiner Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the... | 5,535 | [
[
-0.039093017578125,
-0.060943603515625,
0.035675048828125,
0.00960540771484375,
-0.0181884765625,
-0.0196990966796875,
-0.0052337646484375,
-0.0210418701171875,
0.004833221435546875,
0.032196044921875,
-0.03515625,
-0.037872314453125,
-0.0516357421875,
-0.00... |
roberta-base | 2023-03-06T15:14:53.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1907.11692",
"arxiv:1806.02847",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | roberta-base | 236 | 8,107,369 | transformers | 2022-03-02T23:29:04 | ---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---
# RoBERTa base model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com... | 9,064 | [
[
-0.0119171142578125,
-0.059051513671875,
0.0160980224609375,
-0.0010471343994140625,
-0.0270843505859375,
-0.005008697509765625,
-0.0267791748046875,
-0.0281829833984375,
0.0208282470703125,
0.0306243896484375,
-0.042816162109375,
-0.0440673828125,
-0.0680541992... |
MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli | 2023-03-20T08:28:44.000Z | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:anli",
"dataset:fever",
"arxiv:2006.03654",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-classification | MoritzLaurer | null | null | MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli | 105 | 8,058,718 | transformers | 2022-03-02T23:29:04 | ---
language:
- en
license: mit
tags:
- text-classification
- zero-shot-classification
datasets:
- multi_nli
- anli
- fever
metrics:
- accuracy
pipeline_tag: zero-shot-classification
model-index:
- name: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
results:
- task:
type: natural-language-inference
name:... | 23,366 | [
[
-0.040802001953125,
-0.04534912109375,
0.01319122314453125,
0.011016845703125,
-0.005153656005859375,
0.002948760986328125,
0.01454925537109375,
-0.027252197265625,
0.0254058837890625,
0.020172119140625,
-0.042327880859375,
-0.03302001953125,
-0.049285888671875,... |
runwayml/stable-diffusion-v1-5 | 2023-08-23T21:14:19.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"re... | text-to-image | runwayml | null | null | runwayml/stable-diffusion-v1-5 | 9,529 | 7,769,173 | diffusers | 2022-10-19T23:38:35 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. ... | 14,448 | [
[
-0.0296478271484375,
-0.0716552734375,
0.03448486328125,
0.02020263671875,
-0.018157958984375,
-0.02935791015625,
0.006397247314453125,
-0.033203125,
-0.01377105712890625,
0.033599853515625,
-0.023651123046875,
-0.042083740234375,
-0.053192138671875,
-0.0127... |
stabilityai/StableBeluga-7B | 2023-08-29T20:21:36.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:conceptofmind/cot_submix_original",
"dataset:conceptofmind/flan2021_submix_original",
"dataset:conceptofmind/t0_submix_original",
"dataset:conceptofmind/niv2_submix_original",
"arxiv:2307.09288",
"arxiv:2306.02707",
"end... | text-generation | stabilityai | null | null | stabilityai/StableBeluga-7B | 119 | 7,521,844 | transformers | 2023-07-27T02:01:15 | ---
datasets:
- conceptofmind/cot_submix_original
- conceptofmind/flan2021_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
language:
- en
pipeline_tag: text-generation
---
# Stable Beluga 7B
Use [Stable Chat (Research Preview)](https://chat.stability.ai/chat) to test Stability A... | 5,307 | [
[
-0.0335693359375,
-0.0728759765625,
0.0037937164306640625,
0.029083251953125,
-0.021453857421875,
0.00447845458984375,
-0.00992584228515625,
-0.038787841796875,
0.0010976791381835938,
0.0221710205078125,
-0.040283203125,
-0.03875732421875,
-0.0482177734375,
... |
cardiffnlp/twitter-roberta-base-irony | 2023-08-02T00:36:09.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2010.12421",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/twitter-roberta-base-irony | 13 | 7,044,288 | transformers | 2022-03-02T23:29:05 | ---
datasets:
- tweet_eval
language:
- en
---
# Twitter-roBERTa-base for Irony Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark.
This model has been integrated into the [TweetNLP Python library](https://github.com/cardiffnlp/tweetnlp/).
- Paper: ... | 3,305 | [
[
0.0025501251220703125,
-0.051666259765625,
0.0228118896484375,
0.0288238525390625,
-0.00787353515625,
0.0005745887756347656,
-0.02398681640625,
-0.021148681640625,
0.0163116455078125,
0.00397491455078125,
-0.0225830078125,
-0.0596923828125,
-0.04693603515625,
... |
SamLowe/roberta-base-go_emotions | 2023-10-04T10:00:58.000Z | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"emotions",
"multi-class-classification",
"multi-label-classification",
"en",
"dataset:go_emotions",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | SamLowe | null | null | SamLowe/roberta-base-go_emotions | 159 | 6,800,543 | transformers | 2022-09-15T13:04:21 | ---
language: en
tags:
- text-classification
- pytorch
- roberta
- emotions
- multi-class-classification
- multi-label-classification
datasets:
- go_emotions
license: mit
widget:
- text: I am not having a great day.
---
#### Overview
Model trained from [roberta-base](https://huggingface.co/roberta-base) on the [go_em... | 9,565 | [
[
-0.038543701171875,
-0.03546142578125,
0.01154327392578125,
0.0163726806640625,
-0.0005674362182617188,
0.00769805908203125,
0.006744384765625,
-0.0227813720703125,
0.04791259765625,
0.026031494140625,
-0.028289794921875,
-0.052825927734375,
-0.0595703125,
-... |
marieke93/MiniLM-evidence-types | 2022-06-11T13:32:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | marieke93 | null | null | marieke93/MiniLM-evidence-types | 3 | 6,543,049 | transformers | 2022-06-07T14:19:25 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MiniLM-evidence-types
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ... | 3,742 | [
[
-0.039520263671875,
-0.038787841796875,
0.0187225341796875,
-0.0010738372802734375,
-0.0034332275390625,
-0.012451171875,
0.0037631988525390625,
-0.0081024169921875,
0.03173828125,
0.019744873046875,
-0.045654296875,
-0.050079345703125,
-0.0484619140625,
-0.... |
Ashishkr/query_wellformedness_score | 2023-10-13T16:08:29.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"text-classification",
"dataset:google_wellformed_query",
"license:apache-2.0",
"region:us"
] | text-classification | Ashishkr | null | null | Ashishkr/query_wellformedness_score | 13 | 6,540,192 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
inference: false
datasets: google_wellformed_query
---
**Model name**: Query Wellformedness Scoring
**Description** : Evaluate the well-formedness of sentences by checking grammatical correctness and completeness. Sensitive to case and penalizes sentences for incorrect grammar and case.
**Fea... | 2,209 | [
[
-0.00690460205078125,
-0.078369140625,
0.01397705078125,
0.04833984375,
-0.00553131103515625,
-0.00585174560546875,
-0.002742767333984375,
-0.0077056884765625,
0.01849365234375,
0.0201263427734375,
-0.040130615234375,
-0.0606689453125,
-0.042572021484375,
0.... |
benjamin/wtp-canine-s-1l | 2023-05-31T09:10:23.000Z | [
"transformers",
"pytorch",
"la-canine",
"token-classification",
"multilingual",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"... | token-classification | benjamin | null | null | benjamin/wtp-canine-s-1l | 0 | 6,521,575 | transformers | 2023-05-10T20:48:35 | ---
license: mit
language:
- multilingual
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ... | 551 | [
[
-0.0229034423828125,
-0.031951904296875,
0.01824951171875,
0.042236328125,
-0.032562255859375,
-0.00981903076171875,
0.020782470703125,
-0.01507568359375,
0.035614013671875,
0.032501220703125,
-0.060943603515625,
-0.0177154541015625,
-0.028472900390625,
-0.0... |
stabilityai/stable-diffusion-2-1 | 2023-07-05T16:19:17.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stabilityai | null | null | stabilityai/stable-diffusion-2-1 | 3,284 | 6,158,960 | diffusers | 2022-12-06T17:24:51 | ---
license: openrail++
tags:
- stable-diffusion
- text-to-image
pinned: true
---
# Stable Diffusion v2-1 Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-1` model is fi... | 12,209 | [
[
-0.02923583984375,
-0.0653076171875,
0.02716064453125,
0.014923095703125,
-0.0188751220703125,
-0.0278472900390625,
0.007843017578125,
-0.0301361083984375,
-0.00852203369140625,
0.0291595458984375,
-0.034027099609375,
-0.029205322265625,
-0.057952880859375,
... |
albert-base-v2 | 2023-05-30T07:52:10.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | albert-base-v2 | 72 | 6,140,346 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT Base v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-rese... | 9,719 | [
[
-0.006855010986328125,
-0.03765869140625,
0.0141448974609375,
0.025909423828125,
-0.035369873046875,
0.001735687255859375,
0.006717681884765625,
-0.01329803466796875,
0.0234832763671875,
0.045806884765625,
-0.03759765625,
-0.035125732421875,
-0.061187744140625,
... |
microsoft/deberta-base | 2022-09-26T08:50:43.000Z | [
"transformers",
"pytorch",
"tf",
"rust",
"deberta",
"deberta-v1",
"fill-mask",
"en",
"arxiv:2006.03654",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | microsoft | null | null | microsoft/deberta-base | 53 | 5,097,824 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- deberta-v1
- fill-mask
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced m... | 1,303 | [
[
-0.0241851806640625,
-0.04290771484375,
0.01727294921875,
0.03973388671875,
-0.0186767578125,
0.0206298828125,
-0.00824737548828125,
-0.04779052734375,
0.0112152099609375,
0.004024505615234375,
-0.043731689453125,
-0.02947998046875,
-0.0721435546875,
0.00465... |
distilbert-base-multilingual-cased | 2023-04-06T13:40:24.000Z | [
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",... | fill-mask | null | null | null | distilbert-base-multilingual-cased | 78 | 5,091,482 | transformers | 2022-03-02T23:29:04 | ---
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lm
- mk... | 7,316 | [
[
-0.0290985107421875,
-0.05462646484375,
0.0196075439453125,
0.0210723876953125,
-0.01438140869140625,
0.0031719207763671875,
-0.030853271484375,
-0.0269622802734375,
0.00433349609375,
0.0261383056640625,
-0.042877197265625,
-0.033599853515625,
-0.055633544921875... |
roberta-large | 2023-03-22T09:25:01.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1907.11692",
"arxiv:1806.02847",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | roberta-large | 133 | 5,073,263 | transformers | 2022-03-02T23:29:04 | ---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---
# RoBERTa large model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](htt... | 9,280 | [
[
-0.01348114013671875,
-0.058319091796875,
0.0187530517578125,
-0.0002536773681640625,
-0.0238037109375,
-0.0056610107421875,
-0.029541015625,
-0.027099609375,
0.0188140869140625,
0.033447265625,
-0.04193115234375,
-0.04119873046875,
-0.06573486328125,
0.0064... |
bert-base-cased | 2022-11-16T15:18:28.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | bert-base-cased | 157 | 5,046,520 | transformers | 2022-03-02T23:29:04 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https:... | 8,982 | [
[
-0.00791168212890625,
-0.046417236328125,
0.016326904296875,
0.017486572265625,
-0.041015625,
0.002872467041015625,
-0.0024013519287109375,
-0.01007843017578125,
0.0302276611328125,
0.036956787109375,
-0.042694091796875,
-0.03314208984375,
-0.0595703125,
0.0... |
microsoft/layoutlmv3-base | 2023-04-12T12:49:21.000Z | [
"transformers",
"pytorch",
"tf",
"onnx",
"layoutlmv3",
"en",
"arxiv:2204.08387",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | microsoft | null | null | microsoft/layoutlmv3-base | 199 | 4,904,052 | transformers | 2022-04-18T06:53:05 | ---
language: en
license: cc-by-nc-sa-4.0
---
# LayoutLMv3
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlmv3)
## Model description
LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The sim... | 1,672 | [
[
-0.0279083251953125,
-0.0275115966796875,
0.0304412841796875,
0.0307769775390625,
-0.016021728515625,
-0.01016998291015625,
0.0164794921875,
-0.01020050048828125,
-0.011383056640625,
0.038818359375,
-0.0419921875,
-0.040191650390625,
-0.037109375,
-0.0138168... |
bert-base-multilingual-cased | 2022-11-16T23:22:54.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl"... | fill-mask | null | null | null | bert-base-multilingual-cased | 241 | 4,807,995 | transformers | 2022-03-02T23:29:04 | ---
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lm
- mk... | 7,104 | [
[
-0.0262603759765625,
-0.060150146484375,
0.012237548828125,
0.025970458984375,
-0.031005859375,
0.005405426025390625,
-0.0201416015625,
-0.0233154296875,
0.02862548828125,
0.040130615234375,
-0.052001953125,
-0.0308380126953125,
-0.048553466796875,
0.0039253... |
CompVis/stable-diffusion-safety-checker | 2022-11-25T17:21:38.000Z | [
"transformers",
"pytorch",
"clip",
"arxiv:2103.00020",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null | CompVis | null | null | CompVis/stable-diffusion-safety-checker | 79 | 4,714,020 | transformers | 2022-08-22T10:22:34 | ---
tags:
- clip
---
# Model Card for stable-diffusion-safety-checker
# Model Details
## Model Description
More information needed
- **Developed by:** More information needed
- **Shared by [Optional]:** CompVis
- **Model type:** Image Identification
- **Language(s) (NLP):** More information needed
- **License... | 5,359 | [
[
-0.033172607421875,
-0.055419921875,
0.018310546875,
0.00818634033203125,
-0.0128326416015625,
-0.01904296875,
0.0013494491577148438,
-0.04119873046875,
0.0038776397705078125,
0.029571533203125,
-0.0265045166015625,
-0.04156494140625,
-0.062469482421875,
-0.... |
mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis | 2023-03-16T20:03:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"financial",
"stocks",
"sentiment",
"dataset:financial_phrasebank",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | mrm8488 | null | null | mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis | 109 | 4,468,850 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
- financial
- stocks
- sentiment
widget:
- text: "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: distilRoberta-financial-sentiment
results:
- task:
name: Tex... | 2,044 | [
[
-0.0296173095703125,
-0.043365478515625,
-0.00102996826171875,
0.028900146484375,
-0.0255279541015625,
-0.006290435791015625,
-0.006916046142578125,
-0.0010595321655273438,
0.006244659423828125,
0.01160430908203125,
-0.047821044921875,
-0.0531005859375,
-0.05895... |
lxyuan/distilbert-base-multilingual-cased-sentiments-student | 2023-06-24T04:09:07.000Z | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"sentiment-analysis",
"zero-shot-distillation",
"distillation",
"zero-shot-classification",
"debarta-v3",
"en",
"ar",
"de",
"es",
"fr",
"ja",
"zh",
"id",
"hi",
"it",
"ms",
"pt",
"dataset:tyqian... | text-classification | lxyuan | null | null | lxyuan/distilbert-base-multilingual-cased-sentiments-student | 38 | 4,349,010 | transformers | 2023-05-05T16:22:55 | ---
license: apache-2.0
tags:
- sentiment-analysis
- text-classification
- zero-shot-distillation
- distillation
- zero-shot-classification
- debarta-v3
model-index:
- name: distilbert-base-multilingual-cased-sentiments-student
results: []
datasets:
- tyqiangz/multilingual-sentiments
language:
- en
- ar
- de
- es
- f... | 5,577 | [
[
-0.0288848876953125,
-0.052886962890625,
0.0166473388671875,
0.0264892578125,
-0.01430511474609375,
0.0003046989440917969,
-0.0217437744140625,
0.0030193328857421875,
0.00989532470703125,
0.00980377197265625,
-0.033935546875,
-0.055572509765625,
-0.0523986816406... |
allenai/longformer-base-4096 | 2023-04-05T18:24:00.000Z | [
"transformers",
"pytorch",
"tf",
"rust",
"longformer",
"en",
"arxiv:2004.05150",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | allenai | null | null | allenai/longformer-base-4096 | 107 | 4,035,450 | transformers | 2022-03-02T23:29:05 | ---
language: en
license: apache-2.0
---
# longformer-base-4096
[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents.
`longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint and pretrained for MLM on long documents. It supports sequences of length up to 4,0... | 1,253 | [
[
-0.016815185546875,
-0.0389404296875,
0.040313720703125,
0.028289794921875,
0.007564544677734375,
-0.016937255859375,
-0.024322509765625,
-0.03375244140625,
0.009765625,
0.042877197265625,
-0.04742431640625,
-0.0058746337890625,
-0.045928955078125,
0.0118713... |
facebook/bart-large-cnn | 2023-10-03T04:52:04.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"arxiv:1910.13461",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | summarization | facebook | null | null | facebook/bart-large-cnn | 647 | 3,952,883 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
tags:
- summarization
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
datasets:
- cnn_dailymail
model-index:
- name: facebook/bart-large-cnn
results:
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dai... | 6,003 | [
[
-0.035247802734375,
-0.055023193359375,
0.02850341796875,
0.02752685546875,
-0.0382080078125,
-0.019073486328125,
0.00536346435546875,
-0.0229339599609375,
0.0298614501953125,
0.04620361328125,
-0.0208587646484375,
-0.0300445556640625,
-0.04248046875,
0.0327... |
cl-tohoku/bert-base-japanese | 2021-09-23T13:45:36.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | cl-tohoku | null | null | cl-tohoku/bert-base-japanese | 15 | 3,830,390 | transformers | 2022-03-02T23:29:05 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT base Japanese (IPA dictionary)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level to... | 1,743 | [
[
-0.035369873046875,
-0.052978515625,
0.0237884521484375,
0.0175018310546875,
-0.049774169921875,
-0.01739501953125,
-0.0164794921875,
-0.037017822265625,
0.034027099609375,
0.033111572265625,
-0.05194091796875,
-0.033172607421875,
-0.04608154296875,
-0.00026... |
camembert-base | 2023-05-30T14:36:19.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | camembert-base | 44 | 3,827,519 | transformers | 2022-03-02T23:29:04 | ---
language: fr
license: mit
datasets:
- oscar
---
# CamemBERT: a Tasty French Language Model
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-... | 6,969 | [
[
-0.0152130126953125,
-0.0570068359375,
0.019561767578125,
0.0211029052734375,
-0.01403045654296875,
-0.006702423095703125,
-0.0253753662109375,
-0.00923919677734375,
0.030120849609375,
0.036529541015625,
-0.03570556640625,
-0.0474853515625,
-0.052642822265625,
... |
sentence-transformers/all-MiniLM-L6-v2 | 2022-11-07T08:44:33.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"rust",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:sea... | sentence-similarity | sentence-transformers | null | null | sentence-transformers/all-MiniLM-L6-v2 | 1,078 | 3,768,075 | sentence-transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- nat... | 10,610 | [
[
-0.026092529296875,
-0.06402587890625,
0.024322509765625,
0.0079345703125,
-0.01029205322265625,
-0.021026611328125,
-0.0173187255859375,
-0.0211181640625,
0.025390625,
0.0139617919921875,
-0.037078857421875,
-0.040008544921875,
-0.048309326171875,
0.0092620... |
facebook/mbart-large-50 | 2023-03-28T08:28:50.000Z | [
"transformers",
"pytorch",
"tf",
"mbart",
"text2text-generation",
"mbart-50",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
... | text2text-generation | facebook | null | null | facebook/mbart-large-50 | 90 | 3,361,559 | transformers | 2022-03-02T23:29:05 | ---
language:
- multilingual
- ar
- cs
- de
- en
- es
- et
- fi
- fr
- gu
- hi
- it
- ja
- kk
- ko
- lt
- lv
- my
- ne
- nl
- ro
- ru
- si
- tr
- vi
- zh
- af
- az
- bn
- fa
- he
- hr
- id
- ka
- km
- mk
- ml
- mn
- mr
- pl
- ps
- pt
- sv
- sw
- ta
- te
- th
- tl
- uk
- ur
- xh
- gl
- sl
license: mit
tags:
- mbart-50... | 4,595 | [
[
-0.0345458984375,
-0.037322998046875,
0.004802703857421875,
0.0258026123046875,
-0.0286712646484375,
0.0096588134765625,
-0.021392822265625,
-0.021331787109375,
0.0192413330078125,
0.01446533203125,
-0.042755126953125,
-0.04608154296875,
-0.049560546875,
0.0... |
distilbert-base-uncased-distilled-squad | 2023-04-06T13:40:56.000Z | [
"transformers",
"pytorch",
"tf",
"tflite",
"coreml",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | null | null | null | distilbert-base-uncased-distilled-squad | 64 | 3,358,757 | transformers | 2022-03-02T23:29:04 | ---
language: en
datasets:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also know... | 10,955 | [
[
-0.0267181396484375,
-0.06646728515625,
0.0169525146484375,
0.01119232177734375,
-0.00801849365234375,
0.0150146484375,
-0.014617919921875,
-0.0214385986328125,
-0.004642486572265625,
0.01114654541015625,
-0.06011962890625,
-0.0208892822265625,
-0.05624389648437... |
google/electra-base-discriminator | 2021-04-30T07:33:10.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"electra",
"pretraining",
"en",
"arxiv:1406.2661",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | google | null | null | google/electra-base-discriminator | 37 | 3,318,249 | transformers | 2022-03-02T23:29:05 | ---
language: en
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks usi... | 2,199 | [
[
-0.034942626953125,
-0.037567138671875,
0.0118255615234375,
0.0135955810546875,
-0.017913818359375,
0.025543212890625,
-0.0184173583984375,
-0.01348114013671875,
0.0285186767578125,
0.03411865234375,
-0.0260772705078125,
-0.016448974609375,
-0.038543701171875,
... |
pyannote/segmentation | 2023-10-04T18:52:36.000Z | [
"pyannote-audio",
"pytorch",
"pyannote",
"pyannote-audio-model",
"audio",
"voice",
"speech",
"speaker",
"speaker-segmentation",
"voice-activity-detection",
"overlapped-speech-detection",
"resegmentation",
"arxiv:2104.04045",
"license:mit",
"has_space",
"region:us"
] | voice-activity-detection | pyannote | null | null | pyannote/segmentation | 309 | 3,189,524 | pyannote-audio | 2022-03-02T23:29:05 | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-segmentation
- voice-activity-detection
- overlapped-speech-detection
- resegmentation
license: mit
inference: false
extra_gated_prompt: "The collected information will help acquire a better knowledge of pyannote.a... | 5,844 | [
[
-0.048187255859375,
-0.049713134765625,
0.0252227783203125,
0.0224456787109375,
-0.0297088623046875,
-0.0223846435546875,
-0.0261077880859375,
-0.0256805419921875,
0.031951904296875,
0.033172607421875,
-0.05047607421875,
-0.048187255859375,
-0.019287109375,
... |
distilroberta-base | 2022-11-16T23:22:40.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | distilroberta-base | 86 | 3,140,871 | transformers | 2022-03-02T23:29:04 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
---
# Model Card for DistilRoBERTa base
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluat... | 7,498 | [
[
-0.0174560546875,
-0.058135986328125,
0.01849365234375,
0.0123443603515625,
-0.019500732421875,
-0.0018777847290039062,
-0.019134521484375,
-0.018829345703125,
0.0098419189453125,
0.0335693359375,
-0.043243408203125,
-0.038482666015625,
-0.0576171875,
0.0152... |
pyannote/speaker-diarization | 2023-10-04T18:53:17.000Z | [
"pyannote-audio",
"pyannote",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"dataset:ami",
"dataset:dihard",
"dataset:voxconve... | automatic-speech-recognition | pyannote | null | null | pyannote/speaker-diarization | 537 | 2,981,173 | pyannote-audio | 2022-03-02T23:29:05 | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- automatic-speech-recognition
datasets:
- ami
- dihard
- voxconverse
- aishell
- repere
- voxceleb
license: mit
e... | 11,494 | [
[
-0.046142578125,
-0.0531005859375,
0.007167816162109375,
0.038177490234375,
-0.01226806640625,
0.0023937225341796875,
-0.039459228515625,
-0.0251007080078125,
0.042388916015625,
0.0274200439453125,
-0.0272979736328125,
-0.05548095703125,
-0.0299530029296875,
... |
prajjwal1/bert-small | 2021-10-27T18:31:52.000Z | [
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | prajjwal1 | null | null | prajjwal1/bert-small | 12 | 2,935,794 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller ... | 2,572 | [
[
-0.031707763671875,
-0.041748046875,
0.034271240234375,
-0.0016889572143554688,
-0.01259613037109375,
-0.0216522216796875,
-0.02337646484375,
-0.032440185546875,
0.0079498291015625,
0.0124664306640625,
-0.054656982421875,
-0.024566650390625,
-0.038116455078125,
... |
facebook/bart-large-mnli | 2023-09-05T14:49:34.000Z | [
"transformers",
"pytorch",
"jax",
"rust",
"safetensors",
"bart",
"text-classification",
"zero-shot-classification",
"dataset:multi_nli",
"arxiv:1910.13461",
"arxiv:1909.00161",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-classification | facebook | null | null | facebook/bart-large-mnli | 726 | 2,909,045 | transformers | 2022-03-02T23:29:05 | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
---
# bart-large-mnli
This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/da... | 3,793 | [
[
-0.0284423828125,
-0.043670654296875,
0.0249481201171875,
0.01000213623046875,
-0.0018644332885742188,
-0.0100250244140625,
0.0018339157104492188,
-0.0286712646484375,
0.0241241455078125,
0.0256500244140625,
-0.05023193359375,
-0.049041748046875,
-0.032775878906... |
xlm-roberta-large | 2023-09-29T13:04:24.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"xlm-roberta",
"fill-mask",
"exbert",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi... | fill-mask | null | null | null | xlm-roberta-large | 219 | 2,582,394 | transformers | 2022-03-02T23:29:04 | ---
tags:
- exbert
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
-... | 5,241 | [
[
-0.03472900390625,
-0.057403564453125,
0.0170440673828125,
0.00618743896484375,
-0.0150299072265625,
-0.001697540283203125,
-0.03173828125,
-0.0298309326171875,
0.0155792236328125,
0.043853759765625,
-0.032318115234375,
-0.04302978515625,
-0.05377197265625,
... |
openai/clip-vit-base-patch16 | 2022-10-04T09:42:28.000Z | [
"transformers",
"pytorch",
"jax",
"clip",
"zero-shot-image-classification",
"vision",
"arxiv:2103.00020",
"arxiv:1908.04913",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-image-classification | openai | null | null | openai/clip-vit-base-patch16 | 41 | 2,560,286 | transformers | 2022-03-02T23:29:05 | ---
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# Model Card: CLIP
Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [h... | 7,916 | [
[
-0.038726806640625,
-0.04437255859375,
0.012847900390625,
-0.0020294189453125,
-0.012603759765625,
-0.0195159912109375,
0.00241851806640625,
-0.05499267578125,
0.00960540771484375,
0.029541015625,
-0.0215911865234375,
-0.0313720703125,
-0.048919677734375,
0.... |
nlpconnect/vit-gpt2-image-captioning | 2023-02-27T15:00:09.000Z | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-to-text",
"image-captioning",
"doi:10.57967/hf/0222",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | image-to-text | nlpconnect | null | null | nlpconnect/vit-gpt2-image-captioning | 591 | 2,485,193 | transformers | 2022-03-02T23:29:05 | ---
tags:
- image-to-text
- image-captioning
license: apache-2.0
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https:... | 2,702 | [
[
-0.0182647705078125,
-0.031524658203125,
0.00959014892578125,
0.0275726318359375,
-0.040985107421875,
0.00250244140625,
-0.00007170438766479492,
-0.0248870849609375,
-0.0013437271118164062,
0.029266357421875,
-0.042816162109375,
-0.0205078125,
-0.060577392578125... |
t5-small | 2023-06-30T02:31:26.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:... | translation | null | null | null | t5-small | 161 | 2,346,088 | transformers | 2022-03-02T23:29:04 | ---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
---
# Model Card for T5 Small
 and NLP tools (including word segment... | 1,123 | [
[
-0.021881103515625,
-0.0265655517578125,
0.0011539459228515625,
0.0556640625,
-0.0289306640625,
0.00384521484375,
-0.01404571533203125,
-0.018829345703125,
-0.0028018951416015625,
0.032958984375,
-0.026458740234375,
-0.0213775634765625,
-0.043975830078125,
0... |
facebook/wav2vec2-base-960h | 2022-11-14T21:37:23.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | facebook | null | null | facebook/wav2vec2-base-960h | 162 | 1,741,258 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.... | 4,431 | [
[
-0.01495361328125,
-0.049468994140625,
0.0128326416015625,
0.0127716064453125,
-0.01332855224609375,
-0.01201629638671875,
-0.036651611328125,
-0.040771484375,
-0.003925323486328125,
0.0116119384765625,
-0.04620361328125,
-0.045318603515625,
-0.0440673828125,
... |
microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext | 2023-11-06T18:03:43.000Z | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"exbert",
"en",
"arxiv:2007.15779",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | microsoft | null | null | microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext | 131 | 1,685,260 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- exbert
license: mit
widget:
- text: "[MASK] is a tumor suppressor gene."
---
## MSR BiomedBERT (abstracts + full text)
<div style="border: 2px solid orange; border-radius:10px; padding:0px 10px; width: fit-content;">
* This model was previously named **"PubMedBERT (abstracts + full text)"**.... | 2,436 | [
[
-0.0162353515625,
-0.04083251953125,
0.041656494140625,
0.008331298828125,
-0.028564453125,
0.006622314453125,
-0.017822265625,
-0.038665771484375,
0.0193634033203125,
0.0217132568359375,
-0.0308837890625,
-0.0458984375,
-0.0543212890625,
0.0233612060546875,... |
google/fnet-base | 2021-10-31T07:33:21.000Z | [
"transformers",
"pytorch",
"rust",
"fnet",
"pretraining",
"en",
"dataset:c4",
"arxiv:2105.03824",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | google | null | null | google/fnet-base | 13 | 1,611,942 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- fnet
license: apache-2.0
datasets:
- c4
---
# FNet base model
Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was
introduced in [this paper](https://arxiv.org/abs/2105.03824) and first released in [this repository](... | 12,614 | [
[
-0.03802490234375,
-0.065185546875,
-0.004058837890625,
0.023040771484375,
-0.0263519287109375,
-0.012176513671875,
-0.02545166015625,
-0.0531005859375,
0.043426513671875,
0.0117034912109375,
-0.0504150390625,
-0.0183563232421875,
-0.044921875,
-0.0011768341... |
shibing624/text2vec-base-chinese | 2023-08-28T08:58:03.000Z | [
"transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"text2vec",
"sentence-similarity",
"zh",
"dataset:shibing624/nli_zh",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | shibing624 | null | null | shibing624/text2vec-base-chinese | 473 | 1,589,826 | transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
datasets:
- shibing624/nli_zh
language:
- zh
metrics:
- spearmanr
library_name: transformers
---
# shibing624/text2vec-base-chinese
This is a CoSENT (Cosine Sentence) model: shibing624/tex...
[
-0.006488800048828125,
-0.056549072265625,
0.0229644775390625,
0.029998779296875,
-0.0227813720703125,
-0.03387451171875,
-0.0203704833984375,
-0.01314544677734375,
0.0069732666015625,
0.02984619140625,
-0.0308990478515625,
-0.041595458984375,
-0.0413818359375,
... |
timm/resnet50.a1_in1k | 2023-04-05T18:08:16.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1512.03385",
"license:apache-2.0",
"has_space",
"region:us"
] | image-classification | timm | null | null | timm/resnet50.a1_in1k | 9 | 1,577,731 | timm | 2023-04-05T18:07:45 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
---
# Model card for resnet50.a1_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` usin... | 38,406 | [
[
-0.06549072265625,
-0.0157928466796875,
0.00183868408203125,
0.02838134765625,
-0.030517578125,
-0.00846099853515625,
-0.00994873046875,
-0.02886962890625,
0.0867919921875,
0.0216217041015625,
-0.04901123046875,
-0.040283203125,
-0.04541015625,
-0.0006422996... |
nlptown/bert-base-multilingual-uncased-sentiment | 2023-07-27T18:14:29.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"nl",
"de",
"fr",
"it",
"es",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | nlptown | null | null | nlptown/bert-base-multilingual-uncased-sentiment | 196 | 1,572,780 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
- nl
- de
- fr
- it
- es
license: mit
---
# bert-base-multilingual-uncased-sentiment
This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish, and Italian. It predicts the sentiment of the review as... | 1,903 | [
[
-0.044036865234375,
-0.04083251953125,
0.01678466796875,
0.058868408203125,
-0.0267791748046875,
-0.00809478759765625,
-0.0283966064453125,
-0.047821044921875,
0.029754638671875,
0.03662109375,
-0.050933837890625,
-0.058868408203125,
-0.04205322265625,
0.003... |
bert-large-cased | 2023-04-06T13:41:58.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | bert-large-cased | 13 | 1,569,927 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/g... | 9,214 | [
[
-0.0111846923828125,
-0.046142578125,
0.0198516845703125,
0.0201873779296875,
-0.042022705078125,
0.0015964508056640625,
-0.0030803680419921875,
-0.01270294189453125,
0.031280517578125,
0.03924560546875,
-0.041778564453125,
-0.030548095703125,
-0.061279296875,
... |
openai/clip-vit-large-patch14-336 | 2022-10-04T09:41:39.000Z | [
"transformers",
"pytorch",
"tf",
"clip",
"zero-shot-image-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-image-classification | openai | null | null | openai/clip-vit-large-patch14-336 | 59 | 1,534,763 | transformers | 2022-04-22T14:57:43 | ---
tags:
- generated_from_keras_callback
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
model-index:
- name: clip-vit-large-patch14-336
results: []
---
<!-- This model card has been gener... | 1,045 | [
[
-0.038055419921875,
-0.040740966796875,
0.03253173828125,
0.0024471282958984375,
-0.04364013671875,
-0.0309600830078125,
0.00014650821685791016,
-0.02325439453125,
0.01114654541015625,
0.037322998046875,
-0.04949951171875,
-0.036285400390625,
-0.0634765625,
... |
meta-llama/Llama-2-70b-chat-hf | 2023-10-12T16:19:08.000Z | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"arxiv:2307.09288",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | meta-llama | null | null | meta-llama/Llama-2-70b-chat-hf | 1,561 | 1,531,804 | transformers | 2023-07-14T18:02:07 | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license te... | 9,960 | [
[
-0.01375579833984375,
-0.052825927734375,
0.027191162109375,
0.0132904052734375,
-0.0277862548828125,
0.0153961181640625,
-0.0003552436828613281,
-0.059814453125,
0.00212860107421875,
0.026336669921875,
-0.0491943359375,
-0.042572021484375,
-0.0504150390625,
... |
pyannote/segmentation-3.0 | 2023-10-04T18:53:59.000Z | [
"pyannote-audio",
"pytorch",
"pyannote",
"pyannote-audio-model",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"speaker-segmentation",
"voice-activity-detection",
"overlapped-speech-detection",
"resegmentation",
"license:mit",
"has_space",
... | voice-activity-detection | pyannote | null | null | pyannote/segmentation-3.0 | 25 | 1,512,995 | pyannote-audio | 2023-09-22T12:03:10 | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- speaker-segmentation
- voice-activity-detection
- overlapped-speech-detection
- resegmentation
license: mit
inference: false
extra_gated_prompt: "The collected information w... | 4,648 | [
[
-0.0228118896484375,
-0.045806884765625,
0.01507568359375,
0.0221710205078125,
-0.0389404296875,
-0.0167999267578125,
-0.03997802734375,
-0.0287017822265625,
0.0313720703125,
0.03741455078125,
-0.031585693359375,
-0.0428466796875,
-0.017242431640625,
-0.0239... |
pyannote/speaker-diarization-3.0 | 2023-10-04T18:54:33.000Z | [
"pyannote-audio",
"pyannote",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"arxiv:2111.14448",
"arxiv:2012.01477",
"license:m... | automatic-speech-recognition | pyannote | null | null | pyannote/speaker-diarization-3.0 | 91 | 1,511,968 | pyannote-audio | 2023-09-22T13:40:36 | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- automatic-speech-recognition
license: mit
extra_gated_prompt: "The collected information will help acquire a bet... | 10,655 | [
[
-0.048187255859375,
-0.05755615234375,
0.008392333984375,
0.03692626953125,
-0.01526641845703125,
0.005237579345703125,
-0.037811279296875,
-0.0221710205078125,
0.035797119140625,
0.027191162109375,
-0.029754638671875,
-0.053436279296875,
-0.0318603515625,
0... |
facebook/wav2vec2-xlsr-53-espeak-cv-ft | 2021-12-10T17:18:39.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"phoneme-recognition",
"multi-lingual",
"dataset:common_voice",
"arxiv:2109.11680",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | facebook | null | null | facebook/wav2vec2-xlsr-53-espeak-cv-ft | 16 | 1,507,688 | transformers | 2022-03-02T23:29:05 | ---
language: multi-lingual
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
- phoneme-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co... | 3,012 | [
[
-0.0119171142578125,
-0.035003662109375,
0.0123443603515625,
0.0185394287109375,
-0.0155181884765625,
0.005764007568359375,
-0.0164794921875,
-0.050445556640625,
0.01059722900390625,
0.02642822265625,
-0.054595947265625,
-0.0467529296875,
-0.041168212890625,
... |
QCRI/bert-base-multilingual-cased-pos-english | 2023-01-25T06:00:31.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"part-of-speech",
"finetuned",
"en",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | QCRI | null | null | QCRI/bert-base-multilingual-cased-pos-english | 21 | 1,504,084 | transformers | 2022-04-27T08:15:20 | ---
language:
- en
tags:
- part-of-speech
- finetuned
license: cc-by-nc-3.0
---
# BERT-base-multilingual-cased finetuned for Part-of-Speech tagging
This is a multilingual BERT model fine-tuned for part-of-speech tagging for English. It is trained using the Penn TreeBank (Marcus et al., 1993) and achieves an F1-... | 1,468 | [
[
-0.03271484375,
-0.050994873046875,
-0.00537109375,
0.0200042724609375,
-0.022430419921875,
0.01033782958984375,
-0.012481689453125,
-0.0281524658203125,
0.0166778564453125,
0.01297760009765625,
-0.034332275390625,
-0.040985107421875,
-0.035980224609375,
-0.... |
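Each record above flattens the columns described in the header (modelId, lastModified, tags, pipeline_tag, author, likes, downloads, library_name, created, card, card_len, embeddings). As a hedged illustration — not the viewer's real export format — here is a minimal parser for a simplified, pipe-delimited subset of those columns, using the ProsusAI/finbert values that appear in this dump:

```python
from dataclasses import dataclass

# Hedged sketch: a simplified pipe-delimited subset of the header's columns
# (modelId | pipeline_tag | author | likes | downloads). The real export
# interleaves many more fields; this form is illustrative only.

@dataclass
class ModelRow:
    model_id: str
    pipeline_tag: str
    author: str
    likes: int
    downloads: int

def parse_row(line: str) -> ModelRow:
    fields = [f.strip() for f in line.split(" | ")]
    model_id, pipeline_tag, author, likes, downloads = fields
    # likes/downloads are rendered with thousands separators in the dump.
    return ModelRow(model_id, pipeline_tag, author,
                    int(likes.replace(",", "")),
                    int(downloads.replace(",", "")))

row = parse_row("ProsusAI/finbert | text-classification | ProsusAI | 373 | 1,423,334")
print(row.model_id, row.downloads)
```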
indobenchmark/indobert-base-p1 | 2021-05-19T20:22:23.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"has_space",
"region:us"
] | feature-extraction | indobenchmark | null | null | indobenchmark/indobert-base-p1 | 8 | 1,496,629 | transformers | 2022-03-02T23:29:05 | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT Base Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a ma... | 2,712 | [
[
-0.0278472900390625,
-0.035003662109375,
0.00737762451171875,
0.03814697265625,
-0.034637451171875,
-0.0213775634765625,
-0.036834716796875,
-0.0243377685546875,
0.0171051025390625,
0.03717041015625,
-0.026885986328125,
-0.029693603515625,
-0.04986572265625,
... |
google/flan-t5-large | 2023-07-17T12:49:05.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dat... | text2text-generation | google | null | null | google/flan-t5-large | 299 | 1,485,728 | transformers | 2022-10-21T10:07:08 | ---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversatio... | 10,818 | [
[
-0.0343017578125,
-0.043365478515625,
0.02313232421875,
0.00015282630920410156,
-0.0069122314453125,
-0.0113525390625,
-0.03326416015625,
-0.048736572265625,
-0.00991058349609375,
0.00968170166015625,
-0.037017822265625,
-0.038421630859375,
-0.049468994140625,
... |
microsoft/layoutlm-base-uncased | 2022-12-16T16:25:46.000Z | [
"transformers",
"pytorch",
"tf",
"layoutlm",
"arxiv:1912.13318",
"endpoints_compatible",
"has_space",
"region:us"
] | null | microsoft | null | null | microsoft/layoutlm-base-uncased | 25 | 1,444,971 | transformers | 2022-03-02T23:29:05 | # LayoutLM
**Multimodal (text + layout/format + image) pre-training for document AI**
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlm)
## Model description
LayoutLM is a simple but effective pre-training method of text and layout for document ... | 1,500 | [
[
-0.01995849609375,
-0.053558349609375,
0.0418701171875,
0.0208892822265625,
-0.02032470703125,
-0.007595062255859375,
0.01529693603515625,
-0.0106048583984375,
-0.007171630859375,
0.02734375,
-0.040252685546875,
-0.046142578125,
-0.04010009765625,
-0.0221405... |
ProsusAI/finbert | 2023-05-23T12:43:35.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"arxiv:1908.10063",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | ProsusAI | null | null | ProsusAI/finbert | 373 | 1,423,334 | transformers | 2022-03-02T23:29:04 | ---
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
widget:
- text: "Stocks rallied and the British pound gained."
---
FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financi... | 1,476 | [
[
-0.0447998046875,
-0.042938232421875,
0.007137298583984375,
0.0182952880859375,
-0.03857421875,
0.00543975830078125,
-0.00799560546875,
-0.0262908935546875,
0.0264434814453125,
0.05767822265625,
-0.05206298828125,
-0.0552978515625,
-0.031829833984375,
-0.009... |
cardiffnlp/twitter-roberta-base-sentiment-latest | 2023-05-28T05:45:10.000Z | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/twitter-roberta-base-sentiment-latest | 258 | 1,386,030 | transformers | 2022-03-15T01:21:58 | ---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The ... | 4,310 | [
[
-0.01308441162109375,
-0.055389404296875,
0.02001953125,
0.0306549072265625,
-0.0196990966796875,
0.017547607421875,
-0.0249176025390625,
-0.025634765625,
0.017974853515625,
0.0010271072387695312,
-0.044525146484375,
-0.06317138671875,
-0.051788330078125,
0.... |
alexandrainst/scandi-nli-large | 2023-09-20T11:55:47.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"da",
"no",
"nb",
"sv",
"dataset:strombergnlp/danfever",
"dataset:KBLab/overlim",
"dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7",
"license:apache-2.0",
"endpoints_compatible",
"ha... | zero-shot-classification | alexandrainst | null | null | alexandrainst/scandi-nli-large | 5 | 1,356,428 | transformers | 2022-11-28T07:05:27 | ---
pipeline_tag: zero-shot-classification
language:
- da
- 'no'
- nb
- sv
license: apache-2.0
datasets:
- strombergnlp/danfever
- KBLab/overlim
- MoritzLaurer/multilingual-NLI-26lang-2mil7
widget:
- example_title: Danish
text: >-
Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke
finder d... | 9,788 | [
[
-0.048248291015625,
-0.029052734375,
0.0144805908203125,
0.0212860107421875,
-0.0206451416015625,
-0.00919342041015625,
-0.018890380859375,
-0.043975830078125,
0.059661865234375,
0.0005970001220703125,
-0.04791259765625,
-0.061370849609375,
-0.04058837890625,
... |
laion/CLIP-ViT-H-14-laion2B-s32B-b79K | 2023-04-18T17:45:56.000Z | [
"open_clip",
"pytorch",
"clip",
"zero-shot-image-classification",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | laion | null | null | laion/CLIP-ViT-H-14-laion2B-s32B-b79K | 181 | 1,340,812 | open_clip | 2022-09-14T22:52:28 | ---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model Card for CLIP ViT-H/14 - LAION-2B
# T... | 8,203 | [
[
-0.0226287841796875,
-0.044158935546875,
0.0161590576171875,
0.0019664764404296875,
-0.029296875,
-0.033721923828125,
-0.01512908935546875,
-0.050628662109375,
-0.0005974769592285156,
0.034027099609375,
-0.0330810546875,
-0.04559326171875,
-0.045806884765625,
... |
dslim/bert-base-NER | 2023-05-09T16:37:55.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"token-classification",
"en",
"dataset:conll2003",
"arxiv:1810.04805",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | dslim | null | null | dslim/bert-base-NER | 278 | 1,323,754 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- conll2003
license: mit
---
# bert-base-NER
## Model description
**bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: locat... | 4,804 | [
[
-0.036407470703125,
-0.050384521484375,
0.01511383056640625,
0.01139068603515625,
-0.027099609375,
-0.008209228515625,
-0.033782958984375,
-0.043060302734375,
0.0230865478515625,
0.0251007080078125,
-0.033477783203125,
-0.03985595703125,
-0.05511474609375,
0... |
microsoft/resnet-50 | 2023-03-10T17:35:03.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"resnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | microsoft | null | null | microsoft/resnet-50 | 168 | 1,259,553 | transformers | 2022-03-16T15:42:43 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# ResNet-50 v1.5
ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al.
Disclaimer: The team ... | 2,642 | [
[
-0.047393798828125,
-0.01062774658203125,
-0.0162811279296875,
-0.00691986083984375,
-0.021942138671875,
-0.01119232177734375,
-0.0061798095703125,
-0.052581787109375,
0.0235748291015625,
0.03277587890625,
-0.045623779296875,
-0.02362060546875,
-0.04098510742187... |
bigscience/bloom-560m | 2023-09-26T09:16:49.000Z | [
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",... | text-generation | bigscience | null | null | bigscience/bloom-560m | 274 | 1,235,084 | transformers | 2022-05-19T11:51:24 | ---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-gener... | 20,187 | [
[
-0.01873779296875,
-0.044097900390625,
0.032867431640625,
0.02044677734375,
-0.00919342041015625,
-0.018280029296875,
-0.03851318359375,
-0.04302978515625,
0.00598907470703125,
0.03887939453125,
-0.033599853515625,
-0.052215576171875,
-0.04913330078125,
0.00... |
deepset/roberta-base-squad2 | 2023-09-26T11:36:30.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | deepset | null | null | deepset/roberta-base-squad2 | 483 | 1,212,516 | transformers | 2022-03-02T23:29:05 | ---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/roberta-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exac... | 8,387 | [
[
-0.0306243896484375,
-0.04876708984375,
0.0306243896484375,
0.00460052490234375,
-0.0031681060791015625,
0.00795745849609375,
-0.007904052734375,
-0.0284271240234375,
0.0225372314453125,
0.0218505859375,
-0.0633544921875,
-0.050048828125,
-0.0207366943359375,
... |
martin-ha/toxic-comment-model | 2022-05-06T02:24:31.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | martin-ha | null | null | martin-ha/toxic-comment-model | 29 | 1,196,172 | transformers | 2022-03-02T23:29:05 | ---
language: en
---
## Model description
This model is a fine-tuned version of the [DistilBERT model](https://huggingface.co/transformers/model_doc/distilbert.html) to classify toxic comments.
## How to use
You can use the model with the following code.
```python
from transformers import AutoModelForSequenceClass... | 3,184 | [
[
-0.028076171875,
-0.033935546875,
0.013397216796875,
0.0095672607421875,
-0.01136016845703125,
-0.007282257080078125,
0.0025634765625,
-0.0287933349609375,
0.0013036727905273438,
0.016815185546875,
-0.03814697265625,
-0.053497314453125,
-0.06549072265625,
0.... |
google/flan-t5-base | 2023-07-17T12:48:39.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dat... | text2text-generation | google | null | null | google/flan-t5-base | 395 | 1,145,305 | transformers | 2022-10-21T10:02:31 | ---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geof... | 13,375 | [
[
-0.03338623046875,
-0.043670654296875,
0.021270751953125,
-0.000125885009765625,
-0.0068817138671875,
-0.0107421875,
-0.031402587890625,
-0.04705810546875,
-0.0114593505859375,
0.009124755859375,
-0.038330078125,
-0.039306640625,
-0.04925537109375,
0.0036735... |
google/vit-base-patch16-224 | 2023-09-05T15:27:12.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"vit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | google | null | null | google/vit-base-patch16-224 | 362 | 1,108,207 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teap... | 5,688 | [
[
-0.04669189453125,
-0.0118865966796875,
-0.0011205673217773438,
-0.00598907470703125,
-0.0290374755859375,
-0.01224517822265625,
-0.004547119140625,
-0.04632568359375,
0.011810302734375,
0.037384033203125,
-0.023651123046875,
-0.0192413330078125,
-0.056549072265... |
EleutherAI/pythia-1.4b | 2023-07-09T16:01:57.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"re... | text-generation | EleutherAI | null | null | EleutherAI/pythia-1.4b | 9 | 1,101,663 | transformers | 2023-02-09T14:08:20 | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70... | 13,574 | [
[
-0.02484130859375,
-0.0601806640625,
0.0250091552734375,
0.0051116943359375,
-0.01763916015625,
-0.01537322998046875,
-0.0166168212890625,
-0.0343017578125,
0.0163421630859375,
0.0118408203125,
-0.0277557373046875,
-0.0219879150390625,
-0.03173828125,
-0.004... |
WarriorMama777/OrangeMixs | 2023-06-28T10:00:13.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"dataset:Nerfgun3/bad_prompt",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | WarriorMama777 | null | null | WarriorMama777/OrangeMixs | 3,464 | 1,093,124 | diffusers | 2022-12-04T14:18:34 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
datasets: Nerfgun3/bad_prompt
---
----
# OrangeMixs
"OrangeMixs" shares various merge models that can be used with StableDiffusionWebui:Automatic1111 and others.
<img src="https://i.imgur.com/VZg0LqQ.png" width="1000" height="">
Ma... | 61,021 | [
[
-0.056610107421875,
-0.0240478515625,
0.023773193359375,
0.0185394287109375,
-0.0158538818359375,
-0.02264404296875,
0.0199432373046875,
-0.03753662109375,
0.0258636474609375,
0.04608154296875,
-0.039886474609375,
-0.04754638671875,
-0.041534423828125,
0.021... |
sentence-transformers/msmarco-distilbert-dot-v5 | 2023-11-02T09:31:39.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | sentence-transformers | null | null | sentence-transformers/msmarco-distilbert-dot-v5 | 7 | 1,072,480 | sentence-transformers | 2022-03-02T23:29:05 | ---
language:
- en
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# msmarco-distilbert-dot-v5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vect... | 6,479 | [
[
-0.0162506103515625,
-0.060455322265625,
0.03338623046875,
0.0210723876953125,
-0.0175933837890625,
-0.0265350341796875,
-0.022430419921875,
-0.006500244140625,
0.007152557373046875,
0.023468017578125,
-0.041290283203125,
-0.05023193359375,
-0.056915283203125,
... |
microsoft/tapex-base-finetuned-wikisql | 2023-01-24T16:57:17.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikisql",
"arxiv:2107.07653",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | microsoft | null | null | microsoft/tapex-base-finetuned-wikisql | 13 | 1,067,188 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikisql
license: mit
---
# TAPEX (base-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jia... | 3,149 | [
[
-0.032867431640625,
-0.0582275390625,
0.03863525390625,
-0.01161956787109375,
-0.017852783203125,
0.003314971923828125,
-0.01727294921875,
-0.00948333740234375,
0.0256805419921875,
0.041259765625,
-0.03802490234375,
-0.042327880859375,
-0.03533935546875,
-0.... |
bert-large-uncased | 2022-11-14T21:36:14.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | bert-large-uncased | 56 | 1,040,642 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com... | 8,961 | [
[
-0.00824737548828125,
-0.044677734375,
0.0166168212890625,
0.0220947265625,
-0.04248046875,
0.004222869873046875,
-0.004711151123046875,
-0.01503753662109375,
0.031158447265625,
0.0394287109375,
-0.04437255859375,
-0.031646728515625,
-0.05950927734375,
0.014... |
cardiffnlp/twitter-xlm-roberta-base-sentiment | 2023-07-19T20:41:38.000Z | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"multilingual",
"arxiv:2104.12250",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/twitter-xlm-roberta-base-sentiment | 147 | 1,038,434 | transformers | 2022-03-02T23:29:05 | ---
language: multilingual
widget:
- text: "🤗"
- text: "T'estimo! ❤️"
- text: "I love you!"
- text: "I hate you 🤮"
- text: "Mahal kita!"
- text: "사랑해!"
- text: "난 너가 싫어"
- text: "😍😍😍"
---
# twitter-XLM-roBERTa-base for Sentiment Analysis
This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and ... | 3,249 | [
[
-0.0160675048828125,
-0.046539306640625,
0.01490020751953125,
0.030364990234375,
-0.015594482421875,
0.0193634033203125,
-0.03240966796875,
-0.015869140625,
0.0180816650390625,
0.0102081298828125,
-0.045135498046875,
-0.07318115234375,
-0.054779052734375,
0.... |
nvidia/speakerverification_en_titanet_large | 2023-03-13T19:13:57.000Z | [
"nemo",
"speaker",
"speech",
"audio",
"speaker-verification",
"speaker-recognition",
"speaker-diarization",
"titanet",
"NeMo",
"pytorch",
"en",
"dataset:VOXCELEB-1",
"dataset:VOXCELEB-2",
"dataset:FISHER",
"dataset:switchboard",
"dataset:librispeech_asr",
"dataset:SRE(2004-2010)",
... | null | nvidia | null | null | nvidia/speakerverification_en_titanet_large | 35 | 1,036,656 | nemo | 2022-07-15T00:26:00 | ---
language:
- en
library_name: nemo
datasets:
- VOXCELEB-1
- VOXCELEB-2
- FISHER
- switchboard
- librispeech_asr
- SRE(2004-2010)
thumbnail: null
tags:
- speaker
- speech
- audio
- speaker-verification
- speaker-recognition
- speaker-diarization
- titanet
- NeMo
- pytorch
license: cc-by-4.0
widget:
- src: https://hu... | 8,124 | [
[
-0.0433349609375,
-0.06610107421875,
0.005069732666015625,
-0.004642486572265625,
-0.01093292236328125,
-0.01270294189453125,
-0.01837158203125,
-0.0293121337890625,
0.016693115234375,
0.0200347900390625,
-0.03466796875,
-0.035247802734375,
-0.03753662109375,
... |
pysentimiento/robertuito-sentiment-analysis | 2023-02-25T14:25:07.000Z | [
"pysentimiento",
"pytorch",
"tf",
"roberta",
"twitter",
"sentiment-analysis",
"es",
"arxiv:2106.09462",
"has_space",
"region:us"
] | null | pysentimiento | null | null | pysentimiento/robertuito-sentiment-analysis | 30 | 1,008,941 | pysentimiento | 2022-03-02T23:29:05 | ---
language:
- es
library_name: pysentimiento
tags:
- twitter
- sentiment-analysis
---
# Sentiment Analysis in Spanish
## robertuito-sentiment-analysis
Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with TASS 2020 corpus (around... | 4,583 | [
[
-0.0269317626953125,
-0.04925537109375,
0.0157928466796875,
0.0472412109375,
-0.019744873046875,
0.01517486572265625,
-0.037994384765625,
-0.0386962890625,
0.042327880859375,
0.01806640625,
-0.041107177734375,
-0.0675048828125,
-0.07086181640625,
0.015670776... |
vinai/bertweet-base | 2022-10-22T08:52:39.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | vinai | null | null | vinai/bertweet-base | 21 | 1,007,895 | transformers | 2022-03-02T23:29:05 | # <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets
BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure.... | 1,890 | [
[
-0.03759765625,
-0.045745849609375,
0.0215606689453125,
0.021331787109375,
-0.034759521484375,
0.0022335052490234375,
-0.027008056640625,
-0.039947509765625,
0.039398193359375,
0.011474609375,
-0.040771484375,
-0.046630859375,
-0.047119140625,
-0.01444244384... |
CIDAS/clipseg-rd64-refined | 2023-01-04T11:56:08.000Z | [
"transformers",
"pytorch",
"clipseg",
"vision",
"image-segmentation",
"arxiv:2112.10003",
"license:apache-2.0",
"has_space",
"region:us"
] | image-segmentation | CIDAS | null | null | CIDAS/clipseg-rd64-refined | 58 | 980,137 | transformers | 2022-11-01T14:25:57 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
inference: false
---
# CLIPSeg model
CLIPSeg model with reduced dimension 64, refined (using a more complex convolution). It was introduced in the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Lüddecke et al. an... | 595 | [
[
-0.06158447265625,
-0.04052734375,
0.045135498046875,
0.00789642333984375,
-0.04302978515625,
-0.0294342041015625,
0.019195556640625,
-0.0261688232421875,
0.0018758773803710938,
0.028472900390625,
-0.0709228515625,
-0.04632568359375,
-0.057281494140625,
0.00... |
EleutherAI/pythia-2.8b | 2023-06-09T00:35:37.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region... | text-generation | EleutherAI | null | null | EleutherAI/pythia-2.8b | 9 | 962,527 | transformers | 2023-02-13T14:37:12 | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 1... | 13,570 | [
[
-0.0238037109375,
-0.0594482421875,
0.0245819091796875,
0.00455474853515625,
-0.0186309814453125,
-0.0153961181640625,
-0.01763916015625,
-0.035491943359375,
0.01505279541015625,
0.01201629638671875,
-0.0260162353515625,
-0.020263671875,
-0.03289794921875,
-... |
laion/CLIP-ViT-B-16-laion2B-s34B-b88K | 2023-04-19T18:55:10.000Z | [
"open_clip",
"zero-shot-image-classification",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | laion | null | null | laion/CLIP-ViT-B-16-laion2B-s34B-b88K | 18 | 959,561 | open_clip | 2023-01-03T00:16:18 | ---
license: mit
pipeline_tag: zero-shot-image-classification
library_name: open_clip
---
# Model Card for CLIP ViT-B/16 - LAION-2B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6.... | 7,755 | [
[
-0.0210418701171875,
-0.0391845703125,
0.0140838623046875,
0.0042724609375,
-0.0281829833984375,
-0.034698486328125,
-0.015045166015625,
-0.053375244140625,
-0.004058837890625,
0.031646728515625,
-0.031524658203125,
-0.039459228515625,
-0.042724609375,
-0.00... |
Bingsu/clip-vit-base-patch32-ko | 2022-11-08T11:02:10.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"clip",
"zero-shot-image-classification",
"ko",
"arxiv:2004.09813",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-image-classification | Bingsu | null | null | Bingsu/clip-vit-base-patch32-ko | 3 | 956,648 | transformers | 2022-09-16T05:18:05 | ---
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: 기타치는 고양이, 피아노 치는 강아지
example_title: Guitar, cat and dog
language: ko
license: mit
---
# clip-vit-base-patch32-ko
Korean CLIP model trained by [Making Monolingual Sentence Embeddings Multilingua... | 2,625 | [
[
-0.034637451171875,
-0.050933837890625,
0.0167388916015625,
0.0254669189453125,
-0.034332275390625,
0.0036296844482421875,
-0.0171356201171875,
0.0026950836181640625,
0.0303192138671875,
0.021575927734375,
-0.0305938720703125,
-0.046051025390625,
-0.054656982421... |
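The `embeddings` column carries a dense vector per model card (truncated in this view). As a hedged, self-contained sketch, here is cosine similarity computed over just the visible prefixes of two rows above (ProsusAI/finbert and cardiffnlp/twitter-roberta-base-sentiment-latest) — illustrative only, since the full vectors are elided:

```python
import math

# Visible 8-value prefixes copied from the rows above; the full vectors are
# truncated in the dump, so this similarity is illustrative, not the true one.
finbert = [-0.0447998046875, -0.042938232421875, 0.007137298583984375,
           0.0182952880859375, -0.03857421875, 0.00543975830078125,
           -0.00799560546875, -0.0262908935546875]
twitter_roberta = [-0.01308441162109375, -0.055389404296875, 0.02001953125,
                   0.0306549072265625, -0.0196990966796875, 0.017547607421875,
                   -0.0249176025390625, -0.025634765625]

def cosine(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(round(cosine(finbert, twitter_roberta), 3))
```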