modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hashu/my-pet-dog-xzh | 2023-08-10T12:42:40.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | hashu | null | null | hashu/my-pet-dog-xzh | 0 | 811 | diffusers | 2023-08-10T12:38:49 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzh Dreambooth model trained by hashu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET527
Sample pictures of this concept:
| 389 | [
[
-0.0513916015625,
-0.0204010009765625,
0.033416748046875,
-0.005039215087890625,
-0.0217437744140625,
0.033203125,
0.02484130859375,
-0.0286102294921875,
0.03887939453125,
0.029083251953125,
-0.042755126953125,
-0.0279083251953125,
-0.0135955810546875,
-0.01... |
ai-lab/ESGify | 2023-10-31T11:48:06.000Z | [
"transformers",
"pytorch",
"mpnet",
"ESG",
"finance",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | ai-lab | null | null | ai-lab/ESGify | 4 | 811 | transformers | 2023-08-29T07:53:44 | ---
license: apache-2.0
tags:
- ESG
- finance
language:
- en
---

# About ESGify
<img src="ESGify_logo.jpeg" alt="image" width="20%" height="auto">
**ESGify** is a model for multilabel classification of news with respect to ESG risks. Our custom methodology includes 46 ESG classes plus one class for text that is not relevant to ESG, resulting in 47 classes in total:

# Usage
ESGify is based on the MPNet architecture with a custom classification head. The ESGify class is defined as follows.
```python
from collections import OrderedDict
from transformers import MPNetPreTrainedModel, MPNetModel, AutoTokenizer
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output  # model_output here is the last_hidden_state, i.e. all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Definition of the ESGify class with a custom, sentence-transformers-like mean pooling function and classifier head
class ESGify(MPNetPreTrainedModel):
    """Model for classification of ESG risks from text."""

    def __init__(self, config):  # tuning only the head
        super().__init__(config)
        # Instantiate the parts of the model
        self.mpnet = MPNetModel(config, add_pooling_layer=False)
        self.id2label = config.id2label
        self.label2id = config.label2id
        self.classifier = torch.nn.Sequential(OrderedDict([
            ('norm', torch.nn.BatchNorm1d(768)),
            ('linear', torch.nn.Linear(768, 512)),
            ('act', torch.nn.ReLU()),
            ('batch_n', torch.nn.BatchNorm1d(512)),
            ('drop_class', torch.nn.Dropout(0.2)),
            ('class_l', torch.nn.Linear(512, 47)),
        ]))

    def forward(self, input_ids, attention_mask):
        # Feed input to the MPNet encoder
        outputs = self.mpnet(input_ids=input_ids, attention_mask=attention_mask)
        # Mean pool the token embeddings and feed the result to the classifier to compute logits
        logits = self.classifier(mean_pooling(outputs['last_hidden_state'], attention_mask))
        # Apply sigmoid for multilabel probabilities
        logits = 1.0 / (1.0 + torch.exp(-logits))
        return logits
```
After defining the model class, we initialize ESGify and the tokenizer with the pre-trained weights:
```python
model = ESGify.from_pretrained('ai-lab/ESGify')
tokenizer = AutoTokenizer.from_pretrained('ai-lab/ESGify')
```
Getting results from the model:
```python
texts = ['text1','text2']
to_model = tokenizer.batch_encode_plus(
texts,
add_special_tokens=True,
max_length=512,
return_token_type_ids=False,
padding="max_length",
truncation=True,
return_attention_mask=True,
return_tensors='pt',
)
results = model(**to_model)
```
To identify top-3 classes by relevance and their scores:
```python
import numpy as np

for i in torch.topk(results, k=3).indices.tolist()[0]:
    print(f"{model.id2label[i]}: {np.round(results.flatten()[i].item(), 3)}")
```
For example, for the news "She faced employment rejection because of her gender", we get the following top-3 labels:
```
Discrimination: 0.944
Strategy Implementation: 0.82
Indigenous People: 0.499
```
Before training our model, we masked words related to the Organisation, Date, Country, and Person entity types to prevent false associations between these entities and risks. Hence, we recommend processing text with a FLAIR NER model before inference.
An example of such preprocessing is given in https://colab.research.google.com/drive/15YcTW9KPSWesZ6_L4BUayqW_omzars0l?usp=sharing.
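A minimal sketch of such entity masking (our illustration rather than the authors' notebook: the choice of the `flair/ner-english-ontonotes-large` tagger and the bracket-style placeholders are assumptions):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Hypothetical tagger choice: any Flair NER model covering ORG/PERSON/GPE/DATE entities works
ner_tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

def mask_entities(text, tags_to_mask=("ORG", "PERSON", "GPE", "DATE")):
    """Replace named entities with generic placeholders before feeding text to ESGify."""
    sentence = Sentence(text)
    ner_tagger.predict(sentence)
    for span in sentence.get_spans("ner"):
        tag = span.get_label("ner").value
        if tag in tags_to_mask:
            text = text.replace(span.text, f"[{tag}]")
    return text

# texts as in the usage example above
masked_texts = [mask_entities(t) for t in texts]
```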
# Training procedure
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model.
Next, we perform domain adaptation via masked language modeling on texts from ESG reports.
Finally, we fine-tune the model on 2,000 texts manually annotated by ESG specialists.
| 4,397 | [
[
-0.039825439453125,
-0.0491943359375,
0.01025390625,
-0.0017757415771484375,
-0.004985809326171875,
-0.0197906494140625,
-0.0154571533203125,
-0.0215301513671875,
0.00933074951171875,
0.0211334228515625,
-0.035308837890625,
-0.0460205078125,
-0.0687255859375,
... |
stablediffusionapi/abyssorangemix3a1b | 2023-09-27T03:02:24.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/abyssorangemix3a1b | 3 | 811 | diffusers | 2023-09-27T03:01:08 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# AbyssOrangeMix3A1B API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and set **model_id** to "abyssorangemix3a1b".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/abyssorangemix3a1b)
Model link: [View model](https://stablediffusionapi.com/models/abyssorangemix3a1b)
Credits: [View credits](https://civitai.com/?query=AbyssOrangeMix3A1B)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "abyssorangemix3a1b",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | 2,500 | [
[
-0.037689208984375,
-0.054443359375,
0.037261962890625,
0.023284912109375,
-0.038238525390625,
-0.0012273788452148438,
0.0291595458984375,
-0.042236328125,
0.037322998046875,
0.045806884765625,
-0.05999755859375,
-0.060302734375,
-0.0281219482421875,
-0.0001... |
timm/ViT-L-16-SigLIP-384 | 2023-10-25T21:54:17.000Z | [
"open_clip",
"clip",
"siglip",
"zero-shot-image-classification",
"dataset:webli",
"arxiv:2303.15343",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | timm | null | null | timm/ViT-L-16-SigLIP-384 | 1 | 811 | open_clip | 2023-10-16T23:32:50 | ---
tags:
- clip
- siglip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- webli
---
# Model card for ViT-L-16-SigLIP-384
A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI.
This model has been converted to PyTorch from the original JAX checkpoints in [Big Vision](https://github.com/google-research/big_vision). These weights are usable in both OpenCLIP (image + text) and timm (image only).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/google-research/big_vision
- **Dataset:** WebLI
- **Papers:**
- Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer # works on open-clip-torch>=2.23.0, timm>=0.9.8
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP-384')
tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP-384')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
### With `timm` (for image embeddings)
```python
from urllib.request import urlopen
from PIL import Image
import timm
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_patch16_siglip_384',
pretrained=True,
num_classes=0,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(image).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@article{zhai2023sigmoid,
title={Sigmoid loss for language image pre-training},
author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
journal={arXiv preprint arXiv:2303.15343},
year={2023}
}
```
```bibtex
@misc{big_vision,
author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
title = {Big Vision},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/google-research/big_vision}}
}
```
| 3,162 | [
[
-0.0299072265625,
-0.0372314453125,
0.015869140625,
0.0157623291015625,
-0.034423828125,
-0.0235137939453125,
-0.0295257568359375,
-0.030029296875,
0.0244598388671875,
0.0190582275390625,
-0.03857421875,
-0.05816650390625,
-0.054931640625,
-0.011032104492187... |
kimnice/bald-man-model | 2023-10-31T08:14:23.000Z | [
"diffusers",
"safetensors",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | kimnice | null | null | kimnice/bald-man-model | 0 | 811 | diffusers | 2023-10-20T08:17:06 | base_model: SG161222/Realistic_Vision_V5.1_noVAE
instance_prompt: photo of a myLora nice
tags:
- text-to-image
- diffusers
- autotrain
inference: true | 155 | [
[
-0.0179901123046875,
-0.0248870849609375,
0.03955078125,
0.00920867919921875,
-0.0264739990234375,
-0.02813720703125,
0.02227783203125,
-0.025177001953125,
0.006259918212890625,
0.041259765625,
-0.049835205078125,
-0.033447265625,
-0.018768310546875,
-0.0035... |
regel-corpus/hunflair-promoter | 2022-11-28T14:36:20.000Z | [
"flair",
"pytorch",
"hunflair",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
] | token-classification | regel-corpus | null | null | regel-corpus/hunflair-promoter | 0 | 810 | flair | 2022-03-29T11:22:27 | ---
tags:
- flair
- hunflair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "Two putative extended promoters consensus sequences (p1 and p2)."
---
## HunFlair model for PROMOTER
[HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR.md) (biomedical flair) for promoter entity.
Predicts 1 tag:
| **tag** | **meaning** |
|---------------------------------|-----------|
| Promoter | DNA promoter region |
---
### Cite
Please cite the following paper when using this model.
```
@article{garda2022regel,
title={RegEl corpus: identifying DNA regulatory elements in the scientific literature},
author={Garda, Samuele and Lenihan-Geels, Freyda and Proft, Sebastian and Hochmuth, Stefanie and Sch{\"u}lke, Markus and Seelow, Dominik and Leser, Ulf},
journal={Database},
volume={2022},
year={2022},
publisher={Oxford Academic}
}
```
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# for biomedical-specific tokenization:
# from flair.tokenization import SciSpacyTokenizer
# load tagger
tagger = SequenceTagger.load("regel-corpus/hunflair-promoter")
text = "The upstream region of the glnA gene contained two putative extended promoter consensus sequences (p1 and p2)."
# make example sentence
sentence = Sentence(text)
# for biomedical-specific tokenization:
# sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer())
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [16]: "p1" [− Labels: Promoter (0.9878)]
Span [18]: "p2" [− Labels: Promoter (0.9216)]
```
So, the entities "*p1*" and "*p2*" (labeled as a **promoter**) are found in the sentence.
Alternatively, download all models locally and use the `MultiTagger` class.
```python
from flair.models import MultiTagger

# load several locally downloaded HunFlair models at once
tagger = MultiTagger.load([
    './models/hunflair-promoter/pytorch_model.bin',
    './models/hunflair-enhancer/pytorch_model.bin',
    './models/hunflair-tfbs/pytorch_model.bin',
])
tagger.predict(sentence)
```
| 2,424 | [
[
-0.0223846435546875,
-0.050445556640625,
-0.0113372802734375,
0.0164337158203125,
0.00043845176696777344,
-0.01812744140625,
-0.01480865478515625,
-0.0245513916015625,
0.061431884765625,
0.0030689239501953125,
-0.022186279296875,
-0.0266876220703125,
-0.04104614... |
timm/vit_medium_patch16_gap_256.sw_in12k_ft_in1k | 2023-05-06T00:26:57.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-12k",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_medium_patch16_gap_256.sw_in12k_ft_in1k | 0 | 810 | timm | 2022-12-02T01:56:59 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-12k
---
# Model card for vit_medium_patch16_gap_256.sw_in12k_ft_in1k
A Vision Transformer (ViT) image classification model. This is a `timm` specific variation of the architecture with token global average pooling. Pretrained on ImageNet-12k and fine-tuned on ImageNet-1k by Ross Wightman in `timm` using recipe template described below.
Recipe details:
* Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes)
* AdamW optimizer, gradient clipping, EMA weight averaging
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 38.9
- GMACs: 9.8
- Activations (M): 14.3
- Image size: 256 x 256
- **Papers:**
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_medium_patch16_gap_256.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_medium_patch16_gap_256.sw_in12k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 256, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
| 3,729 | [
[
-0.03680419921875,
-0.030975341796875,
0.0007905960083007812,
0.01318359375,
-0.02740478515625,
-0.025634765625,
-0.0162200927734375,
-0.033416748046875,
0.0229339599609375,
0.0214080810546875,
-0.044403076171875,
-0.045562744140625,
-0.05084228515625,
-0.00... |
digiplay/ChikMix_V3 | 2023-10-03T06:48:58.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/ChikMix_V3 | 7 | 810 | diffusers | 2023-06-30T13:13:31 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/9871?modelVersionId=59409
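A minimal text-to-image sketch with `diffusers` (not part of the original card; a CUDA device, the fp16 setting, and the prompt are illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# load this checkpoint through the standard Stable Diffusion pipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/ChikMix_V3", torch_dtype=torch.float16
).to("cuda")

# prompt loosely based on the demo image above
image = pipe(
    "best quality, photorealistic, 1girl, flat bangs, shirt",
    num_inference_steps=30,
).images[0]
image.save("chikmix_v3_sample.png")
```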
Original author's demo images:
,%20photorealistic,%201girl,%20flat%20bangs,%20stunning%20innocent%20symmetry%20face,%20shirt,%20emotiona.jpeg)
Sample images I made:


| 833 | [
[
-0.038543701171875,
-0.030548095703125,
0.02459716796875,
0.0208892822265625,
-0.0323486328125,
-0.0057220458984375,
0.01361846923828125,
-0.0208587646484375,
0.04730224609375,
0.0260467529296875,
-0.06231689453125,
-0.03643798828125,
-0.0226593017578125,
-0... |
ku-nlp/deberta-v2-base-japanese-with-auto-jumanpp | 2023-09-15T03:47:58.000Z | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"deberta",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | fill-mask | ku-nlp | null | null | ku-nlp/deberta-v2-base-japanese-with-auto-jumanpp | 0 | 810 | transformers | 2023-09-07T06:04:29 | ---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- deberta
- deberta-v2
- fill-mask
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
mask_token: "[MASK]"
widget:
- text: "京都大学で自然言語処理を[MASK]する。"
---
# Model Card for Japanese DeBERTa V2 base
## Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-base-japanese-with-auto-jumanpp', trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-base-japanese-with-auto-jumanpp')
sentence = '京都大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
...
```
You can also fine-tune this model on downstream tasks.
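A minimal fine-tuning sketch for sequence classification (the toy two-example dataset, label count, and `Trainer` settings are assumptions purely for illustration; the JGLUE experiments described below use task-specific hyperparameters):
```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = 'ku-nlp/deberta-v2-base-japanese-with-auto-jumanpp'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# tiny illustrative dataset (hypothetical labels)
raw = Dataset.from_dict({
    "text": ["この映画は素晴らしかった。", "この映画はひどかった。"],
    "label": [1, 0],
})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

args = TrainingArguments(output_dir="deberta-ft", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=tokenized, tokenizer=tokenizer).train()
```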
## Tokenization
The input text is internally segmented by [Juman++](https://github.com/ku-nlp/jumanpp) within `DebertaV2JumanppTokenizer` or `DebertaV2JumanppTokenizerFast`, so there's no need to segment it in advance.
To use `DebertaV2JumanppTokenizer` or `DebertaV2JumanppTokenizerFast`, you need to install [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) and [rhoknp](https://github.com/ku-nlp/rhoknp).
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp).
Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese DeBERTa model using [transformers](https://github.com/huggingface/transformers) library.
The training took three weeks using 8 NVIDIA A100-SXM4-40GB GPUs.
The following hyperparameters were used during pre-training:
- learning_rate: 2e-4
- per_device_train_batch_size: 44
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 6
- total_train_batch_size: 2,112
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 500,000
- warmup_steps: 10,000
The accuracy of the trained model on the masked language modeling task was 0.779.
The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
## Fine-tuning on NLU tasks
We fine-tuned the following models and evaluated them on the dev set of JGLUE.
We tuned learning rate and training epochs for each model and task following [the JGLUE paper](https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_pdf/-char/ja).
| Model | MARC-ja/acc | JSTS/pearson | JSTS/spearman | JNLI/acc | JSQuAD/EM | JSQuAD/F1 | JComQA/acc |
|-------------------------------|-------------|--------------|---------------|----------|-----------|-----------|------------|
| Waseda RoBERTa base | 0.965 | 0.913 | 0.876 | 0.905 | 0.853 | 0.916 | 0.853 |
| Waseda RoBERTa large (seq512) | 0.969 | 0.925 | 0.890 | 0.928 | 0.910 | 0.955 | 0.900 |
| LUKE Japanese base* | 0.965 | 0.916 | 0.877 | 0.912 | - | - | 0.842 |
| LUKE Japanese large* | 0.965 | 0.932 | 0.902 | 0.927 | - | - | 0.893 |
| DeBERTaV2 base | 0.970 | 0.922 | 0.886 | 0.922 | 0.899 | 0.951 | 0.873 |
| DeBERTaV2 large | 0.968 | 0.925 | 0.892 | 0.924 | 0.912 | 0.959 | 0.890 |
*The scores of LUKE are from [the official repository](https://github.com/studio-ousia/luke).
## Acknowledgments
This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models".
For training models, we used the mdx: a platform for the data-driven future.
| 4,951 | [
[
-0.040802001953125,
-0.06854248046875,
0.0251312255859375,
-0.00521087646484375,
-0.0280914306640625,
0.004486083984375,
-0.02447509765625,
-0.032318115234375,
0.0284423828125,
0.03619384765625,
-0.04254150390625,
-0.05181884765625,
-0.05438232421875,
0.0003... |
timm/fbnetv3_d.ra2_in1k | 2023-04-27T22:48:42.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:2006.02049",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/fbnetv3_d.ra2_in1k | 0 | 809 | timm | 2022-12-16T05:36:54 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for fbnetv3_d.ra2_in1k
A FBNet-v3 image classification model. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.3
- GMACs: 0.5
- Activations (M): 8.5
- Image size: train = 224 x 224, test = 256 x 256
- **Papers:**
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining: https://arxiv.org/abs/2006.02049
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('fbnetv3_d.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fbnetv3_d.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 40, 28, 28])
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 1440, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fbnetv3_d.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1440, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{dai2021fbnetv3,
title={Fbnetv3: Joint architecture-recipe search using predictor pretraining},
author={Dai, Xiaoliang and Wan, Alvin and Zhang, Peizhao and Wu, Bichen and He, Zijian and Wei, Zhen and Chen, Kan and Tian, Yuandong and Yu, Matthew and Vajda, Peter and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={16276--16285},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
| 4,857 | [
[
-0.03271484375,
-0.032958984375,
0.003116607666015625,
0.007579803466796875,
-0.0219268798828125,
-0.025848388671875,
-0.01313018798828125,
-0.0313720703125,
0.0189971923828125,
0.03668212890625,
-0.03662109375,
-0.048370361328125,
-0.054229736328125,
-0.008... |
ProomptEngineer/pe-lofi-hiphop-lofi-girl-concept | 2023-09-11T15:33:05.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:other",
"has_space",
"region:us"
] | text-to-image | ProomptEngineer | null | null | ProomptEngineer/pe-lofi-hiphop-lofi-girl-concept | 1 | 809 | diffusers | 2023-09-11T15:33:03 | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PELofiHipHop
widget:
- text: PELofiHipHop
---
# PE Lofi HipHop / Lofi Girl [Concept]

If you want to donate: [https://ko-fi.com/proomptengineer](https://ko-fi.com/proomptengineer)

Make a character do the lofi girl pose. Weights 0.8-1.
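A minimal sketch of applying this LoRA with `diffusers` (not from the original card; the SDXL base model listed in the metadata and the generic `load_lora_weights` call are assumptions — an explicit `weight_name` may be needed if the repo contains several weight files):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ProomptEngineer/pe-lofi-hiphop-lofi-girl-concept")

# trigger word PELofiHipHop; LoRA scale in the recommended 0.8-1 range
image = pipe(
    "PELofiHipHop, a girl studying at her desk at night, lofi style",
    cross_attention_kwargs={"scale": 0.9},
).images[0]
image.save("pe_lofi_sample.png")
```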
## Image examples for the model:









| 837 | [
[
-0.011138916015625,
-0.04266357421875,
0.019073486328125,
0.0161285400390625,
-0.0369873046875,
-0.01424407958984375,
0.052215576171875,
-0.048126220703125,
0.051116943359375,
0.04901123046875,
-0.0626220703125,
-0.00679779052734375,
-0.047698974609375,
0.00... |
cepiloth/ko-llama2-finetune-ex2 | 2023-11-01T07:17:25.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | cepiloth | null | null | cepiloth/ko-llama2-finetune-ex2 | 0 | 809 | transformers | 2023-10-26T08:52:53 | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
---
# Model Trained Using AutoTrain
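A minimal text-generation sketch with `transformers` (not part of the original card; device placement and generation settings are illustrative assumptions):
```python
from transformers import pipeline

# device_map="auto" requires accelerate; drop it to run on CPU
generator = pipeline(
    "text-generation",
    model="cepiloth/ko-llama2-finetune-ex2",
    device_map="auto",
)
print(generator("I love AutoTrain because ", max_new_tokens=50)[0]["generated_text"])
```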
# License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
This model was created as a personal experiment, unrelated to the organization I work for. | 352 | [
[
-0.0023097991943359375,
-0.00373077392578125,
0.0294189453125,
0.018951416015625,
-0.040191650390625,
0.004589080810546875,
0.03228759765625,
-0.04443359375,
0.007526397705078125,
0.0341796875,
-0.06005859375,
-0.006908416748046875,
-0.03826904296875,
0.0228... |
it5/it5-large-news-summarization | 2022-03-09T07:53:26.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"license:apache-2.0",
"co2_eq_emissions",
"... | summarization | it5 | null | null | it5/it5-large-news-summarization | 0 | 808 | transformers | 2022-03-02T23:29:05 | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
model-index:
- name: it5-large-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.249
name: "Test Rouge1 IlPost"
- type: rouge2
value: 0.102
name: "Test Rouge2 IlPost"
- type: rougeL
value: 0.199
name: "Test RougeL IlPost"
- type: bertscore
value: 0.313
name: "Test BERTScore IlPost"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: rouge1
value: 0.253
name: "Test Rouge1 Fanpage"
- type: rouge2
value: 0.099
name: "Test Rouge2 Fanpage"
- type: rougeL
value: 0.191
name: "Test RougeL Fanpage"
- type: bertscore
value: 0.316
name: "Test BERTScore Fanpage"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "51g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Large for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-large-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` | 9,449 | [
[
-0.036773681640625,
-0.01336669921875,
0.01415252685546875,
0.0369873046875,
-0.034576416015625,
-0.002532958984375,
-0.025909423828125,
-0.00354766845703125,
0.032318115234375,
0.0199737548828125,
-0.038970947265625,
-0.057586669921875,
-0.049072265625,
0.0... |
facebook/data2vec-vision-base-ft1k | 2022-05-03T15:08:31.000Z | [
"transformers",
"pytorch",
"tf",
"data2vec-vision",
"image-classification",
"vision",
"dataset:imagenet",
"dataset:imagenet-1k",
"arxiv:2202.03555",
"arxiv:2106.08254",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | facebook | null | null | facebook/data2vec-vision-base-ft1k | 1 | 808 | transformers | 2022-04-14T08:09:21 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-1k
---
# Data2Vec-Vision (base-sized model, fine-tuned on ImageNet-1k)
BEiT model pre-trained in a self-supervised fashion and fine-tuned on ImageNet-1k (1.2 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli and first released in [this repository](https://github.com/facebookresearch/data2vec_vision/tree/main/beit).
Disclaimer: The Facebook team releasing this model did not write a model card for it, so this model card has been written by the Hugging Face team.
## Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
## Abstract
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.*
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=data2vec-vision) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import BeitFeatureExtractor, Data2VecVisionForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('facebook/data2vec-vision-base-ft1k')
model = Data2VecVisionForImageClassification.from_pretrained('facebook/data2vec-vision-base-ft1k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The BEiT model was pretrained and fine-tuned on [ImageNet-1k](http://www.image-net.org/), a dataset consisting of 1.2 million images and 1,000 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to the [original paper](https://arxiv.org/abs/2106.08254) and the [original codebase](https://github.com/facebookresearch/data2vec_vision/tree/main/beit)
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to Table 1 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution. Of course, increasing the model size will result in better performance.
We evaluated the model on `ImageNet1K` and got top-1 accuracy = **83.97** while in the original paper it was reported top-1 accuracy = 84.2.
If you want to reproduce our evaluation process you can use [This Colab Notebook](https://colab.research.google.com/drive/1Tse8Rfv-QhapMEMzauxUqnAQyXUgnTLK?usp=sharing)
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` | 5,162 | [
[
-0.0256195068359375,
-0.042572021484375,
-0.006595611572265625,
-0.0016183853149414062,
-0.0180206298828125,
-0.0251922607421875,
-0.00891876220703125,
-0.047607421875,
-0.0002332925796508789,
0.03369140625,
-0.03790283203125,
-0.0450439453125,
-0.04617309570312... |
circulus/sd-photoreal-semi-v2 | 2023-01-15T07:44:06.000Z | [
"diffusers",
"generative ai",
"stable-diffusion",
"image-to-image",
"realism",
"art",
"text-to-image",
"en",
"license:gpl-3.0",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | circulus | null | null | circulus/sd-photoreal-semi-v2 | 3 | 808 | diffusers | 2023-01-15T06:12:45 | ---
license: gpl-3.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- generative ai
- stable-diffusion
- image-to-image
- realism
- art
---
Photoreal Semi v2
Fine-tuned Stable Diffusion 1.5 for generating images.

 | 274 | [
[
-0.040130615234375,
-0.06488037109375,
0.01520538330078125,
0.029296875,
-0.041473388671875,
-0.00949859619140625,
0.016998291015625,
-0.005962371826171875,
0.0032253265380859375,
0.032928466796875,
-0.0243682861328125,
-0.032470703125,
-0.0189666748046875,
... |
Helsinki-NLP/opus-mt-it-es | 2023-08-16T11:58:52.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-it-es | 0 | 807 | transformers | 2022-03-02T23:29:04 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-it-es
* source languages: it
* target languages: es
* OPUS readme: [it-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.eval.txt)
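A minimal usage sketch with the Hugging Face `transformers` Marian integration (not part of the original OPUS release notes; the example sentence and expected output are illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a single Italian sentence to Spanish
batch = tokenizer(["La vita è bella."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
# expected: a Spanish translation such as 'La vida es bella.'
```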
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.it.es | 61.2 | 0.761 |
| 818 | [
[
-0.017822265625,
-0.02789306640625,
0.0160369873046875,
0.033935546875,
-0.038818359375,
-0.016937255859375,
-0.035552978515625,
0.0018062591552734375,
0.006786346435546875,
0.0311126708984375,
-0.049102783203125,
-0.0499267578125,
-0.043060302734375,
0.0138... |
llm-book/bert-base-japanese-v3-jnli | 2023-07-24T06:49:14.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ja",
"dataset:llm-book/JGLUE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | llm-book | null | null | llm-book/bert-base-japanese-v3-jnli | 0 | 807 | transformers | 2023-06-12T14:15:16 | ---
language:
- ja
license: apache-2.0
library_name: transformers
datasets:
- llm-book/JGLUE
pipeline_tag: text-classification
---
# bert-base-japanese-v3-jnli
This is the natural language inference (NLI) model introduced in Chapter 5 of "[大規模言語モデル入門](https://www.amazon.co.jp/dp/4297136333)" (Introduction to Large Language Models).
It was built by fine-tuning [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) on the JNLI dataset from [JGLUE](https://huggingface.co/datasets/llm-book/JGLUE).
## Related links
* [GitHub repository](https://github.com/ghmagazine/llm-book)
* [Colab notebook (training)](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter5/5-4-jnli-finetuning.ipynb)
* [Colab notebook (inference)](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter5/5-4-jnli-analysis.ipynb)
* [Dataset](https://huggingface.co/datasets/llm-book/JGLUE)
* [大規模言語モデル入門 (Amazon.co.jp)](https://www.amazon.co.jp/dp/4297136333/)
* [大規模言語モデル入門 (gihyo.jp)](https://gihyo.jp/book/2023/978-4-297-13633-8)
## Usage
```python
from transformers import pipeline
nli_pipeline = pipeline(model="llm-book/bert-base-japanese-v3-jnli")
text = "二人の男性がジェット機を見ています"
entailment_text = "ジェット機を見ている人が二人います"
# predict the logical relation between text and entailment_text
print(nli_pipeline({"text": text, "text_pair": entailment_text}))
# {'label': 'entailment', 'score': 0.9964311122894287}
```
## License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | 1,364 | [
[
-0.0292816162109375,
-0.052093505859375,
0.0167694091796875,
0.02545166015625,
-0.0280914306640625,
-0.0113525390625,
-0.0192413330078125,
-0.03369140625,
0.034454345703125,
0.038360595703125,
-0.0555419921875,
-0.05181884765625,
-0.033203125,
0.019515991210... |
abin-regi/my-pet-dog-xzk | 2023-08-10T10:18:50.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | abin-regi | null | null | abin-regi/my-pet-dog-xzk | 0 | 807 | diffusers | 2023-08-10T10:14:54 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzk Dreambooth model trained by abin-regi following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET421
Sample pictures of this concept:


| 497 | [
[
-0.0616455078125,
-0.018402099609375,
0.0204315185546875,
0.0055389404296875,
-0.0178375244140625,
0.03558349609375,
0.0247802734375,
-0.039093017578125,
0.05462646484375,
0.0204010009765625,
-0.0596923828125,
-0.0287322998046875,
-0.0098724365234375,
0.0040... |
Yntec/elldrethSLucidMix | 2023-09-27T13:34:20.000Z | [
"diffusers",
"General",
"Elldreth",
"Dream",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/elldrethSLucidMix | 1 | 807 | diffusers | 2023-09-27T12:00:38 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Elldreth
- Dream
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
This is the elldreth-lucid-mix model with the zVAE baked in.
Original Page: https://huggingface.co/danbrown/elldreth-lucid-mix
Sample and prompt:

DETAILED CHIBI EYES, Cartoon Pretty CUTE Girl, bedroom, beautiful detailed white clothes, Magazine ad, gorgeous detailed hair, 1949, iconic. acrylic art on canvas By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza | 680 | [
[
-0.031097412109375,
-0.054840087890625,
0.0232696533203125,
0.01018524169921875,
0.002559661865234375,
0.00249481201171875,
0.016876220703125,
-0.04638671875,
0.08349609375,
0.04071044921875,
-0.068359375,
-0.03326416015625,
-0.0170135498046875,
-0.025817871... |
thtang/ALL_862873 | 2023-10-27T10:29:31.000Z | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | thtang | null | null | thtang/ALL_862873 | 0 | 807 | transformers | 2023-10-27T05:44:00 | ---
tags:
- mteb
model-index:
- name: ALL_862873
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 50.805970149253746
- type: ap
value: 21.350961103104364
- type: f1
value: 46.546166439875044
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 52.567125000000004
- type: ap
value: 51.37893936391345
- type: f1
value: 51.8411977908125
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 22.63
- type: f1
value: 21.964526516204575
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.991
- type: map_at_10
value: 4.095
- type: map_at_100
value: 4.763
- type: map_at_1000
value: 4.8759999999999994
- type: map_at_3
value: 3.3070000000000004
- type: map_at_5
value: 3.73
- type: mrr_at_1
value: 2.0629999999999997
- type: mrr_at_10
value: 4.119
- type: mrr_at_100
value: 4.787
- type: mrr_at_1000
value: 4.9
- type: mrr_at_3
value: 3.331
- type: mrr_at_5
value: 3.768
- type: ndcg_at_1
value: 1.991
- type: ndcg_at_10
value: 5.346
- type: ndcg_at_100
value: 9.181000000000001
- type: ndcg_at_1000
value: 13.004
- type: ndcg_at_3
value: 3.7199999999999998
- type: ndcg_at_5
value: 4.482
- type: precision_at_1
value: 1.991
- type: precision_at_10
value: 0.9390000000000001
- type: precision_at_100
value: 0.28700000000000003
- type: precision_at_1000
value: 0.061
- type: precision_at_3
value: 1.636
- type: precision_at_5
value: 1.351
- type: recall_at_1
value: 1.991
- type: recall_at_10
value: 9.388
- type: recall_at_100
value: 28.663
- type: recall_at_1000
value: 60.597
- type: recall_at_3
value: 4.9079999999999995
- type: recall_at_5
value: 6.757000000000001
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 14.790995349964428
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 12.248406292959412
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 44.88116875696166
- type: mrr
value: 56.07439651760981
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 19.26573437410263
- type: cos_sim_spearman
value: 21.34145013484056
- type: euclidean_pearson
value: 22.39226418475093
- type: euclidean_spearman
value: 23.511981519581447
- type: manhattan_pearson
value: 22.14346931904813
- type: manhattan_spearman
value: 23.39390654000631
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 36.42857142857143
- type: f1
value: 34.81640976406094
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 13.94296328377691
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 9.790764523161606
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.968
- type: map_at_10
value: 2.106
- type: map_at_100
value: 2.411
- type: map_at_1000
value: 2.4899999999999998
- type: map_at_3
value: 1.797
- type: map_at_5
value: 1.9959999999999998
- type: mrr_at_1
value: 1.717
- type: mrr_at_10
value: 3.0349999999999997
- type: mrr_at_100
value: 3.4029999999999996
- type: mrr_at_1000
value: 3.486
- type: mrr_at_3
value: 2.6470000000000002
- type: mrr_at_5
value: 2.876
- type: ndcg_at_1
value: 1.717
- type: ndcg_at_10
value: 2.9059999999999997
- type: ndcg_at_100
value: 4.715
- type: ndcg_at_1000
value: 7.318
- type: ndcg_at_3
value: 2.415
- type: ndcg_at_5
value: 2.682
- type: precision_at_1
value: 1.717
- type: precision_at_10
value: 0.658
- type: precision_at_100
value: 0.197
- type: precision_at_1000
value: 0.054
- type: precision_at_3
value: 1.431
- type: precision_at_5
value: 1.059
- type: recall_at_1
value: 0.968
- type: recall_at_10
value: 4.531000000000001
- type: recall_at_100
value: 13.081000000000001
- type: recall_at_1000
value: 32.443
- type: recall_at_3
value: 2.8850000000000002
- type: recall_at_5
value: 3.768
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.9390000000000001
- type: map_at_10
value: 1.516
- type: map_at_100
value: 1.6680000000000001
- type: map_at_1000
value: 1.701
- type: map_at_3
value: 1.314
- type: map_at_5
value: 1.388
- type: mrr_at_1
value: 1.146
- type: mrr_at_10
value: 1.96
- type: mrr_at_100
value: 2.166
- type: mrr_at_1000
value: 2.207
- type: mrr_at_3
value: 1.72
- type: mrr_at_5
value: 1.796
- type: ndcg_at_1
value: 1.146
- type: ndcg_at_10
value: 1.9769999999999999
- type: ndcg_at_100
value: 2.8400000000000003
- type: ndcg_at_1000
value: 4.035
- type: ndcg_at_3
value: 1.5859999999999999
- type: ndcg_at_5
value: 1.6709999999999998
- type: precision_at_1
value: 1.146
- type: precision_at_10
value: 0.43299999999999994
- type: precision_at_100
value: 0.11100000000000002
- type: precision_at_1000
value: 0.027999999999999997
- type: precision_at_3
value: 0.8699999999999999
- type: precision_at_5
value: 0.611
- type: recall_at_1
value: 0.9390000000000001
- type: recall_at_10
value: 2.949
- type: recall_at_100
value: 6.737
- type: recall_at_1000
value: 15.604999999999999
- type: recall_at_3
value: 1.846
- type: recall_at_5
value: 2.08
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.28
- type: map_at_10
value: 2.157
- type: map_at_100
value: 2.401
- type: map_at_1000
value: 2.4570000000000003
- type: map_at_3
value: 1.865
- type: map_at_5
value: 1.928
- type: mrr_at_1
value: 1.505
- type: mrr_at_10
value: 2.52
- type: mrr_at_100
value: 2.782
- type: mrr_at_1000
value: 2.8400000000000003
- type: mrr_at_3
value: 2.1839999999999997
- type: mrr_at_5
value: 2.2689999999999997
- type: ndcg_at_1
value: 1.505
- type: ndcg_at_10
value: 2.798
- type: ndcg_at_100
value: 4.2090000000000005
- type: ndcg_at_1000
value: 6.105
- type: ndcg_at_3
value: 2.157
- type: ndcg_at_5
value: 2.258
- type: precision_at_1
value: 1.505
- type: precision_at_10
value: 0.5519999999999999
- type: precision_at_100
value: 0.146
- type: precision_at_1000
value: 0.034999999999999996
- type: precision_at_3
value: 1.024
- type: precision_at_5
value: 0.7020000000000001
- type: recall_at_1
value: 1.28
- type: recall_at_10
value: 4.455
- type: recall_at_100
value: 11.169
- type: recall_at_1000
value: 26.046000000000003
- type: recall_at_3
value: 2.6270000000000002
- type: recall_at_5
value: 2.899
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.264
- type: map_at_10
value: 0.615
- type: map_at_100
value: 0.76
- type: map_at_1000
value: 0.803
- type: map_at_3
value: 0.40499999999999997
- type: map_at_5
value: 0.512
- type: mrr_at_1
value: 0.33899999999999997
- type: mrr_at_10
value: 0.718
- type: mrr_at_100
value: 0.8880000000000001
- type: mrr_at_1000
value: 0.935
- type: mrr_at_3
value: 0.508
- type: mrr_at_5
value: 0.616
- type: ndcg_at_1
value: 0.33899999999999997
- type: ndcg_at_10
value: 0.9079999999999999
- type: ndcg_at_100
value: 1.9029999999999998
- type: ndcg_at_1000
value: 3.4939999999999998
- type: ndcg_at_3
value: 0.46499999999999997
- type: ndcg_at_5
value: 0.655
- type: precision_at_1
value: 0.33899999999999997
- type: precision_at_10
value: 0.192
- type: precision_at_100
value: 0.079
- type: precision_at_1000
value: 0.023
- type: precision_at_3
value: 0.22599999999999998
- type: precision_at_5
value: 0.22599999999999998
- type: recall_at_1
value: 0.264
- type: recall_at_10
value: 1.789
- type: recall_at_100
value: 6.927
- type: recall_at_1000
value: 19.922
- type: recall_at_3
value: 0.5459999999999999
- type: recall_at_5
value: 0.9979999999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.5599999999999999
- type: map_at_10
value: 0.9129999999999999
- type: map_at_100
value: 1.027
- type: map_at_1000
value: 1.072
- type: map_at_3
value: 0.715
- type: map_at_5
value: 0.826
- type: mrr_at_1
value: 0.8710000000000001
- type: mrr_at_10
value: 1.331
- type: mrr_at_100
value: 1.494
- type: mrr_at_1000
value: 1.547
- type: mrr_at_3
value: 1.119
- type: mrr_at_5
value: 1.269
- type: ndcg_at_1
value: 0.8710000000000001
- type: ndcg_at_10
value: 1.2590000000000001
- type: ndcg_at_100
value: 2.023
- type: ndcg_at_1000
value: 3.737
- type: ndcg_at_3
value: 0.8750000000000001
- type: ndcg_at_5
value: 1.079
- type: precision_at_1
value: 0.8710000000000001
- type: precision_at_10
value: 0.28600000000000003
- type: precision_at_100
value: 0.086
- type: precision_at_1000
value: 0.027999999999999997
- type: precision_at_3
value: 0.498
- type: precision_at_5
value: 0.42300000000000004
- type: recall_at_1
value: 0.5599999999999999
- type: recall_at_10
value: 1.907
- type: recall_at_100
value: 5.492
- type: recall_at_1000
value: 18.974
- type: recall_at_3
value: 0.943
- type: recall_at_5
value: 1.41
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.9720000000000002
- type: map_at_10
value: 2.968
- type: map_at_100
value: 3.2009999999999996
- type: map_at_1000
value: 3.2680000000000002
- type: map_at_3
value: 2.683
- type: map_at_5
value: 2.8369999999999997
- type: mrr_at_1
value: 2.406
- type: mrr_at_10
value: 3.567
- type: mrr_at_100
value: 3.884
- type: mrr_at_1000
value: 3.948
- type: mrr_at_3
value: 3.2239999999999998
- type: mrr_at_5
value: 3.383
- type: ndcg_at_1
value: 2.406
- type: ndcg_at_10
value: 3.63
- type: ndcg_at_100
value: 5.155
- type: ndcg_at_1000
value: 7.381
- type: ndcg_at_3
value: 3.078
- type: ndcg_at_5
value: 3.3070000000000004
- type: precision_at_1
value: 2.406
- type: precision_at_10
value: 0.635
- type: precision_at_100
value: 0.184
- type: precision_at_1000
value: 0.048
- type: precision_at_3
value: 1.4120000000000001
- type: precision_at_5
value: 1.001
- type: recall_at_1
value: 1.9720000000000002
- type: recall_at_10
value: 5.152
- type: recall_at_100
value: 12.173
- type: recall_at_1000
value: 28.811999999999998
- type: recall_at_3
value: 3.556
- type: recall_at_5
value: 4.181
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.346
- type: map_at_10
value: 0.619
- type: map_at_100
value: 0.743
- type: map_at_1000
value: 0.788
- type: map_at_3
value: 0.5369999999999999
- type: map_at_5
value: 0.551
- type: mrr_at_1
value: 0.571
- type: mrr_at_10
value: 1.0619999999999998
- type: mrr_at_100
value: 1.2109999999999999
- type: mrr_at_1000
value: 1.265
- type: mrr_at_3
value: 0.818
- type: mrr_at_5
value: 0.927
- type: ndcg_at_1
value: 0.571
- type: ndcg_at_10
value: 0.919
- type: ndcg_at_100
value: 1.688
- type: ndcg_at_1000
value: 3.3649999999999998
- type: ndcg_at_3
value: 0.6779999999999999
- type: ndcg_at_5
value: 0.7230000000000001
- type: precision_at_1
value: 0.571
- type: precision_at_10
value: 0.27399999999999997
- type: precision_at_100
value: 0.084
- type: precision_at_1000
value: 0.029
- type: precision_at_3
value: 0.381
- type: precision_at_5
value: 0.32
- type: recall_at_1
value: 0.346
- type: recall_at_10
value: 1.397
- type: recall_at_100
value: 5.079000000000001
- type: recall_at_1000
value: 18.060000000000002
- type: recall_at_3
value: 0.774
- type: recall_at_5
value: 0.8340000000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.69
- type: map_at_10
value: 0.897
- type: map_at_100
value: 1.0030000000000001
- type: map_at_1000
value: 1.034
- type: map_at_3
value: 0.818
- type: map_at_5
value: 0.864
- type: mrr_at_1
value: 0.767
- type: mrr_at_10
value: 1.008
- type: mrr_at_100
value: 1.145
- type: mrr_at_1000
value: 1.183
- type: mrr_at_3
value: 0.895
- type: mrr_at_5
value: 0.9560000000000001
- type: ndcg_at_1
value: 0.767
- type: ndcg_at_10
value: 1.0739999999999998
- type: ndcg_at_100
value: 1.757
- type: ndcg_at_1000
value: 2.9090000000000003
- type: ndcg_at_3
value: 0.881
- type: ndcg_at_5
value: 0.9769999999999999
- type: precision_at_1
value: 0.767
- type: precision_at_10
value: 0.184
- type: precision_at_100
value: 0.06
- type: precision_at_1000
value: 0.018000000000000002
- type: precision_at_3
value: 0.358
- type: precision_at_5
value: 0.27599999999999997
- type: recall_at_1
value: 0.69
- type: recall_at_10
value: 1.508
- type: recall_at_100
value: 4.858
- type: recall_at_1000
value: 14.007
- type: recall_at_3
value: 0.997
- type: recall_at_5
value: 1.2269999999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.338
- type: map_at_10
value: 0.661
- type: map_at_100
value: 0.7969999999999999
- type: map_at_1000
value: 0.8290000000000001
- type: map_at_3
value: 0.5559999999999999
- type: map_at_5
value: 0.5910000000000001
- type: mrr_at_1
value: 0.482
- type: mrr_at_10
value: 0.88
- type: mrr_at_100
value: 1.036
- type: mrr_at_1000
value: 1.075
- type: mrr_at_3
value: 0.74
- type: mrr_at_5
value: 0.779
- type: ndcg_at_1
value: 0.482
- type: ndcg_at_10
value: 0.924
- type: ndcg_at_100
value: 1.736
- type: ndcg_at_1000
value: 2.926
- type: ndcg_at_3
value: 0.677
- type: ndcg_at_5
value: 0.732
- type: precision_at_1
value: 0.482
- type: precision_at_10
value: 0.20600000000000002
- type: precision_at_100
value: 0.078
- type: precision_at_1000
value: 0.023
- type: precision_at_3
value: 0.367
- type: precision_at_5
value: 0.255
- type: recall_at_1
value: 0.338
- type: recall_at_10
value: 1.545
- type: recall_at_100
value: 5.38
- type: recall_at_1000
value: 14.609
- type: recall_at_3
value: 0.826
- type: recall_at_5
value: 0.975
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.8240000000000001
- type: map_at_10
value: 1.254
- type: map_at_100
value: 1.389
- type: map_at_1000
value: 1.419
- type: map_at_3
value: 1.158
- type: map_at_5
value: 1.189
- type: mrr_at_1
value: 0.9329999999999999
- type: mrr_at_10
value: 1.4200000000000002
- type: mrr_at_100
value: 1.59
- type: mrr_at_1000
value: 1.629
- type: mrr_at_3
value: 1.29
- type: mrr_at_5
value: 1.332
- type: ndcg_at_1
value: 0.9329999999999999
- type: ndcg_at_10
value: 1.53
- type: ndcg_at_100
value: 2.418
- type: ndcg_at_1000
value: 3.7310000000000003
- type: ndcg_at_3
value: 1.302
- type: ndcg_at_5
value: 1.363
- type: precision_at_1
value: 0.9329999999999999
- type: precision_at_10
value: 0.271
- type: precision_at_100
value: 0.083
- type: precision_at_1000
value: 0.024
- type: precision_at_3
value: 0.622
- type: precision_at_5
value: 0.41000000000000003
- type: recall_at_1
value: 0.8240000000000001
- type: recall_at_10
value: 2.1999999999999997
- type: recall_at_100
value: 6.584
- type: recall_at_1000
value: 17.068
- type: recall_at_3
value: 1.5859999999999999
- type: recall_at_5
value: 1.7260000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.404
- type: map_at_10
value: 0.788
- type: map_at_100
value: 0.9860000000000001
- type: map_at_1000
value: 1.04
- type: map_at_3
value: 0.676
- type: map_at_5
value: 0.733
- type: mrr_at_1
value: 0.5930000000000001
- type: mrr_at_10
value: 1.278
- type: mrr_at_100
value: 1.545
- type: mrr_at_1000
value: 1.599
- type: mrr_at_3
value: 1.054
- type: mrr_at_5
value: 1.192
- type: ndcg_at_1
value: 0.5930000000000001
- type: ndcg_at_10
value: 1.1280000000000001
- type: ndcg_at_100
value: 2.2689999999999997
- type: ndcg_at_1000
value: 4.274
- type: ndcg_at_3
value: 0.919
- type: ndcg_at_5
value: 1.038
- type: precision_at_1
value: 0.5930000000000001
- type: precision_at_10
value: 0.296
- type: precision_at_100
value: 0.152
- type: precision_at_1000
value: 0.05
- type: precision_at_3
value: 0.527
- type: precision_at_5
value: 0.47400000000000003
- type: recall_at_1
value: 0.404
- type: recall_at_10
value: 1.601
- type: recall_at_100
value: 6.885
- type: recall_at_1000
value: 22.356
- type: recall_at_3
value: 0.9490000000000001
- type: recall_at_5
value: 1.206
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.185
- type: map_at_10
value: 0.192
- type: map_at_100
value: 0.271
- type: map_at_1000
value: 0.307
- type: map_at_3
value: 0.185
- type: map_at_5
value: 0.185
- type: mrr_at_1
value: 0.185
- type: mrr_at_10
value: 0.20500000000000002
- type: mrr_at_100
value: 0.292
- type: mrr_at_1000
value: 0.331
- type: mrr_at_3
value: 0.185
- type: mrr_at_5
value: 0.185
- type: ndcg_at_1
value: 0.185
- type: ndcg_at_10
value: 0.211
- type: ndcg_at_100
value: 0.757
- type: ndcg_at_1000
value: 1.928
- type: ndcg_at_3
value: 0.185
- type: ndcg_at_5
value: 0.185
- type: precision_at_1
value: 0.185
- type: precision_at_10
value: 0.037
- type: precision_at_100
value: 0.039
- type: precision_at_1000
value: 0.015
- type: precision_at_3
value: 0.062
- type: precision_at_5
value: 0.037
- type: recall_at_1
value: 0.185
- type: recall_at_10
value: 0.246
- type: recall_at_100
value: 3.05
- type: recall_at_1000
value: 12.5
- type: recall_at_3
value: 0.185
- type: recall_at_5
value: 0.185
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.241
- type: map_at_10
value: 0.372
- type: map_at_100
value: 0.45999999999999996
- type: map_at_1000
value: 0.47600000000000003
- type: map_at_3
value: 0.33999999999999997
- type: map_at_5
value: 0.359
- type: mrr_at_1
value: 0.651
- type: mrr_at_10
value: 1.03
- type: mrr_at_100
value: 1.2489999999999999
- type: mrr_at_1000
value: 1.282
- type: mrr_at_3
value: 0.9450000000000001
- type: mrr_at_5
value: 1.0030000000000001
- type: ndcg_at_1
value: 0.651
- type: ndcg_at_10
value: 0.588
- type: ndcg_at_100
value: 1.2550000000000001
- type: ndcg_at_1000
value: 1.9040000000000001
- type: ndcg_at_3
value: 0.547
- type: ndcg_at_5
value: 0.549
- type: precision_at_1
value: 0.651
- type: precision_at_10
value: 0.182
- type: precision_at_100
value: 0.086
- type: precision_at_1000
value: 0.02
- type: precision_at_3
value: 0.434
- type: precision_at_5
value: 0.313
- type: recall_at_1
value: 0.241
- type: recall_at_10
value: 0.63
- type: recall_at_100
value: 3.1759999999999997
- type: recall_at_1000
value: 7.175
- type: recall_at_3
value: 0.46299999999999997
- type: recall_at_5
value: 0.543
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.04
- type: map_at_10
value: 0.089
- type: map_at_100
value: 0.133
- type: map_at_1000
value: 0.165
- type: map_at_3
value: 0.054
- type: map_at_5
value: 0.056999999999999995
- type: mrr_at_1
value: 0.75
- type: mrr_at_10
value: 1.4749999999999999
- type: mrr_at_100
value: 1.8010000000000002
- type: mrr_at_1000
value: 1.847
- type: mrr_at_3
value: 1.208
- type: mrr_at_5
value: 1.333
- type: ndcg_at_1
value: 0.625
- type: ndcg_at_10
value: 0.428
- type: ndcg_at_100
value: 0.705
- type: ndcg_at_1000
value: 1.564
- type: ndcg_at_3
value: 0.5369999999999999
- type: ndcg_at_5
value: 0.468
- type: precision_at_1
value: 0.75
- type: precision_at_10
value: 0.375
- type: precision_at_100
value: 0.27499999999999997
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 0.583
- type: precision_at_5
value: 0.5
- type: recall_at_1
value: 0.04
- type: recall_at_10
value: 0.385
- type: recall_at_100
value: 1.2670000000000001
- type: recall_at_1000
value: 4.522
- type: recall_at_3
value: 0.07100000000000001
- type: recall_at_5
value: 0.08099999999999999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 22.749999999999996
- type: f1
value: 19.335020165482693
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.257
- type: map_at_10
value: 0.416
- type: map_at_100
value: 0.451
- type: map_at_1000
value: 0.46499999999999997
- type: map_at_3
value: 0.37
- type: map_at_5
value: 0.386
- type: mrr_at_1
value: 0.27
- type: mrr_at_10
value: 0.44200000000000006
- type: mrr_at_100
value: 0.48
- type: mrr_at_1000
value: 0.49500000000000005
- type: mrr_at_3
value: 0.38999999999999996
- type: mrr_at_5
value: 0.411
- type: ndcg_at_1
value: 0.27
- type: ndcg_at_10
value: 0.51
- type: ndcg_at_100
value: 0.738
- type: ndcg_at_1000
value: 1.2630000000000001
- type: ndcg_at_3
value: 0.41000000000000003
- type: ndcg_at_5
value: 0.439
- type: precision_at_1
value: 0.27
- type: precision_at_10
value: 0.084
- type: precision_at_100
value: 0.021
- type: precision_at_1000
value: 0.006999999999999999
- type: precision_at_3
value: 0.17500000000000002
- type: precision_at_5
value: 0.123
- type: recall_at_1
value: 0.257
- type: recall_at_10
value: 0.786
- type: recall_at_100
value: 1.959
- type: recall_at_1000
value: 6.334
- type: recall_at_3
value: 0.49699999999999994
- type: recall_at_5
value: 0.5680000000000001
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.28900000000000003
- type: map_at_10
value: 0.475
- type: map_at_100
value: 0.559
- type: map_at_1000
value: 0.5930000000000001
- type: map_at_3
value: 0.38999999999999996
- type: map_at_5
value: 0.41700000000000004
- type: mrr_at_1
value: 0.772
- type: mrr_at_10
value: 1.107
- type: mrr_at_100
value: 1.269
- type: mrr_at_1000
value: 1.323
- type: mrr_at_3
value: 0.9520000000000001
- type: mrr_at_5
value: 1.0290000000000001
- type: ndcg_at_1
value: 0.772
- type: ndcg_at_10
value: 0.755
- type: ndcg_at_100
value: 1.256
- type: ndcg_at_1000
value: 2.55
- type: ndcg_at_3
value: 0.633
- type: ndcg_at_5
value: 0.639
- type: precision_at_1
value: 0.772
- type: precision_at_10
value: 0.262
- type: precision_at_100
value: 0.082
- type: precision_at_1000
value: 0.03
- type: precision_at_3
value: 0.46299999999999997
- type: precision_at_5
value: 0.33999999999999997
- type: recall_at_1
value: 0.28900000000000003
- type: recall_at_10
value: 0.976
- type: recall_at_100
value: 2.802
- type: recall_at_1000
value: 11.466
- type: recall_at_3
value: 0.54
- type: recall_at_5
value: 0.6479999999999999
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.257
- type: map_at_10
value: 0.395
- type: map_at_100
value: 0.436
- type: map_at_1000
value: 0.447
- type: map_at_3
value: 0.347
- type: map_at_5
value: 0.369
- type: mrr_at_1
value: 0.513
- type: mrr_at_10
value: 0.787
- type: mrr_at_100
value: 0.865
- type: mrr_at_1000
value: 0.8840000000000001
- type: mrr_at_3
value: 0.6930000000000001
- type: mrr_at_5
value: 0.738
- type: ndcg_at_1
value: 0.513
- type: ndcg_at_10
value: 0.587
- type: ndcg_at_100
value: 0.881
- type: ndcg_at_1000
value: 1.336
- type: ndcg_at_3
value: 0.46299999999999997
- type: ndcg_at_5
value: 0.511
- type: precision_at_1
value: 0.513
- type: precision_at_10
value: 0.151
- type: precision_at_100
value: 0.04
- type: precision_at_1000
value: 0.01
- type: precision_at_3
value: 0.311
- type: precision_at_5
value: 0.22399999999999998
- type: recall_at_1
value: 0.257
- type: recall_at_10
value: 0.756
- type: recall_at_100
value: 1.9849999999999999
- type: recall_at_1000
value: 5.111000000000001
- type: recall_at_3
value: 0.466
- type: recall_at_5
value: 0.5599999999999999
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 50.76400000000001
- type: ap
value: 50.41569411130455
- type: f1
value: 50.14266303576945
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 0.14300000000000002
- type: map_at_10
value: 0.23700000000000002
- type: map_at_100
value: 0.27799999999999997
- type: map_at_1000
value: 0.291
- type: map_at_3
value: 0.197
- type: map_at_5
value: 0.215
- type: mrr_at_1
value: 0.14300000000000002
- type: mrr_at_10
value: 0.247
- type: mrr_at_100
value: 0.29
- type: mrr_at_1000
value: 0.303
- type: mrr_at_3
value: 0.201
- type: mrr_at_5
value: 0.219
- type: ndcg_at_1
value: 0.14300000000000002
- type: ndcg_at_10
value: 0.307
- type: ndcg_at_100
value: 0.5720000000000001
- type: ndcg_at_1000
value: 1.053
- type: ndcg_at_3
value: 0.215
- type: ndcg_at_5
value: 0.248
- type: precision_at_1
value: 0.14300000000000002
- type: precision_at_10
value: 0.056999999999999995
- type: precision_at_100
value: 0.02
- type: precision_at_1000
value: 0.006
- type: precision_at_3
value: 0.091
- type: precision_at_5
value: 0.07200000000000001
- type: recall_at_1
value: 0.14300000000000002
- type: recall_at_10
value: 0.522
- type: recall_at_100
value: 1.9009999999999998
- type: recall_at_1000
value: 5.893000000000001
- type: recall_at_3
value: 0.263
- type: recall_at_5
value: 0.34099999999999997
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 61.03283173734611
- type: f1
value: 61.24012492746259
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 29.68308253533972
- type: f1
value: 16.243459114946905
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 34.330867518493605
- type: f1
value: 33.176158044175935
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.13248150638871
- type: f1
value: 44.24904249078732
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 15.698400177259078
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 14.888797785310235
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 25.652445385382126
- type: mrr
value: 25.891573325600227
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.322
- type: map_at_10
value: 0.7230000000000001
- type: map_at_100
value: 1.248
- type: map_at_1000
value: 1.873
- type: map_at_3
value: 0.479
- type: map_at_5
value: 0.5700000000000001
- type: mrr_at_1
value: 6.502
- type: mrr_at_10
value: 10.735
- type: mrr_at_100
value: 11.848
- type: mrr_at_1000
value: 11.995000000000001
- type: mrr_at_3
value: 9.391
- type: mrr_at_5
value: 9.732000000000001
- type: ndcg_at_1
value: 6.037
- type: ndcg_at_10
value: 4.873
- type: ndcg_at_100
value: 5.959
- type: ndcg_at_1000
value: 14.424000000000001
- type: ndcg_at_3
value: 5.4559999999999995
- type: ndcg_at_5
value: 5.074
- type: precision_at_1
value: 6.192
- type: precision_at_10
value: 4.458
- type: precision_at_100
value: 2.5700000000000003
- type: precision_at_1000
value: 1.3679999999999999
- type: precision_at_3
value: 5.676
- type: precision_at_5
value: 4.954
- type: recall_at_1
value: 0.322
- type: recall_at_10
value: 1.545
- type: recall_at_100
value: 8.301
- type: recall_at_1000
value: 37.294
- type: recall_at_3
value: 0.623
- type: recall_at_5
value: 0.865
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.188
- type: map_at_10
value: 0.27
- type: map_at_100
value: 0.322
- type: map_at_1000
value: 0.335
- type: map_at_3
value: 0.246
- type: map_at_5
value: 0.246
- type: mrr_at_1
value: 0.203
- type: mrr_at_10
value: 0.28300000000000003
- type: mrr_at_100
value: 0.344
- type: mrr_at_1000
value: 0.357
- type: mrr_at_3
value: 0.261
- type: mrr_at_5
value: 0.261
- type: ndcg_at_1
value: 0.203
- type: ndcg_at_10
value: 0.329
- type: ndcg_at_100
value: 0.628
- type: ndcg_at_1000
value: 1.0959999999999999
- type: ndcg_at_3
value: 0.272
- type: ndcg_at_5
value: 0.272
- type: precision_at_1
value: 0.203
- type: precision_at_10
value: 0.055
- type: precision_at_100
value: 0.024
- type: precision_at_1000
value: 0.006999999999999999
- type: precision_at_3
value: 0.116
- type: precision_at_5
value: 0.06999999999999999
- type: recall_at_1
value: 0.188
- type: recall_at_10
value: 0.507
- type: recall_at_100
value: 1.883
- type: recall_at_1000
value: 5.609999999999999
- type: recall_at_3
value: 0.333
- type: recall_at_5
value: 0.333
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.016000000000002
- type: map_at_10
value: 28.977999999999998
- type: map_at_100
value: 29.579
- type: map_at_1000
value: 29.648999999999997
- type: map_at_3
value: 27.673
- type: map_at_5
value: 28.427000000000003
- type: mrr_at_1
value: 27.93
- type: mrr_at_10
value: 32.462999999999994
- type: mrr_at_100
value: 32.993
- type: mrr_at_1000
value: 33.044000000000004
- type: mrr_at_3
value: 31.252000000000002
- type: mrr_at_5
value: 31.968999999999998
- type: ndcg_at_1
value: 27.96
- type: ndcg_at_10
value: 31.954
- type: ndcg_at_100
value: 34.882000000000005
- type: ndcg_at_1000
value: 36.751
- type: ndcg_at_3
value: 29.767
- type: ndcg_at_5
value: 30.816
- type: precision_at_1
value: 27.96
- type: precision_at_10
value: 4.826
- type: precision_at_100
value: 0.697
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 12.837000000000002
- type: precision_at_5
value: 8.559999999999999
- type: recall_at_1
value: 24.016000000000002
- type: recall_at_10
value: 37.574999999999996
- type: recall_at_100
value: 50.843
- type: recall_at_1000
value: 64.654
- type: recall_at_3
value: 31.182
- type: recall_at_5
value: 34.055
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 18.38048892083281
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 27.103011764141478
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.18
- type: map_at_10
value: 0.457
- type: map_at_100
value: 0.634
- type: map_at_1000
value: 0.7000000000000001
- type: map_at_3
value: 0.333
- type: map_at_5
value: 0.387
- type: mrr_at_1
value: 0.8999999999999999
- type: mrr_at_10
value: 1.967
- type: mrr_at_100
value: 2.396
- type: mrr_at_1000
value: 2.495
- type: mrr_at_3
value: 1.567
- type: mrr_at_5
value: 1.7670000000000001
- type: ndcg_at_1
value: 0.8999999999999999
- type: ndcg_at_10
value: 1.022
- type: ndcg_at_100
value: 2.366
- type: ndcg_at_1000
value: 4.689
- type: ndcg_at_3
value: 0.882
- type: ndcg_at_5
value: 0.7929999999999999
- type: precision_at_1
value: 0.8999999999999999
- type: precision_at_10
value: 0.58
- type: precision_at_100
value: 0.263
- type: precision_at_1000
value: 0.084
- type: precision_at_3
value: 0.8999999999999999
- type: precision_at_5
value: 0.74
- type: recall_at_1
value: 0.18
- type: recall_at_10
value: 1.208
- type: recall_at_100
value: 5.373
- type: recall_at_1000
value: 17.112
- type: recall_at_3
value: 0.5579999999999999
- type: recall_at_5
value: 0.7779999999999999
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 55.229896309578905
- type: cos_sim_spearman
value: 48.54616726085393
- type: euclidean_pearson
value: 53.828130644322
- type: euclidean_spearman
value: 48.2907441223958
- type: manhattan_pearson
value: 53.72684612327582
- type: manhattan_spearman
value: 48.228319721712744
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 57.73555535277214
- type: cos_sim_spearman
value: 55.58790083939622
- type: euclidean_pearson
value: 61.009463373795384
- type: euclidean_spearman
value: 56.696846101196044
- type: manhattan_pearson
value: 60.875111392597894
- type: manhattan_spearman
value: 56.63100766160946
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 19.47269635955134
- type: cos_sim_spearman
value: 18.35951746300603
- type: euclidean_pearson
value: 23.130707248318714
- type: euclidean_spearman
value: 22.92241668287248
- type: manhattan_pearson
value: 22.99371642148021
- type: manhattan_spearman
value: 22.770233678121897
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 31.78346805351368
- type: cos_sim_spearman
value: 28.84281669682782
- type: euclidean_pearson
value: 34.508176962091156
- type: euclidean_spearman
value: 32.269242265609975
- type: manhattan_pearson
value: 34.41366600914297
- type: manhattan_spearman
value: 32.15352239729175
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 29.550332218260465
- type: cos_sim_spearman
value: 29.188654452524528
- type: euclidean_pearson
value: 33.80339596511417
- type: euclidean_spearman
value: 33.49607278843874
- type: manhattan_pearson
value: 33.589427741967334
- type: manhattan_spearman
value: 33.288312003652884
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 27.163752699585885
- type: cos_sim_spearman
value: 39.0544187582685
- type: euclidean_pearson
value: 38.93841642732113
- type: euclidean_spearman
value: 42.861814968921294
- type: manhattan_pearson
value: 38.78821319739337
- type: manhattan_spearman
value: 42.757121435678954
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 57.15429605615292
- type: cos_sim_spearman
value: 61.21576579300284
- type: euclidean_pearson
value: 59.2835939062064
- type: euclidean_spearman
value: 60.902713241808236
- type: manhattan_pearson
value: 59.510770285546364
- type: manhattan_spearman
value: 61.02979474159327
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 41.81726547830133
- type: cos_sim_spearman
value: 44.45123398124273
- type: euclidean_pearson
value: 46.44144033159064
- type: euclidean_spearman
value: 46.61348337508052
- type: manhattan_pearson
value: 46.48092744041165
- type: manhattan_spearman
value: 46.78049599791891
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 46.085942179295465
- type: cos_sim_spearman
value: 44.394736992467365
- type: euclidean_pearson
value: 47.06981069147408
- type: euclidean_spearman
value: 45.40499474054004
- type: manhattan_pearson
value: 46.96497631950794
- type: manhattan_spearman
value: 45.31936619298336
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 43.89526517578129
- type: mrr
value: 64.30753070458954
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.417
- type: map_at_10
value: 2.189
- type: map_at_100
value: 2.5669999999999997
- type: map_at_1000
value: 2.662
- type: map_at_3
value: 1.694
- type: map_at_5
value: 1.928
- type: mrr_at_1
value: 1.667
- type: mrr_at_10
value: 2.4899999999999998
- type: mrr_at_100
value: 2.8400000000000003
- type: mrr_at_1000
value: 2.928
- type: mrr_at_3
value: 1.944
- type: mrr_at_5
value: 2.178
- type: ndcg_at_1
value: 1.667
- type: ndcg_at_10
value: 2.913
- type: ndcg_at_100
value: 5.482
- type: ndcg_at_1000
value: 8.731
- type: ndcg_at_3
value: 1.867
- type: ndcg_at_5
value: 2.257
- type: precision_at_1
value: 1.667
- type: precision_at_10
value: 0.567
- type: precision_at_100
value: 0.213
- type: precision_at_1000
value: 0.053
- type: precision_at_3
value: 0.7779999999999999
- type: precision_at_5
value: 0.6669999999999999
- type: recall_at_1
value: 1.417
- type: recall_at_10
value: 5.028
- type: recall_at_100
value: 18.5
- type: recall_at_1000
value: 45.072
- type: recall_at_3
value: 2.083
- type: recall_at_5
value: 3.083
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.02871287128713
- type: cos_sim_ap
value: 17.404404071912694
- type: cos_sim_f1
value: 25.89285714285714
- type: cos_sim_precision
value: 29.292929292929294
- type: cos_sim_recall
value: 23.200000000000003
- type: dot_accuracy
value: 99.0118811881188
- type: dot_ap
value: 5.4739000785007335
- type: dot_f1
value: 12.178702570379436
- type: dot_precision
value: 8.774250440917108
- type: dot_recall
value: 19.900000000000002
- type: euclidean_accuracy
value: 99.03663366336633
- type: euclidean_ap
value: 19.20851069839796
- type: euclidean_f1
value: 27.16555612506407
- type: euclidean_precision
value: 27.865404837013667
- type: euclidean_recall
value: 26.5
- type: manhattan_accuracy
value: 99.03663366336633
- type: manhattan_ap
value: 19.12862913626528
- type: manhattan_f1
value: 26.96629213483146
- type: manhattan_precision
value: 28.99884925201381
- type: manhattan_recall
value: 25.2
- type: max_accuracy
value: 99.03663366336633
- type: max_ap
value: 19.20851069839796
- type: max_f1
value: 27.16555612506407
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 23.657118721775905
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 27.343558395037043
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 23.346327148080043
- type: mrr
value: 21.99097063067651
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.032
- type: map_at_10
value: 0.157
- type: map_at_100
value: 0.583
- type: map_at_1000
value: 1.48
- type: map_at_3
value: 0.066
- type: map_at_5
value: 0.105
- type: mrr_at_1
value: 10
- type: mrr_at_10
value: 16.99
- type: mrr_at_100
value: 18.284
- type: mrr_at_1000
value: 18.394
- type: mrr_at_3
value: 14.000000000000002
- type: mrr_at_5
value: 15.8
- type: ndcg_at_1
value: 8
- type: ndcg_at_10
value: 7.504
- type: ndcg_at_100
value: 5.339
- type: ndcg_at_1000
value: 6.046
- type: ndcg_at_3
value: 8.358
- type: ndcg_at_5
value: 8.142000000000001
- type: precision_at_1
value: 10
- type: precision_at_10
value: 8.6
- type: precision_at_100
value: 5.9799999999999995
- type: precision_at_1000
value: 2.976
- type: precision_at_3
value: 9.333
- type: precision_at_5
value: 9.2
- type: recall_at_1
value: 0.032
- type: recall_at_10
value: 0.252
- type: recall_at_100
value: 1.529
- type: recall_at_1000
value: 6.364
- type: recall_at_3
value: 0.08499999999999999
- type: recall_at_5
value: 0.154
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.44200000000000006
- type: map_at_10
value: 0.996
- type: map_at_100
value: 1.317
- type: map_at_1000
value: 1.624
- type: map_at_3
value: 0.736
- type: map_at_5
value: 0.951
- type: mrr_at_1
value: 4.082
- type: mrr_at_10
value: 10.102
- type: mrr_at_100
value: 10.978
- type: mrr_at_1000
value: 11.1
- type: mrr_at_3
value: 7.8229999999999995
- type: mrr_at_5
value: 9.252
- type: ndcg_at_1
value: 4.082
- type: ndcg_at_10
value: 3.821
- type: ndcg_at_100
value: 5.682
- type: ndcg_at_1000
value: 10.96
- type: ndcg_at_3
value: 4.813
- type: ndcg_at_5
value: 4.757
- type: precision_at_1
value: 4.082
- type: precision_at_10
value: 3.061
- type: precision_at_100
value: 1.367
- type: precision_at_1000
value: 0.46299999999999997
- type: precision_at_3
value: 4.7620000000000005
- type: precision_at_5
value: 4.898000000000001
- type: recall_at_1
value: 0.44200000000000006
- type: recall_at_10
value: 2.059
- type: recall_at_100
value: 7.439
- type: recall_at_1000
value: 25.191000000000003
- type: recall_at_3
value: 1.095
- type: recall_at_5
value: 1.725
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 54.925999999999995
- type: ap
value: 9.658236434063275
- type: f1
value: 43.469829154993064
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 40.7498585172609
- type: f1
value: 40.720120106546574
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 20.165152514024733
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 77.59432556476128
- type: cos_sim_ap
value: 30.37846072188074
- type: cos_sim_f1
value: 37.9231242656521
- type: cos_sim_precision
value: 24.064474898814172
- type: cos_sim_recall
value: 89.41952506596306
- type: dot_accuracy
value: 77.42146986946415
- type: dot_ap
value: 24.073476661930034
- type: dot_f1
value: 37.710580857735025
- type: dot_precision
value: 23.61083383243495
- type: dot_recall
value: 93.61477572559367
- type: euclidean_accuracy
value: 77.64797043571556
- type: euclidean_ap
value: 31.892152386237594
- type: euclidean_f1
value: 38.21154759481647
- type: euclidean_precision
value: 25.719243766554023
- type: euclidean_recall
value: 74.30079155672823
- type: manhattan_accuracy
value: 77.6539309769327
- type: manhattan_ap
value: 31.89545356309865
- type: manhattan_f1
value: 38.16428166172855
- type: manhattan_precision
value: 25.07247577238466
- type: manhattan_recall
value: 79.86807387862797
- type: max_accuracy
value: 77.6539309769327
- type: max_ap
value: 31.89545356309865
- type: max_f1
value: 38.21154759481647
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 76.56886715566422
- type: cos_sim_ap
value: 44.04480929059786
- type: cos_sim_f1
value: 43.73100054674686
- type: cos_sim_precision
value: 30.540367168647098
- type: cos_sim_recall
value: 76.97874961502926
- type: dot_accuracy
value: 74.80110218496526
- type: dot_ap
value: 26.487746384962758
- type: dot_f1
value: 40.91940608182585
- type: dot_precision
value: 25.9157358738502
- type: dot_recall
value: 97.18201416692331
- type: euclidean_accuracy
value: 76.97054371870998
- type: euclidean_ap
value: 47.079120397438416
- type: euclidean_f1
value: 45.866182572614115
- type: euclidean_precision
value: 34.580791490692945
- type: euclidean_recall
value: 68.0859254696643
- type: manhattan_accuracy
value: 76.96084138626927
- type: manhattan_ap
value: 47.168701873575976
- type: manhattan_f1
value: 45.985439966237614
- type: manhattan_precision
value: 34.974321938693635
- type: manhattan_recall
value: 67.11579919926086
- type: max_accuracy
value: 76.97054371870998
- type: max_ap
value: 47.168701873575976
- type: max_f1
value: 45.985439966237614
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 3.322530620021471
- type: cos_sim_spearman
value: 3.7583567993545195
- type: euclidean_pearson
value: 3.743782192206081
- type: euclidean_spearman
value: 3.758336694921531
- type: manhattan_pearson
value: 3.845233721819267
- type: manhattan_spearman
value: 3.8542743797718026
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 8.552640773272078
- type: cos_sim_spearman
value: 10.086360519713061
- type: euclidean_pearson
value: 9.902099049347935
- type: euclidean_spearman
value: 10.086351512635042
- type: manhattan_pearson
value: 9.898006826713932
- type: manhattan_spearman
value: 10.076531690161783
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 21.955999999999996
- type: f1
value: 20.596128116112816
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 17.6945509937099
- type: cos_sim_spearman
value: 19.312286927022825
- type: euclidean_pearson
value: 19.259393744977515
- type: euclidean_spearman
value: 19.312290390892713
- type: manhattan_pearson
value: 19.223527109645772
- type: manhattan_spearman
value: 19.32655209742963
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 18.657841790313405
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 16.82483158478091
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 19.71658789133091
- type: mrr
value: 23.480595238095237
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 22.475972401039495
- type: mrr
value: 25.993650793650797
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 1.026
- type: map_at_10
value: 1.6389999999999998
- type: map_at_100
value: 1.875
- type: map_at_1000
value: 1.9529999999999998
- type: map_at_3
value: 1.417
- type: map_at_5
value: 1.5110000000000001
- type: mrr_at_1
value: 1.525
- type: mrr_at_10
value: 2.478
- type: mrr_at_100
value: 2.779
- type: mrr_at_1000
value: 2.861
- type: mrr_at_3
value: 2.105
- type: mrr_at_5
value: 2.283
- type: ndcg_at_1
value: 1.525
- type: ndcg_at_10
value: 2.222
- type: ndcg_at_100
value: 3.81
- type: ndcg_at_1000
value: 6.465999999999999
- type: ndcg_at_3
value: 1.7489999999999999
- type: ndcg_at_5
value: 1.8980000000000001
- type: precision_at_1
value: 1.525
- type: precision_at_10
value: 0.543
- type: precision_at_100
value: 0.187
- type: precision_at_1000
value: 0.055
- type: precision_at_3
value: 0.992
- type: precision_at_5
value: 0.76
- type: recall_at_1
value: 1.026
- type: recall_at_10
value: 3.1780000000000004
- type: recall_at_100
value: 10.481
- type: recall_at_1000
value: 29.735
- type: recall_at_3
value: 1.8849999999999998
- type: recall_at_5
value: 2.2560000000000002
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 54.99699338544799
- type: cos_sim_ap
value: 57.78007274332544
- type: cos_sim_f1
value: 67.95391338895512
- type: cos_sim_precision
value: 51.46846413095811
- type: cos_sim_recall
value: 99.9766191255553
- type: dot_accuracy
value: 54.99699338544799
- type: dot_ap
value: 57.7791056074979
- type: dot_f1
value: 67.95391338895512
- type: dot_precision
value: 51.46846413095811
- type: dot_recall
value: 99.9766191255553
- type: euclidean_accuracy
value: 54.99699338544799
- type: euclidean_ap
value: 57.7800760462191
- type: euclidean_f1
value: 67.95391338895512
- type: euclidean_precision
value: 51.46846413095811
- type: euclidean_recall
value: 99.9766191255553
- type: manhattan_accuracy
value: 55.05712567648827
- type: manhattan_ap
value: 57.8146828916844
- type: manhattan_f1
value: 67.95900532295227
- type: manhattan_precision
value: 51.46811070998797
- type: manhattan_recall
value: 100
- type: max_accuracy
value: 55.05712567648827
- type: max_ap
value: 57.8146828916844
- type: max_f1
value: 67.95900532295227
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 0.632
- type: map_at_10
value: 1.7510000000000001
- type: map_at_100
value: 2.004
- type: map_at_1000
value: 2.0660000000000003
- type: map_at_3
value: 1.493
- type: map_at_5
value: 1.635
- type: mrr_at_1
value: 0.632
- type: mrr_at_10
value: 1.7670000000000001
- type: mrr_at_100
value: 2.02
- type: mrr_at_1000
value: 2.081
- type: mrr_at_3
value: 1.528
- type: mrr_at_5
value: 1.649
- type: ndcg_at_1
value: 0.632
- type: ndcg_at_10
value: 2.32
- type: ndcg_at_100
value: 3.758
- type: ndcg_at_1000
value: 5.894
- type: ndcg_at_3
value: 1.7850000000000001
- type: ndcg_at_5
value: 2.044
- type: precision_at_1
value: 0.632
- type: precision_at_10
value: 0.411
- type: precision_at_100
value: 0.11399999999999999
- type: precision_at_1000
value: 0.03
- type: precision_at_3
value: 0.878
- type: precision_at_5
value: 0.653
- type: recall_at_1
value: 0.632
- type: recall_at_10
value: 4.109999999999999
- type: recall_at_100
value: 11.222
- type: recall_at_1000
value: 29.083
- type: recall_at_3
value: 2.634
- type: recall_at_5
value: 3.267
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 1.436
- type: map_at_10
value: 3.4099999999999997
- type: map_at_100
value: 4.128
- type: map_at_1000
value: 4.282
- type: map_at_3
value: 2.423
- type: map_at_5
value: 2.927
- type: mrr_at_1
value: 6
- type: mrr_at_10
value: 9.701
- type: mrr_at_100
value: 10.347000000000001
- type: mrr_at_1000
value: 10.427999999999999
- type: mrr_at_3
value: 8.267
- type: mrr_at_5
value: 9.004
- type: ndcg_at_1
value: 6
- type: ndcg_at_10
value: 5.856
- type: ndcg_at_100
value: 9.063
- type: ndcg_at_1000
value: 12.475999999999999
- type: ndcg_at_3
value: 5.253
- type: ndcg_at_5
value: 5.223
- type: precision_at_1
value: 6
- type: precision_at_10
value: 3.125
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 4.7669999999999995
- type: precision_at_5
value: 4.15
- type: recall_at_1
value: 1.436
- type: recall_at_10
value: 6.544999999999999
- type: recall_at_100
value: 16.634999999999998
- type: recall_at_1000
value: 33.987
- type: recall_at_3
value: 3.144
- type: recall_at_5
value: 4.519
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 4.1000000000000005
- type: map_at_10
value: 7.911
- type: map_at_100
value: 8.92
- type: map_at_1000
value: 9.033
- type: map_at_3
value: 6.4
- type: map_at_5
value: 7.23
- type: mrr_at_1
value: 4.1000000000000005
- type: mrr_at_10
value: 7.911
- type: mrr_at_100
value: 8.92
- type: mrr_at_1000
value: 9.033
- type: mrr_at_3
value: 6.4
- type: mrr_at_5
value: 7.23
- type: ndcg_at_1
value: 4.1000000000000005
- type: ndcg_at_10
value: 10.374
- type: ndcg_at_100
value: 15.879999999999999
- type: ndcg_at_1000
value: 19.246
- type: ndcg_at_3
value: 7.217
- type: ndcg_at_5
value: 8.706
- type: precision_at_1
value: 4.1000000000000005
- type: precision_at_10
value: 1.8399999999999999
- type: precision_at_100
value: 0.45599999999999996
- type: precision_at_1000
value: 0.073
- type: precision_at_3
value: 3.2
- type: precision_at_5
value: 2.64
- type: recall_at_1
value: 4.1000000000000005
- type: recall_at_10
value: 18.4
- type: recall_at_100
value: 45.6
- type: recall_at_1000
value: 72.89999999999999
- type: recall_at_3
value: 9.6
- type: recall_at_5
value: 13.200000000000001
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 20.353982300884958
- type: f1
value: 12.69588085868714
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 55.497185741088174
- type: ap
value: 20.43046737602198
- type: f1
value: 48.93980371558734
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 32.588967426128654
- type: cos_sim_spearman
value: 42.14900040682406
- type: euclidean_pearson
value: 39.568373451615685
- type: euclidean_spearman
value: 42.14899152396297
- type: manhattan_pearson
value: 39.5220710244444
- type: manhattan_spearman
value: 42.14787636056146
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 1.1655156335725807
- type: mrr
value: 0.2361111111111111
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 1.9029999999999998
- type: map_at_10
value: 2.9139999999999997
- type: map_at_100
value: 3.2259999999999995
- type: map_at_1000
value: 3.2870000000000004
- type: map_at_3
value: 2.483
- type: map_at_5
value: 2.71
- type: mrr_at_1
value: 2.02
- type: mrr_at_10
value: 3.064
- type: mrr_at_100
value: 3.382
- type: mrr_at_1000
value: 3.4419999999999997
- type: mrr_at_3
value: 2.622
- type: mrr_at_5
value: 2.855
- type: ndcg_at_1
value: 2.02
- type: ndcg_at_10
value: 3.639
- type: ndcg_at_100
value: 5.431
- type: ndcg_at_1000
value: 7.404
- type: ndcg_at_3
value: 2.723
- type: ndcg_at_5
value: 3.1350000000000002
- type: precision_at_1
value: 2.02
- type: precision_at_10
value: 0.626
- type: precision_at_100
value: 0.159
- type: precision_at_1000
value: 0.033
- type: precision_at_3
value: 1.17
- type: precision_at_5
value: 0.9199999999999999
- type: recall_at_1
value: 1.9029999999999998
- type: recall_at_10
value: 5.831
- type: recall_at_100
value: 14.737
- type: recall_at_1000
value: 30.84
- type: recall_at_3
value: 3.2870000000000004
- type: recall_at_5
value: 4.282
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.3866845998655
- type: f1
value: 23.404809615998495
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.34969737726966
- type: f1
value: 37.88244646590394
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 1.5
- type: map_at_10
value: 2.0740000000000003
- type: map_at_100
value: 2.2079999999999997
- type: map_at_1000
value: 2.241
- type: map_at_3
value: 1.933
- type: map_at_5
value: 2.023
- type: mrr_at_1
value: 1.5
- type: mrr_at_10
value: 2.0740000000000003
- type: mrr_at_100
value: 2.2079999999999997
- type: mrr_at_1000
value: 2.241
- type: mrr_at_3
value: 1.933
- type: mrr_at_5
value: 2.023
- type: ndcg_at_1
value: 1.5
- type: ndcg_at_10
value: 2.368
- type: ndcg_at_100
value: 3.309
- type: ndcg_at_1000
value: 4.593
- type: ndcg_at_3
value: 2.0789999999999997
- type: ndcg_at_5
value: 2.242
- type: precision_at_1
value: 1.5
- type: precision_at_10
value: 0.33
- type: precision_at_100
value: 0.084
- type: precision_at_1000
value: 0.019
- type: precision_at_3
value: 0.8330000000000001
- type: precision_at_5
value: 0.58
- type: recall_at_1
value: 1.5
- type: recall_at_10
value: 3.3000000000000003
- type: recall_at_100
value: 8.4
- type: recall_at_1000
value: 19.400000000000002
- type: recall_at_3
value: 2.5
- type: recall_at_5
value: 2.9000000000000004
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 38.94
- type: f1
value: 38.4171730136538
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 54.141851651326476
- type: cos_sim_ap
value: 55.63298007661861
- type: cos_sim_f1
value: 67.85195936139333
- type: cos_sim_precision
value: 51.68601437258153
- type: cos_sim_recall
value: 98.73284054910243
- type: dot_accuracy
value: 54.141851651326476
- type: dot_ap
value: 55.63298007661861
- type: dot_f1
value: 67.85195936139333
- type: dot_precision
value: 51.68601437258153
- type: dot_recall
value: 98.73284054910243
- type: euclidean_accuracy
value: 54.141851651326476
- type: euclidean_ap
value: 55.63298007661861
- type: euclidean_f1
value: 67.85195936139333
- type: euclidean_precision
value: 51.68601437258153
- type: euclidean_recall
value: 98.73284054910243
- type: manhattan_accuracy
value: 54.03356794802382
- type: manhattan_ap
value: 55.650247173847944
- type: manhattan_f1
value: 67.83667621776503
- type: manhattan_precision
value: 51.32791327913279
- type: manhattan_recall
value: 100
- type: max_accuracy
value: 54.141851651326476
- type: max_ap
value: 55.650247173847944
- type: max_f1
value: 67.85195936139333
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 56.88999999999999
- type: ap
value: 56.075855594697835
- type: f1
value: 56.31094564241924
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 10.023575042969506
- type: cos_sim_spearman
value: 6.135169971774927
- type: euclidean_pearson
value: 9.219072035876794
- type: euclidean_spearman
value: 6.147945631319713
- type: manhattan_pearson
value: 9.208267921398097
- type: manhattan_spearman
value: 6.156480815791583
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 5.7230819885069435
- type: cos_sim_spearman
value: 6.116111130034651
- type: euclidean_pearson
value: 5.9142712292657205
- type: euclidean_spearman
value: 6.115732664912588
- type: manhattan_pearson
value: 5.892970378623552
- type: manhattan_spearman
value: 6.100463075081302
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 18.353401358720397
- type: cos_sim_spearman
value: 33.700002511275095
- type: euclidean_pearson
value: 27.654605278731136
- type: euclidean_spearman
value: 33.700002511275095
- type: manhattan_pearson
value: 29.174977260571083
- type: manhattan_spearman
value: 33.901862553268366
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 44.66287398363386
- type: cos_sim_spearman
value: 45.60317964713117
- type: euclidean_pearson
value: 47.434263079423
- type: euclidean_spearman
value: 45.603111040461606
- type: manhattan_pearson
value: 47.3272049502668
- type: manhattan_spearman
value: 45.506449459872805
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 60.05480951659048
- type: mrr
value: 69.58201013422746
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 1.159
- type: map_at_10
value: 2.624
- type: map_at_100
value: 3.259
- type: map_at_1000
value: 3.4090000000000003
- type: map_at_3
value: 1.9109999999999998
- type: map_at_5
value: 2.254
- type: mrr_at_1
value: 5.87
- type: mrr_at_10
value: 8.530999999999999
- type: mrr_at_100
value: 9.142999999999999
- type: mrr_at_1000
value: 9.229
- type: mrr_at_3
value: 7.498
- type: mrr_at_5
value: 8.056000000000001
- type: ndcg_at_1
value: 5.87
- type: ndcg_at_10
value: 4.641
- type: ndcg_at_100
value: 7.507999999999999
- type: ndcg_at_1000
value: 10.823
- type: ndcg_at_3
value: 4.775
- type: ndcg_at_5
value: 4.515000000000001
- type: precision_at_1
value: 5.87
- type: precision_at_10
value: 2.632
- type: precision_at_100
value: 0.762
- type: precision_at_1000
value: 0.166
- type: precision_at_3
value: 4.2299999999999995
- type: precision_at_5
value: 3.5450000000000004
- type: recall_at_1
value: 1.159
- type: recall_at_10
value: 4.816
- type: recall_at_100
value: 13.841999999999999
- type: recall_at_1000
value: 30.469
- type: recall_at_3
value: 2.413
- type: recall_at_5
value: 3.3300000000000005
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 26.786000000000005
- type: f1
value: 25.70512339530705
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 20.691386720429243
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 17.1882521768033
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 2.9000000000000004
- type: map_at_10
value: 4.051
- type: map_at_100
value: 4.277
- type: map_at_1000
value: 4.315
- type: map_at_3
value: 3.567
- type: map_at_5
value: 3.897
- type: mrr_at_1
value: 2.9000000000000004
- type: mrr_at_10
value: 4.051
- type: mrr_at_100
value: 4.277
- type: mrr_at_1000
value: 4.315
- type: mrr_at_3
value: 3.567
- type: mrr_at_5
value: 3.897
- type: ndcg_at_1
value: 2.9000000000000004
- type: ndcg_at_10
value: 4.772
- type: ndcg_at_100
value: 6.214
- type: ndcg_at_1000
value: 7.456
- type: ndcg_at_3
value: 3.805
- type: ndcg_at_5
value: 4.390000000000001
- type: precision_at_1
value: 2.9000000000000004
- type: precision_at_10
value: 0.7100000000000001
- type: precision_at_100
value: 0.146
- type: precision_at_1000
value: 0.025
- type: precision_at_3
value: 1.5
- type: precision_at_5
value: 1.18
- type: recall_at_1
value: 2.9000000000000004
- type: recall_at_10
value: 7.1
- type: recall_at_100
value: 14.6
- type: recall_at_1000
value: 24.9
- type: recall_at_3
value: 4.5
- type: recall_at_5
value: 5.8999999999999995
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 56.21999999999999
- type: ap
value: 36.53654363772411
- type: f1
value: 54.922396485449674
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1468721 with parameters:
```
{'batch_size': 160, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
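Put together, the DataLoader and fit() parameters above correspond to a training call along the following lines. This is a sketch only: the actual training pairs behind the 1,468,721-batch DataLoader are not described in the card, so the two `InputExample` pairs below are placeholders, and `{MODEL_NAME}` is kept as a placeholder exactly as in the rest of the card.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Placeholder training pairs; the real dataset behind the 1,468,721-batch DataLoader is not documented here
train_examples = [
    InputExample(texts=["This is an example sentence", "A closely related sentence"], label=0.9),
    InputExample(texts=["This is an example sentence", "An unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=160)
model = SentenceTransformer('{MODEL_NAME}')
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler='WarmupLinear',
)
```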
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 108,284 | [
[
-0.019561767578125,
-0.060546875,
0.020904541015625,
0.023681640625,
-0.0203094482421875,
-0.03179931640625,
-0.01812744140625,
-0.00024127960205078125,
0.01629638671875,
0.0272064208984375,
-0.0484619140625,
-0.046478271484375,
-0.051727294921875,
-0.001610... |
THUDM/chatglm-6b-int8 | 2023-05-15T13:00:15.000Z | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"endpoints_compatible",
"has_space",
"region:us"
] | null | THUDM | null | null | THUDM/chatglm-6b-int8 | 62 | 806 | transformers | 2023-04-14T08:35:31 | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM-6B-INT8
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1udqapmrr-ocT1DS_mxWe6dDY8ahRWzg" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
## 介绍
ChatGLM-6B 是一个开源的、支持中英双语问答的对话语言模型,基于 [General Language Model (GLM)](https://github.com/THUDM/GLM) 架构,具有 62 亿参数。结合模型量化技术,用户可以在消费级的显卡上进行本地部署(INT4 量化级别下最低只需 6GB 显存)。ChatGLM-6B 使用了和 [ChatGLM](https://chatglm.cn) 相同的技术,针对中文问答和对话进行了优化。经过约 1T 标识符的中英双语训练,辅以监督微调、反馈自助、人类反馈强化学习等技术的加持,62 亿参数的 ChatGLM-6B 已经能生成相当符合人类偏好的回答。
ChatGLM-6B-INT8 是 ChatGLM-6B 量化后的模型权重。具体的,ChatGLM-6B-INT8 对 ChatGLM-6B 中的 28 个 GLM Block 进行了 INT8 量化,没有对 Embedding 和 LM Head 进行量化。量化后的模型理论上 8G 显存(使用 CPU 即内存)即可推理,具有在嵌入式设备(如树莓派)上运行的可能。
在 CPU 上运行时,会根据硬件自动编译 CPU Kernel ,请确保已安装 GCC 和 OpenMP (Linux一般已安装,对于Windows则需手动安装),以获得最佳并行计算能力。
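If you only have a CPU, a minimal loading variant is sketched below. It is an assumption based on the general ChatGLM-6B usage pattern rather than an instruction from this card — substitute it for the `.half().cuda()` call in the example that follows:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True)
# Load on CPU: keep the weights in float32 instead of moving them to a GPU
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True).float()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```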
## 软件依赖
```shell
pip install protobuf transformers==4.27.1 cpm_kernels
```
## 代码调用
可以通过如下代码调用 ChatGLM-6B 模型来生成对话:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True).half().cuda()
>>> response, history = model.chat(tokenizer, "你好", history=[])
>>> print(response)
你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
>>> print(response)
晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
```
关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM-6B)。
## 协议
本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源,ChatGLM-6B 模型的权重的使用则需要遵循 [Model License](MODEL_LICENSE)。
## 引用
如果你觉得我们的工作有帮助的话,请考虑引用下列论文:
```
@inproceedings{
zeng2023glm-130b,
title={{GLM}-130B: An Open Bilingual Pre-trained Model},
author={Aohan Zeng and Xiao Liu and Zhengxiao Du and Zihan Wang and Hanyu Lai and Ming Ding and Zhuoyi Yang and Yifan Xu and Wendi Zheng and Xiao Xia and Weng Lam Tam and Zixuan Ma and Yufei Xue and Jidong Zhai and Wenguang Chen and Zhiyuan Liu and Peng Zhang and Yuxiao Dong and Jie Tang},
booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
year={2023},
url={https://openreview.net/forum?id=-Aw0rrrPUF}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
``` | 3,201 | [
[
-0.04168701171875,
-0.05621337890625,
0.00457763671875,
0.0267333984375,
-0.0307769775390625,
0.0035953521728515625,
-0.0219573974609375,
-0.027099609375,
0.0145263671875,
0.005916595458984375,
-0.03582763671875,
-0.0389404296875,
-0.0430908203125,
-0.007717... |
aipicasso/manga-diffusion-poc | 2023-09-20T12:53:56.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"stable-diffusion-diffusers",
"arxiv:2112.10752",
"arxiv:2212.03860",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | aipicasso | null | null | aipicasso/manga-diffusion-poc | 4 | 806 | diffusers | 2023-09-19T10:51:54 | ---
license: other
tags:
- stable-diffusion
- text-to-image
- stable-diffusion-diffusers
- diffusers
inference: true
---
# Manga Diffusion PoC Model Card

English: [Click Here](README_en.md)
# はじめに
Manga Diffusion PoC (Proof-of-Concept) はAI Picasso社が作った漫画に特化した画像生成AIです。
Manga Diffusion PoC は 著作権者から許可された画像やパブリックドメインの画像、CC-0の画像だけで学習されています。
# ライセンス
このモデルのライセンスは [Mitsua Open RAIL-M License (More restrictive variant of CreativeML Open RAIL-M)](LICENSE) です。
このモデルは**商用利用可能**ですが、"生成された画像をAIが生成したものではないと誤魔化すことはできません"。
# 使い方
[ここ](poc.safetensors)からモデルをダウンロードできます。
Diffusersを使ってモデルをダウンロードすることもできます。
以下、一般的なモデルカードの日本語訳です。
## モデル詳細
- **モデルタイプ:** 拡散モデルベースの text-to-image 生成モデル
- **言語:** 日本語
- **ライセンス:** Mitsua Open RAIL-M License
- **モデルの説明:** このモデルはプロンプトに応じて適切な画像を生成することができます。アルゴリズムは [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) と [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip) です。
- **補足:**
- **参考文献:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## モデルの使用例
Stable Diffusion v2と同じ使い方です。
たくさんの方法がありますが、2つのパターンを提供します。
- Web UI
- Diffusers
### Web UIの場合
Stable Diffusion v2 の使い方と同じく、safetensor形式のモデルファイルをモデルフォルダに入れてください。
詳しいインストール方法は、[こちらの記事](https://note.com/it_navi/n/n6ffb66513769)を参照してください。
### Diffusersの場合
[🤗's Diffusers library](https://github.com/huggingface/diffusers) を使ってください。
まずは、以下のスクリプトを実行し、ライブラリをいれてください。
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
次のスクリプトを実行し、画像を生成してください。
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch
model_id = "aipicasso/manga-diffusion-poc"
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "monochrome, grayscale, tower"
images = pipe(prompt, num_inference_steps=30, height=512, width=768).images
images[0].save("tower.png")
```
**注意**:
- [xformers](https://github.com/facebookresearch/xformers) を使うと早くなります。
- GPUを使う際にGPUのメモリが少ない人は `pipe.enable_attention_slicing()` を使ってください。
#### 想定される用途
- イラストや漫画、アニメの作画補助
- 商用・非商用は問わない
- 依頼の際のクリエイターとのコミュニケーション
- 画像生成サービスの商用提供
- 生成物の取り扱いには注意して使ってください。
- 自己表現
- このAIを使い、「あなた」らしさを発信すること
- 画像生成AIに関する報道
- 公共放送だけでなく、営利企業でも可能
- 画像合成AIに関する情報を「知る権利」は創作業界に悪影響を及ぼさないと判断したためです。また、報道の自由などを尊重しました。
- 研究開発
- Discord上でのモデルの利用
- プロンプトエンジニアリング
- ファインチューニング(追加学習とも)
- DreamBooth など
- 他のモデルとのマージ
- 本モデルの性能をFIDなどで調べること
- 本モデルがStable Diffusion以外のモデルとは独立であることをチェックサムやハッシュ関数などで調べること
- 教育
- 美大生や専門学校生の卒業制作
- 大学生の卒業論文や課題制作
- 先生が画像生成AIの現状を伝えること
- Hugging Face の Community にかいてある用途
- 日本語か英語で質問してください
#### 想定されない用途
- 物事を事実として表現するようなこと
- 収益化されているYouTubeなどのコンテンツへの使用
- 商用のサービスとして直接提供すること
- 先生を困らせるようなこと
- その他、創作業界に悪影響を及ぼすこと
# 使用してはいけない用途や悪意のある用途
- デジタル贋作 ([Digital Forgery](https://arxiv.org/abs/2212.03860)) は公開しないでください(著作権法に違反するおそれ)
- 他人の作品を無断でImage-to-Imageしないでください(著作権法に違反するおそれ)
- わいせつ物を頒布しないでください (刑法175条に違反するおそれ)
- いわゆる業界のマナーを守らないようなこと
- 事実に基づかないことを事実のように語らないようにしてください(威力業務妨害罪が適用されるおそれ)
- フェイクニュース
## モデルの限界やバイアス
### モデルの限界
- 拡散モデルや大規模言語モデルは、いまだに未知の部分が多く、その限界は判明していない。
### バイアス
- 拡散モデルや大規模言語モデルは、いまだに未知の部分が多く、バイアスは判明していない。
## 学習
**学習データ**
- [Mitsua Diffusion One](https://huggingface.co/Mitsua/mitsua-diffusion-one)
- [Manga 109-s](http://www.manga109.org/)
**学習プロセス**
- **ハードウェア:** A6000x2
## 評価結果
第三者による評価を求めています。
## 環境への影響
- **ハードウェアタイプ:** A6000x2
- **使用時間(単位は時間):** 100
- **学習した場所:** 日本
## 参考文献
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*このモデルカードは [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md) に基づいて書かれました。 | 4,633 | [
[
-0.044677734375,
-0.062225341796875,
0.03228759765625,
0.0137481689453125,
-0.02838134765625,
-0.010650634765625,
0.0098724365234375,
-0.0172119140625,
0.0251617431640625,
0.01285552978515625,
-0.03302001953125,
-0.044189453125,
-0.04779052734375,
-0.0025730... |
nayohan/ko-ref-llama2-7b-Inst | 2023-10-26T10:48:17.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2-ko",
"KoQuality",
"ko",
"dataset:DILAB-HYU/KoQuality",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | nayohan | null | null | nayohan/ko-ref-llama2-7b-Inst | 0 | 806 | transformers | 2023-10-26T08:35:39 | ---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- llama-2-ko
- KoQuality
base_model: hyunseoki/ko-ref-llama2-7b
---
This model is an instruct-tuned ko-ref-llama2-7b model, using only 10% of the [Kullm, OIG, KoAlpaca] instruction dataset.
len10_k100_mppl_n0.1.json -> 152step
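The card does not include an inference example; a minimal sketch with 🤗 Transformers is shown below. The standard `AutoModelForCausalLM` loading path and the plain-text prompt are assumptions, since the card does not document a prompt template.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nayohan/ko-ref-llama2-7b-Inst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical Korean instruction; the exact prompt format used in training is not stated in the card
prompt = "다음 질문에 답하세요: 한국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```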
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A30 24G) + CPU Offloading(160GB)
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
## Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- deepspeed 0.9.5 | 735 | [
[
-0.033599853515625,
-0.049713134765625,
0.0361328125,
0.0207061767578125,
-0.03826904296875,
-0.00018012523651123047,
-0.0009794235229492188,
-0.003787994384765625,
-0.0028896331787109375,
0.0419921875,
-0.058349609375,
-0.0224761962890625,
-0.04180908203125,
... |
timm/beitv2_large_patch16_224.in1k_ft_in22k | 2023-05-08T23:43:21.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2208.06366",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/beitv2_large_patch16_224.in1k_ft_in22k | 0 | 805 | timm | 2022-12-23T02:34:24 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for beitv2_large_patch16_224.in1k_ft_in22k
A BEiT-v2 image classification model. Trained on ImageNet-1k with self-supervised masked image modelling (MIM) using a VQ-KD encoder as a visual tokenizer (via OpenAI CLIP B/16 teacher). Fine-tuned on ImageNet-22k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 325.8
- GMACs: 61.6
- Activations (M): 63.5
- Image size: 224 x 224
- **Papers:**
- BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers: https://arxiv.org/abs/2208.06366
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit2
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beitv2_large_patch16_224.in1k_ft_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'beitv2_large_patch16_224.in1k_ft_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{peng2022beit,
title={Beit v2: Masked image modeling with vector-quantized visual tokenizers},
author={Peng, Zhiliang and Dong, Li and Bao, Hangbo and Ye, Qixiang and Wei, Furu},
journal={arXiv preprint arXiv:2208.06366},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,762 | [
[
-0.03179931640625,
-0.029327392578125,
-0.0015020370483398438,
0.00829315185546875,
-0.039703369140625,
-0.0143280029296875,
-0.00862884521484375,
-0.038421630859375,
0.013916015625,
0.0295867919921875,
-0.0279693603515625,
-0.054473876953125,
-0.055450439453125... |
digiplay/MixTape_RocknRoll_v3punk_bake_fp16 | 2023-07-22T13:31:53.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/MixTape_RocknRoll_v3punk_bake_fp16 | 4 | 805 | diffusers | 2023-06-17T16:48:38 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/78292?modelVersionId=90757
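The card itself only links to the Civitai page; for completeness, a minimal Diffusers sketch is given below, assuming the repository loads with the standard `StableDiffusionPipeline` (the prompt is an invented example, not one used by the author).
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fp16 checkpoint with the standard Stable Diffusion pipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/MixTape_RocknRoll_v3punk_bake_fp16", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a rock and roll guitarist on stage, punk poster style, vivid colors"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("rocknroll.png")
```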
Author's DEMO image:

Sample image I made:



| 828 | [
[
-0.04583740234375,
-0.024200439453125,
0.02764892578125,
0.0285491943359375,
-0.0333251953125,
-0.01251983642578125,
0.017486572265625,
-0.01180267333984375,
0.049041748046875,
0.022369384765625,
-0.056793212890625,
-0.0401611328125,
-0.02825927734375,
-0.01... |
sentence-transformers/stsb-distilroberta-base-v2 | 2022-06-15T22:26:42.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | sentence-transformers | null | null | sentence-transformers/stsb-distilroberta-base-v2 | 0 | 804 | sentence-transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/stsb-distilroberta-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/stsb-distilroberta-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/stsb-distilroberta-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/stsb-distilroberta-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/stsb-distilroberta-base-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | 3,710 | [
[
-0.0159454345703125,
-0.0635986328125,
0.0225677490234375,
0.033233642578125,
-0.02398681640625,
-0.0224761962890625,
-0.0210113525390625,
-0.00299835205078125,
0.01004791259765625,
0.0231475830078125,
-0.041473388671875,
-0.0333251953125,
-0.059051513671875,
... |
timm/tf_efficientnet_b4.ap_in1k | 2023-04-27T21:19:23.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1905.11946",
"arxiv:1911.09665",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tf_efficientnet_b4.ap_in1k | 0 | 804 | timm | 2022-12-13T00:03:32 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_b4.ap_in1k
An EfficientNet image classification model. Trained on ImageNet-1k with AdvProp (adversarial examples) in TensorFlow by the paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 19.3
- GMACs: 4.5
- Activations (M): 49.5
- Image size: 380 x 380
- **Papers:**
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- Adversarial Examples Improve Image Recognition: https://arxiv.org/abs/1911.09665
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnet_b4.ap_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b4.ap_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 24, 190, 190])
# torch.Size([1, 32, 95, 95])
# torch.Size([1, 56, 48, 48])
# torch.Size([1, 160, 24, 24])
# torch.Size([1, 448, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b4.ap_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1792, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@article{Xie2019AdversarialEI,
title={Adversarial Examples Improve Image Recognition},
author={Cihang Xie and Mingxing Tan and Boqing Gong and Jiang Wang and Alan Loddon Yuille and Quoc V. Le},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2019},
pages={816-825}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,544 | [
[
-0.029266357421875,
-0.04150390625,
-0.00774383544921875,
0.005123138427734375,
-0.018035888671875,
-0.034576416015625,
-0.0227508544921875,
-0.031463623046875,
0.01081085205078125,
0.023651123046875,
-0.0243988037109375,
-0.047515869140625,
-0.058135986328125,
... |
team-lucid/hubert-large-korean | 2023-06-30T14:27:34.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"hubert",
"feature-extraction",
"speech",
"audio",
"automatic-speech-recognition",
"custom_code",
"ko",
"arxiv:2106.07447",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | team-lucid | null | null | team-lucid/hubert-large-korean | 5 | 804 | transformers | 2023-06-04T07:13:38 | ---
license: apache-2.0
language:
- ko
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- speech
- audio
---
# hubert-large-korean
## Model Details
Hubert(Hidden-Unit BERT)는 Facebook에서 제안한 Speech Representation Learning 모델입니다.
Hubert는 기존의 음성 인식 모델과 달리, 음성 신호를 raw waveform에서 바로 학습하는 self-supervised learning 방식을 사용합니다.
이 연구는 구글의 TPU Research Cloud(TRC)를 통해 지원받은 Cloud TPU로 학습되었습니다.
### Model Description
<table>
<tr>
<td colspan="2"></td>
<td>Base</td>
<td>Large</td>
</tr>
<tr>
<td rowspan="3">CNN Encoder</td>
<td>strides</td>
<td colspan="2">5, 2, 2, 2, 2, 2, 2</td>
</tr>
<tr>
<td>kernel width</td>
<td colspan="2">10, 3, 3, 3, 3, 2, 2</td>
</tr>
<tr>
<td>channel</td>
<td colspan="2">512</td>
</tr>
<tr>
<td rowspan="4">Transformer Encoder</td>
<td>Layer</td>
<td>12</td>
<td>24</td>
</tr>
<tr>
<td>embedding dim</td>
<td>768</td>
<td>1024</td>
</tr>
<tr>
<td>inner FFN dim</td>
<td>3072</td>
<td>4096</td>
</tr>
<tr>
<td>attention heads</td>
<td>8</td>
<td>16</td>
</tr>
<tr>
<td>Projection</td>
<td>dim</td>
<td>256</td>
<td>768</td>
</tr>
<tr>
<td colspan="2">Params</td>
<td>95M</td>
<td>317M </td>
</tr>
</table>
## How to Get Started with the Model
### Pytorch
```py
import torch
from transformers import HubertModel
model = HubertModel.from_pretrained("team-lucid/hubert-large-korean")
wav = torch.ones(1, 16000)
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
### JAX/Flax
```py
import jax.numpy as jnp
from transformers import FlaxAutoModel
model = FlaxAutoModel.from_pretrained("team-lucid/hubert-large-korean", trust_remote_code=True)
wav = jnp.ones((1, 16000))
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
## Training Details
### Training Data
해당 모델은 과학기술정보통신부의 재원으로 한국지능정보사회진흥원의 지원을 받아
구축된 [자유대화 음성(일반남여)](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=109), [다화자 음성합성 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=542), [방송 콘텐츠 대화체 음성인식 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=463)
에서 약 4,000시간을 추출해 학습되었습니다.
### Training Procedure
[원 논문](https://arxiv.org/pdf/2106.07447.pdf)과 동일하게 MFCC 기반으로 Base 모델을 학습한 다음, 500 cluster로 k-means를 수행해 다시 Base와
Large 모델을 학습했습니다.
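For reference, the k-means pseudo-labelling step described above can be sketched as follows. This illustrates the general HuBERT recipe rather than the project's actual preprocessing code: the random waveform stands in for the real corpus, and clustering MFCC frames is a simplification of which features were clustered.
```python
import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans

# Stand-in waveform (60 s at 16 kHz); in practice features are pooled over the whole corpus
wav = np.random.randn(16000 * 60).astype(np.float32)
mfcc = librosa.feature.mfcc(y=wav, sr=16000, n_mfcc=13).T  # (frames, 13)

# 500 clusters, matching the second training iteration described above
kmeans = MiniBatchKMeans(n_clusters=500, batch_size=1024).fit(mfcc)

# Cluster ids act as the discrete targets predicted for masked frames
pseudo_labels = kmeans.predict(mfcc)
print(pseudo_labels[:20])
```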
#### Training Hyperparameters
| Hyperparameter | Base | Large |
|:--------------------|---------|--------:|
| Warmup Steps | 32,000 | 32,000 |
| Learning Rates | 5e-4 | 1.5e-3 |
| Batch Size | 128 | 128 |
| Weight Decay | 0.01 | 0.01 |
| Max Steps | 400,000 | 400,000 |
| Learning Rate Decay | 0.1 | 0.1 |
| \\(Adam\beta_1\\) | 0.9 | 0.9 |
| \\(Adam\beta_2\\) | 0.99 | 0.99 | | 3,079 | [
[
-0.045684814453125,
-0.046112060546875,
0.005863189697265625,
0.0223236083984375,
-0.0120849609375,
0.0016279220581054688,
-0.022369384765625,
-0.020721435546875,
0.0231475830078125,
0.00865936279296875,
-0.04400634765625,
-0.045562744140625,
-0.051177978515625,... |
hashu/my-pet-dog-hsq | 2023-08-10T13:17:18.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | hashu | null | null | hashu/my-pet-dog-hsq | 0 | 804 | diffusers | 2023-08-10T13:13:21 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-HSQ Dreambooth model trained by hashu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET527
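A usage snippet is not provided; the sketch below assumes the standard `StableDiffusionPipeline` loads this DreamBooth checkpoint, and the instance token in the prompt is a guess — replace it with whatever token was used during training.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hashu/my-pet-dog-hsq", torch_dtype=torch.float16
).to("cuda")

# Hypothetical prompt; the DreamBooth instance token is not documented in this card
prompt = "a photo of hsq dog sitting on a beach"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("my-pet-dog-hsq.png")
```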
Sample pictures of this concept:
.jpg)
| 389 | [
[
-0.05230712890625,
-0.0306854248046875,
0.03546142578125,
-0.0085296630859375,
-0.0126495361328125,
0.03558349609375,
0.028411865234375,
-0.0222625732421875,
0.038970947265625,
0.036468505859375,
-0.035858154296875,
-0.025726318359375,
-0.01151275634765625,
... |
unitary/multilingual-toxic-xlm-roberta | 2023-08-18T10:43:10.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"arxiv:1703.04009",
"arxiv:1905.12516",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | unitary | null | null | unitary/multilingual-toxic-xlm-roberta | 7 | 803 | transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: text-classification
license: apache-2.0
---
<div align="center">
**⚠️ Disclaimer:**
The huggingface models currently give different results to the detoxify library (see issue [here](https://github.com/unitaryai/detoxify/issues/15)). For the most up to date models we recommend using the models from https://github.com/unitaryai/detoxify
# 🙊 Detoxify
## Toxic Comment Classification with ⚡ Pytorch Lightning and 🤗 Transformers


</div>

## Description
Trained models & code to predict toxic comments on 3 Jigsaw challenges: Toxic comment classification, Unintended Bias in Toxic comments, Multilingual toxic comment classification.
Built by [Laura Hanu](https://laurahanu.github.io/) at [Unitary](https://www.unitary.ai/), where we are working to stop harmful content online by interpreting visual content in context.
Dependencies:
- For inference:
- 🤗 Transformers
- ⚡ Pytorch lightning
- For training will also need:
- Kaggle API (to download data)
| Challenge | Year | Goal | Original Data Source | Detoxify Model Name | Top Kaggle Leaderboard Score | Detoxify Score
|-|-|-|-|-|-|-|
| [Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) | 2018 | build a multi-headed model that’s capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate. | Wikipedia Comments | `original` | 0.98856 | 0.98636
| [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) | 2019 | build a model that recognizes toxicity and minimizes this type of unintended bias with respect to mentions of identities. You'll be using a dataset labeled for identity mentions and optimizing a metric designed to measure unintended bias. | Civil Comments | `unbiased` | 0.94734 | 0.93639
| [Jigsaw Multilingual Toxic Comment Classification](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification) | 2020 | build effective multilingual models | Wikipedia Comments + Civil Comments | `multilingual` | 0.9536 | 0.91655*
*Score not directly comparable since it is obtained on the validation set provided and not on the test set. To be updated when the test labels are made available.
It is also worth noting that the top leaderboard scores have been achieved using model ensembles. The purpose of this library was to build something user-friendly and straightforward to use.
## Limitations and ethical considerations
If words that are associated with swearing, insults or profanity are present in a comment, it is likely that it will be classified as toxic, regardless of the tone or the intent of the author e.g. humorous/self-deprecating. This could present some biases towards already vulnerable minority groups.
The intended use of this library is for research purposes, fine-tuning on carefully constructed datasets that reflect real world demographics and/or to aid content moderators in flagging out harmful content quicker.
Some useful resources about the risk of different biases in toxicity or hate speech detection are:
- [The Risk of Racial Bias in Hate Speech Detection](https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf)
- [Automated Hate Speech Detection and the Problem of Offensive Language](https://arxiv.org/pdf/1703.04009.pdf%201.pdf)
- [Racial Bias in Hate Speech and Abusive Language Detection Datasets](https://arxiv.org/pdf/1905.12516.pdf)
## Quick prediction
The `multilingual` model has been trained on 7 different languages so it should only be tested on: `english`, `french`, `spanish`, `italian`, `portuguese`, `turkish` or `russian`.
```bash
# install detoxify
pip install detoxify
```
```python
from detoxify import Detoxify
# each model takes in either a string or a list of strings
results = Detoxify('original').predict('example text')
results = Detoxify('unbiased').predict(['example text 1','example text 2'])
input_text = ['example text','exemple de texte','texto de ejemplo','testo di esempio','texto de exemplo','örnek metin','пример текста']
results = Detoxify('multilingual').predict(input_text)
# optional to display results nicely (will need to pip install pandas)
import pandas as pd
print(pd.DataFrame(results, index=input_text).round(5))
```
For more details check the Prediction section.
## Labels
All challenges have a toxicity label. The toxicity labels represent the aggregate ratings of up to 10 annotators according to the following schema:
- **Very Toxic** (a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective)
- **Toxic** (a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective)
- **Hard to Say**
- **Not Toxic**
More information about the labelling schema can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).
### Toxic Comment Classification Challenge
This challenge includes the following labels:
- `toxic`
- `severe_toxic`
- `obscene`
- `threat`
- `insult`
- `identity_hate`
### Jigsaw Unintended Bias in Toxicity Classification
This challenge has 2 types of labels: the main toxicity labels and some additional identity labels that represent the identities mentioned in the comments.
Only identities with more than 500 examples in the test set (combined public and private) are included during training as additional labels and in the evaluation calculation.
- `toxicity`
- `severe_toxicity`
- `obscene`
- `threat`
- `insult`
- `identity_attack`
- `sexual_explicit`
Identity labels used:
- `male`
- `female`
- `homosexual_gay_or_lesbian`
- `christian`
- `jewish`
- `muslim`
- `black`
- `white`
- `psychiatric_or_mental_illness`
A complete list of all the identity labels available can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).
### Jigsaw Multilingual Toxic Comment Classification
Since this challenge combines the data from the previous 2 challenges, it includes all labels from above; however, the final evaluation is only on:
- `toxicity`
## How to run
First, install dependencies
```bash
# clone project
git clone https://github.com/unitaryai/detoxify
# create virtual env
python3 -m venv toxic-env
source toxic-env/bin/activate
# install project
pip install -e detoxify
cd detoxify
# for training
pip install -r requirements.txt
```
## Prediction
Trained models summary:
|Model name| Transformer type| Data from
|:--:|:--:|:--:|
|`original`| `bert-base-uncased` | Toxic Comment Classification Challenge
|`unbiased`| `roberta-base`| Unintended Bias in Toxicity Classification
|`multilingual`| `xlm-roberta-base`| Multilingual Toxic Comment Classification
For a quick prediction, you can run the example script on a comment directly or on a txt file containing a list of comments.
```bash
# load model via torch.hub
python run_prediction.py --input 'example' --model_name original
# load model from from checkpoint path
python run_prediction.py --input 'example' --from_ckpt_path model_path
# save results to a .csv file
python run_prediction.py --input test_set.txt --model_name original --save_to results.csv
# to see usage
python run_prediction.py --help
```
Checkpoints can be downloaded from the latest release or via the Pytorch hub API with the following names:
- `toxic_bert`
- `unbiased_toxic_roberta`
- `multilingual_toxic_xlm_r`
```python
model = torch.hub.load('unitaryai/detoxify','toxic_bert')
```
Importing detoxify in python:
```python
from detoxify import Detoxify
results = Detoxify('original').predict('some text')
results = Detoxify('unbiased').predict(['example text 1','example text 2'])
input_text = ['example text','exemple de texte','texto de ejemplo','testo di esempio','texto de exemplo','örnek metin','пример текста']
results = Detoxify('multilingual').predict(input_text)
# to display results nicely
import pandas as pd
print(pd.DataFrame(results,index=input_text).round(5))
```
## Training
If you do not already have a Kaggle account:
- you need to create one to be able to download the data
- go to My Account and click on Create New API Token - this will download a kaggle.json file
- make sure this file is located in ~/.kaggle
```bash
# create data directory
mkdir jigsaw_data
cd jigsaw_data
# download data
kaggle competitions download -c jigsaw-toxic-comment-classification-challenge
kaggle competitions download -c jigsaw-unintended-bias-in-toxicity-classification
kaggle competitions download -c jigsaw-multilingual-toxic-comment-classification
```
## Start Training
### Toxic Comment Classification Challenge
```bash
python create_val_set.py
python train.py --config configs/Toxic_comment_classification_BERT.json
```
### Unintended Bias in Toxicity Challenge
```bash
python train.py --config configs/Unintended_bias_toxic_comment_classification_RoBERTa.json
```
### Multilingual Toxic Comment Classification
This is trained in 2 stages. First, train on all available data, and second, train only on the translated versions of the first challenge.
The [translated data](https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api) can be downloaded from Kaggle in french, spanish, italian, portuguese, turkish, and russian (the languages available in the test set).
```bash
# stage 1
python train.py --config configs/Multilingual_toxic_comment_classification_XLMR.json
# stage 2
python train.py --config configs/Multilingual_toxic_comment_classification_XLMR_stage2.json
```
### Monitor progress with tensorboard
```bash
tensorboard --logdir=./saved
```
## Model Evaluation
### Toxic Comment Classification Challenge
This challenge is evaluated on the mean AUC score of all the labels.
```bash
python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
```
### Unintended Bias in Toxicity Challenge
This challenge is evaluated on a novel bias metric that combines different AUC scores to balance overall performance. More information on this metric [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation).
```bash
python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
# to get the final bias metric
python model_eval/compute_bias_metric.py
```
### Multilingual Toxic Comment Classification
This challenge is evaluated on the AUC score of the main toxic label.
```bash
python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv
```
### Citation
```
@misc{Detoxify,
title={Detoxify},
author={Hanu, Laura and {Unitary team}},
howpublished={Github. https://github.com/unitaryai/detoxify},
year={2020}
}
``` | 11,121 | [
[
-0.011077880859375,
-0.036773681640625,
0.0302734375,
0.01555633544921875,
-0.00029659271240234375,
-0.003467559814453125,
-0.00299835205078125,
-0.036407470703125,
0.007183074951171875,
0.0278167724609375,
-0.03790283203125,
-0.05389404296875,
-0.04714965820312... |
FremyCompany/BioLORD-STAMB2-v1 | 2023-10-06T16:49:00.000Z | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:FremyCompany/BioLORD-Dataset",
"arxiv:2210.11892",
"license:other",
"endpoints_compatible",
"region:us"
] | sentence-similarity | FremyCompany | null | null | FremyCompany/BioLORD-STAMB2-v1 | 11 | 802 | sentence-transformers | 2022-10-20T19:37:34 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: other
datasets:
- FremyCompany/BioLORD-Dataset
widget:
- source_sentence: bartonellosis
sentences:
- cat scratch disease
- cat scratch wound
- tick-borne orbivirus fever
- cat fur
---
# FremyCompany/BioLORD-STAMB2-v1
This model was trained using BioLORD, a new pre-training strategy for producing meaningful representations for clinical sentences and biomedical concepts.
State-of-the-art methodologies operate by maximizing the similarity in representation of names referring to the same concept, and preventing collapse through contrastive learning. However, because biomedical names are not always self-explanatory, it sometimes results in non-semantic representations.
BioLORD overcomes this issue by grounding its concept representations using definitions, as well as short descriptions derived from a multi-relational knowledge graph consisting of biomedical ontologies. Thanks to this grounding, our model produces more semantic concept representations that match more closely the hierarchical structure of ontologies. BioLORD establishes a new state of the art for text similarity on both clinical sentences (MedSTS) and biomedical concepts (MayoSRS).
This model is based on [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) and was further finetuned on the [BioLORD-Dataset](https://huggingface.co/datasets/FremyCompany/BioLORD-Dataset).
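As a rough illustration of the idea of grounding name representations in definitions with a contrastive objective, the sketch below trains on toy (name, description) pairs with in-batch negatives. It is not the authors' training code, and the two pairs are invented examples.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Toy (concept name, short description) pairs standing in for ontology-derived definitions
pairs = [
    InputExample(texts=["bartonellosis", "infectious disease caused by Bartonella bacteria"]),
    InputExample(texts=["cat scratch disease", "bartonellosis transmitted through a cat scratch"]),
]
loader = DataLoader(pairs, shuffle=True, batch_size=2)

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
# In-batch negatives pull matching name/description pairs together and push the rest apart
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```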
<img width="640" src="https://s3.amazonaws.com/moonup/production/uploads/1665568401241-5f04e8865d08220171a0ad3f.png" />
## General purpose
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model has been finetuned for the biomedical domain. While it preserves a good ability to produce embeddings for general-purpose text, it will be more useful to you if you are trying to process medical documents such as EHR records or clinical notes. Both sentences and phrases can be embedded in the same latent space.
## Citation
This model accompanies the [BioLORD: Learning Ontological Representations from Definitions](https://arxiv.org/abs/2210.11892) paper, accepted in the EMNLP 2022 Findings. When you use this model, please cite the original paper as follows:
```latex
@inproceedings{remy-etal-2022-biolord,
title = "{B}io{LORD}: Learning Ontological Representations from Definitions for Biomedical Concepts and their Textual Descriptions",
author = "Remy, François and
Demuynck, Kris and
Demeester, Thomas",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.104",
pages = "1454--1465",
abstract = "This work introduces BioLORD, a new pre-training strategy for producing meaningful representations for clinical sentences and biomedical concepts. State-of-the-art methodologies operate by maximizing the similarity in representation of names referring to the same concept, and preventing collapse through contrastive learning. However, because biomedical names are not always self-explanatory, it sometimes results in non-semantic representations. BioLORD overcomes this issue by grounding its concept representations using definitions, as well as short descriptions derived from a multi-relational knowledge graph consisting of biomedical ontologies. Thanks to this grounding, our model produces more semantic concept representations that match more closely the hierarchical structure of ontologies. BioLORD establishes a new state of the art for text similarity on both clinical sentences (MedSTS) and biomedical concepts (MayoSRS).",
}
```
You might also want to take a look at our MWE 2023 Paper:
- [Detecting Idiomatic Multiword Expressions in Clinical Terminology using Definition-Based Representation Learning](https://www.researchgate.net/publication/370426650_Detecting_Idiomatic_Multiword_Expressions_in_Clinical_Terminology_using_Definition-Based_Representation_Learning)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"]
model = SentenceTransformer('FremyCompany/BioLORD-STAMB2-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
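Since the embeddings are meant for tasks like clustering and semantic search, a minimal ranking sketch may help; it uses the `util.cos_sim` helper from `sentence-transformers` (not shown elsewhere on this card), and the query and candidate terms are purely illustrative:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('FremyCompany/BioLORD-STAMB2-v1')

# Illustrative clinical query and candidate concept names (hypothetical examples)
query = "patient scratched by a cat now presents with swollen lymph nodes"
candidates = ["cat scratch disease", "tick-borne orbivirus fever", "cat fur"]

# Embed query and candidates into the same 768-dimensional space
query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Rank candidates by cosine similarity to the query
scores = util.cos_sim(query_emb, cand_embs)[0]
for term, score in sorted(zip(candidates, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.3f} {term}")
```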
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('FremyCompany/BioLORD-STAMB2-v1')
model = AutoModel.from_pretrained('FremyCompany/BioLORD-STAMB2-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## License
My own contributions for this model are covered by the MIT license.
However, given the data used to train this model originates from UMLS, you will need to ensure you have proper licensing of UMLS before using this model. UMLS is free of charge in most countries, but you might have to create an account and report on your usage of the data yearly to keep a valid license. | 6,796 | [
[
-0.00965118408203125,
-0.0567626953125,
0.037322998046875,
-0.0011043548583984375,
-0.0170135498046875,
-0.006160736083984375,
-0.0004398822784423828,
-0.02606201171875,
0.040069580078125,
0.03729248046875,
-0.0267486572265625,
-0.051666259765625,
-0.06149291992... |
timm/convnext_small.in12k | 2023-03-31T22:35:55.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-12k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convnext_small.in12k | 0 | 802 | timm | 2023-01-11T22:35:40 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-12k
---
# Model card for convnext_small.in12k
A ConvNeXt image classification model. Trained in `timm` on ImageNet-12k (an 11,821-class subset of the full ImageNet-22k) by Ross Wightman.
ImageNet-12k training done on TPUs thanks to the support of the [TRC](https://sites.research.google/trc/about/) program.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 58.5
- GMACs: 8.7
- Activations (M): 21.6
- Image size: 224 x 224
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_small.in12k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_small.in12k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_small.in12k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
| 15,764 | [
[
-0.067138671875,
-0.0310211181640625,
-0.0029773712158203125,
0.03594970703125,
-0.030517578125,
-0.01419830322265625,
-0.0130462646484375,
-0.03692626953125,
0.06597900390625,
0.0166473388671875,
-0.0428466796875,
-0.042236328125,
-0.05096435546875,
-0.0028... |
gligen/diffusers-inpainting-text-box | 2023-06-21T19:42:37.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | gligen | null | null | gligen/diffusers-inpainting-text-box | 1 | 802 | diffusers | 2023-03-11T03:43:50 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
widget:
- text: "A high tech solarpunk utopia in the Amazon rainforest"
example_title: Amazon rainforest
- text: "A pikachu fine dining with a view to the Eiffel Tower"
example_title: Pikachu in Paris
- text: "A mecha robot in a favela in expressionist style"
example_title: Expressionist robot
- text: "an insect robot preparing a delicious meal"
example_title: Insect robot
- text: "A small cabin on top of a snowy mountain in the style of Disney, artstation"
example_title: Snowy disney cabin
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-4 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
The weights here are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
### PyTorch
```bash
pip install --upgrade diffusers transformers scipy
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 4GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:
```py
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
model_id = "CompVis/stable-diffusion-v1-4"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
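The pipeline call also accepts a `generator` argument (not shown in the card's own examples) for reproducible outputs; a minimal sketch, with an arbitrary seed:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Fixing the generator seed fixes the sampled noise, so the same prompt yields the same image
generator = torch.Generator("cuda").manual_seed(1024)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, generator=generator).images[0]
image.save("astronaut_rides_horse_seeded.png")
```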
### JAX/Flax
To use StableDiffusion on TPUs and GPUs for faster inference you can leverage JAX/Flax.
Running the pipeline with default PNDMScheduler
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, num_samples)
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
**Note**:
If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from the "bf16" branch.
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, num_samples)
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
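When the checker fires, the Diffusers pipeline replaces the flagged image with a black image and reports the flag in the pipeline output; a minimal sketch, assuming the `nsfw_content_detected` field of the output object in recent diffusers versions:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")

result = pipe("a photo of an astronaut riding a horse on mars")

# One boolean per generated image; True means the safety checker flagged (and blacked out) that image
print(result.nsfw_content_detected)
```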
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (a short sketch after this list illustrates the resulting shapes).
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
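As a quick sanity check of the 8x downsampling and 4-channel latents described above, the pipeline's autoencoder can be called directly; this is a minimal sketch that assumes the `pipe.vae.encode(...).latent_dist` access path from diffusers and uses a random tensor as a stand-in for a preprocessed image:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")

# Dummy batch standing in for one preprocessed 512x512 RGB image scaled to [-1, 1]
image = torch.randn(1, 3, 512, 512, dtype=torch.float16, device="cuda")

with torch.no_grad():
    latents = pipe.vae.encode(image).latent_dist.sample()

# H x W x 3 -> H/8 x W/8 x 4: a 512x512 image maps to a 1 x 4 x 64 x 64 latent
print(latents.shape)
```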
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
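Both of the knobs swept in this evaluation are exposed on the pipeline call as `guidance_scale` and `num_inference_steps`; the sketch below is illustrative and mirrors a few of the evaluated settings rather than recommending them:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"

# Compare a few classifier-free guidance scales at 50 sampling steps, re-seeding for each scale
for scale in (1.5, 3.0, 8.0):
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(prompt, guidance_scale=scale, num_inference_steps=50, generator=generator).images[0]
    image.save(f"astronaut_cfg_{scale}.png")
```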
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* | 17,063 | [
[
-0.0295867919921875,
-0.06103515625,
0.037078857421875,
0.018280029296875,
-0.0161590576171875,
-0.0352783203125,
-0.0010528564453125,
-0.021697998046875,
-0.00891876220703125,
0.03302001953125,
-0.0204925537109375,
-0.0352783203125,
-0.04534912109375,
-0.01... |
uer/roberta-base-finetuned-chinanews-chinese | 2023-10-17T15:20:11.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"zh",
"arxiv:1909.05658",
"arxiv:2212.06385",
"arxiv:1708.02657",
"endpoints_compatible",
"region:us"
] | text-classification | uer | null | null | uer/roberta-base-finetuned-chinanews-chinese | 30 | 801 | transformers | 2022-03-02T23:29:05 | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). Besides, the models could also be fine-tuned by [TencentPretrain](https://github.com/Tencent/TencentPretrain) introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with parameters above one billion, and extends it to a multimodal pre-training framework.
You can download the 5 Chinese RoBERTa-Base classification models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| Dataset | Link |
| :-----------: | :-------------------------------------------------------: |
| **JD full** | [**roberta-base-finetuned-jd-full-chinese**][jd_full] |
| **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][jd_binary] |
| **Dianping** | [**roberta-base-finetuned-dianping-chinese**][dianping] |
| **Ifeng** | [**roberta-base-finetuned-ifeng-chinese**][ifeng] |
| **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][chinanews] |
## How to use
You can use this model directly with a pipeline for text classification (take the case of roberta-base-finetuned-chinanews-chinese):
```python
>>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("北京上个月召开了两会")
[{'label': 'mainland China politics', 'score': 0.7211663722991943}]
```
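The pipeline above returns only the best label; to inspect the scores of all topic classes, a minimal sketch (the `top_k=None` argument applies to recent transformers versions, while older releases use `return_all_scores=True` instead):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

# Return every class with its probability instead of only the top one
all_scores = text_classification("北京上个月召开了两会", top_k=None)
for item in all_scores:
    print(item['label'], round(item['score'], 4))
```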
## Training data
5 Chinese text classification datasets are used. JD full, JD binary, and Dianping datasets consist of user reviews of different sentiment polarities. Ifeng and Chinanews consist of first paragraphs of news articles of different topic classes. They are collected by [Glyph](https://github.com/zhangxiangxiao/glyph) project and more details are discussed in the corresponding [paper](https://arxiv.org/abs/1708.02657).
## Training procedure
Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved. We use the same hyper-parameters across the different models.
Taking the case of roberta-base-finetuned-chinanews-chinese:
```
python3 finetune/run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path datasets/glyph/chinanews/train.tsv \
--dev_path datasets/glyph/chinanews/dev.tsv \
--output_model_path models/chinanews_classifier_model.bin \
--learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{zhang2017encoding,
title={Which encoding is the best for text classification in chinese, english, japanese and korean?},
author={Zhang, Xiang and LeCun, Yann},
journal={arXiv preprint arXiv:1708.02657},
year={2017}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
```
[jd_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese
[jd_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese
[dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese
[ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese
[chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese | 5,561 | [
[
-0.02069091796875,
-0.034820556640625,
0.016357421875,
0.0274810791015625,
-0.0264739990234375,
-0.02557373046875,
-0.038421630859375,
-0.03448486328125,
-0.0008172988891601562,
0.023101806640625,
-0.034881591796875,
-0.04547119140625,
-0.041259765625,
0.006... |
timm/maxvit_large_tf_384.in21k_ft_in1k | 2023-05-11T00:10:09.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2204.01697",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/maxvit_large_tf_384.in21k_ft_in1k | 0 | 801 | timm | 2022-12-02T21:52:59 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for maxvit_large_tf_384.in21k_ft_in1k
An official MaxViT image classification model. Pretrained in Tensorflow on ImageNet-21k (a 21,843-class, Google-specific instance of ImageNet-22k) and fine-tuned on ImageNet-1k by the paper authors.
Ported from the official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm`-specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` exactly match Tensorflow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
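To see which of these variants actually ship with pretrained weights, the `timm` model registry can be filtered by the naming conventions described above; a minimal sketch using `timm.list_models` (not shown elsewhere on this card), with illustrative wildcard patterns:
```python
import timm

# All MaxViT / CoAtNet family models that have pretrained weights available
print(timm.list_models('*maxvit*', pretrained=True))
print(timm.list_models('*coatnet*', pretrained=True))

# Only the Tensorflow-ported "tf" MaxViT weights mentioned above
print(timm.list_models('maxvit*_tf_*', pretrained=True))
```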
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 212.0
- GMACs: 132.6
- Activations (M): 445.8
- Image size: 384 x 384
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_large_tf_384.in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_large_tf_384.in21k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 192, 192])
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1024, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_large_tf_384.in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| 22,291 | [
[
-0.052947998046875,
-0.031524658203125,
0.0013437271118164062,
0.03173828125,
-0.0255279541015625,
-0.0174102783203125,
-0.0124359130859375,
-0.0252838134765625,
0.0538330078125,
0.016510009765625,
-0.0419921875,
-0.046356201171875,
-0.04754638671875,
-0.004... |
timm/tf_efficientnet_b2.ap_in1k | 2023-04-27T21:17:59.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1905.11946",
"arxiv:1911.09665",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tf_efficientnet_b2.ap_in1k | 0 | 801 | timm | 2022-12-13T00:02:17 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_b2.ap_in1k
An EfficientNet image classification model. Trained on ImageNet-1k with AdvProp (adversarial examples) in TensorFlow by the paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 9.1
- GMACs: 1.0
- Activations (M): 13.8
- Image size: 260 x 260
- **Papers:**
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- Adversarial Examples Improve Image Recognition: https://arxiv.org/abs/1911.09665
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnet_b2.ap_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b2.ap_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 130, 130])
# torch.Size([1, 24, 65, 65])
# torch.Size([1, 48, 33, 33])
# torch.Size([1, 120, 17, 17])
# torch.Size([1, 352, 9, 9])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b2.ap_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1408, 9, 9) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@article{Xie2019AdversarialEI,
title={Adversarial Examples Improve Image Recognition},
author={Cihang Xie and Mingxing Tan and Boqing Gong and Jiang Wang and Alan Loddon Yuille and Quoc V. Le},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2019},
pages={816-825}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,539 | [
[
-0.0287017822265625,
-0.04217529296875,
-0.0082244873046875,
0.005096435546875,
-0.01806640625,
-0.033721923828125,
-0.0233154296875,
-0.0322265625,
0.0101165771484375,
0.023712158203125,
-0.023712158203125,
-0.046142578125,
-0.058563232421875,
-0.0134353637... |
timm/regnety_032.ra_in1k | 2023-03-21T06:38:20.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/regnety_032.ra_in1k | 0 | 801 | timm | 2023-03-21T06:38:10 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for regnety_032.ra_in1k
A RegNetY-3.2GF image classification model. Trained on ImageNet-1k by Ross Wightman in `timm`.
The `timm` RegNet implementation includes a number of enhancements not present in other implementations (a brief usage sketch follows this list), including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* configurable output stride (dilation)
* configurable activation and norm layers
* option for a pre-activation bottleneck block used in RegNetV variant
* only known RegNetZ model definitions with pretrained weights
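Most of these options can be toggled directly when creating the model. The sketch below is not part of the original card, and exact argument support can vary across models and `timm` versions.

```python
import timm

# Stochastic depth and dilated (reduced output stride) stages are plain
# create_model arguments; gradient checkpointing is toggled on the model itself.
model = timm.create_model(
    'regnety_032.ra_in1k',
    pretrained=True,
    drop_path_rate=0.1,  # stochastic depth
    output_stride=16,    # dilation for dense prediction tasks
)
model.set_grad_checkpointing(True)  # trade compute for memory during training
```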
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 19.4
- GMACs: 3.2
- Activations (M): 11.3
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- Designing Network Design Spaces: https://arxiv.org/abs/2003.13678
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('regnety_032.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_032.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 72, 56, 56])
# torch.Size([1, 216, 28, 28])
# torch.Size([1, 576, 14, 14])
# torch.Size([1, 1512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_032.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`.
|model |img_size|top1 |top5 |param_count|gmacs|macts |
|-------------------------|--------|------|------|-----------|-----|------|
|[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 |
|[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 |
|[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 |
|[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 |
|[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49|
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 |
|[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 |
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 |
|[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 |
|[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83|
|[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 |
|[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 |
|[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 |
|[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 |
|[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 |
|[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 |
|[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 |
|[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 |
|[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 |
|[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 |
|[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 |
|[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 |
|[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 |
|[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 |
|[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 |
|[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 |
|[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 |
|[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 |
|[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 |
|[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 |
|[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 |
|[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 |
|[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 |
|[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 |
|[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 |
|[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 |
|[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 |
|[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 |
|[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 |
|[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 |
|[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 |
|[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 |
|[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 |
|[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 |
|[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 |
|[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 |
|[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 |
|[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 |
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 |
|[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 |
## Citation
```bibtex
@InProceedings{Radosavovic2020,
title = {Designing Network Design Spaces},
author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r},
booktitle = {CVPR},
year = {2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,533 | [
[
-0.06011962890625,
-0.016754150390625,
-0.01306915283203125,
0.036651611328125,
-0.031890869140625,
-0.008331298828125,
-0.0107879638671875,
-0.039794921875,
0.07568359375,
0.0055999755859375,
-0.050689697265625,
-0.038055419921875,
-0.047576904296875,
0.003... |
timm/deit_small_distilled_patch16_224.fb_in1k | 2023-03-28T01:33:36.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/deit_small_distilled_patch16_224.fb_in1k | 0 | 801 | timm | 2023-03-28T01:33:21 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for deit_small_distilled_patch16_224.fb_in1k
A DeiT image classification model. Trained on ImageNet-1k using distillation tokens by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.4
- GMACs: 4.6
- Activations (M): 12.0
- Image size: 224 x 224
- **Papers:**
- Training data-efficient image transformers & distillation through attention: https://arxiv.org/abs/2012.12877
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('deit_small_distilled_patch16_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'deit_small_distilled_patch16_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 198, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{pmlr-v139-touvron21a,
title = {Training data-efficient image transformers & distillation through attention},
author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
booktitle = {International Conference on Machine Learning},
pages = {10347--10357},
year = {2021},
volume = {139},
month = {July}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,273 | [
[
-0.03704833984375,
-0.037933349609375,
0.01126861572265625,
0.01284027099609375,
-0.032867431640625,
-0.022125244140625,
-0.016998291015625,
-0.0225067138671875,
0.007007598876953125,
0.01294708251953125,
-0.0400390625,
-0.047454833984375,
-0.059814453125,
0... |
ibm/re2g-reranker-nq | 2023-05-16T14:30:25.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"information retrieval",
"reranking",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ibm | null | null | ibm/re2g-reranker-nq | 3 | 799 | transformers | 2022-07-29T16:05:21 | ---
tags:
- information retrieval
- reranking
license: apache-2.0
---
# Model Card for NQ Reranker in Re2G
# Model Details
> The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output.
>
>It has been previously established that results from initial retrieval can be greatly improved through the use of a reranker. Therefore we hypothesized that natural language generation systems incorporating retrieval can benefit from reranking.
>
>In addition to improving the ranking of passages returned from DPR, a reranker can be used after merging the results of multiple retrieval methods with incomparable scores. For example, the scores returned by BM25 are not comparable to the inner products from DPR. Using the scores from a reranker, we can find the top-k documents from the union of DPR and BM25 results. The figure below illustrates our extension of RAG with a reranker. We call our system Re2G (*Re*trieve, *Re*rank, *G*enerate).
<img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%">
## Training, Evaluation and Inference
The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g).
## Usage
The best way to use the model is by adapting the [reranker_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/reranker/reranker_apply.py) script; a minimal standalone sketch is shown below.
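The following sketch scores query/passage pairs with Hugging Face Transformers. It is not taken from the Re2G repository: the pair encoding and the choice of logit used as the relevance score are assumptions, so treat `reranker_apply.py` as the authoritative reference.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ibm/re2g-reranker-nq"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "who wrote the opera carmen"
passages = [
    "Georges Bizet composed the opera Carmen, which premiered in 1875.",
    "Carmen is a municipality in the province of Bohol, Philippines.",
]

# Encode each (query, passage) pair together, truncating long passages.
enc = tokenizer(
    [query] * len(passages),
    passages,
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**enc).logits  # shape: (num_passages, num_labels)

# Assumption: a single logit (or the positive-class logit) encodes relevance.
scores = logits[:, 0] if logits.shape[-1] == 1 else logits[:, 1]
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}\t{passage}")
```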
## Citation
```
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
## Model Description
The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf):
> As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.
- **Developed by:** IBM
- **Shared by [Optional]:** IBM
- **Model type:** Query/Passage Reranker
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Parent Model:** [BERT-base trained on MSMARCO](https://huggingface.co/nboost/pt-bert-base-uncased-msmarco)
- **Resources for more information:**
- [GitHub Repo](https://github.com/IBM/kgi-slot-filling)
- [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf)
# Uses
## Direct Use
This model can be used for the task of reranking passage results for a question.
# Citation
**BibTeX:**
```bibtex
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
| 7,423 | [
[
-0.0122222900390625,
-0.0528564453125,
0.0276641845703125,
-0.00183868408203125,
-0.0218048095703125,
0.00543212890625,
-0.02398681640625,
-0.018157958984375,
0.0113372802734375,
0.0201263427734375,
-0.02825927734375,
-0.0318603515625,
-0.054168701171875,
-0... |
tsmatz/mt5_summarize_japanese | 2023-09-12T00:28:02.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | summarization | tsmatz | null | null | tsmatz/mt5_summarize_japanese | 10 | 799 | transformers | 2022-11-26T10:51:27 | ---
language:
- ja
license: apache-2.0
tags:
- summarization
- generated_from_trainer
- mt5
metrics:
- rouge
widget:
- text: 世界中では約120のワクチンの開発が進められている。英オックスフォード大学の専門家たちはすでに臨床試験を開始している。 新しいアプローチ 多くの従来のワクチンは、弱体化させたウイルスや改変したウイルスなどがもとになっている。しかし今回のワクチンは新しいアプローチに基づいたもので、遺伝子のRNA(リボ核酸)を使う。
筋肉に注射すると、RNAは自己増殖し、新型ウイルスの表面にみられるスパイクタンパク質のコピーをつくるよう、体内の細胞に指示を出す。 この方法で、COVID-19(新型ウイルスによる感染症)を発症することなく新型ウイルスを認識して戦うための免疫システムを訓練できるという。
シャトック教授は、「我々はゼロからワクチンを製造し、わずか数カ月で臨床試験に持ち込むことができた」と述べた。
- text: サッカーのワールドカップカタール大会、世界ランキング24位でグループEに属する日本は、23日の1次リーグ初戦において、世界11位で過去4回の優勝を誇るドイツと対戦しました。試合は前半、ドイツの一方的なペースではじまりましたが、後半、日本の森保監督は攻撃的な選手を積極的に動員して流れを変えました。結局、日本は前半に1点を奪われましたが、途中出場の堂安律選手と浅野拓磨選手が後半にゴールを決め、2対1で逆転勝ちしました。ゲームの流れをつかんだ森保采配が功を奏しました。
base_model: google/mt5-small
model-index:
- name: mt5_summarize_japanese
results: []
---
# mt5_summarize_japanese
(Japanese caption : 日本語の要約のモデル)
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) trained for Japanese summarization.
This model is fine-tuned on BBC news articles ([XL-Sum Japanese dataset](https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/japanese)), in which the first (headline) sentence is used as the summary and the remaining sentences as the article.<br>
So, **please enter a news story (covering, for example, the event, background, result, and comments) as the source text in the inference widget**. (Other corpora - such as conversations, business documents, academic papers, or short tales - are not seen in the training set.)
It achieves the following results on the evaluation set:
- Loss: 1.8952
- Rouge1: 0.4625
- Rouge2: 0.2866
- Rougel: 0.3656
- Rougelsum: 0.3868
## Intended uses
```python
from transformers import pipeline
seq2seq = pipeline("summarization", model="tsmatz/mt5_summarize_japanese")
sample_text = "サッカーのワールドカップカタール大会、世界ランキング24位でグループEに属する日本は、23日の1次リーグ初戦において、世界11位で過去4回の優勝を誇るドイツと対戦しました。試合は前半、ドイツの一方的なペースではじまりましたが、後半、日本の森保監督は攻撃的な選手を積極的に動員して流れを変えました。結局、日本は前半に1点を奪われましたが、途中出場の堂安律選手と浅野拓磨選手が後半にゴールを決め、2対1で逆転勝ちしました。ゲームの流れをつかんだ森保采配が功を奏しました。"
result = seq2seq(sample_text)
print(result)
```
## Training procedure
You can download the source code for fine-tuning from [here](https://github.com/tsmatz/huggingface-finetune-japanese/blob/master/02-summarize.ipynb).
### Training hyperparameters
The following hyperparameters were used during training (a rough code sketch follows this list):
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 90
- num_epochs: 10
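As a rough illustration only (not from the original card), the list above maps onto `Seq2SeqTrainingArguments` roughly as follows; the linked fine-tuning notebook remains the reference implementation, and the `output_dir` is a hypothetical name.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5_summarize_japanese",
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # effective batch size 32
    warmup_steps=90,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="steps",
    eval_steps=100,                  # matches the results table below
    predict_with_generate=True,      # needed to compute ROUGE during evaluation
)
```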
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 4.2501 | 0.36 | 100 | 3.3685 | 0.3114 | 0.1654 | 0.2627 | 0.2694 |
| 3.6436 | 0.72 | 200 | 3.0095 | 0.3023 | 0.1634 | 0.2684 | 0.2764 |
| 3.3044 | 1.08 | 300 | 2.8025 | 0.3414 | 0.1789 | 0.2912 | 0.2984 |
| 3.2693 | 1.44 | 400 | 2.6284 | 0.3616 | 0.1935 | 0.2979 | 0.3132 |
| 3.2025 | 1.8 | 500 | 2.5271 | 0.3790 | 0.2042 | 0.3046 | 0.3192 |
| 2.9772 | 2.17 | 600 | 2.4203 | 0.4083 | 0.2374 | 0.3422 | 0.3542 |
| 2.9133 | 2.53 | 700 | 2.3863 | 0.3847 | 0.2096 | 0.3316 | 0.3406 |
| 2.9383 | 2.89 | 800 | 2.3573 | 0.4016 | 0.2297 | 0.3361 | 0.3500 |
| 2.7608 | 3.25 | 900 | 2.3223 | 0.3999 | 0.2249 | 0.3461 | 0.3566 |
| 2.7864 | 3.61 | 1000 | 2.2293 | 0.3932 | 0.2219 | 0.3297 | 0.3445 |
| 2.7846 | 3.97 | 1100 | 2.2097 | 0.4386 | 0.2617 | 0.3766 | 0.3826 |
| 2.7495 | 4.33 | 1200 | 2.1879 | 0.4100 | 0.2449 | 0.3481 | 0.3551 |
| 2.6092 | 4.69 | 1300 | 2.1515 | 0.4398 | 0.2714 | 0.3787 | 0.3842 |
| 2.5598 | 5.05 | 1400 | 2.1195 | 0.4366 | 0.2545 | 0.3621 | 0.3736 |
| 2.5283 | 5.41 | 1500 | 2.0637 | 0.4274 | 0.2551 | 0.3649 | 0.3753 |
| 2.5947 | 5.77 | 1600 | 2.0588 | 0.4454 | 0.2800 | 0.3828 | 0.3921 |
| 2.5354 | 6.14 | 1700 | 2.0357 | 0.4253 | 0.2582 | 0.3546 | 0.3687 |
| 2.5203 | 6.5 | 1800 | 2.0263 | 0.4444 | 0.2686 | 0.3648 | 0.3764 |
| 2.5303 | 6.86 | 1900 | 1.9926 | 0.4455 | 0.2771 | 0.3795 | 0.3948 |
| 2.4953 | 7.22 | 2000 | 1.9576 | 0.4523 | 0.2873 | 0.3869 | 0.4053 |
| 2.4271 | 7.58 | 2100 | 1.9384 | 0.4455 | 0.2811 | 0.3713 | 0.3862 |
| 2.4462 | 7.94 | 2200 | 1.9230 | 0.4530 | 0.2846 | 0.3754 | 0.3947 |
| 2.3303 | 8.3 | 2300 | 1.9311 | 0.4519 | 0.2814 | 0.3755 | 0.3887 |
| 2.3916 | 8.66 | 2400 | 1.9213 | 0.4598 | 0.2897 | 0.3688 | 0.3889 |
| 2.5995 | 9.03 | 2500 | 1.9060 | 0.4526 | 0.2820 | 0.3733 | 0.3946 |
| 2.3348 | 9.39 | 2600 | 1.9021 | 0.4595 | 0.2856 | 0.3762 | 0.3988 |
| 2.4035 | 9.74 | 2700 | 1.8952 | 0.4625 | 0.2866 | 0.3656 | 0.3868 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
| 5,557 | [
[
-0.050506591796875,
-0.040008544921875,
0.0186614990234375,
0.00881195068359375,
-0.01322174072265625,
-0.0023441314697265625,
-0.006999969482421875,
-0.005535125732421875,
0.044342041015625,
0.02410888671875,
-0.049072265625,
-0.048797607421875,
-0.047973632812... |
timm/tf_efficientnet_el.in1k | 2023-04-27T21:29:02.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.02838",
"arxiv:1905.11946",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tf_efficientnet_el.in1k | 0 | 799 | timm | 2022-12-13T00:08:33 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_el.in1k
An EfficientNet-EdgeTPU image classification model. Trained on ImageNet-1k in TensorFlow by the paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.6
- GMACs: 8.0
- Activations (M): 30.7
- Image size: 300 x 300
- **Papers:**
- Accelerator-aware Neural Network Design using AutoML: https://arxiv.org/abs/2003.02838
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnet_el.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_el.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 150, 150])
# torch.Size([1, 40, 75, 75])
# torch.Size([1, 56, 38, 38])
# torch.Size([1, 176, 19, 19])
# torch.Size([1, 232, 10, 10])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_el.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 10, 10) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gupta2020accelerator,
title={Accelerator-aware neural network design using automl},
author={Gupta, Suyog and Akin, Berkin},
journal={arXiv preprint arXiv:2003.02838},
year={2020}
}
```
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,387 | [
[
-0.03125,
-0.04278564453125,
-0.005275726318359375,
0.00905609130859375,
-0.0202178955078125,
-0.0305938720703125,
-0.0217742919921875,
-0.0289459228515625,
0.01097869873046875,
0.0201416015625,
-0.0256500244140625,
-0.0472412109375,
-0.057403564453125,
-0.0... |
pruas/BENT-PubMedBERT-NER-Disease | 2023-01-11T14:40:58.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | pruas | null | null | pruas/BENT-PubMedBERT-NER-Disease | 5 | 799 | transformers | 2022-12-13T20:34:00 | ---
language:
- en
pipeline_tag: token-classification
---
Named Entity Recognition (NER) model to recognize disease entities.
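A minimal usage sketch (not part of the original card): the model can be run through the standard token-classification pipeline. The aggregation strategy and the example sentence are assumptions; inspect `model.config.id2label` for the actual tag set.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pruas/BENT-PubMedBERT-NER-Disease",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

text = "The patient was diagnosed with type 2 diabetes and later developed diabetic retinopathy."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```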
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets:
- [NCBI Disease Corpus](https://www.ncbi.nlm.nih.gov/research/bionlp/Data/disease/) (train and dev sets)
- [PHAEDRA](http://www.nactem.ac.uk/PHAEDRA/) (train, dev, test sets): entity type "Disorder"
- [Corpus for Disease Names and Adverse Effects](https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html) (train, dev, test sets): entity types "DISEASE", "ADVERSE"
- [RareDis corpus](https://github.com/isegura/NLP4RARE-CM-UC3M/tree/main/corpus) (train, dev, test sets): entity types "DISEASE", "RAREDISEASE", "SYMPTOM"
- [CoMAGC](https://github.com/isegura/NLP4RARE-CM-UC3M/tree/main/corpus) (train, dev, test sets): entity type "cancer_term"
- [PGxCorpus](https://www.nature.com/articles/s41597-019-0342-9) (train, dev, test sets):
- [miRNA-Test-Corpus](https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html) (train, dev, test sets): entity type "Diseases"
- [BC5CDR]() (train and dev sets): entity type "Disease"
- [Mantra](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986661/pdf/ocv037.pdf) (train, dev, test sets): entity type "DISO" | 1,443 | [
[
-0.02447509765625,
-0.036651611328125,
0.02288818359375,
0.0022678375244140625,
0.003307342529296875,
0.0035114288330078125,
0.006137847900390625,
-0.04833984375,
0.0484619140625,
0.03302001953125,
-0.0195465087890625,
-0.0447998046875,
-0.040863037109375,
0... |
timm/vit_large_patch14_clip_336.openai | 2023-04-12T17:41:01.000Z | [
"open_clip",
"zero-shot-image-classification",
"clip",
"arxiv:2103.00020",
"arxiv:1908.04913",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | timm | null | null | timm/vit_large_patch14_clip_336.openai | 0 | 799 | open_clip | 2023-04-10T18:43:24 | ---
tags:
- zero-shot-image-classification
- clip
library_name: open_clip
license: apache-2.0
---
# Model card for vit_large_patch14_clip_336.openai
# CLIP (OpenAI model for timm)
## Model Details
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
This instance of the CLIP model is intended for loading in
* `timm` (https://github.com/rwightman/pytorch-image-models) and
* `OpenCLIP` (https://github.com/mlfoundations/open_clip) libraries.
Please see https://huggingface.co/openai/clip-vit-large-patch14-336 for use in Hugging Face Transformers.
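A zero-shot classification sketch with OpenCLIP (not from the original card). Loading through the `hf-hub:` prefix assumes a recent `open_clip` version and that this repository ships an OpenCLIP config; the same weights can also be loaded as `open_clip.create_model_and_transforms('ViT-L-14-336', pretrained='openai')`.

```python
from urllib.request import urlopen
from PIL import Image
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:timm/vit_large_patch14_clip_336.openai')
tokenizer = open_clip.get_tokenizer('hf-hub:timm/vit_large_patch14_clip_336.openai')
model.eval()

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(img).unsqueeze(0)
text = tokenizer(["a photo of a beignet", "a photo of a dog", "a photo of a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)  # probabilities over the candidate captions
```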
### Model Date
January 2021
### Model Type
The model uses a ViT-L/14 (336x336) Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.
### Documents
- [Blog Post](https://openai.com/blog/clip/)
- [CLIP Paper](https://arxiv.org/abs/2103.00020)
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
## Data
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
### Data Mission Statement
Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
## Limitations
CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.
### Bias and Fairness
We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
| 6,423 | [
[
-0.04327392578125,
-0.0423583984375,
0.0116424560546875,
0.0035953521728515625,
-0.013885498046875,
-0.0146331787109375,
0.0005350112915039062,
-0.05145263671875,
0.01177978515625,
0.03448486328125,
-0.0244293212890625,
-0.031524658203125,
-0.04656982421875,
... |
lcw99/t5-base-korean-text-summary | 2023-04-13T02:30:33.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | lcw99 | null | null | lcw99/t5-base-korean-text-summary | 5 | 798 | transformers | 2022-09-24T05:23:31 | ---
language:
- ko
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-korean-text-summary
results: []
---
# t5-base-korean-text-summary
This model is a fine-tuned version of [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) trained on the AIHUB "summary and report generation data". It produces a short summary of long Korean text.
이 모델은 paust/pko-t5-base model을 AIHUB "요약문 및 레포트 생성 데이터"를 이용하여 fine tunning 한 것입니다. 이 모델은 한글로된 장문을 짧게 요약해 줍니다.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import nltk
nltk.download('punkt')
model_dir = "lcw99/t5-base-korean-text-summary"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
max_input_length = 512
text = """
주인공 강인구(하정우)는 ‘수리남에서 홍어가 많이 나는데 다 갖다버린다’는 친구
박응수(현봉식)의 얘기를 듣고 수리남산 홍어를 한국에 수출하기 위해 수리남으로 간다.
국립수산과학원 측은 “실제로 남대서양에 홍어가 많이 살고 아르헨티나를 비롯한 남미 국가에서 홍어가 많이 잡힌다”며
“수리남 연안에도 홍어가 많이 서식할 것”이라고 설명했다.
그러나 관세청에 따르면 한국에 수리남산 홍어가 수입된 적은 없다.
일각에선 “돈을 벌기 위해 수리남산 홍어를 구하러 간 설정은 개연성이 떨어진다”는 지적도 한다.
드라마 배경이 된 2008~2010년에는 이미 국내에 아르헨티나, 칠레, 미국 등 아메리카산 홍어가 수입되고 있었기 때문이다.
실제 조봉행 체포 작전에 협조했던 ‘협력자 K씨’도 홍어 사업이 아니라 수리남에 선박용 특수용접봉을 파는 사업을 하러 수리남에 갔었다.
"""
inputs = ["summarize: " + text]
inputs = tokenizer(inputs, max_length=max_input_length, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10, max_length=100)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
predicted_title = nltk.sent_tokenize(decoded_output.strip())[0]
print(predicted_title)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float16
### Training results
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
| 2,029 | [
[
-0.02850341796875,
-0.042327880859375,
0.01459503173828125,
0.03485107421875,
-0.040557861328125,
-0.0024204254150390625,
-0.006092071533203125,
-0.0177154541015625,
0.01751708984375,
0.028533935546875,
-0.028167724609375,
-0.051666259765625,
-0.054046630859375,... |
ProomptEngineer/pe-balloon-diffusion-style | 2023-09-01T10:10:19.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:other",
"region:us",
"has_space"
] | text-to-image | ProomptEngineer | null | null | ProomptEngineer/pe-balloon-diffusion-style | 3 | 798 | diffusers | 2023-09-01T10:10:15 | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEBalloonStyle
widget:
- text: PEBalloonStyle
---
# PE Balloon Diffusion [Style]

<h2 id="heading-5">Wondered what things would look like if their made of ballons? then try this one!</h2><h2 id="heading-6">Weights 0.8-1</h2><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><h2 id="heading-10">Add "Ballon Sculpture" if effect is not strong enough</h2><p></p>
## Image examples for the model:









| 934 | [
[
-0.0199127197265625,
-0.0494384765625,
0.015289306640625,
0.014678955078125,
-0.032806396484375,
-0.0032596588134765625,
0.00543212890625,
-0.01288604736328125,
0.033660888671875,
0.02789306640625,
-0.0225830078125,
-0.0073394775390625,
-0.041900634765625,
0... |
GAI-LLM/ko-en-llama2-13b-mixed-v5 | 2023-10-28T07:21:38.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | GAI-LLM | null | null | GAI-LLM/ko-en-llama2-13b-mixed-v5 | 2 | 798 | transformers | 2023-10-28T07:04:51 | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/ko-en-llama2-13b-mixed-v5**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/ko-en-llama2-13b-mixed-v5 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy.
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/ko-en-llama2-13b-mixed-v5
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/ko-en-llama2-13b-mixed-v5"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
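Continuing from the loading code above, a minimal generation sketch (the prompt and decoding parameters below are illustrative):

```python
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```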
--- | 1,275 | [
[
-0.0208282470703125,
-0.052764892578125,
0.0268707275390625,
0.048553466796875,
-0.03594970703125,
0.0117340087890625,
-0.005859375,
-0.0298919677734375,
-0.00218963623046875,
0.02520751953125,
-0.057708740234375,
-0.04901123046875,
-0.04449462890625,
0.0089... |
timm/convnextv2_huge.fcmae_ft_in1k | 2023-03-31T23:19:16.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2301.00808",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | timm | null | null | timm/convnextv2_huge.fcmae_ft_in1k | 0 | 797 | timm | 2023-01-05T01:42:49 | ---
tags:
- image-classification
- timm
library_tag: timm
license: cc-by-nc-4.0
datasets:
- imagenet-1k
- imagenet-1k
---
# Model card for convnextv2_huge.fcmae_ft_in1k
A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 660.3
- GMACs: 115.0
- Activations (M): 79.1
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808
- **Original:** https://github.com/facebookresearch/ConvNeXt-V2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnextv2_huge.fcmae_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnextv2_huge.fcmae_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 352, 56, 56])
# torch.Size([1, 704, 28, 28])
# torch.Size([1, 1408, 14, 14])
# torch.Size([1, 2816, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnextv2_huge.fcmae_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2816, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
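The pooled embedding can be compared across images, for example with cosine similarity. A minimal sketch, assuming `emb_a` and `emb_b` are two `(1, num_features)` tensors produced as above:

```python
import torch.nn.functional as F

# emb_a, emb_b: (1, num_features) embeddings from the snippet above
similarity = F.cosine_similarity(emb_a, emb_b)  # shape (1,), values in [-1, 1]
print(float(similarity))
```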
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{Woo2023ConvNeXtV2,
title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders},
author={Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie},
year={2023},
journal={arXiv preprint arXiv:2301.00808},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,794 | [
[
-0.0692138671875,
-0.031402587890625,
-0.0052337646484375,
0.038055419921875,
-0.032196044921875,
-0.0158843994140625,
-0.01282501220703125,
-0.03546142578125,
0.06451416015625,
0.017730712890625,
-0.044586181640625,
-0.03961181640625,
-0.05316162109375,
-0.... |
stablediffusionapi/realistic-vision | 2023-08-31T04:48:32.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/realistic-vision | 1 | 797 | diffusers | 2023-01-31T14:11:06 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# realistic vision API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and set **model_id** to "realistic-vision".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/realistic-vision)
Model link: [View model](https://stablediffusionapi.com/models/realistic-vision)
Credits: [View credits](https://civitai.com/?query=realistic%20vision)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "realistic-vision",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
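The endpoint responds with JSON. Below is a hedged sketch for downloading the first generated image, assuming the response contains a `status` field and an `output` list of image URLs (see the linked docs for the authoritative schema):

```python
data = response.json()

# Field names here are assumptions -- check https://stablediffusionapi.com/docs for the exact schema.
if data.get("status") == "success" and data.get("output"):
    image_url = data["output"][0]
    with open("result.png", "wb") as f:
        f.write(requests.get(image_url).content)
```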
> Use this coupon code to get 25% off **DMGG0RBN** | 2,473 | [
[
-0.03302001953125,
-0.057373046875,
0.040924072265625,
0.014892578125,
-0.0389404296875,
0.00640106201171875,
0.0236663818359375,
-0.042144775390625,
0.04010009765625,
0.045745849609375,
-0.06256103515625,
-0.06146240234375,
-0.0271148681640625,
-0.002212524... |
MoritzLaurer/ernie-m-large-mnli-xnli | 2023-03-20T08:28:34.000Z | [
"transformers",
"pytorch",
"safetensors",
"ernie_m",
"text-classification",
"zero-shot-classification",
"nli",
"multilingual",
"en",
"ar",
"bg",
"de",
"el",
"es",
"fr",
"hi",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2012.... | zero-shot-classification | MoritzLaurer | null | null | MoritzLaurer/ernie-m-large-mnli-xnli | 18 | 796 | transformers | 2023-02-16T18:00:07 | ---
language:
- multilingual
- en
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: apache-2.0
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
metrics:
- accuracy
datasets:
- multi_nli
- xnli
pipeline_tag: zero-shot-classification
widget:
- text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels: "politics, economy, entertainment, environment"
---
# Multilingual ernie-m-large-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual
zero-shot classification. The underlying model was pre-trained by Baidu, based on Meta's RoBERTa (pre-trained on the
[CC100 multilingual dataset](https://huggingface.co/datasets/cc100)). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli),
which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
The model was introduced by Baidu in [this paper](https://arxiv.org/pdf/2012.15674.pdf). The model outperforms RoBERTa models of equal size.
If you are looking for a much faster (but less performant) model, you can
try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
If you are looking for a base-sized model with a good mix of performance and speed,
you can try [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli)
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/ernie-m-large-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/ernie-m-large-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2490 professionally translated texts from English
to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that XNLI contains a training set of machine-translated versions of the MNLI dataset for each of its 15 languages,
but due to quality issues with these machine translations, this model was only trained
on the professional translations from the XNLI development set and the original English
MNLI training set (392,702 texts). Not using machine-translated texts avoids overfitting the
model to the 15 languages, avoids catastrophic forgetting of the other 85 languages ernie-m
was pre-trained on, and significantly reduces training costs.
### Training procedure
ernie-m-large-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=3e-05,
per_device_train_batch_size=16, # batch size per device during training
gradient_accumulation_steps=2,
per_device_eval_batch_size=16, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
fp16=True,
)
```
### Eval results
The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total).
Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training
data in the specific language (cross-lingual transfer). This means that the model is also able to
perform NLI on the other 85 languages ernie-m was pre-trained on, but performance is most likely lower
than for those languages available in XNLI.
Also note that if other multilingual models on the model hub claim performance of around 90% on languages
other than English, the authors have most likely made a mistake during testing, since none of the latest papers
show a multilingual average performance of more than a few points above 80% on XNLI
(see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
|Datasets|avg_xnli|mnli_m|mnli_mm|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.822|0.881|0.878|0.818|0.853|0.84|0.837|0.882|0.855|0.849|0.799|0.83|0.751|0.809|0.818|0.76|0.826|0.799|
|Inference text/sec (A100, batch=120)|1415.0|783.0|774.0|1487.0|1396.0|1430.0|1206.0|1623.0|1482.0|1291.0|1302.0|1366.0|1484.0|1500.0|1609.0|1344.0|1403.0|1302.0|
## Limitations and bias
Please consult the original ernie-m paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz,
Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine
Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl
or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
## Debugging and issues
The ernie-m architecture is only supported with transformers==4.27 or higher
(which is not yet released and causes an error in the inference widget as of 03.03.23).
In order to run the model before the release of 4.27, you need to install transformers from source with: `pip install git+https://github.com/huggingface/transformers`
as well as the sentencepiece tokenizer with: `pip install sentencepiece`
After the release, you can run: `pip install transformers[sentencepiece]>=4.27`
| 7,031 | [
[
-0.0293731689453125,
-0.0276031494140625,
0.005626678466796875,
0.0117645263671875,
0.0009441375732421875,
-0.0124053955078125,
-0.028167724609375,
-0.04949951171875,
0.025421142578125,
0.0216064453125,
-0.049530029296875,
-0.03570556640625,
-0.040008544921875,
... |
TheBloke/Llama-2-7B-GGML | 2023-09-27T13:00:16.000Z | [
"transformers",
"llama",
"facebook",
"meta",
"pytorch",
"llama-2",
"text-generation",
"en",
"arxiv:2307.09288",
"license:llama2",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Llama-2-7B-GGML | 202 | 796 | transformers | 2023-07-18T17:06:01 | ---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 7B
inference: false
model_creator: Meta
model_link: https://huggingface.co/meta-llama/Llama-2-7b-hf
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
base_model: meta-llama/Llama-2-7b-hf
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 7B - GGML
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf)
## Description
This repo contains GGML format model files for [Meta's Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama-2-7B-GGML)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-hf)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q2_K.bin) | q2_K | 2 | 2.87 GB| 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llama-2-7b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 2.95 GB| 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llama-2-7b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 3.28 GB| 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama-2-7b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 3.60 GB| 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama-2-7b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q4_0.bin) | q4_0 | 4 | 3.79 GB| 6.29 GB | Original quant method, 4-bit. |
| [llama-2-7b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 3.83 GB| 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llama-2-7b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 4.08 GB| 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llama-2-7b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q4_1.bin) | q4_1 | 4 | 4.21 GB| 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llama-2-7b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q5_0.bin) | q5_0 | 5 | 4.63 GB| 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llama-2-7b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 4.65 GB| 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llama-2-7b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 4.78 GB| 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llama-2-7b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q5_1.bin) | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llama-2-7b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q6_K.bin) | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [llama-2-7b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-GGML/blob/main/llama-2-7b.ggmlv3.q8_0.bin) | q8_0 | 8 | 7.16 GB| 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m llama-2-7b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Write a story about llamas"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
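If you prefer Python, the same GGML files can be loaded with `llama-cpp-python` (one of the libraries listed above). Note that GGML support requires a release of that library from before its switch to GGUF. A minimal sketch; the file name and generation parameters are illustrative:

```python
from llama_cpp import Llama

# Assumes an older llama-cpp-python release that still reads GGML files.
llm = Llama(
    model_path="llama-2-7b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,       # context length, equivalent to -c above
    n_gpu_layers=32,  # layers offloaded to GPU; set to 0 for CPU-only
)

result = llm("Write a story about llamas", max_tokens=128, temperature=0.7)
print(result["choices"][0]["text"])
```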
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Meta's Llama 2 7B
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
| 22,961 | [
[
-0.039520263671875,
-0.060150146484375,
0.0263214111328125,
0.0235595703125,
-0.036163330078125,
-0.000004708766937255859,
-0.002872467041015625,
-0.049896240234375,
0.0308380126953125,
0.006198883056640625,
-0.042205810546875,
-0.04534912109375,
-0.039642333984... |
facebook/mms-tts-tur | 2023-09-01T16:45:43.000Z | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-to-speech | facebook | null | null | facebook/mms-tts-tur | 3 | 796 | transformers | 2023-09-01T16:45:22 |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Turkish Text-to-Speech
This repository contains the **Turkish (tur)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-tur")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-tur")
text = "some example text in the Turkish language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
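Because the duration predictor is stochastic, the generated waveform differs between runs; fixing a seed before generation makes the output reproducible. A minimal sketch (the seed value is arbitrary):

```python
import torch

torch.manual_seed(555)  # any fixed value gives reproducible waveforms
with torch.no_grad():
    output = model(**inputs).waveform
```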
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
# scipy expects a 1-D (or [samples, channels]) numpy array, so squeeze the batch dimension
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
| 3,972 | [
[
-0.027099609375,
-0.06329345703125,
0.01031494140625,
0.02752685546875,
-0.01041412353515625,
-0.00916290283203125,
-0.0219879150390625,
-0.019622802734375,
0.0225982666015625,
0.01593017578125,
-0.056884765625,
-0.03570556640625,
-0.044830322265625,
0.00132... |
Yntec/3DCuteWave | 2023-09-12T18:37:17.000Z | [
"diffusers",
"3D",
"Character",
"Children",
"StableDiffusionVN",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/3DCuteWave | 0 | 796 | diffusers | 2023-09-12T17:41:45 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- 3D
- Character
- Children
- StableDiffusionVN
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# SDVN5-3DCuteWave
Model by SDVN.
Samples and prompt:


Female mini cute style, sitting IN SOFA in gaming room, A wholesome animation key shot at computer monitor, pixar and disney animation, studio ghibli, style of maple story, anime key art by ROSSDRAWS and Clay Mann, maple story girl, soft shade, soft lighting, chibi
Original page:
https://civitai.com/models/103178/sdvn5-3dcutewave | 846 | [
[
-0.0404052734375,
-0.06134033203125,
0.0307769775390625,
0.042449951171875,
-0.0284271240234375,
-0.00478363037109375,
0.042327880859375,
-0.0255889892578125,
0.048797607421875,
0.051910400390625,
-0.07037353515625,
-0.04486083984375,
-0.0232391357421875,
-0... |
microsoft/markuplm-large-finetuned-websrc | 2022-09-30T08:58:02.000Z | [
"transformers",
"pytorch",
"markuplm",
"question-answering",
"en",
"dataset:websrc",
"arxiv:2110.08518",
"autotrain_compatible",
"region:us"
] | question-answering | microsoft | null | null | microsoft/markuplm-large-finetuned-websrc | 6 | 795 | transformers | 2022-06-14T13:38:07 | ---
language:
- en
datasets:
- websrc
inference: false
---
# MarkupLM, fine-tuned on WebSRC
**Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
## Usage
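A hedged inference sketch, based on the MarkupLM documentation in Transformers (the exact pre-/post-processing used for the reported results may differ; the HTML snippet and question are illustrative):
```python
from transformers import MarkupLMProcessor, MarkupLMForQuestionAnswering
import torch

model_name = "microsoft/markuplm-large-finetuned-websrc"
# If processor files are missing in this repo, the base processor
# (microsoft/markuplm-base) can be loaded instead.
processor = MarkupLMProcessor.from_pretrained(model_name)
model = MarkupLMForQuestionAnswering.from_pretrained(model_name)

html = "<html><body><p>The capital of France is Paris.</p></body></html>"
question = "What is the capital of France?"

encoding = processor(html, questions=question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Take the most likely start/end token positions and decode the answer span.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(processor.decode(encoding["input_ids"][0][start : end + 1]))
```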
For full details, please refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/markuplm) and the [demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MarkupLM). | 946 | [
[
-0.037811279296875,
-0.05072021484375,
0.0184783935546875,
0.0123748779296875,
-0.031768798828125,
0.016632080078125,
-0.0008778572082519531,
-0.0281982421875,
-0.0217132568359375,
0.0012292861938476562,
-0.04864501953125,
-0.04315185546875,
-0.043548583984375,
... |
TheBloke/CodeLlama-34B-fp16 | 2023-08-25T11:13:52.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"codellama",
"custom_code",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/CodeLlama-34B-fp16 | 5 | 795 | transformers | 2023-08-24T20:37:55 | ---
license: llama2
tags:
- llama-2
- codellama
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CodeLlama 34B fp16
- Model creator: [Meta](https://ai.meta.com/llama/)
## Description
These are fp16 weights for CodeLlama 34B in Transformers/HF format. They are the result of downloading CodeLlama 34B from [Meta](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and converting it to HF format using `convert_llama_weights_to_hf.py`.
Quantisations will be coming shortly.
Please note that, due to a change in the RoPE theta value, you must load these fp16 models with `trust_remote_code=True` to get correct results.
Credit to @emozilla for creating the necessary modelling code to achieve this!
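For example, a hedged loading sketch with Transformers (assumes `accelerate` is installed for `device_map="auto"`; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "TheBloke/CodeLlama-34B-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # required here because of the modified RoPE theta value
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```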
## Prompt template: TBC
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card
# Code Llama
## **Model Details**
**Model Developers** Meta AI
**Variations** Code Llama comes in three model sizes, and three variants:
1) Code Llama: our base models designed for general code synthesis and understanding
2) Code Llama - Python: designed specifically for Python
3) Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**Input** Models input text only.
**Output** Models output text only.
**Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All models were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".
**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)).
## **Intended Use**
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## **Hardware and Software**
**Training Factors**
We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
**Training data**
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
Code Llama - Instruct uses additional instruction fine-tuning data.
**Evaluation Results**
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## **Ethical Considerations and Limitations**
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
| 8,591 | [
[
-0.032501220703125,
-0.03961181640625,
0.0158538818359375,
0.0099639892578125,
-0.0159912109375,
0.0108489990234375,
0.00128936767578125,
-0.053497314453125,
0.037200927734375,
0.017913818359375,
-0.052886962890625,
-0.0310516357421875,
-0.03265380859375,
0.... |
NeuML/pubmedbert-base-embeddings | 2023-10-18T14:49:27.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"has_space"
] | sentence-similarity | NeuML | null | null | NeuML/pubmedbert-base-embeddings | 15 | 795 | sentence-transformers | 2023-10-18T14:22:18 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language: en
license: apache-2.0
---
# PubMedBERT Embeddings
This is a [PubMedBERT-base](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) model fine-tuned using [sentence-transformers](https://www.SBERT.net). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. The training dataset was generated using a random sample of [PubMed](https://pubmed.ncbi.nlm.nih.gov/) title-abstract pairs along with similar title pairs.
PubMedBERT Embeddings produces higher quality embeddings than generalized models for medical literature. Further fine-tuning for a medical subdomain will result in even better performance.
## Usage (txtai)
This model can be used to build embeddings databases with [txtai](https://github.com/neuml/txtai) for semantic search and/or as a knowledge source for retrieval augmented generation (RAG).
```python
import txtai
embeddings = txtai.Embeddings(path="neuml/pubmedbert-base-embeddings", content=True)
# Index an iterable of documents, e.g. (id, text, tags) tuples
embeddings.index(documents())
# Run a query
embeddings.search("query to run")
```
## Usage (Sentence-Transformers)
Alternatively, the model can be loaded with [sentence-transformers](https://www.SBERT.net).
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("neuml/pubmedbert-base-embeddings")
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (Hugging Face Transformers)
The model can also be used directly with Transformers.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def meanpooling(output, mask):
embeddings = output[0] # First element of model_output contains all token embeddings
mask = mask.unsqueeze(-1).expand(embeddings.size()).float()
return torch.sum(embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("neuml/pubmedbert-base-embeddings")
model = AutoModel.from_pretrained("neuml/pubmedbert-base-embeddings")
# Tokenize sentences
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
output = model(**inputs)
# Perform pooling. In this case, mean pooling.
embeddings = meanpooling(output, inputs['attention_mask'])
print("Sentence embeddings:")
print(embeddings)
```
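To compare the resulting vectors, cosine similarity can be computed directly (a short follow-up sketch, not part of the original example, reusing the `embeddings` tensor from above):
```python
import torch.nn.functional as F

# Cosine similarity between the two sentence embeddings computed above
similarity = F.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.4f}")
```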
## Evaluation Results
Performance of this model compared to the top base models on the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard) is shown below. A popular smaller model was also evaluated along with the most downloaded PubMed similarity model on the Hugging Face Hub.
The following datasets were used to evaluate model performance.
- [PubMed QA](https://huggingface.co/datasets/pubmed_qa)
- Subset: pqa_labeled, Split: train, Pair: (question, long_answer)
- [PubMed Subset](https://huggingface.co/datasets/zxvix/pubmed_subset_new)
- Split: test, Pair: (title, text)
- [PubMed Summary](https://huggingface.co/datasets/scientific_papers)
- Subset: pubmed, Split: validation, Pair: (article, abstract)
Evaluation results are shown below. The [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is used as the evaluation metric.
| Model | PubMed QA | PubMed Subset | PubMed Summary | Average |
| ----------------------------------------------------------------------------- | --------- | ------------- | -------------- | --------- |
| [all-MiniLM-L6-v2](https://hf.co/sentence-transformers/all-MiniLM-L6-v2) | 90.40 | 95.86 | 94.07 | 93.44 |
| [bge-base-en-v1.5](https://hf.co/BAAI/bge-large-en-v1.5) | 91.02 | 95.60 | 94.49 | 93.70 |
| [gte-base](https://hf.co/thenlper/gte-base) | 92.97 | 96.83 | 96.24 | 95.35 |
| [**pubmedbert-base-embeddings**](https://hf.co/neuml/pubmedbert-base-embeddings) | **93.27** | **97.07** | **96.58** | **95.64** |
| [S-PubMedBert-MS-MARCO](https://hf.co/pritamdeka/S-PubMedBert-MS-MARCO) | 90.86 | 93.33 | 93.54 | 92.58 |
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 20191 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 1,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## More Information
Read more about this model and how it was built in [this article](https://medium.com/neuml/embeddings-for-medical-literature-74dae6abf5e0).
| 6,120 | [
[
-0.019073486328125,
-0.04876708984375,
0.027252197265625,
0.0115814208984375,
-0.0195770263671875,
-0.0221710205078125,
-0.0164947509765625,
-0.01558685302734375,
0.02947998046875,
0.016387939453125,
-0.032501220703125,
-0.055572509765625,
-0.051788330078125,
... |
sonoisa/sentence-t5-base-ja-mean-tokens | 2022-07-31T07:54:13.000Z | [
"sentence-transformers",
"pytorch",
"t5",
"sentence-t5",
"feature-extraction",
"sentence-similarity",
"ja",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | sonoisa | null | null | sonoisa/sentence-t5-base-ja-mean-tokens | 3 | 794 | sentence-transformers | 2022-03-02T23:29:05 | ---
language: ja
license: cc-by-sa-4.0
tags:
- sentence-transformers
- sentence-t5
- feature-extraction
- sentence-similarity
---
This is a Japanese sentence-T5 model.
The pretrained model [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) was used as the base.
Running inference requires sentencepiece (`pip install sentencepiece`).
On an in-house private dataset, its accuracy is comparable to [sonoisa/sentence-bert-base-ja-mean-tokens](https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens).
# Usage
```python
from transformers import T5Tokenizer, T5Model
import torch
class SentenceT5:
def __init__(self, model_name_or_path, device=None):
self.tokenizer = T5Tokenizer.from_pretrained(model_name_or_path, is_fast=False)
self.model = T5Model.from_pretrained(model_name_or_path).encoder
self.model.eval()
if device is None:
device = "cuda" if torch.cuda.is_available() else "cpu"
self.device = torch.device(device)
self.model.to(device)
def _mean_pooling(self, model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
@torch.no_grad()
def encode(self, sentences, batch_size=8):
all_embeddings = []
iterator = range(0, len(sentences), batch_size)
for batch_idx in iterator:
batch = sentences[batch_idx:batch_idx + batch_size]
encoded_input = self.tokenizer.batch_encode_plus(batch, padding="longest",
truncation=True, return_tensors="pt").to(self.device)
model_output = self.model(**encoded_input)
sentence_embeddings = self._mean_pooling(model_output, encoded_input["attention_mask"]).to('cpu')
all_embeddings.extend(sentence_embeddings)
return torch.stack(all_embeddings)
MODEL_NAME = "sonoisa/sentence-t5-base-ja-mean-tokens"
model = SentenceT5(MODEL_NAME)
sentences = ["暴走したAI", "暴走した人工知能"]
sentence_embeddings = model.encode(sentences, batch_size=8)
print("Sentence embeddings:", sentence_embeddings)
```
| 2,319 | [
[
-0.01409149169921875,
-0.051666259765625,
0.0211029052734375,
0.019378662109375,
-0.037109375,
-0.017822265625,
-0.0205078125,
-0.00769805908203125,
0.0140380859375,
0.026519775390625,
-0.04638671875,
-0.04705810546875,
-0.06048583984375,
0.00479888916015625... |
speechbrain/asr-crdnn-rnnlm-librispeech | 2021-11-30T00:37:56.000Z | [
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"pytorch",
"en",
"dataset:librispeech",
"arxiv:2106.04624",
"license:apache-2.0",
"has_space",
"region:us"
] | automatic-speech-recognition | speechbrain | null | null | speechbrain/asr-crdnn-rnnlm-librispeech | 10 | 794 | speechbrain | 2022-03-02T23:29:05 | ---
language: "en"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- librispeech
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# CRDNN with CTC/Attention and RNNLM trained on LibriSpeech
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test WER | GPUs |
|:-------------:|:--------------:| :--------:|
| 20-05-22 | 3.09 | 1xV100 32GB |
## Pipeline description
This ASR system is composed of three different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions of LibriSpeech.
- Neural language model (RNNLM) trained on the full 10M words dataset.
- Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalisation and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in English)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-rnnlm-librispeech", savedir="pretrained_models/asr-crdnn-rnnlm-librispeech")
asr_model.transcribe_file('speechbrain/asr-crdnn-rnnlm-librispeech/example.wav')
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
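For example (a sketch mirroring the CPU example above, with `run_opts` added):
```python
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",
    savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
    run_opts={"device": "cuda"},  # run the model and decoding on GPU
)
asr_model.transcribe_file("speechbrain/asr-crdnn-rnnlm-librispeech/example.wav")
```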
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (Commit hash: '2abd9f01').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LibriSpeech/ASR/seq2seq/
python train.py hparams/train_BPE_1000.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1SAndjcThdkO-YQF8kvwPOXlQ6LMT71vt?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` | 4,292 | [
[
-0.0294189453125,
-0.055908203125,
0.008819580078125,
0.0212554931640625,
-0.0216522216796875,
-0.00858306884765625,
-0.0389404296875,
-0.034271240234375,
0.026123046875,
0.01885986328125,
-0.044342041015625,
-0.04779052734375,
-0.052398681640625,
0.00542831... |
finiteautomata/beto-emotion-analysis | 2023-03-29T19:29:54.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"emotion-analysis",
"es",
"arxiv:2106.09462",
"endpoints_compatible",
"region:us"
] | text-classification | finiteautomata | null | null | finiteautomata/beto-emotion-analysis | 7 | 793 | transformers | 2022-03-02T23:29:05 | ---
language:
- es
tags:
- emotion-analysis
---
# Emotion Analysis in Spanish
## beto-emotion-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with TASS 2020 Task 2 corpus for Emotion detection in Spanish. Base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.
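A hedged usage sketch with the plain Transformers pipeline (the [pysentimiento](https://github.com/finiteautomata/pysentimiento/) library linked above provides a higher-level interface with its own tweet preprocessing; the example sentence and label are illustrative):
```python
from transformers import pipeline

# Emotion classification for Spanish text
classifier = pipeline("text-classification", model="finiteautomata/beto-emotion-analysis")
print(classifier("¡Qué alegría verte de nuevo!"))
# e.g. [{'label': 'joy', 'score': ...}]
```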
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
and also the dataset related paper
```
@inproceedings{del2020emoevent,
title={EmoEvent: A multilingual emotion corpus based on different events},
author={del Arco, Flor Miriam Plaza and Strapparava, Carlo and Lopez, L Alfonso Urena and Mart{\'\i}n-Valdivia, M Teresa},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={1492--1498},
year={2020}
}
```
Enjoy! 🤗
| 1,567 | [
[
-0.0164794921875,
-0.04229736328125,
0.0201263427734375,
0.05767822265625,
-0.033721923828125,
-0.006076812744140625,
-0.03497314453125,
-0.039031982421875,
0.036468505859375,
-0.0111236572265625,
-0.036590576171875,
-0.058685302734375,
-0.040283203125,
0.01... |
yanekyuk/bert-keyword-extractor | 2022-06-04T00:51:39.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | yanekyuk | null | null | yanekyuk/bert-keyword-extractor | 14 | 793 | transformers | 2022-06-03T23:06:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- en
widget:
- text: "Broadcom agreed to acquire cloud computing company VMware in a $61 billion (€57bn) cash-and stock deal, massively diversifying the chipmaker’s business and almost tripling its software-related revenue to about 45% of its total sales. By the numbers: VMware shareholders will receive either $142.50 in cash or 0.2520 of a Broadcom share for each VMware stock. Broadcom will also assume $8 billion of VMware's net debt."
- text: "Canadian Natural Resources Minister Jonathan Wilkinson told Bloomberg that the country could start supplying Europe with liquefied natural gas (LNG) in as soon as three years by converting an existing LNG import facility on Canada’s Atlantic coast into an export terminal. Bottom line: Wilkinson said what Canada cares about is that the new LNG facility uses a low-emission process for the gas and is capable of transitioning to exporting hydrogen later on."
- text: "Google is being investigated by the UK’s antitrust watchdog for its dominance in the \"ad tech stack,\" the set of services that facilitate the sale of online advertising space between advertisers and sellers. Google has strong positions at various levels of the ad tech stack and charges fees to both publishers and advertisers. A step back: UK Competition and Markets Authority has also been investigating whether Google and Meta colluded over ads, probing into the advertising agreement between the two companies, codenamed Jedi Blue."
- text: "Shares in Twitter closed 6.35% up after an SEC 13D filing revealed that Elon Musk pledged to put up an additional $6.25 billion of his own wealth to fund the $44 billion takeover deal, lifting the total to $33.5 billion from an initial $27.25 billion. In other news: Former Twitter CEO Jack Dorsey announced he's stepping down, but would stay on Twitter’s board \\“until his term expires at the 2022 meeting of stockholders.\""
model-index:
- name: bert-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-keyword-extractor
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1341
- Precision: 0.8565
- Recall: 0.8874
- Accuracy: 0.9738
- F1: 0.8717
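A hedged inference sketch using the generic token-classification pipeline (the aggregation strategy and example text are illustrative, not part of the original card):
```python
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="yanekyuk/bert-keyword-extractor",
    aggregation_strategy="simple",  # merge sub-word tokens into keyword spans
)

text = "Broadcom agreed to acquire cloud computing company VMware in a $61 billion deal."
print(extractor(text))  # list of keyword spans with scores
```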
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.1688 | 1.0 | 1875 | 0.1233 | 0.7194 | 0.7738 | 0.9501 | 0.7456 |
| 0.1219 | 2.0 | 3750 | 0.1014 | 0.7724 | 0.8166 | 0.9606 | 0.7939 |
| 0.0834 | 3.0 | 5625 | 0.0977 | 0.8280 | 0.8263 | 0.9672 | 0.8272 |
| 0.0597 | 4.0 | 7500 | 0.0984 | 0.8304 | 0.8680 | 0.9704 | 0.8488 |
| 0.0419 | 5.0 | 9375 | 0.1042 | 0.8417 | 0.8687 | 0.9717 | 0.8550 |
| 0.0315 | 6.0 | 11250 | 0.1161 | 0.8520 | 0.8839 | 0.9729 | 0.8677 |
| 0.0229 | 7.0 | 13125 | 0.1282 | 0.8469 | 0.8939 | 0.9734 | 0.8698 |
| 0.0182 | 8.0 | 15000 | 0.1341 | 0.8565 | 0.8874 | 0.9738 | 0.8717 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 4,097 | [
[
-0.041351318359375,
-0.043609619140625,
0.01204681396484375,
0.004764556884765625,
-0.025115966796875,
-0.0182952880859375,
-0.0092315673828125,
-0.0118560791015625,
0.0223388671875,
0.021240234375,
-0.048736572265625,
-0.052490234375,
-0.057861328125,
-0.01... |
TweebankNLP/bertweet-tb2_wnut17-ner | 2022-05-05T00:23:17.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"arxiv:2201.07281",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | TweebankNLP | null | null | TweebankNLP/bertweet-tb2_wnut17-ner | 3 | 792 | transformers | 2022-05-04T16:50:37 | ---
license: cc-by-nc-4.0
---
## Model Specification
- This is the **state-of-the-art Twitter NER model (with 74.35% Entity-Level F1)** on Tweebank V2's NER benchmark (also called `Tweebank-NER`), trained on the corpus combining both Tweebank-NER and WNUT 17 training data.
- For more details about the `TweebankNLP` project, please refer to this [our paper](https://arxiv.org/pdf/2201.07281.pdf) and [github](https://github.com/social-machines/TweebankNLP) page.
- In the paper, it is referred to as `HuggingFace-BERTweet (TB2+W17)`.
## How to use the model
- **PRE-PROCESSING**: when you apply the model on tweets, please make sure that tweets are preprocessed by the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner")
model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner")
```
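An end-to-end sketch with the generic NER pipeline follows (hedged: raw tweets should still be normalized with the TweetTokenizer linked above for best results; the example tweet is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="TweebankNLP/bertweet-tb2_wnut17-ner",
    tokenizer="TweebankNLP/bertweet-tb2_wnut17-ner",
    aggregation_strategy="simple",  # group B-/I- tags into entity spans
)
print(ner("Just landed in New York with @spacex !"))
```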
## References
If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf):
```bibtex
@article{jiang2022tweetnlp,
title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis},
author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb},
journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
year={2022}
}
``` | 1,540 | [
[
-0.019287109375,
-0.053741455078125,
-0.0003693103790283203,
0.0295562744140625,
-0.0171966552734375,
0.0164031982421875,
-0.025146484375,
-0.045745849609375,
0.0289764404296875,
0.02008056640625,
-0.0301513671875,
-0.043487548828125,
-0.060302734375,
0.0013... |
yentinglin/Taiwan-LLM-7B-v2.0.1-chat | 2023-11-02T08:44:35.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | yentinglin | null | null | yentinglin/Taiwan-LLM-7B-v2.0.1-chat | 18 | 792 | transformers | 2023-10-10T16:30:19 |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
Name: text
Mail: text
Organization: text
Country: text
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox
---
# Taiwan LLM based on LLaMa2-7b
Continued pretraining on 20 billion tokens of Traditional Mandarin text, followed by instruction fine-tuning on millions of conversations.
This version does NOT include Common Crawl data.
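A hedged generation sketch (the prompt format follows the widget example in the metadata above; loading details such as dtype and device are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "yentinglin/Taiwan-LLM-7B-v2.0.1-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```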
# 🌟 Checkout New [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Collaboration with Ubitus K.K. 💪💪💪
Taiwan LLM v2 is conducted in collaboration with [Ubitus K.K.](http://ubitus.net). Ubitus provides valuable technical support and compute resources for the project.
| 1,494 | [
[
-0.0020160675048828125,
-0.058013916015625,
0.01922607421875,
0.045745849609375,
-0.07733154296875,
0.044219970703125,
-0.00689697265625,
-0.0465087890625,
0.041595458984375,
0.056121826171875,
-0.036285400390625,
-0.041778564453125,
-0.019622802734375,
-0.0... |
bhadresh-savani/electra-base-squad2 | 2023-03-22T09:36:46.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"electra",
"question-answering",
"dataset:squad_v2",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | bhadresh-savani | null | null | bhadresh-savani/electra-base-squad2 | 0 | 791 | transformers | 2022-04-13T14:25:23 | ---
datasets:
- squad_v2
license: cc-by-4.0
---
# electra-base for QA
## Overview
**Language model:** electra-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
seed=42
batch_size = 32
n_epochs = 5
base_LM_model = "google/electra-base-discriminator"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 77.30144024256717,
"f1": 81.35438272008543,
"total": 11873,
"HasAns_exact": 74.34210526315789,
"HasAns_f1": 82.45961302894314,
"HasAns_total": 5928,
"NoAns_exact": 80.25231286795626,
"NoAns_f1": 80.25231286795626,
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
# or
reader = TransformersReader(model="deepset/electra-base-squad2",tokenizer="deepset/electra-base-squad2")
```
## Authors
Vaishali Pal `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
Note:
This model was borrowed from the Haystack model repository in order to add a TensorFlow version. | 3,227 | [
[
-0.029388427734375,
-0.04595947265625,
0.021881103515625,
0.00356292724609375,
0.0074462890625,
0.01139068603515625,
-0.005641937255859375,
-0.023345947265625,
0.004360198974609375,
0.0307464599609375,
-0.051788330078125,
-0.035003662109375,
-0.0162506103515625,... |
timm/coatnet_rmlp_2_rw_224.sw_in1k | 2023-05-10T23:48:21.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/coatnet_rmlp_2_rw_224.sw_in1k | 0 | 791 | timm | 2023-01-20T21:27:42 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coatnet_rmlp_2_rw_224.sw_in1k
A timm-specific CoAtNet image classification model w/ an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name containing the string `rw` is a `timm`-specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 73.9
- GMACs: 15.2
- Activations (M): 54.8
- Image size: 224 x 224
- **Papers:**
  - CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coatnet_rmlp_2_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_2_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_2_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| 22,339 | [
[
-0.050079345703125,
-0.0322265625,
0.0014448165893554688,
0.029449462890625,
-0.0224456787109375,
-0.0169830322265625,
-0.011260986328125,
-0.0276031494140625,
0.054534912109375,
0.0157928466796875,
-0.041473388671875,
-0.046630859375,
-0.049072265625,
-0.00... |
drdiffusion/StDiffClo | 2023-03-06T13:35:00.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | drdiffusion | null | null | drdiffusion/StDiffClo | 0 | 791 | diffusers | 2023-03-06T13:26:39 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
duplicated_from: runwayml/stable-diffusion-v1-5
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
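In practice this check runs automatically inside the Diffusers pipeline. A minimal sketch of inspecting its output (not part of the original card; the `nsfw_content_detected` field is assumed from the `StableDiffusionPipelineOutput` of recent `diffusers` releases and may differ across versions):
```py
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

result = pipe("a photo of an astronaut riding a horse on mars")
# Images flagged by the safety checker are returned blacked out, and the
# matching entry in nsfw_content_detected is True.
print(result.nsfw_content_detected)
image = result.images[0]
```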
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the sketch after this list)
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
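A minimal sketch of the image-to-latent shape mapping from the first bullet above (illustrative only; it loads the full pipeline just to reach its VAE, and feeds random data instead of a real image scaled to [-1, 1]):
```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

with torch.no_grad():
    dummy = torch.randn(1, 3, 512, 512)  # a 512 x 512 RGB "image", batch of 1
    latents = pipe.vae.encode(dummy).latent_dist.sample()

print(latents.shape)  # torch.Size([1, 4, 64, 64]) -> H/8 x W/8 x 4
```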
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
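The guidance scales and sampling steps compared above map directly onto pipeline arguments; a short sketch with 🧨 Diffusers (the argument names `guidance_scale` and `num_inference_steps` are standard pipeline parameters, not taken from this card):
```py
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Higher guidance_scale trades diversity for prompt adherence; 7.5 sits near the
# upper end of the scales evaluated above, with 50 sampling steps as in the plot.
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
```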
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* | 14,517 | [
[
-0.0296630859375,
-0.07171630859375,
0.034515380859375,
0.02020263671875,
-0.0181732177734375,
-0.0294189453125,
0.00640106201171875,
-0.03326416015625,
-0.01378631591796875,
0.03363037109375,
-0.0236663818359375,
-0.0421142578125,
-0.05328369140625,
-0.0128... |
noamrot/FuseCap_Image_Captioning | 2023-06-08T15:40:51.000Z | [
"transformers",
"pytorch",
"blip",
"text2text-generation",
"image-captioning",
"image-to-text",
"arxiv:2305.17718",
"license:mit",
"autotrain_compatible",
"region:us"
] | image-to-text | noamrot | null | null | noamrot/FuseCap_Image_Captioning | 3 | 791 | transformers | 2023-05-31T07:04:57 | ---
license: mit
inference: false
pipeline_tag: image-to-text
tags:
- image-captioning
---
# FuseCap: Leveraging Large Language Models to Fuse Visual Data into Enriched Image Captions
A framework designed to generate semantically rich image captions.
## Resources
- 💻 **Project Page**: For more details, visit the official [project page](https://rotsteinnoam.github.io/FuseCap/).
- 📝 **Read the Paper**: You can find the paper [here](https://arxiv.org/abs/2305.17718).
- 🚀 **Demo**: Try out our BLIP-based model [demo](https://huggingface.co/spaces/noamrot/FuseCap) trained using FuseCap.
- 📂 **Code Repository**: The code for FuseCap can be found in the [GitHub repository](https://github.com/RotsteinNoam/FuseCap).
- 🗃️ **Datasets**: The fused captions datasets can be accessed from [here](https://github.com/RotsteinNoam/FuseCap#datasets).
#### Running the model
Our BLIP-based model can be run using the following code:
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
processor = BlipProcessor.from_pretrained("noamrot/FuseCap")
model = BlipForConditionalGeneration.from_pretrained("noamrot/FuseCap").to(device)
img_url = 'https://huggingface.co/spaces/noamrot/FuseCap/resolve/main/bike.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
text = "a picture of "
inputs = processor(raw_image, text, return_tensors="pt").to(device)
out = model.generate(**inputs, num_beams = 3)
print(processor.decode(out[0], skip_special_tokens=True))
```
## Upcoming Updates
The official codebase, datasets and trained models for this project will be released soon.
## BibTeX
```bibtex
@article{rotstein2023fusecap,
title={FuseCap: Leveraging Large Language Models to Fuse Visual Data into Enriched Image Captions},
author={Rotstein, Noam and Bensaid, David and Brody, Shaked and Ganz, Roy and Kimmel, Ron},
journal={arXiv preprint arXiv:2305.17718},
year={2023}
}
``` | 2,077 | [
[
-0.021759033203125,
-0.0439453125,
-0.0029125213623046875,
0.02740478515625,
-0.02215576171875,
0.01708984375,
-0.0195465087890625,
-0.04931640625,
0.007282257080078125,
0.04547119140625,
-0.0360107421875,
-0.0249176025390625,
-0.03826904296875,
0.0072479248... |
DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R | 2023-09-09T13:17:19.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"Zero-Shot Classification",
"zero-shot-classification",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",... | zero-shot-classification | DAMO-NLP-SG | null | null | DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R | 6 | 791 | transformers | 2023-08-14T03:29:20 | ---
inference: false
license: mit
tags:
- Zero-Shot Classification
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
pipeline_tag: zero-shot-classification
metrics:
- accuracy
---
# Zero-shot text classification (multilingual version) trained with self-supervised tuning
Zero-shot text classification model trained with self-supervised tuning (SSTuning).
It was introduced in the paper [Zero-Shot Text Classification via Self-Supervised Tuning](https://arxiv.org/abs/2305.11442) by
Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing
and first released in [this repository](https://github.com/DAMO-NLP-SG/SSTuning).
The model backbone is xlm-roberta-base.
## Model description
The model is tuned with unlabeled data using a first sentence prediction (FSP) learning objective.
The FSP task is designed by considering both the nature of the unlabeled corpus and the input/output format of classification tasks.
The training and validation sets are constructed from the unlabeled corpus using FSP.
During tuning, BERT-like pre-trained masked language
models such as RoBERTa and ALBERT are employed as the backbone, and an output layer for classification is added.
The learning objective for FSP is to predict the index of the correct label.
A cross-entropy loss is used for tuning the model.
## Model variations
There are four versions of models released. The details are:
| Model | Backbone | #params | lang | acc | Speed | #Train |
|------------|-----------|----------|-------|-------|----|-------------|
| [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | En | Low | High | 20.48M |
| [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | En | Medium | Medium | 5.12M |
| [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | En | High | Low| 5.12M |
| [zero-shot-classify-SSTuning-XLM-R](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R) | [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | 278M | Multi | - | - | 20.48M |
Please note that zero-shot-classify-SSTuning-XLM-R is trained on 20.48M English samples only. However, it can also be used for other languages, as long as they are supported by XLM-RoBERTa.
Please check [this repository](https://github.com/DAMO-NLP-SG/SSTuning) for the performance of each model.
## Intended uses & limitations
The model can be used for zero-shot text classification such as sentiment analysis and topic classification. No further finetuning is needed.
The number of labels should be 2 ~ 20.
### How to use
You can try the model with the Colab [Notebook](https://colab.research.google.com/drive/17bqc8cXFF-wDmZ0o8j7sbrQB9Cq7Gowr?usp=sharing).
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch, string, random
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R")
model = AutoModelForSequenceClassification.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R")
text = "I love this place! The food is always so fresh and delicious."
list_label = ["negative", "positive"]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
list_ABC = [x for x in string.ascii_uppercase]
def check_text(model, text, list_label, shuffle=False):
list_label = [x+'.' if x[-1] != '.' else x for x in list_label]
list_label_new = list_label + [tokenizer.pad_token]* (20 - len(list_label))
if shuffle:
random.shuffle(list_label_new)
s_option = ' '.join(['('+list_ABC[i]+') '+list_label_new[i] for i in range(len(list_label_new))])
text = f'{s_option} {tokenizer.sep_token} {text}'
model.to(device).eval()
encoding = tokenizer([text],truncation=True, max_length=512,return_tensors='pt')
item = {key: val.to(device) for key, val in encoding.items()}
logits = model(**item).logits
logits = logits if shuffle else logits[:,0:len(list_label)]
probs = torch.nn.functional.softmax(logits, dim = -1).tolist()
predictions = torch.argmax(logits, dim=-1).item()
probabilities = [round(x,5) for x in probs[0]]
print(f'prediction: {predictions} => ({list_ABC[predictions]}) {list_label_new[predictions]}')
print(f'probability: {round(probabilities[predictions]*100,2)}%')
check_text(model, text, list_label)
# prediction: 1 => (B) positive.
# probability: 99.92%
```
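Since the backbone is XLM-RoBERTa, the same helper can be applied to non-English inputs. A short illustrative call (the Italian example and labels below are hypothetical, not from the paper):
```python
text_it = "Questo ristorante è fantastico, il cibo è sempre freschissimo."
check_text(model, text_it, ["negativo", "positivo"])
```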
### BibTeX entry and citation info
```bibtex
@inproceedings{acl23/SSTuning,
author = {Chaoqun Liu and
Wenxuan Zhang and
Guizhen Chen and
Xiaobao Wu and
Anh Tuan Luu and
Chip Hong Chang and
Lidong Bing},
title = {Zero-Shot Text Classification via Self-Supervised Tuning},
booktitle = {Findings of the Association for Computational Linguistics: ACL 2023},
year = {2023},
url = {https://arxiv.org/abs/2305.11442},
}
```
| 5,798 | [
[
-0.01442718505859375,
-0.045654296875,
0.0136260986328125,
0.01116943359375,
-0.01934814453125,
0.00458526611328125,
-0.01824951171875,
-0.035797119140625,
0.01241302490234375,
0.0276031494140625,
-0.0372314453125,
-0.057708740234375,
-0.052337646484375,
0.0... |
Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two | 2022-06-24T09:45:07.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"arxiv:2012.10289",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Hate-speech-CNERG | null | null | Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two | 7 | 790 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- hatexplain
---
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
## Model Details
**Model Description:**
The model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence
- **Developed by:** Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2012.10289) Accepted at AAAI 2021.
- [GitHub Repo with datatsets and models](https://github.com/punyajoy/HateXplain)
## How to Get Started with the Model
**Details of usage**
Please use the **Model_Rational_Label** class inside [models.py](models.py) to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
### from models.py
from models import *
tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
model = Model_Rational_Label.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
inputs = tokenizer("He is a great guy", return_tensors="pt")
prediction_logits, _ = model(input_ids=inputs['input_ids'],attention_mask=inputs['attention_mask'])
```
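Continuing the example above, the classification logits can be turned into class probabilities with a standard softmax (sketch only; the index-to-label mapping is an assumption and should be checked against the model's `id2label` config):
```python
import torch

probs = torch.softmax(prediction_logits, dim=-1)
pred_id = probs.argmax(dim=-1).item()
# Assumed mapping for illustration only: 0 -> normal, 1 -> abusive (hatespeech/offensive)
print(pred_id, probs.tolist())
```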
## Uses
#### Direct Use
This model can be used for Text Classification
#### Downstream Use
[More information needed]
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
The model authors also note in their HateXplain paper that they
> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*
#### Training Procedure
##### Preprocessing
The authors detail their preprocessing procedure in the [Github repository](https://github.com/hate-alert/HateXplain/tree/master/Preprocess)
## Evaluation
The model authors detail the hidden layer size and attention for the HateXplain fine-tuned models in the [associated paper](https://arxiv.org/pdf/2012.10289.pdf).
#### Results
The model authors, both in their paper and in the GitHub repository, provide illustrative output of BERT-HateXplain in comparison to BERT and other HateXplain fine-tuned models.
## Citation Information
```bibtex
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
```
| 4,853 | [
[
-0.034820556640625,
-0.057830810546875,
0.01345062255859375,
0.005802154541015625,
-0.0007009506225585938,
-0.022308349609375,
-0.016357421875,
-0.04302978515625,
-0.0071258544921875,
0.02587890625,
-0.03900146484375,
-0.0367431640625,
-0.07037353515625,
-0.... |
Helsinki-NLP/opus-mt-en-uk | 2023-08-16T11:31:36.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-en-uk | 5 | 790 | transformers | 2022-03-02T23:29:04 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-uk
* source languages: en
* target languages: uk
* OPUS readme: [en-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.uk | 50.2 | 0.674 |
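A minimal usage sketch with 🤗 Transformers (not part of the original card; the Marian tokenizer applies the SentencePiece pre-processing noted above):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```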
| 818 | [
[
-0.015106201171875,
-0.0217437744140625,
0.0163116455078125,
0.026153564453125,
-0.035552978515625,
-0.0304412841796875,
-0.028564453125,
-0.0082550048828125,
0.0005617141723632812,
0.032806396484375,
-0.05194091796875,
-0.041290283203125,
-0.04254150390625,
... |
timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k | 2023-05-06T00:02:44.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:laion-2b",
"dataset:imagenet-12k",
"arxiv:2212.07143",
"arxiv:2210.08402",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k | 1 | 790 | timm | 2022-11-11T08:23:13 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
- imagenet-12k
---
# Model card for vit_base_patch16_clip_384.laion2b_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.9
- GMACs: 49.4
- Activations (M): 48.3
- Image size: 384 x 384
- **Papers:**
- OpenCLIP: https://github.com/mlfoundations/open_clip
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- LAION-2B
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_clip_384.laion2b_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_clip_384.laion2b_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,762 | [
[
-0.0294036865234375,
-0.0275115966796875,
0.00902557373046875,
0.0104522705078125,
-0.0267333984375,
-0.032989501953125,
-0.033416748046875,
-0.0305328369140625,
0.00939178466796875,
0.0274505615234375,
-0.0304718017578125,
-0.042999267578125,
-0.050811767578125... |
timm/mixnet_xl.ra_in1k | 2023-04-27T21:13:58.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1907.09595",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/mixnet_xl.ra_in1k | 0 | 790 | timm | 2022-12-12T23:59:55 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mixnet_xl.ra_in1k
A MixNet image classification model. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 11.9
- GMACs: 0.9
- Activations (M): 14.6
- Image size: 224 x 224
- **Papers:**
- MixConv: Mixed Depthwise Convolutional Kernels: https://arxiv.org/abs/1907.09595
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mixnet_xl.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mixnet_xl.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 40, 112, 112])
# torch.Size([1, 48, 56, 56])
# torch.Size([1, 64, 28, 28])
# torch.Size([1, 192, 14, 14])
# torch.Size([1, 320, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mixnet_xl.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{tan2019mixconv,
title={MixConv: Mixed Depthwise Convolutional Kernels},
author={Mingxing Tan and Quoc V. Le},
year={2019},
eprint={1907.09595},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
| 4,579 | [
[
-0.04095458984375,
-0.033721923828125,
-0.00750732421875,
0.007030487060546875,
-0.02374267578125,
-0.0229339599609375,
-0.01334381103515625,
-0.03271484375,
0.034881591796875,
0.03338623046875,
-0.038970947265625,
-0.04931640625,
-0.054290771484375,
-0.0066... |
timm/eva_large_patch14_336.in22k_ft_in1k | 2023-03-31T06:16:16.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2211.07636",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/eva_large_patch14_336.in22k_ft_in1k | 0 | 790 | timm | 2022-12-22T07:09:58 | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for eva_large_patch14_336.in22k_ft_in1k
An EVA image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by paper authors.
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 304.5
- GMACs: 191.1
- Activations (M): 270.2
- Image size: 336 x 336
- **Papers:**
- EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-1k
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/BAAI/EVA
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva_large_patch14_336.in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva_large_patch14_336.in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA,
title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2211.07636},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,999 | [
[
-0.049468994140625,
-0.029998779296875,
0.007541656494140625,
0.01001739501953125,
-0.02093505859375,
0.0014486312866210938,
-0.01409149169921875,
-0.0300140380859375,
0.0440673828125,
0.03277587890625,
-0.035858154296875,
-0.053924560546875,
-0.0526123046875,
... |
timm/flexivit_base.1200ep_in1k | 2023-05-05T23:58:52.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2212.08013",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/flexivit_base.1200ep_in1k | 0 | 790 | timm | 2022-12-22T07:14:41 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for flexivit_base.1200ep_in1k
A FlexiViT image classification model. Trained on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.6
- GMACs: 19.4
- Activations (M): 18.9
- Image size: 240 x 240
- **Papers:**
- FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/google-research/big_vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('flexivit_base.1200ep_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'flexivit_base.1200ep_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 226, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{beyer2022flexivit,
title={FlexiViT: One Model for All Patch Sizes},
author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip},
journal={arXiv preprint arXiv:2212.08013},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,699 | [
[
-0.037628173828125,
-0.02825927734375,
0.004428863525390625,
0.005268096923828125,
-0.026031494140625,
-0.0290069580078125,
-0.018585205078125,
-0.03631591796875,
0.0158538818359375,
0.01666259765625,
-0.0428466796875,
-0.040679931640625,
-0.04541015625,
-0.... |
nickprock/setfit-italian-hate-speech | 2023-06-29T15:48:18.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"setfit",
"sentence-transformers",
"text-classification",
"hate speech",
"it",
"arxiv:2209.11055",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | nickprock | null | null | nickprock/setfit-italian-hate-speech | 1 | 790 | transformers | 2023-03-23T08:28:01 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
- hate speech
pipeline_tag: text-classification
language:
- it
metrics:
- accuracy
library_name: transformers
---
# setfit-italian-hate-speech
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model detects hate speech for the Italian language:
* 1 --> is hate speech
* 0 --> isn't hate speech
## Dataset
`setfit-italian-hate-speech` is trained on [HaSpeeDe-FB](http://twita.di.unito.it/dataset/haspeede) dataset.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nickprock/setfit-italian-hate-speech")
# Run inference
preds = model(["Lei è una brutta bugiarda!", "Mi piace la pizza"])
```
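The two-step training procedure described at the top of this card can be sketched as follows (illustrative only: the toy examples are hypothetical, and the `SetFitTrainer` API is assumed from the setfit releases available at the time):
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Hypothetical toy training data (not taken from HaSpeeDe-FB)
train_ds = Dataset.from_dict({
    "text": ["Sei una persona orribile!", "Che bella giornata oggi"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("nickprock/setfit-italian-hate-speech")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()  # contrastive fine-tuning of the body, then fitting the classification head

preds = model(["Lei è una brutta bugiarda!"])
```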
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
### Dataset Citation
```bibtex
@inproceedings{VignaCDPT17,
title = {Hate Me, Hate Me Not: Hate Speech Detection on Facebook},
author = {Fabio Del Vigna and Andrea Cimino and Felice dell'Orletta and Marinella Petrocchi and Maurizio Tesconi},
year = {2017},
url = {http://ceur-ws.org/Vol-1816/paper-09.pdf},
researchr = {https://researchr.org/publication/VignaCDPT17},
cites = {0},
citedby = {0},
pages = {86-95},
booktitle = {Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), Venice, Italy, January 17-20, 2017},
editor = {Alessandro Armando and Roberto Baldoni and Riccardo Focardi},
volume = {1816},
series = {CEUR Workshop Proceedings},
publisher = {CEUR-WS.org},
}
``` | 2,560 | [
[
-0.0279083251953125,
-0.06768798828125,
0.0131988525390625,
-0.004940032958984375,
-0.004962921142578125,
-0.00713348388671875,
-0.02606201171875,
-0.035552978515625,
0.0148162841796875,
0.01284027099609375,
-0.044921875,
-0.040496826171875,
-0.04583740234375,
... |
timm/dm_nfnet_f1.dm_in1k | 2023-03-24T00:49:25.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2102.06171",
"arxiv:2101.08692",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/dm_nfnet_f1.dm_in1k | 0 | 790 | timm | 2023-03-24T00:47:46 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for dm_nfnet_f1.dm_in1k
A NFNet (Normalization Free Network) image classification model. Trained on ImageNet-1k by paper authors.
Normalization Free Networks are (pre-activation) ResNet-like models without any normalization layers. Instead of Batch Normalization or alternatives, they use Scaled Weight Standardization and specifically placed scalar gains in residual path and at non-linearities based on signal propagation analysis.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 132.6
- GMACs: 17.9
- Activations (M): 22.9
- Image size: train = 224 x 224, test = 320 x 320
- **Papers:**
- High-Performance Large-Scale Image Recognition Without Normalization: https://arxiv.org/abs/2102.06171
- Characterizing signal propagation to close the performance gap in unnormalized ResNets: https://arxiv.org/abs/2101.08692
- **Original:** https://github.com/deepmind/deepmind-research/tree/master/nfnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dm_nfnet_f1.dm_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dm_nfnet_f1.dm_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1536, 14, 14])
# torch.Size([1, 3072, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dm_nfnet_f1.dm_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 3072, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
```bibtex
@inproceedings{brock2021characterizing,
author={Andrew Brock and Soham De and Samuel L. Smith},
title={Characterizing signal propagation to close the performance gap in
unnormalized ResNets},
booktitle={9th International Conference on Learning Representations, {ICLR}},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,746 | [
[
-0.0389404296875,
-0.037261962890625,
-0.0043792724609375,
0.01018524169921875,
-0.028472900390625,
-0.0244293212890625,
-0.0196075439453125,
-0.0305328369140625,
0.020751953125,
0.03387451171875,
-0.035736083984375,
-0.051666259765625,
-0.05889892578125,
0.... |
timm/crossvit_18_dagger_408.in1k | 2023-04-24T00:33:43.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.14899",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/crossvit_18_dagger_408.in1k | 0 | 790 | timm | 2023-04-24T00:32:56 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for crossvit_18_dagger_408.in1k
A CrossViT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.6
- GMACs: 32.5
- Activations (M): 124.9
- Image size: 408 x 408
- **Papers:**
- CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification: https://arxiv.org/abs/2103.14899
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/IBM/CrossViT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('crossvit_18_dagger_408.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'crossvit_18_dagger_408.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (torch.Size([1, 1157, 224]), torch.Size([1, 577, 448])) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{
chen2021crossvit,
title={{CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification}},
author={Chun-Fu (Richard) Chen and Quanfu Fan and Rameswar Panda},
booktitle={International Conference on Computer Vision (ICCV)},
year={2021}
}
```
| 2,831 | [
[
-0.034332275390625,
-0.0263824462890625,
-0.00431060791015625,
0.017852783203125,
-0.0281829833984375,
-0.0203857421875,
-0.00382232666015625,
-0.027923583984375,
0.01488494873046875,
0.026580810546875,
-0.043853759765625,
-0.04571533203125,
-0.056243896484375,
... |
timm/tf_efficientnet_b5.ra_in1k | 2023-04-27T21:21:57.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1905.11946",
"arxiv:1909.13719",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tf_efficientnet_b5.ra_in1k | 0 | 789 | timm | 2022-12-13T00:04:33 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_b5.ra_in1k
An EfficientNet image classification model. Trained on ImageNet-1k with RandAugment in TensorFlow by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 30.4
- GMACs: 10.5
- Activations (M): 98.9
- Image size: 456 x 456
- **Papers:**
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- RandAugment: Practical automated data augmentation with a reduced search space: https://arxiv.org/abs/1909.13719v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnet_b5.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b5.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 24, 228, 228])
# torch.Size([1, 40, 114, 114])
# torch.Size([1, 64, 57, 57])
# torch.Size([1, 176, 29, 29])
# torch.Size([1, 512, 15, 15])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b5.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 15, 15) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@article{Cubuk2019RandaugmentPA,
title={Randaugment: Practical automated data augmentation with a reduced search space},
author={Ekin Dogus Cubuk and Barret Zoph and Jonathon Shlens and Quoc V. Le},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
year={2019},
pages={3008-3017}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,581 | [
[
-0.032562255859375,
-0.039947509765625,
-0.00453948974609375,
0.006267547607421875,
-0.019256591796875,
-0.0308837890625,
-0.0230712890625,
-0.03289794921875,
0.01348876953125,
0.0253143310546875,
-0.031524658203125,
-0.0479736328125,
-0.055328369140625,
-0.... |
timm/tf_efficientnetv2_xl.in21k | 2023-04-27T22:18:06.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2104.00298",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tf_efficientnetv2_xl.in21k | 0 | 789 | timm | 2022-12-13T00:19:38 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---
# Model card for tf_efficientnetv2_xl.in21k
An EfficientNet-v2 image classification model. Trained on ImageNet-21k in TensorFlow by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 234.8
- GMACs: 52.8
- Activations (M): 139.2
- Image size: train = 384 x 384, test = 512 x 512
- **Papers:**
- EfficientNetV2: Smaller Models and Faster Training: https://arxiv.org/abs/2104.00298
- **Dataset:** ImageNet-21k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnetv2_xl.in21k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnetv2_xl.in21k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 192, 192])
# torch.Size([1, 64, 96, 96])
# torch.Size([1, 96, 48, 48])
# torch.Size([1, 256, 24, 24])
# torch.Size([1, 640, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnetv2_xl.in21k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2021efficientnetv2,
title={Efficientnetv2: Smaller models and faster training},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={10096--10106},
year={2021},
organization={PMLR}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,083 | [
[
-0.0262908935546875,
-0.033477783203125,
-0.004604339599609375,
0.006221771240234375,
-0.0224151611328125,
-0.03070068359375,
-0.021697998046875,
-0.029876708984375,
0.01104736328125,
0.0293426513671875,
-0.022857666015625,
-0.045928955078125,
-0.054229736328125... |
timm/vit_relpos_base_patch16_224.sw_in1k | 2023-05-05T22:04:13.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_relpos_base_patch16_224.sw_in1k | 0 | 789 | timm | 2022-12-23T00:19:18 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_relpos_base_patch16_224.sw_in1k
A Vision Transformer (ViT) image classification model. This is a `timm` specific variation of the ViT architecture with relative position embeddings, no class token, and final representation via global average pool of tokens. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes)
* AdamW optimizer, gradient clipping, EMA weight averaging
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.4
- GMACs: 16.8
- Activations (M): 17.6
- Image size: 224 x 224
- **Papers:**
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_relpos_base_patch16_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_relpos_base_patch16_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
| 4,169 | [
[
-0.034454345703125,
-0.0266571044921875,
-0.004772186279296875,
0.011383056640625,
-0.03045654296875,
-0.0250396728515625,
-0.0178070068359375,
-0.037689208984375,
0.0218658447265625,
0.029083251953125,
-0.042724609375,
-0.0399169921875,
-0.052154541015625,
... |
clefourrier/graphormer-base-pcqm4mv2 | 2023-02-07T16:34:59.000Z | [
"transformers",
"pytorch",
"graphormer",
"graphs",
"graph-ml",
"arxiv:2106.05234",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | graph-ml | clefourrier | null | null | clefourrier/graphormer-base-pcqm4mv2 | 34 | 789 | transformers | 2023-01-05T10:10:57 | ---
license: mit
tags:
- graphs
pipeline_tag: graph-ml
---
# Model Card for pcqm4mv2_graphormer_base
The Graphormer is a graph classification model.
# Model Details
## Model Description
The Graphormer is a graph Transformer model, pretrained on PCQM4M-LSCv2.
- **Developed by:** Microsoft
- **Model type:** Graphormer
- **License:** MIT
## Model Sources
- **Repository:** [Github](https://github.com/microsoft/Graphormer)
- **Paper:** [Paper](https://arxiv.org/abs/2106.05234)
- **Documentation:** [Link](https://graphormer.readthedocs.io/en/latest/)
# Uses
## Direct Use
This model should be used for graph classification tasks or graph representation tasks; the most likely associated task is molecule modeling. It can either be used as such, or finetuned on downstream tasks.
# Bias, Risks, and Limitations
The Graphormer model is resource-intensive for large graphs, and might lead to OOM errors.
## How to Get Started with the Model
See the Graph Classification with Transformers tutorial.
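A minimal loading sketch (illustrative only; graph inputs must first be preprocessed into Graphormer's expected features, e.g. with the collator utilities used in that tutorial, and `num_classes=2` is just an example downstream task):
```python
from transformers import GraphormerForGraphClassification

# load the PCQM4Mv2-pretrained checkpoint for fine-tuning on graph classification;
# the pretrained head has a single regression output, hence ignore_mismatched_sizes
model = GraphormerForGraphClassification.from_pretrained(
    "clefourrier/graphormer-base-pcqm4mv2",
    num_classes=2,                  # example: binary graph classification
    ignore_mismatched_sizes=True,
)
```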
# Citation
**BibTeX:**
```
@article{DBLP:journals/corr/abs-2106-05234,
author = {Chengxuan Ying and
Tianle Cai and
Shengjie Luo and
Shuxin Zheng and
Guolin Ke and
Di He and
Yanming Shen and
Tie{-}Yan Liu},
title = {Do Transformers Really Perform Bad for Graph Representation?},
journal = {CoRR},
volume = {abs/2106.05234},
year = {2021},
url = {https://arxiv.org/abs/2106.05234},
eprinttype = {arXiv},
eprint = {2106.05234},
timestamp = {Tue, 15 Jun 2021 16:35:15 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-05234.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 1,973 | [
[
-0.037933349609375,
-0.0257110595703125,
0.0180206298828125,
-0.022430419921875,
-0.0209503173828125,
0.01047515869140625,
0.0203399658203125,
-0.0308380126953125,
-0.00736236572265625,
0.04638671875,
-0.0237884521484375,
-0.0491943359375,
-0.052093505859375,
... |
timm/rexnet_200.nav_in1k | 2023-03-20T20:36:03.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2007.00992",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/rexnet_200.nav_in1k | 0 | 788 | timm | 2023-03-20T20:35:49 | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for rexnet_200.nav_in1k
A ReXNet image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 16.4
- GMACs: 1.6
- Activations (M): 14.9
- Image size: 224 x 224
- **Papers:**
- Rethinking Channel Dimensions for Efficient Model Design: https://arxiv.org/abs/2007.00992
- **Original:** https://github.com/clovaai/rexnet
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('rexnet_200.nav_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_200.nav_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 77, 56, 56])
# torch.Size([1, 122, 28, 28])
# torch.Size([1, 257, 14, 14])
# torch.Size([1, 370, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_200.nav_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2560, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|crop_pct|
|-------------------------|------|------|-----------|--------|--------|
|rexnetr_300.sw_in12k_ft_in1k|84.53 |97.252|34.81 |288 |1.0 |
|rexnetr_200.sw_in12k_ft_in1k|83.164|96.648|16.52 |288 |1.0 |
|rexnet_300.nav_in1k |82.772|96.232|34.71 |224 |0.875 |
|rexnet_200.nav_in1k |81.652|95.668|16.37 |224 |0.875 |
|rexnet_150.nav_in1k |80.308|95.174|9.73 |224 |0.875 |
|rexnet_130.nav_in1k |79.478|94.68 |7.56 |224 |0.875 |
|rexnet_100.nav_in1k |77.832|93.886|4.8 |224 |0.875 |
## Citation
```bibtex
@misc{han2021rethinking,
title={Rethinking Channel Dimensions for Efficient Model Design},
author={Dongyoon Han and Sangdoo Yun and Byeongho Heo and YoungJoon Yoo},
year={2021},
eprint={2007.00992},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,575 | [
[
-0.043121337890625,
-0.02728271484375,
0.006343841552734375,
-0.00165557861328125,
-0.03204345703125,
-0.0205535888671875,
-0.013458251953125,
-0.02386474609375,
0.0297698974609375,
0.0243682861328125,
-0.033294677734375,
-0.054351806640625,
-0.047393798828125,
... |
google/owlv2-large-patch14-finetuned | 2023-10-23T09:19:27.000Z | [
"transformers",
"pytorch",
"owlv2",
"zero-shot-object-detection",
"vision",
"object-detection",
"arxiv:2306.09683",
"license:apache-2.0",
"region:us"
] | object-detection | google | null | null | google/owlv2-large-patch14-finetuned | 1 | 788 | transformers | 2023-10-14T08:46:56 | ---
license: apache-2.0
tags:
- vision
- object-detection
inference: false
---
# Model Card: OWLv2
## Model Details
The OWLv2 model (short for Open-World Localization) was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2, like OWL-ViT, is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries.
The model uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
### Model Date
June 2023
### Model Type
The model uses a CLIP backbone with a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The CLIP backbone is trained from scratch and fine-tuned together with the box and class prediction heads with an object detection objective.
### Documents
- [OWLv2 Paper](https://arxiv.org/abs/2306.09683)
### Use with Transformers
```python3
import requests
from PIL import Image
import torch
from transformers import Owlv2Processor, Owlv2ForObjectDetection
processor = Owlv2Processor.from_pretrained("google/owlv2-large-patch14-finetuned")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-large-patch14-finetuned")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process_object_detection(outputs=outputs, threshold=0.1, target_sizes=target_sizes)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
# Print detected objects and rescaled box coordinates
for box, score, label in zip(boxes, scores, labels):
box = [round(i, 2) for i in box.tolist()]
print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
```
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, text-conditioned object detection. We also hope it can be used for interdisciplinary studies of the potential impact of such models, especially in areas that commonly require identifying objects whose label is unavailable during training.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
## Data
The CLIP backbone of the model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet. The prediction heads of OWL-ViT, along with the CLIP backbone, are fine-tuned on publicly available object detection datasets such as [COCO](https://cocodataset.org/#home) and [OpenImages](https://storage.googleapis.com/openimages/web/index.html).
(to be updated for v2)
### BibTeX entry and citation info
```bibtex
@misc{minderer2023scaling,
title={Scaling Open-Vocabulary Object Detection},
author={Matthias Minderer and Alexey Gritsenko and Neil Houlsby},
year={2023},
eprint={2306.09683},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | 4,830 | [
[
-0.0245513916015625,
-0.0513916015625,
0.0255584716796875,
-0.014007568359375,
-0.0213470458984375,
-0.0355224609375,
-0.00350189208984375,
-0.06787109375,
0.002658843994140625,
0.03131103515625,
-0.0241851806640625,
-0.04791259765625,
-0.048065185546875,
0.... |
HooshvareLab/albert-fa-zwnj-base-v2 | 2021-03-16T16:36:38.000Z | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | HooshvareLab | null | null | HooshvareLab/albert-fa-zwnj-base-v2 | 2 | 787 | transformers | 2022-03-02T23:29:04 | ---
language: fa
license: apache-2.0
---
# ALBERT-Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
> Call it little_berty
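## How to use
A minimal usage sketch with the standard `transformers` fill-mask pipeline (the Persian example sentence is illustrative only):
```python
from transformers import pipeline

# fill-mask pipeline with the ALBERT-Persian (v2) base model
fill_mask = pipeline("fill-mask", model="HooshvareLab/albert-fa-zwnj-base-v2")

# "The capital of Iran is [MASK]." -- illustrative example
for pred in fill_mask("پایتخت ایران [MASK] است."):
    print(pred["token_str"], round(pred["score"], 4))
```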
### BibTeX entry and citation info
Please cite in your publication as the following:
```bibtex
@misc{ALBERTPersian,
author = {Hooshvare Team},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. | 739 | [
[
-0.0357666015625,
-0.034393310546875,
0.0266265869140625,
0.0310211181640625,
-0.0195770263671875,
0.0219573974609375,
-0.0285797119140625,
-0.00237274169921875,
0.0406494140625,
0.0209808349609375,
-0.0183868408203125,
-0.04351806640625,
-0.03839111328125,
... |
IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | 2023-05-25T09:49:06.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"clip",
"zh",
"image-text",
"arxiv:2209.02970",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | IDEA-CCNL | null | null | IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | 15 | 787 | transformers | 2022-09-27T13:12:18 | ---
license: apache-2.0
# inference: false
# pipeline_tag: zero-shot-image-classification
pipeline_tag: feature-extraction
# inference:
# parameters:
tags:
- clip
- zh
- image-text
- feature-extraction
---
# Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
首个开源的中文CLIP模型,1.23亿图文对上进行预训练的文本端RoBERTa-base
The first open source Chinese CLIP, pre-training on 123M image-text pairs, the text encoder: RoBERTa-base.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | CLIP (RoBERTa) | 102M | Chinese |
## 模型信息 Model Information
我们遵循CLIP的实验设置,以获得强大的视觉-语言表征。在训练中文版的CLIP时,我们使用[chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext)作为语言的编码器,并将[open_clip](https://github.com/mlfoundations/open_clip)中的**ViT-L-14**应用于视觉的编码器。为了快速且稳定地进行预训练,我们冻结了视觉编码器并且只微调语言编码器。此外,我们将[Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/)数据集(100M)和[Zero](https://zero.so.com/)数据集(23M)用作预训练的数据集。在悟空数据集和zero数据集上预训练24轮,在A100x32上训练了6天。据我们所知,我们的Taiyi-CLIP是目前Huggingface社区中首个的开源中文CLIP。
We follow the experimental setup of CLIP to obtain powerful visual-language intelligence. To obtain the CLIP for Chinese, we employ [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) for the language encoder, and apply the **ViT-L-14** in [open_clip](https://github.com/mlfoundations/open_clip) for the vision encoder. We freeze the vision encoder and tune the language encoder to speed up and stabilize the pre-training process. Moreover, we apply [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) dataset (100M) and [Zero](https://zero.so.com/) dataset (23M) as the pre-training datasets. The model was first trained 24 epochs on wukong and zero, which takes 6 days to train on A100x32. To the best of our knowledge, our TaiyiCLIP is currently the only open-sourced Chinese CLIP in the huggingface community.
### 下游效果 Performance
**Zero-Shot Classification**
| model | dataset | Top1 | Top5 |
| ---- | ---- | ---- | ---- |
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | ImageNet1k-CN | 55.04% | 81.75% |
**Zero-Shot Text-to-Image Retrieval**
| model | dataset | Top1 | Top5 | Top10 |
| ---- | ---- | ---- | ---- | ---- |
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | Flickr30k-CNA-test | 58.32% | 82.96% | 89.40% |
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | COCO-CN-test | 55.27% | 81.10% | 90.78% |
| Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese | wukong50k | 64.95% | 91.77% | 96.28% |
## 使用 Usage
```python3
from PIL import Image
import requests
import open_clip
import torch
from transformers import BertModel, BertConfig, BertTokenizer
from transformers import CLIPProcessor, CLIPModel
import numpy as np
query_texts = ["一只猫", "一只狗", '两只猫', '两只老虎', '一只老虎']  # input texts; replace with anything you like
# load the Taiyi Chinese text encoder
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese")
text_encoder = BertModel.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese").eval()
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # replace with any image URL
# load the open_clip image encoder
clip_model, _, processor = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')
clip_model = clip_model.eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
image = processor(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0)
with torch.no_grad():
image_features = clip_model.encode_image(image)
text_features = text_encoder(text)[1]
# normalize the features
image_features = image_features / image_features.norm(dim=1, keepdim=True)
text_features = text_features / text_features.norm(dim=1, keepdim=True)
# compute cosine similarity; logit_scale is the learned scaling factor
logit_scale = clip_model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print(np.around(probs, 3))
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | 5,338 | [
[
-0.0333251953125,
-0.0604248046875,
0.016387939453125,
0.0283660888671875,
-0.03369140625,
-0.01517486572265625,
-0.04083251953125,
-0.0280609130859375,
0.033538818359375,
0.01297760009765625,
-0.04132080078125,
-0.045013427734375,
-0.04345703125,
0.00490570... |
google/matcha-chartqa | 2023-07-22T19:34:59.000Z | [
"transformers",
"pytorch",
"pix2struct",
"text2text-generation",
"matcha",
"visual-question-answering",
"en",
"fr",
"ro",
"de",
"multilingual",
"arxiv:2212.09662",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"region:us"
] | visual-question-answering | google | null | null | google/matcha-chartqa | 20 | 787 | transformers | 2023-04-03T11:01:11 | ---
language:
- en
- fr
- ro
- de
- multilingual
inference: false
pipeline_tag: visual-question-answering
license: apache-2.0
tags:
- matcha
---
# Model card for MatCha - fine-tuned on ChartQA
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/matcha_architecture.jpg"
alt="drawing" width="600"/>
This model is the MatCha model, fine-tuned on the ChartQA dataset.
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Using the model](#using-the-model)
2. [Contribution](#contribution)
3. [Citation](#citation)
# TL;DR
The abstract of the paper states that:
> Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MATCHA (Math reasoning and Chart derendering pretraining) to enhance visual language models’ capabilities in jointly modeling charts/plots and language data. Specifically we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MATCHA pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MATCHA model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MATCHA pretraining transfers to domains such as screenshot, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MATCHA pretraining on broader visual language tasks.
# Using the model
You should ask specific questions to the model in order to get consistent generations. Here we are asking the model whether the sum of values that are in a chart are greater than the largest value.
```python
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
processor = Pix2StructProcessor.from_pretrained('google/matcha-chartqa')
model = Pix2StructForConditionalGeneration.from_pretrained('google/matcha-chartqa')
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> No
```
To run the predictions on GPU, simply add `.to(0)` when creating the model and when getting the inputs (`inputs = inputs.to(0)`)
# Converting from T5x to huggingface
You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py) script as follows:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
```
if you are converting a large model, run:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
```
Once saved, you can push your converted model with the following snippet:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```
# Contribution
This model was originally contributed by Fangyu Liu, Francesco Piccinno et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada).
# Citation
If you want to cite this work, please consider citing the original paper:
```
@misc{liu2022matcha,
title={MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering},
author={Fangyu Liu and Francesco Piccinno and Syrine Krichene and Chenxi Pang and Kenton Lee and Mandar Joshi and Yasemin Altun and Nigel Collier and Julian Martin Eisenschlos},
year={2022},
eprint={2212.09662},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 4,372 | [
[
-0.038330078125,
-0.052337646484375,
0.021270751953125,
0.0174560546875,
-0.019256591796875,
-0.0277862548828125,
-0.01045989990234375,
-0.03192138671875,
0.007099151611328125,
0.0335693359375,
-0.048980712890625,
-0.020294189453125,
-0.052337646484375,
-0.0... |
ProomptEngineer/pe-shitty-fanart | 2023-09-11T15:29:56.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:other",
"region:us",
"has_space"
] | text-to-image | ProomptEngineer | null | null | ProomptEngineer/pe-shitty-fanart | 2 | 787 | diffusers | 2023-09-11T15:29:53 | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PETerribleFanArt
widget:
- text: PETerribleFanArt
---
# PE Shitty FanArt

## Sick of perfect AI Images? Then use this LoRA to make some terrible FanArt!
## Weights 0.8-1
## If you want to donate: https://ko-fi.com/proomptengineer
## Image examples for the model:









| 844 | [
[
-0.0267791748046875,
-0.045196533203125,
0.0298004150390625,
0.0111846923828125,
-0.042877197265625,
-0.023162841796875,
0.0433349609375,
-0.0377197265625,
0.046356201171875,
0.057159423828125,
-0.051605224609375,
-0.01383209228515625,
-0.036376953125,
0.008... |
artificialguybr/IconsRedmond-IconsLoraForSDXL-V2 | 2023-10-07T01:34:25.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | artificialguybr | null | null | artificialguybr/IconsRedmond-IconsLoraForSDXL-V2 | 3 | 787 | diffusers | 2023-09-26T05:09:08 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: icredm
widget:
- text: icons
---
# Icons.Redmond V2!

Icons.Redmond V2 is here!
I'm grateful for the GPU time from Redmond.AI that allowed me to finish this LORA!
This is an ICONS APP LoRA fine-tuned on SDXL 1.0.
The LoRA has a high capacity to generate app icons and icon images in a wide variety of themes. It's a versatile LoRA.
I recommend generating at 1024x1024. You can/should test with weight 0.8 and 0.7 too.
You can also use "ios icon app" or "dribbble" as tags, and "minimalism" or "detailed" to improve some results.
The trigger tag for the model: icredm
The LoRA is not perfect and sometimes needs more than one generation to create good images. I recommend simple prompts.
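Below is a minimal sketch with `diffusers` (illustrative only; the prompt and LoRA scale are examples, not exact settings):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("artificialguybr/IconsRedmond-IconsLoraForSDXL-V2")
pipe.fuse_lora(lora_scale=0.8)  # recommended LoRA weight 0.7-0.8

# trigger tag "icredm" plus optional style tags
image = pipe(
    "icredm, weather app icon, ios icon app, minimalism",
    width=1024, height=1024,
).images[0]
image.save("icon.png")
```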
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me on Twitter to be the first to know about new models:
https://twitter.com/artificialguybr/ | 1,091 | [
[
-0.03814697265625,
-0.03619384765625,
0.023895263671875,
0.03619384765625,
-0.033355712890625,
0.005645751953125,
0.02398681640625,
-0.061309814453125,
0.0833740234375,
0.047515869140625,
-0.052093505859375,
-0.033050537109375,
-0.0221405029296875,
-0.010780... |
WisdomShell/CodeShell-7B | 2023-11-01T11:52:22.000Z | [
"transformers",
"pytorch",
"safetensors",
"kclgpt",
"text-generation",
"codeshell",
"wisdomshell",
"pku-kcl",
"openbankai",
"custom_code",
"zh",
"en",
"has_space",
"region:us"
] | text-generation | WisdomShell | null | null | WisdomShell/CodeShell-7B | 64 | 787 | transformers | 2023-10-05T16:31:11 | ---
language:
- zh
- en
tags:
- codeshell
- wisdomshell
- pku-kcl
- openbankai
---
# CodeShell
CodeShell是[北京大学知识计算实验室](http://se.pku.edu.cn/kcl/)联合四川天府银行AI团队研发的多语言代码大模型基座。CodeShell具有70亿参数,在五千亿Tokens进行了训练,上下文窗口长度为8194。在权威的代码评估Benchmark(HumanEval与MBPP)上,CodeShell取得同等规模最好的性能。与此同时,我们提供了与CodeShell配套的部署方案与IDE插件,请参考代码库[CodeShell](https://github.com/WisdomShell/codeshell)。同时,为了方便中国用户下载,我们在modelscope中也上传了对应版本,国内用户可以访问[CodeShell-7B国内地址](https://modelscope.cn/models/WisdomShell/CodeShell-7B/summary)。本仓库为CodeShell-7B预训练模型仓库。
CodeShell is a multi-language code LLM developed by the [Knowledge Computing Lab](http://se.pku.edu.cn/kcl/) of Peking University. CodeShell has 7 billion parameters and was trained on 500 billion tokens with a context window length of 8194. On authoritative code evaluation benchmarks (HumanEval and MBPP), CodeShell achieves the best performance of its scale. Meanwhile, we provide deployment solutions and IDE plugins that complement CodeShell. Please refer to the [CodeShell code repository](https://github.com/WisdomShell/codeshell) for more details. This repository is for the CodeShell-7B base model.
## Main Characteristics of CodeShell
* **强大的性能**:CodeShell在HumanEval和MBPP上达到了7B代码基座大模型的最优性能
* **完整的体系**:除了代码大模型,同时开源IDE(VS Code与JetBrains)插件,形成开源的全栈技术体系
* **轻量化部署**:支持本地C++部署,提供轻量快速的本地化软件开发助手解决方案
* **全面的评测**:提供支持完整项目上下文、覆盖代码生成、代码缺陷检测与修复、测试用例生成等常见软件开发活动的多任务评测体系(即将开源)
* **高效的训练**:基于高效的数据治理体系,CodeShell在完全冷启动情况下,只训练了五千亿Token即获得了优异的性能
* **Powerful Performance**: CodeShell achieves optimal performance for a 7B code base model on HumanEval and MBPP.
* **Complete Ecosystem**: In addition to the large code model, open-source IDE plugins (for VS Code and JetBrains) are also available, forming a comprehensive open-source full-stack technology system.
* **Lightweight Deployment**: Supports local C++ deployment, offering a lightweight and fast localized software development assistant solution.
* **Comprehensive Evaluation**: Provides a multi-task evaluation system that supports full project context, covering code generation, code defect detection and repair, test case generation, and other common software development activities (to be open-sourced soon).
* **Efficient Training**: Based on an efficient data governance system, CodeShell, even when starting from scratch, achieved outstanding performance with training on just 500 billion tokens.
## Quickstart
### Code Generation
Codeshell 提供了Hugging Face格式的模型,开发者可以通过下列代码加载并使用。
Codeshell offers a model in the Hugging Face format. Developers can load and use it with the following code.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("WisdomShell/CodeShell-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("WisdomShell/CodeShell-7B", trust_remote_code=True).cuda()
inputs = tokenizer('def print_hello_world():', return_tensors='pt').to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill in the Middle
CodeShell 支持Fill-in-the-Middle模式,从而更好的支持软件开发过程。
CodeShell supports the Fill-in-the-Middle mode, thereby better facilitating the software development process.
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
## Model Details
Code Shell使用GPT-2作为基础架构,采用Grouped-Query Attention、RoPE相对位置编码等技术。
Code Shell uses GPT-2 as its foundational architecture and incorporates technologies such as Grouped-Query Attention and RoPE relative position encoding.
| Hyper-parameter | Value |
|---|---|
| n_layer | 42 |
| n_embd | 4096 |
| n_inner | 16384 |
| n_head | 32 |
| num_query_groups | 8 |
| seq-length | 8192 |
| vocab_size | 70144 |
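As a rough illustration of the Grouped-Query Attention setting above (32 query heads sharing 8 key/value heads), here is a minimal sketch; it is not CodeShell's actual implementation and omits RoPE and all masking details beyond causality:
```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, n_head=32, n_kv_head=8):
    """q: (B, S, n_head, D); k, v: (B, S, n_kv_head, D). Illustration only."""
    # each key/value head is shared by a group of n_head // n_kv_head query heads
    repeat = n_head // n_kv_head
    k = k.repeat_interleave(repeat, dim=2)
    v = v.repeat_interleave(repeat, dim=2)
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # -> (B, heads, S, D)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2)  # -> (B, S, n_head, D)

# toy shapes: hidden size 4096 over 32 heads -> head_dim 128
b, s, d = 1, 16, 128
out = grouped_query_attention(torch.randn(b, s, 32, d), torch.randn(b, s, 8, d), torch.randn(b, s, 8, d))
print(out.shape)  # torch.Size([1, 16, 32, 128])
```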
## Evaluation
我们选取了目前最流行的两个代码评测数据集(HumanEval与MBPP)对模型进行评估,与目前最先进的两个7b代码大模型CodeLlama与Starcoder相比,CodeShell取得了最优的成绩。具体评测结果如下。
We selected the two most popular code evaluation datasets currently available (HumanEval and MBPP) to assess the model. Compared to the two most advanced 7B code LLMs, CodeLlama and StarCoder, CodeShell achieved the best results. The specific evaluation results are as follows.
### Pass@1
| 任务 Task | CodeShell-7b | CodeLlama-7b | Starcoder-7b |
| ------- | --------- | --------- | --------- |
| humaneval | **34.32** | 29.44 | 27.80 |
| mbpp | **38.65** | 37.60 | 34.16 |
| multiple-js | **33.17** | 31.30 | 27.02 |
| multiple-java | **30.43** | 29.24 | 24.30 |
| multiple-cpp | **28.21** | 27.33 | 23.04 |
| multiple-swift | 24.30 | **25.32** | 15.70 |
| multiple-php | **30.87** | 25.96 | 22.11 |
| multiple-d | 8.85 | **11.60** | 8.08 |
| multiple-jl | 22.08 | **25.28** | 22.96 |
| multiple-lua | 22.39 | **30.50** | 22.92 |
| multiple-r | **20.52** | 18.57 | 14.29 |
| multiple-rkt | **17.20** | 12.55 | 10.43 |
| multiple-rs | 24.55 | **25.90** | 22.82 |
# Statement
我们郑重声明,我们开发团队基于CodeShell模型开发了基于vscode和intellij的智能编码助手插件并均已开源。除此以外,无论是针对iOS、Android、HarmonyOS、Web,还是其他任何平台,我们的开发团队均未开发任何基于CodeShell模型的应用程序。我们强烈敦促所有用户不要利用CodeShell模型从事危害国家和社会安全或违法活动。同时,我们要求用户不要在未经适当的安全审查和备案的互联网服务中使用CodeShell模型。我们希望所有用户都能遵守这一原则,以确保在合规和合法的环境下发展科技。
尽管我们在确保模型训练过程中使用数据合规性方面已付出巨大努力,但由于模型和数据的复杂性,可能会出现难以预料的问题。因此,对于使用CodeShell开源模型导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误用、滥用、传播或不当利用等风险和问题,我们概不负责。
We hereby declare that our development team has developed intelligent coding assistant plugins for vscode and intellij based on the CodeShell model, both of which have been open-sourced. Beyond this, whether for iOS, Android, HarmonyOS, Web, or any other platform, our development team has not developed any applications based on the CodeShell model. We strongly urge all users not to use the CodeShell model for activities that endanger national and social security or are illegal. At the same time, we request users not to use the CodeShell model in internet services that have not undergone proper security reviews and registration. We hope all users will adhere to this principle to ensure the development of technology in a compliant and legal environment.
Despite our significant efforts to ensure compliance in the data used during the model training process, unforeseen issues may arise due to the complexity of the models and data. Therefore, we are not responsible for any issues arising from the use of the open-sourced CodeShell model, including but not limited to data security issues, public opinion risks, or risks and problems related to the model being misused, abused, disseminated, or exploited improperly.
# License
社区使用CodeShell模型需要遵循[CodeShell模型许可协议](https://huggingface.co/WisdomShell/CodeShell-7B/resolve/main/CodeShell%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)及[Apache 2.0 许可证](https://www.apache.org/licenses/LICENSE-2.0)。CodeShell模型允许用于商业用途,但如果您计划将CodeShell模型或其派生产品用于商业用途,需要您确认主体符合以下条件:
1. 关联方的服务或产品的每日平均活跃用户数(DAU)原则上不能超过100万。
2. 关联方不得是面向个人用户的软件服务提供商或云服务提供商。
3. 关联方不存在将获得授予的商业许可,在未经许可的前提下将其再授权给其他第三方的可能性。
在满足上述条件的前提下,您需要通过向codeshell.opensource@gmail.com发送电子邮件,提交《CodeShell模型许可协议》要求的申请材料。经审核通过后,将授予您一个全球的、非排他的、不可转让的、不可再授权的商业版权许可。
Community use of the CodeShell model requires adherence to the ["CodeShell License Agreement"](https://huggingface.co/WisdomShell/CodeShell-7B/resolve/main/CodeShell%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) and the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). The CodeShell model is allowed for commercial use, but if you plan to use the CodeShell model or its derivatives for commercial purposes, you need to ensure that the entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. You and your affiliates must not be a software service provider or cloud service provider targeting individual users.
3. You and your affiliates should not have the possibility of sub-licensing to other third parties without obtaining the commercial license granted.
Under the aforementioned conditions, you need to submit the application materials required by the "CodeShell License Agreement" by sending an email to codeshell.opensource@gmail.com. After approval, you will be granted a global, non-exclusive, non-transferable, non-sublicensable commercial copyright license.
| 8,363 | [
[
-0.0283203125,
-0.0355224609375,
0.01308441162109375,
0.01947021484375,
-0.02874755859375,
0.0016193389892578125,
-0.01727294921875,
-0.046051025390625,
0.01126861572265625,
0.031982421875,
-0.038604736328125,
-0.0673828125,
-0.0443115234375,
0.0158538818359... |
irodkin/gpt2-wiki2 | 2023-10-31T13:03:07.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:wikitext-2-v1",
"dataset:wikitext",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | irodkin | null | null | irodkin/gpt2-wiki2 | 0 | 786 | transformers | 2023-09-07T14:55:53 | ---
datasets:
- wikitext-2-v1
- wikitext
language:
- en
metrics:
- perplexity
- cross_entropy
---
**metrics on 1024 context**:
- valid_perplexity = 14.79
- valid_cross_entropy = 2.69
- train_perplexity = 13.77
- train_cross_entropy = 2.62
**metrics on 252 context**:
- valid_perplexity = 17.35
**metrics on 378 context**:
- valid_perplexity = 16.4
**metrics on 504 context**:
- valid_perplexity = 15.86
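The perplexity values above are just the exponential of the corresponding cross-entropy, so the pairs can be cross-checked directly; a minimal sketch (it only verifies the relationship, not the evaluation itself):
```python
import math

# perplexity = exp(cross-entropy): sanity check of the reported 1024-context figures
print(math.exp(2.69))  # ~14.7, close to the reported valid_perplexity of 14.79
print(math.exp(2.62))  # ~13.7, close to the reported train_perplexity of 13.77
```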
**Dependence of the cross entropy loss on the length of the context for prediction**
- x-axis*128 = context length
- y-axis = cross entropy
 | 668 | [
[
-0.0418701171875,
-0.0287628173828125,
0.033721923828125,
0.04388427734375,
-0.0262908935546875,
-0.0264892578125,
-0.00936126708984375,
-0.0343017578125,
0.024566650390625,
0.0083770751953125,
-0.06732177734375,
-0.036651611328125,
-0.044281005859375,
0.010... |
timm/vit_large_r50_s32_224.augreg_in21k_ft_in1k | 2023-05-06T00:48:55.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_large_r50_s32_224.augreg_in21k_ft_in1k | 0 | 785 | timm | 2022-12-23T00:29:46 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_large_r50_s32_224.augreg_in21k_ft_in1k
A ResNet - Vision Transformer (ViT) hybrid image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 329.0
- GMACs: 19.5
- Activations (M): 22.2
- Image size: 224 x 224
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_large_r50_s32_224.augreg_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_r50_s32_224.augreg_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,926 | [
[
-0.03887939453125,
-0.027740478515625,
-0.002849578857421875,
0.003513336181640625,
-0.0291290283203125,
-0.0188751220703125,
-0.0246734619140625,
-0.03436279296875,
0.0191802978515625,
0.0204010009765625,
-0.0404052734375,
-0.037750244140625,
-0.044708251953125... |
Lucetepolis/FuzzyHazel | 2023-09-23T13:28:25.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | Lucetepolis | null | null | Lucetepolis/FuzzyHazel | 50 | 785 | diffusers | 2023-03-19T13:54:06 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# FuzzyHazel, FuzzyAlmond
HazyAbyss - <a href="https://huggingface.co/KMAZ/TestSamples/">Download</a><br/>
OctaFuzz - <a href="https://huggingface.co/Lucetepolis/OctaFuzz">Download</a><br/>
MareAcernis - <a href="https://huggingface.co/Lucetepolis/MareAcernis">Download</a><br/>
RefSlaveV2 - <a href="https://huggingface.co/Dorshu/refslaveV2_v2">Download</a><br/>
dlfmaanjffhgkwl v2 - <a href="https://civitai.com/models/9815/dlfmaanjffhgkwl-mix">Download</a><br/>
Guardian Tales 三七-SAL-独轮车 | Chibi Style Lora 52 - <a href="https://civitai.com/models/14274/guardian-tales-sal-or-chibi-style-lora-52">Download</a><br/>
Komowata Haruka (こもわた遙華) Chibi Art Style LoRA - <a href="https://civitai.com/models/9922/komowata-haruka-chibi-art-style-lora">Download</a><br/>
Terada Tera (寺田てら) Art Style LoRA - <a href="https://civitai.com/models/15446/terada-tera-art-style-lora">Download</a><br/>
Yaro Artstyle LoRA - <a href="https://civitai.com/models/8112/yaro-artstyle-lora">Download</a><br/>
EasyNegative and pastelmix-lora seem to work well with the models.
EasyNegative - <a href="https://huggingface.co/datasets/gsdf/EasyNegative">Download</a><br/>
pastelmix-lora - <a href="https://huggingface.co/andite/pastel-mix">Download</a>
# Formula
```
MBW
HazyAbyss.safetensors [d7b0072ef7]
octafuzz.safetensors [364bdf849d]
0000.safetensors
base_alpha=1
Weight_values=1,1,0,0,0,0.5,1,1,0.5,0,0,0,1,0,0,0,0.5,1,1,0.5,0,0,0,1,1
MBW
0000.safetensors [360691971b]
mareacernis.safetensors [fbc82b317d]
0001.safetensors
base_alpha=0
Weight_values=0.5,0,0,0,0,0,0,0,0.5,0.5,0,0,0.25,0.5,0.5,0.5,0.25,0.25,0.25,0.25,0.5,0.5,0.5,0,0
MBW
0001.safetensors [ac67bd1235]
refslavev2.safetensors [cce9a2d200]
0002.safetensors
base_alpha=0
Weight_values=0,0.5,1,1,0.5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1
MBW
0002.safetensors [cc5331b8ae]
dlf.safetensors [d596b45d6b]
FuzzyHazel.safetensors
base_alpha=0
Weight_values=0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0
SuperMerger LoRA Merge
model_0 : FuzzyHazel.safetensors
model_Out : FuzzyAlmond.safetensors
LoRa : lora:guardiantales:0.25, lora:komowata:0.25, lora:terada:0.25, lora:yaro:0.25
```
# Samples
All of the images use the following negatives/settings. EXIF is preserved.
```
Negative prompt: (worst quality, low quality:1.4), EasyNegative, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 768x512, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires steps: 14, Hires upscaler: Latent (nearest-exact)
```
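For reference, a minimal diffusers sketch under the assumption that this repository loads directly with `StableDiffusionPipeline`; the webui-specific pieces above (EasyNegative embedding, attention weighting, hires fix, Clip skip, ENSD) are not reproduced here, and the prompt is a placeholder:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load FuzzyHazel from this repository (fp16 assumed for GPU use)
pipe = StableDiffusionPipeline.from_pretrained(
    "Lucetepolis/FuzzyHazel", torch_dtype=torch.float16
).to("cuda")

# Plain-text stand-in for the negative prompt above; webui attention weighting
# and the EasyNegative embedding are not reproduced in plain diffusers.
negative = "worst quality, low quality, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits"

image = pipe(
    "1girl, upper body, looking at viewer",  # hypothetical prompt
    negative_prompt=negative,
    num_inference_steps=28,
    guidance_scale=7,
    width=768,
    height=512,
).images[0]
image.save("fuzzyhazel_sample.png")
```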
# FuzzyHazel












# FuzzyAlmond












| 4,682 | [
[
-0.060516357421875,
-0.037109375,
0.0122528076171875,
0.0218505859375,
-0.0253448486328125,
-0.01468658447265625,
0.0002980232238769531,
-0.04779052734375,
0.081787109375,
0.016693115234375,
-0.06048583984375,
-0.043609619140625,
-0.03594970703125,
0.0101470... |
ehartford/dolphin-2.2.1-mistral-7b | 2023-10-30T22:57:00.000Z | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | text-generation | ehartford | null | null | ehartford/dolphin-2.2.1-mistral-7b | 43 | 785 | transformers | 2023-10-30T22:50:33 | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
---
# dolphin-2.2.1-mistral-7b
Dolphin 2.2.1 🐬
https://erichartford.com/dolphin
This is a checkpoint release to fix overfit training, i.e., it was responding with CoT even when I didn't request it, and it was too compliant even when the request made no sense. This one should be better.
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2.1-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on [mistralAI](https://huggingface.co/mistralai/Mistral-7B-v0.1), with apache-2.0 license, so it is suitable for commercial or non-commercial use.
New in 2.2 are conversation and empathy. With an infusion of curated Samantha DNA, Dolphin can now give you personal advice and will care about your feelings, and it has extra training in long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of WizardLM and Samantha to give it multiturn conversation and empathy.
## Training
It took 48 hours to train 4 epochs on 4x A100s.
Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
<|im_start|>assistant
```
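A minimal `transformers` sketch that assembles the ChatML prompt above by hand (it assumes no chat template is bundled with the tokenizer, so the prompt string is built manually):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ehartford/dolphin-2.2.1-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")  # device_map="auto" needs accelerate

# ChatML prompt assembled exactly as in the example above
prompt = (
    "<|im_start|>system\n"
    "you are an expert dolphin trainer<|im_end|>\n"
    "<|im_start|>user\n"
    "What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```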
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output


[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 80
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0 | 4,069 | [
[
-0.05267333984375,
-0.038787841796875,
0.00951385498046875,
0.020355224609375,
-0.0214691162109375,
-0.027191162109375,
-0.00008034706115722656,
-0.05963134765625,
0.004795074462890625,
0.0166168212890625,
-0.045562744140625,
-0.0155029296875,
-0.052276611328125... |
flaubert/flaubert_large_cased | 2021-05-19T16:55:50.000Z | [
"transformers",
"pytorch",
"flaubert",
"fill-mask",
"bert",
"language-model",
"flue",
"french",
"bert-large",
"flaubert-large",
"cased",
"fr",
"dataset:flaubert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | flaubert | null | null | flaubert/flaubert_large_cased | 2 | 784 | transformers | 2022-03-02T23:29:05 | ---
language: fr
license: mit
datasets:
- flaubert
metrics:
- flue
tags:
- bert
- language-model
- flaubert
- flue
- french
- bert-large
- flaubert-large
- cased
---
# FlauBERT: Unsupervised Language Model Pre-training for French
**FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer.
Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. For more details, please refer to the [official website](https://github.com/getalp/Flaubert).
## FlauBERT models
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `flaubert-small-cased` | 6 | 8 | 512 | 54 M |
| `flaubert-base-uncased` | 12 | 12 | 768 | 137 M |
| `flaubert-base-cased` | 12 | 12 | 768 | 138 M |
| `flaubert-large-cased` | 24 | 16 | 1024 | 373 M |
**Note:** `flaubert-small-cased` is partially trained, so performance is not guaranteed. Consider using it for debugging purposes only.
## Using FlauBERT with Hugging Face's Transformers
```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer
# Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased',
# 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased']
modelname = 'flaubert/flaubert_base_cased'
# Load pretrained model and tokenizer
flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
# do_lowercase=False if using cased models, True if using uncased ones
sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)])
last_layer = flaubert(token_ids)[0]
print(last_layer.shape)
# torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension)
# The BERT [CLS] token correspond to the first hidden state of the last layer
cls_embedding = last_layer[:, 0, :]
```
**Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one
of the following values:
```
['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased']
```
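For masked-word prediction (this checkpoint's fill-mask task), a short sketch with the `fill-mask` pipeline; the mask token is read from the tokenizer rather than hard-coded, and the example sentence is reused from above:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="flaubert/flaubert_large_cased")
mask = unmasker.tokenizer.mask_token  # avoids hard-coding FlauBERT's mask token
print(unmasker(f"Le chat mange une {mask}.")[:3])  # top completions for the masked word
```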
## References
If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers:
[LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf)
```
@InProceedings{le2020flaubert,
author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier},
title = {FlauBERT: Unsupervised Language Model Pre-training for French},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {2479--2490},
url = {https://www.aclweb.org/anthology/2020.lrec-1.302}
}
```
[TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/)
```
@inproceedings{le2020flaubert,
title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais},
author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier},
booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles},
pages = {268--278},
year = {2020},
organization = {ATALA}
}
``` | 4,485 | [
[
-0.02520751953125,
-0.055450439453125,
0.0262451171875,
0.0135498046875,
-0.0009813308715820312,
0.0037689208984375,
-0.0203704833984375,
-0.00792694091796875,
0.0252227783203125,
0.03680419921875,
-0.030914306640625,
-0.035552978515625,
-0.04705810546875,
-... |
malduwais/distilbert-base-uncased-finetuned-ner | 2021-11-28T09:59:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | malduwais | null | null | malduwais/distilbert-base-uncased-finetuned-ner | 0 | 783 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9244616234124793
- name: Recall
type: recall
value: 0.9364582168027744
- name: F1
type: f1
value: 0.9304212515282871
- name: Accuracy
type: accuracy
value: 0.9833987322668276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9245
- Recall: 0.9365
- F1: 0.9304
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
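As a rough illustration only, the values above map onto `transformers` `TrainingArguments` roughly as follows (dataset preprocessing and the `Trainer` wiring are omitted; the Adam betas and epsilon listed are the defaults):
```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```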
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2377 | 1.0 | 878 | 0.0711 | 0.9176 | 0.9254 | 0.9215 | 0.9813 |
| 0.0514 | 2.0 | 1756 | 0.0637 | 0.9213 | 0.9346 | 0.9279 | 0.9831 |
| 0.031 | 3.0 | 2634 | 0.0623 | 0.9245 | 0.9365 | 0.9304 | 0.9834 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,203 | [
[
-0.035430908203125,
-0.04241943359375,
0.01195526123046875,
0.01479339599609375,
-0.0221405029296875,
-0.0227813720703125,
-0.00928497314453125,
-0.0079498291015625,
0.00640106201171875,
0.018585205078125,
-0.04852294921875,
-0.046905517578125,
-0.05902099609375... |
timm/vit_srelpos_small_patch16_224.sw_in1k | 2023-05-05T22:04:32.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_srelpos_small_patch16_224.sw_in1k | 0 | 783 | timm | 2022-12-23T00:22:33 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_srelpos_small_patch16_224.sw_in1k
A Vision Transformer (ViT) image classification model. This is a `timm` specific variation of the ViT architecture with shared relative position embeddings, no class token, and final representation via global average pool of tokens. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes)
* AdamW optimizer, gradient clipping, EMA weight averaging
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.0
- GMACs: 4.2
- Activations (M): 8.5
- Image size: 224 x 224
- **Papers:**
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_srelpos_small_patch16_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_srelpos_small_patch16_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
| 4,180 | [
[
-0.03704833984375,
-0.0243072509765625,
-0.00445556640625,
0.01267242431640625,
-0.0286102294921875,
-0.030364990234375,
-0.0192108154296875,
-0.0400390625,
0.0196685791015625,
0.02618408203125,
-0.04425048828125,
-0.03656005859375,
-0.0531005859375,
-0.0052... |
timm/eva_giant_patch14_336.m30m_ft_in22k_in1k | 2023-03-31T06:00:55.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:merged-30m",
"dataset:imagenet-22k",
"arxiv:2211.07636",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/eva_giant_patch14_336.m30m_ft_in22k_in1k | 0 | 783 | timm | 2022-12-23T02:46:41 | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
- merged-30m
- imagenet-22k
---
# Model card for eva_giant_patch14_336.m30m_ft_in22k_in1k
An EVA image classification model. Pretrained on Merged-30M (ImageNet-22K, CC12M, CC3M, Object365, COCO (train), ADE20K (train)) with masked image modeling (using OpenAI CLIP-L as a MIM teacher) and fine-tuned on ImageNet-22k then on ImageNet-1k by paper authors.
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 1013.0
- GMACs: 620.6
- Activations (M): 550.7
- Image size: 336 x 336
- **Papers:**
- EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636
- **Pretrain Dataset:**
- Merged-30M
- ImageNet-22k
- **Dataset:** ImageNet-1k
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/BAAI/EVA
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva_giant_patch14_336.m30m_ft_in22k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva_giant_patch14_336.m30m_ft_in22k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 1408) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA,
title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2211.07636},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,140 | [
[
-0.05059814453125,
-0.0296478271484375,
0.005336761474609375,
0.00885009765625,
-0.0215301513671875,
0.0014438629150390625,
-0.01525115966796875,
-0.032135009765625,
0.044830322265625,
0.033172607421875,
-0.03546142578125,
-0.052947998046875,
-0.0526123046875,
... |
ipipan/silver-retriever-base-v1 | 2023-10-19T08:54:27.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pl",
"dataset:ipipan/polqa",
"dataset:ipipan/maupqa",
"arxiv:2309.08469",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | ipipan | null | null | ipipan/silver-retriever-base-v1 | 7 | 783 | sentence-transformers | 2023-08-16T13:37:36 | ---
pipeline_tag: sentence-similarity
language:
- pl
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- ipipan/polqa
- ipipan/maupqa
license: cc-by-sa-4.0
widget:
- source_sentence: "Pytanie: W jakim mieście urodził się Zbigniew Herbert?"
sentences:
- "Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg."
- "Zbigniew Herbert</s>Lato 1968 Herbert spędził w USA (na zaproszenie Poetry Center)."
- "Herbert George Wells</s>Herbert George Wells (ur. 21 września 1866 w Bromley, zm. 13 sierpnia 1946 w Londynie) – brytyjski pisarz i biolog."
example_title: "Zbigniew Herbert"
---

# Silver Retriever Base (v1)
Silver Retriever model encodes the Polish sentences or paragraphs into a 768-dimensional dense vector space and can be used for tasks like document retrieval or semantic search.
It was initialized from the [HerBERT-base](https://huggingface.co/allegro/herbert-base-cased) model and fine-tuned on the [PolQA](https://huggingface.co/ipipan/polqa) and [MAUPQA](https://huggingface.co/ipipan/maupqa) datasets for 15,000 steps with a batch size of 1,024. Please refer to the [SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering](https://arxiv.org/abs/2309.08469) for more details.
## Evaluation
| **Model** | **Average [Acc]** | **Average [NDCG]** | [**PolQA**](https://huggingface.co/datasets/ipipan/polqa) **[Acc]** | [**PolQA**](https://huggingface.co/datasets/ipipan/polqa) **[NDCG]** | [**Allegro FAQ**](https://huggingface.co/datasets/piotr-rybak/allegro-faq) **[Acc]** | [**Allegro FAQ**](https://huggingface.co/datasets/piotr-rybak/allegro-faq) **[NDCG]** | [**Legal Questions**](https://huggingface.co/datasets/piotr-rybak/legal-questions) **[Acc]** | [**Legal Questions**](https://huggingface.co/datasets/piotr-rybak/legal-questions) **[NDCG]** |
|--------------------:|------------:|-------------:|------------:|-------------:|------------:|-------------:|------------:|-------------:|
| BM25 | 74.87 | 51.81 | 61.35 | 24.51 | 66.89 | 48.71 | **96.38** | **82.21** |
| BM25 (lemma) | 80.46 | 55.44 | 71.49 | 31.97 | 75.33 | 55.70 | 94.57 | 78.65 |
| [MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 62.62 | 39.21 | 37.24 | 11.93 | 71.67 | 51.25 | 78.97 | 54.44 |
| [LaBSE](https://huggingface.co/sentence-transformers/LaBSE) | 64.89 | 39.47 | 46.23 | 15.53 | 67.11 | 46.71 | 81.34 | 56.16 |
| [mContriever-Base](https://huggingface.co/nthakur/mcontriever-base-msmarco) | 86.31 | 60.37 | 78.66 | 36.30 | 84.44 | 67.38 | 95.82 | 77.42 |
| [E5-Base](https://huggingface.co/intfloat/multilingual-e5-base) | 91.58 | 66.56 | 86.61 | **46.08** | 91.89 | 75.90 | 96.24 | 77.69 |
| [ST-DistilRoBERTa](https://huggingface.co/sdadas/st-polish-paraphrase-from-distilroberta) | 73.78 | 48.29 | 48.43 | 16.73 | 84.89 | 64.39 | 88.02 | 63.76 |
| [ST-MPNet](https://huggingface.co/sdadas/st-polish-paraphrase-from-mpnet) | 76.66 | 49.99 | 56.80 | 21.55 | 86.00 | 65.44 | 87.19 | 62.99 |
| [HerBERT-QA](https://huggingface.co/ipipan/herbert-base-qa-v1) | 84.23 | 54.36 | 75.84 | 32.52 | 85.78 | 63.58 | 91.09 | 66.99 |
| [**Silver Retriever v1**](https://huggingface.co/ipipan/silver-retriever-base-v1) | **92.45** | **66.72** | **87.24** | 43.40 | **94.56** | **79.66** | 95.54 | 77.10 |
Legend:
- **Acc** is the Accuracy at 10
- **NDCG** is the Normalized Discounted Cumulative Gain at 10
## Usage
### Preparing inputs
The model was trained on question-passage pairs and works best when the input is in the same format as that used during training:
- We added the phrase `Pytanie:` to the beginning of the question.
- The training passages consisted of `title` and `text` concatenated with the special token `</s>`. Even if your passages don't have a `title`, it is still beneficial to prefix a passage with the `</s>` token.
- Although we used the dot product during training, the model usually works better with the cosine distance.
### Inference with Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [
"Pytanie: W jakim mieście urodził się Zbigniew Herbert?",
"Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
]
model = SentenceTransformer('ipipan/silver-retriever-base-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
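Since cosine distance is recommended above, a short self-contained sketch that ranks passages for a question with `util.cos_sim` (the passages are the widget examples from this card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('ipipan/silver-retriever-base-v1')
question = "Pytanie: W jakim mieście urodził się Zbigniew Herbert?"
passages = [
    "Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
    "Herbert George Wells</s>Herbert George Wells (ur. 21 września 1866 w Bromley, zm. 13 sierpnia 1946 w Londynie) – brytyjski pisarz i biolog.",
]
q_emb = model.encode(question)
p_emb = model.encode(passages)
scores = util.cos_sim(q_emb, p_emb)  # cosine similarity, as recommended above
print(scores)  # the first (correct) passage should score higher
```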
### Inference with HuggingFace Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = [
"Pytanie: W jakim mieście urodził się Zbigniew Herbert?",
"Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ipipan/silver-retriever-base-v1')
model = AutoModel.from_pretrained('ipipan/silver-retriever-base-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Additional Information
### Model Creators
The model was created by Piotr Rybak from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).
This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@misc{rybak2023silverretriever,
title={SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering},
author={Piotr Rybak and Maciej Ogrodniczuk},
year={2023},
eprint={2309.08469},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 7,877 | [
[
-0.02362060546875,
-0.05975341796875,
0.03192138671875,
0.0169219970703125,
-0.0231781005859375,
-0.023590087890625,
-0.0194549560546875,
-0.00865936279296875,
0.0215301513671875,
0.0278167724609375,
-0.04150390625,
-0.04412841796875,
-0.045257568359375,
0.0... |
Norod78/SDXL-LofiGirl-Lora | 2023-09-19T16:31:41.000Z | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"en",
"license:mit",
"region:us",
"has_space"
] | text-to-image | Norod78 | null | null | Norod78/SDXL-LofiGirl-Lora | 3 | 783 | diffusers | 2023-08-29T08:47:08 | ---
license: mit
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Lofi Girl
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- stable-diffusion
- lora
- diffusers
widget:
- text: Dora the LofiGirl
- text: An alien Lofi Girl from outer space
- text: A Lofi Girl Cthulhu rising from the sea in a great storm
- text: the girl with a pearl earring the LofiGirl
inference: true
language:
- en
---
# Trigger words
Use "Lofi Girl" or "LofiGirl" in your prompts
# Examples
The girl with a pearl earring the LofiGirl

A frame from the show Doctor Who featuring a cyberman Lofi girl

| 1,317 | [
[
-0.0210723876953125,
-0.0753173828125,
0.04541015625,
0.01134490966796875,
-0.0343017578125,
-0.00888824462890625,
0.0173187255859375,
-0.005207061767578125,
0.03289794921875,
0.033233642578125,
-0.061920166015625,
-0.03753662109375,
-0.069091796875,
0.04327... |
maywell/Synatra-7B-v0.3-RP | 2023-10-29T11:14:35.000Z | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | maywell | null | null | maywell/Synatra-7B-v0.3-RP | 1 | 783 | transformers | 2023-10-29T07:14:59 | ---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# **Synatra-7B-v0.3-RP🐧**

## Support Me
Synatra is a personal project, developed with the resources of a single person. If you like the model, how about supporting the research with a small contribution?
[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)
Wanna be a sponsor? Contact me on Telegram **AlzarTakkarsen**
# **License**
This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only.
The "Model" is completely free (i.e. base model, derivatives, merges/mixes) to use for non-commercial purposes, as long as the included **cc-by-nc-4.0** license in any parent repository and the non-commercial use clause remain in place, regardless of other models' licences.
The licence may be changed after a new model is released. If you want to use this model for commercial purposes, contact me.
# **Model Details**
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
**Trained On**
A6000 48GB * 8
**Instruction format**
It follows [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format.
**TODO**
- ~~``Create an RP-based fine-tuned model``~~ ✅
- ~~``Refine the dataset``~~ ✅
- Improve language understanding
- ~~``Supplement common-sense knowledge``~~ ✅
- Change the tokenizer
# **Model Benchmark**
## Ko-LLM-Leaderboard
On Benchmarking...
# **Implementation Code**
Since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-RP")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-RP")
messages = [
{"role": "user", "content": "바나나는 원래 하얀색이야?"},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
# Why is its benchmark score lower than the preview version's?
**Apparently**, the preview model uses an Alpaca-style prompt, which has no prefix, but ChatML does.
[
-0.0250091552734375,
-0.0574951171875,
0.007415771484375,
0.035186767578125,
-0.0377197265625,
-0.0302734375,
-0.01263427734375,
-0.035675048828125,
0.0281829833984375,
0.021942138671875,
-0.042572021484375,
-0.04046630859375,
-0.05181884765625,
-0.003765106... |
microsoft/deberta-xxlarge-v2 | 2021-02-11T02:05:17.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"deberta",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | microsoft | null | null | microsoft/deberta-xxlarge-v2 | 0 | 782 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
## This model is DEPRECATED, please use [DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)
| 293 | [
[
-0.029754638671875,
-0.036163330078125,
0.02117919921875,
0.056884765625,
-0.043670654296875,
-0.004001617431640625,
0.0186614990234375,
-0.0335693359375,
0.02490234375,
0.0126495361328125,
-0.06103515625,
-0.01416015625,
-0.07110595703125,
-0.01492309570312... |
mjsp/sweet | 2023-11-05T21:54:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | mjsp | null | null | mjsp/sweet | 0 | 782 | transformers | 2023-10-22T14:30:14 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: sweet
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5750916004180908
---
# sweet
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
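To try the classifier itself, a minimal sketch with the `transformers` image-classification pipeline (the image path is a hypothetical placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="mjsp/sweet")
# "sweet.jpg" is a hypothetical local photo of an Indian sweet
print(classifier("sweet.jpg"))
```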
## Example Images
#### Adhirasam

#### Anarsa

#### Anjeer Barfi

#### Badam Burfi

#### Bal Mithai

#### Balushahi

#### Barfi

#### Basundi

#### Bessan Laddu

#### Bobbatlu

#### Boondi

#### Boondi Ladoo

#### Cham Cham

#### Chena Murki

#### Chenna Poda

#### Chikki

#### Chiroti

#### Coconut Ladoo

#### Dhondas

#### Dodha Barfi

#### Double ka meetha

#### Dry Fruits Chikki

#### Gajar ka halwa

#### Ghevar

#### Gud Papdi

#### Gudanna

#### Gujiya

#### Gulab Jamun

#### Halwa

#### Jalebi

#### Jhangri

#### Kaju Anjeer Barfi

#### Kaju Anjeer Roll

#### Kaju Katli

#### Kala Jamun

#### Khaja

#### Kheer

#### Kheer Kadam

#### Laddu

#### Lavang Latika

#### Malai chom chom

#### Malpua

#### Meethi Seviyan

#### Mishti Dohi

#### Modak

#### Mohanthal

#### Motichoor Laddu

#### Mysore_pak

#### Nankhatai

#### Paniyaram

#### Papad Roll

#### Patishapta

#### Payasam (Rice or Vermicelli)
.jpg)
#### Peda

#### Petha

#### Phirni

#### Puran Poli

#### Puri Unde

#### Qubani Ka Meetha

#### Rabri

#### Rajbhog

#### Ras Malai

#### Rasgulla

#### Rava Kesari

#### Sandesh

#### Sannas

#### Shahi Tukda

#### Shakarpara

#### Sheer khurma

#### Shrikhand

#### Shufta

#### Singhare Atte Ki Barfi

#### Sohan Papdi

#### Sutarfeni

#### kalakand
 | 4,705 | [
[
-0.048065185546875,
-0.032012939453125,
0.004695892333984375,
0.0191192626953125,
-0.035064697265625,
0.023040771484375,
0.0034198760986328125,
-0.017303466796875,
0.0557861328125,
0.040252685546875,
-0.02362060546875,
-0.02423095703125,
-0.0467529296875,
0.... |