| modelId | lastModified | tags | pipeline_tag | author | config | securityStatus | id | likes | downloads | library_name | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
timm/vit_tiny_patch16_384.augreg_in21k_ft_in1k | 2023-05-06T00:30:08.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_tiny_patch16_384.augreg_in21k_ft_in1k | 0 | 1,624 | timm | 2022-12-22T07:56:14 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_tiny_patch16_384.augreg_in21k_ft_in1k
A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.8
- GMACs: 3.2
- Activations (M): 12.1
- Image size: 384 x 384
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_tiny_patch16_384.augreg_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_tiny_patch16_384.augreg_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,904 | [embedding truncated] |
meta-math/MetaMath-7B-V1.0 | 2023-10-11T02:45:06.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:meta-math/MetaMathQA",
"arxiv:2309.12284",
"license:llama2",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | meta-math | null | null | meta-math/MetaMath-7B-V1.0 | 16 | 1,624 | transformers | 2023-09-21T08:33:54 | ---
license: llama2
datasets:
- meta-math/MetaMathQA
---
Paper: https://arxiv.org/abs/2309.12284

Project page: https://meta-math.github.io/
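# Usage
The original card includes no usage code; below is a minimal generation sketch (not from the card). It assumes the standard `transformers` causal-LM API and the Alpaca-style prompt format commonly used to evaluate MetaMath models; the instruction shown is a hypothetical example.
```python
# A minimal inference sketch under the assumptions stated above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-math/MetaMath-7B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style prompt (assumed format; see the project page for the exact template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nJames buys 5 packs of beef that are 4 pounds each. "
    "The price of beef is $5.50 per pound. How much did he pay?\n\n"
    "### Response: Let's think step by step."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```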
# Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
```
| 512 | [embedding truncated] |
bigscience/mt0-xl | 2023-07-25T11:14:55.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
... | text2text-generation | bigscience | null | null | bigscience/mt0-xl | 23 | 1,622 | transformers | 2022-10-27T20:55:06 | ---
datasets:
- bigscience/xP3
- mc4
license: apache-2.0
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
pipeline_tag: text2text-generation
widget:
- text: >-
一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
review as positive, neutral or negative?
example_title: zh-en sentiment
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
example_title: zh-zh sentiment
- text: Suggest at least five related search terms to "Mạng neural nhân tạo".
example_title: vi-en query
- text: >-
Proposez au moins cinq mots clés concernant «Réseau de neurones
artificiels».
example_title: fr-fr query
- text: Explain in a sentence in Telugu what is backpropagation in neural networks.
example_title: te-en qa
- text: Why is the sky blue?
example_title: en-en qa
- text: >-
Write a fairy tale about a troll saving a princess from a dangerous dragon.
The fairy tale is a masterpiece that has achieved praise worldwide and its
moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
example_title: es-en fable
- text: >-
Write a fable about wood elves living in a forest that is suddenly invaded
by ogres. The fable is a masterpiece that has achieved praise worldwide and
its moral is "Violence is the last refuge of the incompetent". Fable (in
Hindi):
example_title: hi-en fable
model-index:
- name: mt0-xl
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 52.49
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 61.89
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.04
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 60.27
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 66.16
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.05
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 62.9
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 38.2
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 34.8
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 39
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 85.71
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 78.7
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 51.85
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.18
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.83
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.22
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.24
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.09
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 49.6
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 52.13
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.56
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.91
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.21
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.64
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 0
- type: Pass@10
value: 0
- type: Pass@100
value: 0
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: '2016'
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 79.1
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 72
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 70
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 66
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 71
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 70
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 53
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 64
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 60
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 68
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 65
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 70.09
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 77.17
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 69.03
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 71.08
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 75.71
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 65.65
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 74.85
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 71.14
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 68.89
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 72.93
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th>
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*", or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". A small sketch comparing the two styles of prompt follows.
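The snippet below (a sketch, not from the original card) reuses the CPU example's setup to contrast an ambiguous prompt with a clearly terminated one; actual generations will vary.
```python
# Compare an ambiguous prompt with a clearly terminated one.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-xl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

prompts = [
    "Translate to English: Je t'aime",                 # ambiguous: model may continue the French
    "Translate to English: Je t'aime. Translation:",   # clearly terminated: model should answer
]
for prompt in prompts:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs)
    print(repr(prompt), "->", tokenizer.decode(outputs[0], skip_special_tokens=True))
```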
# Training
## Model
- **Architecture:** Same as [mt5-xl](https://huggingface.co/google/mt5-xl), also refer to the `config.json` file
- **Finetuning steps:** 10000
- **Finetuning tokens:** 1.85 billion
- **Precision:** bfloat16
## Hardware
- **TPUs:** TPUv4-128
## Software
- **Orchestration:** [T5X](https://github.com/google-research/t5x)
- **Neural networks:** [Jax](https://github.com/google/jax)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
| 23,774 | [embedding truncated] |
huggingface/autoformer-tourism-monthly | 2023-05-24T15:30:55.000Z | [
"transformers",
"pytorch",
"autoformer",
"dataset:monash_tsf",
"arxiv:2106.13008",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | huggingface | null | null | huggingface/autoformer-tourism-monthly | 1 | 1,621 | transformers | 2023-05-08T19:21:08 | ---
license: apache-2.0
datasets:
- monash_tsf
---
# Autoformer
## Overview
The Autoformer model was proposed in [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang and Mingsheng Long.
The abstract from the paper is the following:
*Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.*
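## Usage
The original card ends at the abstract; the sketch below is not from the card. It assumes `transformers`' `AutoformerForPrediction` API and the `hf-internal-testing/tourism-monthly-batch` example dataset used in the transformers documentation — treat both as assumptions to verify.
```python
# Hedged forecasting sketch under the assumptions stated above.
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoformerForPrediction

# Example batch of monthly tourism series (assumed example dataset).
file = hf_hub_download(
    repo_id="hf-internal-testing/tourism-monthly-batch",
    filename="train-batch.pt",
    repo_type="dataset",
)
batch = torch.load(file)

model = AutoformerForPrediction.from_pretrained("huggingface/autoformer-tourism-monthly")

# Sample future trajectories conditioned on the observed past.
outputs = model.generate(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    future_time_features=batch["future_time_features"],
)
mean_prediction = outputs.sequences.mean(dim=1)  # average over sampled trajectories
```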
| 1,824 | [embedding truncated] |
nerijs/dripped-out-xl | 2023-10-19T01:22:54.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:apache-2.0",
"region:us"
] | text-to-image | nerijs | null | null | nerijs/dripped-out-xl | 6 | 1,620 | diffusers | 2023-10-19T00:33:05 | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: dripped out
widget:
- text: dripped out shrek sitting on a lambo
---
# Dripped Out SDXL 💧
This is a LoRA to make 'HARD' AI Generated Images.
The weights were trained on the concept prompt:
```
dripped out
```
Use this keyword to trigger the custom model in your prompts.
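A minimal loading sketch (not from the original card): it applies the LoRA on top of the SDXL base model listed in the metadata, and assumes a CUDA GPU and diffusers' standard `load_lora_weights` API (pass `weight_name=` if the weight file isn't found automatically).
```python
# Load SDXL base, apply this LoRA, and generate with the trigger keyword.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nerijs/dripped-out-xl")

# Use the "dripped out" trigger keyword in the prompt.
image = pipe("dripped out shrek sitting on a lambo").images[0]
image.save("dripped_out_shrek.png")
```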
# DRIP OR DROWN!

 | 628 | [
[
-0.0306243896484375,
-0.05426025390625,
0.058868408203125,
0.0292816162109375,
-0.051239013671875,
0.010101318359375,
0.01503753662109375,
-0.020050048828125,
0.041839599609375,
0.07794189453125,
-0.05877685546875,
-0.0188751220703125,
-0.062255859375,
-0.00... |
TheBloke/Vigogne-2-13B-Instruct-GPTQ | 2023-09-27T12:45:10.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"LLM",
"llama-2",
"fr",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Vigogne-2-13B-Instruct-GPTQ | 3 | 1,619 | transformers | 2023-07-29T12:35:53 | ---
language:
- fr
license: llama2
library_name: transformers
tags:
- LLM
- llama
- llama-2
model_name: Vigogne 2 13B Instruct
base_model: bofenghuang/vigogne-2-13b-instruct
inference: false
model_creator: bofenghuang
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vigogne 2 13B Instruct - GPTQ
- Model creator: [bofenghuang](https://huggingface.co/bofenghuang)
- Original model: [Vigogne 2 13B Instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct)
<!-- description start -->
## Description
This repo contains GPTQ model files for [bofenghuang's Vigogne 2 13B Instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF)
* [bofenghuang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 13.95 GB | No | 8-bit, with group size 64g and Act Order for even higher inference quality. Poor AutoGPTQ CUDA speed. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Vigogne-2-13B-Instruct-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Vigogne-2-13B-Instruct-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Vigogne-2-13B-Instruct-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Vigogne-2-13B-Instruct-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Vigogne-2-13B-Instruct-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: bofenghuang's Vigogne 2 13B Instruct
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-2-13b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-2-13B-Instruct: A Llama-2 based French instruction-following model
Vigogne-2-13B-Instruct is a model based on [LLaMA-2-13B](https://ai.meta.com/llama) that has been fine-tuned to follow French instructions.
For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Vigogne-2-13B-Instruct follows the same usage policy as Llama-2, which can be found [here](https://ai.meta.com/llama/use-policy).
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_instruct_prompt
model_name_or_path = "bofenghuang/vigogne-2-13b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_instruct_prompt(user_query)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also run inference with this model using the following Google Colab notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Example Outputs
*todo*
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
| 18,123 | [embedding truncated] |
timm/ViT-B-16-SigLIP-i18n-256 | 2023-10-25T22:04:56.000Z | [
"open_clip",
"clip",
"siglip",
"zero-shot-image-classification",
"dataset:webli",
"arxiv:2303.15343",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | timm | null | null | timm/ViT-B-16-SigLIP-i18n-256 | 1 | 1,619 | open_clip | 2023-10-17T00:26:06 | ---
tags:
- clip
- siglip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- webli
---
# Model card for ViT-B-16-SigLIP-i18n-256
A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI.
This model has been converted to PyTorch from the original JAX checkpoints in [Big Vision](https://github.com/google-research/big_vision). These weights are usable in both OpenCLIP (image + text) and timm (image only).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/google-research/big_vision
- **Dataset:** WebLI
- **Papers:**
- Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer # works on open-clip-torch>=2.23.0, timm>=0.9.8
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-B-16-SigLIP-i18n-256')
tokenizer = get_tokenizer('hf-hub:timm/ViT-B-16-SigLIP-i18n-256')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
### With `timm` (for image embeddings)
```python
from urllib.request import urlopen
from PIL import Image
import timm
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_siglip_256',
pretrained=True,
num_classes=0,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(image).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@article{zhai2023sigmoid,
title={Sigmoid loss for language image pre-training},
author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
journal={arXiv preprint arXiv:2303.15343},
year={2023}
}
```
```bibtex
@misc{big_vision,
author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
title = {Big Vision},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/google-research/big_vision}}
}
```
| 3,176 | [embedding truncated] |
timm/efficientformerv2_s1.snap_dist_in1k | 2023-02-03T21:11:20.000Z | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2212.08059",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/efficientformerv2_s1.snap_dist_in1k | 0 | 1,617 | timm | 2023-02-03T21:11:15 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientformerv2_s1.snap_dist_in1k
An EfficientFormer-V2 image classification model. Pretrained with distillation on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.2
- GMACs: 0.7
- Activations (M): 7.7
- Image size: 224 x 224
- **Original:** https://github.com/snap-research/EfficientFormer
- **Papers:**
- Rethinking Vision Transformers for MobileNet Size and Speed: https://arxiv.org/abs/2212.08059
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('efficientformerv2_s1.snap_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformerv2_s1.snap_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformerv2_s1.snap_dist_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for efficientformerv2_l:
# torch.Size([2, 40, 56, 56])
# torch.Size([2, 80, 28, 28])
# torch.Size([2, 192, 14, 14])
# torch.Size([2, 384, 7, 7])
print(o.shape)
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|efficientformerv2_l.snap_dist_in1k |83.628|96.54 |26.32 |224 |
|efficientformer_l7.snap_dist_in1k |83.368|96.534|82.23 |224 |
|efficientformer_l3.snap_dist_in1k |82.572|96.24 |31.41 |224 |
|efficientformerv2_s2.snap_dist_in1k|82.128|95.902|12.71 |224 |
|efficientformer_l1.snap_dist_in1k |80.496|94.984|12.29 |224 |
|efficientformerv2_s1.snap_dist_in1k|79.698|94.698|6.19 |224 |
|efficientformerv2_s0.snap_dist_in1k|76.026|92.77 |3.6 |224 |
## Citation
```bibtex
@article{li2022rethinking,
title={Rethinking Vision Transformers for MobileNet Size and Speed},
author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian},
journal={arXiv preprint arXiv:2212.08059},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
| 4,560 | [...] |
timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k | 2023-05-11T00:49:45.000Z | ["timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-12k", "arxiv:2204.01697", "arxiv:2201.03545", "arxiv:2111.09883", "license:apache-2.0", "region:us"] | image-classification | timm | null | null | timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k | 2 | 1,616 | timm | 2023-01-20T21:38:05 |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-12k
---
# Model card for maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k
A timm-specific MaxxViT-V2 image classification model with an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Pretrained in `timm` on ImageNet-12k (an 11821-class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman.
ImageNet-12k pretraining and ImageNet-1k fine-tuning were performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure, including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Model names containing the string `rw` are `timm`-specific configs with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
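As a quick way to explore these variants, the `timm` model registry can be queried by wildcard; a minimal sketch (the name patterns are illustrative):
```python
import timm
# List all MaxxViT-family architectures registered in timm.
print(timm.list_models('maxxvit*'))
# Restrict a family to variants that ship pretrained weights.
print(timm.list_models('coatnet*', pretrained=True))
```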
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 116.1
- GMACs: 24.2
- Activations (M): 62.8
- Image size: 224 x 224
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| 22,580 | [...] |
timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k | 2023-03-31T21:58:27.000Z | ["timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:laion-2b", "arxiv:2210.08402", "arxiv:2201.03545", "arxiv:2103.00020", "license:apache-2.0", "region:us"] | image-classification | timm | null | null | timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k | 0 | 1,613 | timm | 2023-03-31T21:57:06 |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
---
# Model card for convnext_base.clip_laion2b_augreg_ft_in12k_in1k
A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION-2B and fine-tuned on ImageNet-12k followed by ImageNet-1k in `timm` by Ross Wightman.
Please see the related OpenCLIP model cards for more details on the pretraining:
* https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.6
- GMACs: 20.1
- Activations (M): 37.6
- Image size: 256 x 256
- **Papers:**
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_base.clip_laion2b_augreg_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_base.clip_laion2b_augreg_ft_in12k_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 64, 64])
# torch.Size([1, 256, 32, 32])
# torch.Size([1, 512, 16, 16])
# torch.Size([1, 1024, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_base.clip_laion2b_augreg_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
| 18,546 | [...] |
leo911kim/Exodia-7B | 2023-10-13T08:15:03.000Z | ["transformers", "pytorch", "mistral", "text-generation", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us"] | text-generation | leo911kim | null | null | leo911kim/Exodia-7B | 0 | 1,612 | transformers | 2023-10-13T05:24:34 |
---
license: mit
---
Master of Merging
[](https://www.buymeacoffee.com/yeongwooki3)
The Large Language Model, or LLM, represents a groundbreaking advancement in the realm of artificial intelligence.
By fusing together insights and data from various individual models, the LLM is designed to harness the best of each while mitigating their individual weaknesses.
This amalgamation allows the LLM to demonstrate unparalleled capability in understanding context, generating accurate content, and adapting to diverse tasks.
The integrated approach ensures that users benefit from increased accuracy, wider knowledge coverage, and a more nuanced understanding of both structured and unstructured data.
Essentially, the LLM epitomizes the next step in the evolution of AI, bringing about a model that is greater than the sum of its parts.
| 928 | [...] |
circulus/sd-photoreal-real-v2 | 2023-02-20T15:59:35.000Z | ["diffusers", "generative ai", "stable-diffusion", "image-to-image", "realism", "art", "text-to-image", "en", "license:gpl-3.0", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | circulus | null | null | circulus/sd-photoreal-real-v2 | 16 | 1,609 | diffusers | 2023-01-15T06:12:56 |
---
license: gpl-3.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- generative ai
- stable-diffusion
- image-to-image
- realism
- art
---
Photoreal Real v2
Fine-tuned Stable Diffusion 1.5 for generating images.
You can test this model through mobile!
https://eva.circul.us/index.html

 | 347 | [
[
-0.039398193359375,
-0.08026123046875,
0.029052734375,
0.0169219970703125,
-0.028411865234375,
-0.0187530517578125,
0.01338958740234375,
-0.03167724609375,
0.00937652587890625,
0.042694091796875,
-0.040374755859375,
-0.0254974365234375,
-0.00814056396484375,
... |
ctrlbuzz/bert-addresses | 2023-10-17T18:09:47.000Z | ["transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | ctrlbuzz | null | null | ctrlbuzz/bert-addresses | 2 | 1,609 | transformers | 2023-09-26T21:13:04 |
---
---
# Model Card for ctrlbuzz/bert-addresses
This model is developed to tag names, organisations, and addresses. It was trained on data combined from CoNLL, OntoNotes 5, and a custom, self-made address dataset, with the tag sets cleaned up. It detects U.S. addresses.
Label set: `["O", "B-ORG", "I-ORG", "B-PER", "I-PER", "B-addr", "I-addr"]`
### Model Description
- **Developed by:** ctrlbuzz
- **Model type:** Bert
- **Language(s) (NLP):** Named Entity recognition
- **Finetuned from model [optional]:** bert-base-cased
## Uses
### Direct Use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = AutoModelForTokenClassification.from_pretrained("ctrlbuzz/bert-addresses")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "While Maria was representing Johnson & Associates at a conference in Spain, she mailed me a letter from her new office at 123 Elm St., Apt. 4B, Springfield, IL."
print(nlp(example))
```
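The pipeline above returns one prediction per subword token. If whole entity spans are preferred, the `aggregation_strategy` argument of the `transformers` NER pipeline can group them; a minimal sketch (the choice of `"simple"` is an illustrative assumption, not part of the original setup):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = AutoModelForTokenClassification.from_pretrained("ctrlbuzz/bert-addresses")
# "simple" merges consecutive subword tokens of the same entity into one span
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
example = "She mailed me a letter from her new office at 123 Elm St., Apt. 4B, Springfield, IL."
for entity in nlp(example):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```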
| 1,230 | [...] |
sentence-transformers/msmarco-MiniLM-L12-cos-v5 | 2023-11-02T09:31:49.000Z | ["sentence-transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "arxiv:1908.10084", "endpoints_compatible", "region:us"] | sentence-similarity | sentence-transformers | null | null | sentence-transformers/msmarco-MiniLM-L12-cos-v5 | 6 | 1,608 | sentence-transformers | 2022-03-02T23:29:05 |
---
language:
- en
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# msmarco-MiniLM-L12-cos-v5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L12-cos-v5')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-MiniLM-L12-cos-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-MiniLM-L12-cos-v5")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following are some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent, and dot-product is preferred as it is faster. For normalized embeddings, Euclidean distance yields the same ranking as dot-product and can also be used.
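To illustrate the note above, here is a small sketch checking that the scores agree on normalized embeddings (the two sentences are placeholders):
```python
import torch
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L12-cos-v5')
emb = model.encode(["How many people live in London?",
                    "Around 9 Million people live in London"], convert_to_tensor=True)
print(torch.linalg.norm(emb, dim=1))   # each norm is ~1.0: embeddings are normalized
print(util.dot_score(emb[0], emb[1]))  # dot-product ...
print(util.cos_sim(emb[0], emb[1]))    # ... equals cosine-similarity here
```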
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | 5,133 | [
[
-0.0184326171875,
-0.058013916015625,
0.0306854248046875,
0.01045989990234375,
-0.0167999267578125,
-0.026336669921875,
-0.0212554931640625,
-0.00888824462890625,
0.019256591796875,
0.0270538330078125,
-0.0360107421875,
-0.048828125,
-0.04754638671875,
0.012... |
google/efficientnet-b0 | 2023-02-17T10:05:19.000Z | ["transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | google | null | null | google/efficientnet-b0 | 3 | 1,607 | transformers | 2023-02-15T20:17:27 |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# EfficientNet (b0 model)
EfficientNet model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).
Disclaimer: The team releasing EfficientNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b0")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b0")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).
### BibTeX entry and citation info
```bibtex
@article{Tan2019EfficientNetRM,
title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
author={Mingxing Tan and Quoc V. Le},
journal={ArXiv},
year={2019},
volume={abs/1905.11946}
}
``` | 2,697 | [
[
-0.0299072265625,
-0.038665771484375,
-0.022247314453125,
0.01360321044921875,
-0.01357269287109375,
-0.040863037109375,
-0.016021728515625,
-0.047943115234375,
0.0194244384765625,
0.0175018310546875,
-0.026458740234375,
-0.01477813720703125,
-0.057403564453125,... |
SudeepShetty/dogs | 2023-10-14T09:43:33.000Z | ["diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space"] | text-to-image | SudeepShetty | null | null | SudeepShetty/dogs | 0 | 1,605 | diffusers | 2023-10-14T09:38:35 |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### DOGS Dreambooth model trained by SudeepShetty following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VCETV107
Sample pictures of this concept:
| 288 | [...] |
NbAiLab/nb-bert-base-mnli | 2023-03-24T11:32:00.000Z | ["transformers", "pytorch", "jax", "safetensors", "bert", "text-classification", "nb-bert", "zero-shot-classification", "tensorflow", "norwegian", "no", "dataset:mnli", "dataset:multi_nli", "dataset:xnli", "arxiv:1909.00161", "license:cc-by-4.0", "endpoints_compatible", "has_space", ...] | zero-shot-classification | NbAiLab | null | null | NbAiLab/nb-bert-base-mnli | 5 | 1,604 | transformers | 2022-03-02T23:29:04 |
---
language: no
license: cc-by-4.0
thumbnail: https://raw.githubusercontent.com/NBAiLab/notram/master/images/nblogo_2.png
pipeline_tag: zero-shot-classification
tags:
- nb-bert
- zero-shot-classification
- pytorch
- tensorflow
- norwegian
- bert
datasets:
- mnli
- multi_nli
- xnli
widget:
- example_title: Nyhetsartikkel om FHI
text: Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.
candidate_labels: helse, politikk, sport, religion
---
**Release 1.0** (March 11, 2021)
# NB-Bert base model finetuned on Norwegian machine translated MNLI
## Description
The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible.
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport").
When the model is finetuned on the 400k-example MNLI task, it is in many cases able to solve such classification tasks. There is no MNLI set of this size in Norwegian, but we have trained the model on a machine-translated version of the original MNLI set.
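To make the reformulation concrete, here is a minimal sketch scoring a single candidate label by hand (the entailment label index is read from the config rather than hard-coded, since label order differs between NLI models):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-mnli")
model = AutoModelForSequenceClassification.from_pretrained("NbAiLab/nb-bert-base-mnli")
premise = "Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september."
hypothesis = "Dette eksempelet er helse."  # the label slotted into the hypothesis template
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# Assumption: look up which output index means "entailment" in this checkpoint.
entail_id = model.config.label2id.get("entailment", 0)
print(probs[0, entail_id].item())  # higher = text is more likely about "helse"
```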
## Testing the model
For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb)
## Hugging Face zero-shot-classification pipeline
The easiest way to try this out is by using the Hugging Face pipeline. Please, note that you will get better results when using Norwegian hypothesis template instead of the default English one.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="NbAiLab/nb-bert-base-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.'
candidate_labels = ['politikk', 'helse', 'sport', 'religion']
hypothesis_template = 'Dette eksempelet er {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_label=True)
# {'labels': ['helse', 'politikk', 'sport', 'religion'],
# 'scores': [0.4210019111633301, 0.0674605593085289, 0.000840459018945694, 0.0007541406666859984],
# 'sequence': 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.'}
```
## More information
For more information on the model, see
https://github.com/NBAiLab/notram
Here you will also find a Colab explaining in more detail how to use the zero-shot classification pipeline.
| 2,939 | [...] |
izumi-lab/electra-small-japanese-fin-discriminator | 2022-12-09T00:42:10.000Z | ["transformers", "pytorch", "electra", "pretraining", "finance", "ja", "arxiv:2003.10555", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us"] | null | izumi-lab | null | null | izumi-lab/electra-small-japanese-fin-discriminator | 0 | 1,604 | transformers | 2022-03-02T23:29:05 |
---
language: ja
license: cc-by-sa-4.0
tags:
- finance
widget:
- text: 流動[MASK]は1億円となりました。
---
# ELECTRA small Japanese finance discriminator
This is a [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA implementation](https://github.com/google-research/electra); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia and a Japanese financial corpus.
The Wikipedia corpus is generated from the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021.
The Wikipedia corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555) except size; 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is the same as that of the discriminator.
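As a sketch of how the discriminator can be queried with `transformers` (loading this checkpoint into `ElectraForPreTraining` follows the standard ELECTRA setup and is an assumption here; the tokenizer needs the MeCab dependencies such as `fugashi`):
```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining
name = "izumi-lab/electra-small-japanese-fin-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)
inputs = tokenizer("流動資産は1億円となりました。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one logit per token
# A positive logit means the token is predicted to be a replaced (fake) token.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, logit in zip(tokens, logits[0]):
    print(token, logit.item() > 0)
```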
## Citation
```
@article{Suzuki-etal-2023-ipm,
  title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
| 2,295 | [...] |
microsoft/SportsBERT | 2022-12-10T18:18:40.000Z | ["transformers", "pytorch", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us"] | fill-mask | microsoft | null | null | microsoft/SportsBERT | 13 | 1,604 | transformers | 2022-03-02T23:29:05 |
Pretrained large natural language processing models such as BERT and RoBERTa are now the state of the art in natural language understanding and processing tasks. However, these models are trained on a general corpus of articles from the web or from repositories like Quora and Wikipedia, which contain articles of all domains and backgrounds. Training a domain-specific language model has proven to perform better than pretrained general models in domains like medicine. With that knowledge, we went on to train a sports-specific BERT-based transformers model, SportsBERT.
SportsBERT is a BERT model trained from scratch with specific focus on sports articles. The training corpus included news articles scraped from the web related to sports from the past 4 years. These articles covered news from Football, Basketball, Hockey, Cricket, Soccer, Baseball, Olympics, Tennis, Golf, MMA, etc. There were approximately 8 million training samples which were used to train this model. A tokenizer was trained from scratch to include more sports related tokens to the vocabulary. The architecture used in this model is the BERT base uncased architecture. The model was trained on four V100 GPUs. It's a MLM based transformers model and the primary task of the model is to fill in missing masked tokens. For example,
"Anthony Davis is a [MASK]" would give out the tokens "legend", "superstar", "rookie", "star", "king" in descending confidences.
This model can then be fine-tuned for other tasks such as classification, entity extraction, etc.
Language: English
pipeline_tag: fill-mask
Authors: Prithvishankar Srinivasan (prsrini@microsoft.com) | 1,653 | [
[
-0.028564453125,
-0.035308837890625,
0.00789642333984375,
0.0189208984375,
-0.01380157470703125,
0.01434326171875,
-0.0206146240234375,
-0.04107666015625,
0.0150604248046875,
0.0248565673828125,
-0.056427001953125,
-0.024688720703125,
-0.05535888671875,
-0.0... |
TheBloke/WizardCoder-15B-1.0-GPTQ | 2023-08-21T08:37:19.000Z | [
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"license:bigcode-openrail-m",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/WizardCoder-15B-1.0-GPTQ | 162 | 1,603 | transformers | 2023-06-14T15:37:39 | ---
inference: false
license: bigcode-openrail-m
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WizardLM's WizardCoder 15B 1.0 GPTQ
These files are GPTQ 4bit model files for [WizardLM's WizardCoder 15B 1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0).
It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GPTQ)
* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GGML)
* [WizardLM's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
## Prompt template
```
Below is an instruction that describes a task. Write a response that appropriately completes the request
### Instruction: prompt
### Response:
```
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardCoder-15B-1.0-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished, it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `WizardCoder-15B-1.0-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/WizardCoder-15B-1.0-GPTQ"
# Or to load it locally, pass the local download path
# model_name_or_path = "/path/to/models/TheBloke_WizardCoder-15B-1.0-GPTQ"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
use_safetensors=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt_template = '''Below is an instruction that describes a task. Write a response that appropriately completes the request
### Instruction: {prompt}
### Response:'''
prompt = prompt_template.format(prompt="How do I sort a list in Python?")
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95)
print(outputs[0]['generated_text'])
```
## Provided files
**gptq_model-4bit--1g.safetensors**
This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.
* `gptq_model-4bit--1g.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Works with text-generation-webui, including one-click-installers.
* Does not work with GPTQ-for-LLaMa.
* Parameters: Groupsize = -1. Act Order / desc_act = True.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: WizardLM's WizardCoder 15B 1.0
This is the Full-Weight of WizardCoder.
**Repository**: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
**Twitter**: https://twitter.com/WizardLM_AI/status/1669109414559911937
**Paper**: Is coming, with brand-new Evol+ methods for code LLMs.
**Demos (Only support code-related English instructions now.)**:
[Demo](https://8194635813f45a1e.gradio.app/),
[Backup Demo1](https://375cead61e4db124.gradio.app/),
[Backup Demo2](https://1594ad375fc80cc7.gradio.app/),
[Backup Demo3](https://4989441110ee350f.gradio.app/)
# WizardCoder: Empowering Code Large Language Models with Evol-Instruct
To develop our WizardCoder model, we begin by adapting the Evol-Instruct method specifically for coding tasks. This involves tailoring the prompt to the domain of code-related instructions. Subsequently, we fine-tune the Code LLM, StarCoder, utilizing the newly created instruction-following training set.
## News
- 🔥 Our **WizardCoder-15B-v1.0** model achieves **57.3 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs.
- 🔥 We released **WizardCoder-15B-v1.0** trained with **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), and [Paper]().
- 📣 Please refer to our Twitter account https://twitter.com/WizardLM_AI and HuggingFace Repo https://huggingface.co/WizardLM . We will use them to announce any new releases first.
## Comparing WizardCoder with the Closed-Source Models.
🔥 The following figure shows that our **WizardCoder attains the third position in this benchmark**, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model exhibits a substantially smaller size compared to these models.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/pass1.png" alt="WizardCoder" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
</p>
❗**Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem utilizing a **single attempt**, and the resulting pass rate percentage is reported. Our **WizardCoder** generates answers using greedy decoding and tests with the same [code](https://github.com/evalplus/evalplus).**
## Comparing WizardCoder with the Open-Source Models.
The following table clearly demonstrates that our **WizardCoder** exhibits a substantial performance advantage over all the open-source models. ❗**If you are confused by the different scores of our model (57.3 and 59.8), please check the Notes.**
| Model | HumanEval Pass@1 | MBPP Pass@1 |
|------------------|------------------|-------------|
| CodeGen-16B-Multi| 18.3 |20.9 |
| CodeGeeX | 22.9 |24.4 |
| LLaMA-33B | 21.7 |30.2 |
| LLaMA-65B | 23.7 |37.7 |
| PaLM-540B | 26.2 |36.8 |
| PaLM-Coder-540B | 36.0 |47.0 |
| PaLM 2-S | 37.6 |50.0 |
| CodeGen-16B-Mono | 29.3 |35.3 |
| Code-Cushman-001 | 33.5 |45.9 |
| StarCoder-15B | 33.6 |43.6* |
| InstructCodeT5+ | 35.0 |-- |
| WizardLM-30B 1.0| 37.8 |-- |
| WizardCoder-15B 1.0 | **57.3** |**51.8** |
❗**Note: The asterisk (*) marks our reproduced result for StarCoder on MBPP.**
❗**Note: The above table presents a comprehensive comparison of our **WizardCoder** with other models on the HumanEval and MBPP benchmarks. We adhere to the approach outlined in previous studies by generating **20 samples** for each problem to estimate the pass@1 score and evaluate with the same [code](https://github.com/openai/human-eval/tree/master). The scores of GPT4 and GPT3.5 reported by [OpenAI](https://openai.com/research/gpt-4) are 67.0 and 48.1 (these may be from early versions of GPT-4 and GPT-3.5).**
## Call for Feedback
We welcome everyone to use your professional and difficult instructions to evaluate WizardCoder, and to show us examples of poor performance and your suggestions in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are currently focusing on improving Evol-Instruct and hope to address existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work together with you to improve it.
## Contents
1. [Online Demo](#online-demo)
2. [Fine-tuning](#fine-tuning)
3. [Inference](#inference)
4. [Evaluation](#evaluation)
5. [Citation](#citation)
6. [Disclaimer](#disclaimer)
## Online Demo
We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many **real-world** and **challenging** code-related problems as possible that you encounter in your work and life. We will continue to evolve our models with your feedback.
## Fine-tuning
We fine-tune WizardCoder using the modified code `train.py` from [Llama-X](https://github.com/AetherCortex/Llama-X).
We fine-tune StarCoder-15B with the following hyperparameters:
| Hyperparameter | StarCoder-15B |
|----------------|---------------|
| Batch size | 512 |
| Learning rate | 2e-5 |
| Epochs | 3 |
| Max length | 2048 |
| Warmup step | 30 |
| LR scheduler | cosine |
To reproduce our fine-tuning of WizardCoder, please follow these steps:
1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy. (Note: `deepspeed==0.9.2` and `transformers==4.29.2`)
2. Replace `train.py` with the `train_wizardcoder.py` from our repo (`src/train_wizardcoder.py`)
3. Log in to Hugging Face:
```bash
huggingface-cli login
```
4. Execute the following training command:
```bash
deepspeed train_wizardcoder.py \
--model_name_or_path "bigcode/starcoder" \
--data_path "/your/path/to/code_instruction_data.json" \
--output_dir "/your/path/to/ckpt" \
--num_train_epochs 3 \
--model_max_length 2048 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50 \
--save_total_limit 2 \
--learning_rate 2e-5 \
--warmup_steps 30 \
--logging_steps 2 \
--lr_scheduler_type "cosine" \
--report_to "tensorboard" \
--gradient_checkpointing True \
--deepspeed configs/deepspeed_config.json \
--fp16 True
```
## Inference
We provide the decoding script for WizardCoder, which reads an input file, generates a corresponding response for each sample, and finally consolidates them into an output file.
You can specify `base_model`, `input_data_path` and `output_data_path` in `src\inference_wizardcoder.py` to set the decoding model, path of input file and path of output file.
```bash
pip install jsonlines
```
The decoding command is:
```
python src\inference_wizardcoder.py \
--base_model "/your/path/to/ckpt" \
--input_data_path "/your/path/to/input/data.jsonl" \
--output_data_path "/your/path/to/output/result.jsonl"
```
The format of `data.jsonl` should be:
```
{"idx": 11, "Instruction": "Write a Python code to count 1 to 10."}
{"idx": 12, "Instruction": "Write a Jave code to sum 1 to 10."}
```
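As a small convenience sketch (not from the original instructions), a file in this format can be produced with the `jsonlines` package installed above:
```python
import jsonlines

samples = [
    {"idx": 11, "Instruction": "Write a Python code to count 1 to 10."},
    {"idx": 12, "Instruction": "Write a Java code to sum 1 to 10."},
]

# one JSON object per line, as expected by the decoding script
with jsonlines.open("data.jsonl", mode="w") as writer:
    writer.write_all(samples)
```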
The prompt for our WizardCoder in `src\inference_wizardcoder.py` is:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
## Evaluation
We provide the evaluation script on HumanEval for WizardCoder.
1. According to the instructions of [HumanEval](https://github.com/openai/human-eval), install the environment.
2. Run the following script to generate the answer.
```bash
model="/path/to/your/model"
temp=0.2
max_len=2048
pred_num=200
num_seqs_per_iter=2
output_path=preds/T${temp}_N${pred_num}
mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model
# 164 problems, 21 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
start_index=$((i * 21))
end_index=$(((i + 1) * 21))
gpu=$((i))
echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
((index++))
(
CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
--start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
) &
if (($index % $gpu_num == 0)); then wait; fi
done
```
3. Run the post processing code `src/process_humaneval.py` to collect the code completions from all answer files.
```bash
output_path=preds/T${temp}_N${pred_num}
echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt
evaluate_functional_correctness ${output_path}.jsonl
```
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{luo2023wizardcoder,
title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
year={2023},
}
```
## Disclaimer
The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
| 17,959 | [
[
-0.04217529296875,
-0.043701171875,
0.0030994415283203125,
0.007259368896484375,
-0.01387786865234375,
-0.002712249755859375,
0.0095062255859375,
-0.0279693603515625,
0.01299285888671875,
0.023773193359375,
-0.04180908203125,
-0.03814697265625,
-0.03643798828125... |
cross-encoder/nli-deberta-v3-xsmall | 2021-12-27T22:27:20.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"microsoft/deberta-v3-xsmall",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | zero-shot-classification | cross-encoder | null | null | cross-encoder/nli-deberta-v3-xsmall | 5 | 1,602 | transformers | 2022-03-02T23:29:05 | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-xsmall
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall).
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 91.64
- Accuracy on MNLI mismatched set: 87.77
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` | 2,791 | [
[
-0.0156402587890625,
-0.0557861328125,
0.0243682861328125,
0.0197601318359375,
-0.00014030933380126953,
-0.00542449951171875,
-0.00356292724609375,
-0.02520751953125,
0.0124359130859375,
0.03271484375,
-0.040679931640625,
-0.0386962890625,
-0.0435791015625,
... |
google/tapas-large | 2021-11-29T10:18:23.000Z | [
"transformers",
"pytorch",
"tf",
"tapas",
"feature-extraction",
"TapasModel",
"en",
"arxiv:2004.02349",
"arxiv:2010.00571",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | google | null | null | google/tapas-large | 1 | 1,602 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- tapas
- TapasModel
license: apache-2.0
---
# TAPAS large model
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="no_reset"`, which corresponds to `tapas_inter_masklm_large`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then
jointly training these randomly initialized classification heads with the base model on a downstream task.
## Intended uses & limitations
You can use the raw model for getting hidden representations of table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you. A minimal feature-extraction sketch follows.
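The sketch below is illustrative rather than from the original card: it extracts hidden states for a table-question pair with `TapasModel`, assuming `torch` and `pandas` are installed (older `transformers` releases may additionally require `torch-scatter`); note that `TapasTokenizer` expects every table cell as a string.
```python
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasModel

tokenizer = TapasTokenizer.from_pretrained("google/tapas-large")
model = TapasModel.from_pretrained("google/tapas-large")

# toy table; every cell must be a string
table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["59", "48"]})
queries = ["How old is Brad Pitt?"]

inputs = tokenizer(table=table, queries=queries, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
```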
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Pre-training
The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.
The optimizer used is Adam with a learning rate of 5e-5, and a warmup
ratio of 0.01.
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 4,678 | [
[
-0.0369873046875,
-0.06158447265625,
0.0257720947265625,
0.0130462646484375,
-0.034332275390625,
-0.016265869140625,
-0.0126800537109375,
-0.036529541015625,
0.0293731689453125,
0.044158935546875,
-0.0394287109375,
-0.02667236328125,
-0.0489501953125,
0.0080... |
timm/convnext_large.fb_in22k_ft_in1k | 2023-03-31T22:10:17.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convnext_large.fb_in22k_ft_in1k | 0 | 1,602 | timm | 2022-12-13T07:10:20 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for convnext_large.fb_in22k_ft_in1k
A ConvNeXt image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 197.8
- GMACs: 34.4
- Activations (M): 43.1
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_large.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large.fb_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 192, 56, 56])
# torch.Size([1, 384, 28, 28])
# torch.Size([1, 768, 14, 14])
# torch.Size([1, 1536, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large.fb_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,743 | [
[
-0.0675048828125,
-0.032745361328125,
-0.003353118896484375,
0.038360595703125,
-0.031982421875,
-0.01532745361328125,
-0.01308441162109375,
-0.035308837890625,
0.06500244140625,
0.017120361328125,
-0.04437255859375,
-0.042144775390625,
-0.050384521484375,
-... |
WisdomShell/CodeShell-7B-Chat | 2023-11-01T11:54:02.000Z | [
"transformers",
"pytorch",
"codeshell",
"text-generation",
"wisdomshell",
"pku-kcl",
"openbankai",
"custom_code",
"zh",
"en",
"region:us"
] | text-generation | WisdomShell | null | null | WisdomShell/CodeShell-7B-Chat | 18 | 1,602 | transformers | 2023-10-13T09:38:56 | ---
language:
- zh
- en
tags:
- codeshell
- wisdomshell
- pku-kcl
- openbankai
---
# CodeShell
CodeShell is a multi-language code LLM developed by the [Knowledge Computing Lab](http://se.pku.edu.cn/kcl/) of Peking University together with the AI team of Sichuan Tianfu Bank. CodeShell has 7 billion parameters and was trained on 500 billion tokens with a context window length of 8194. On authoritative code evaluation benchmarks (HumanEval and MBPP), CodeShell achieves the best performance of its scale. Meanwhile, we provide deployment solutions and IDE plugins that complement CodeShell. Please refer to the [CodeShell code repository](https://github.com/WisdomShell/codeshell) for more details. For the convenience of users in China, a mirror is also available on ModelScope at [CodeShell-7B-Chat (ModelScope)](https://modelscope.cn/models/WisdomShell/CodeShell-7B-Chat/summary). This repository is for the CodeShell-7B-Chat model.
## Main Characteristics of CodeShell
* **Powerful Performance**: CodeShell achieves the best performance among 7B code base models on HumanEval and MBPP.
* **Complete Ecosystem**: In addition to the code LLM itself, open-source IDE plugins (for VS Code and JetBrains) are also available, forming a comprehensive open-source full-stack technology system.
* **Lightweight Deployment**: Supports local C++ deployment, offering a lightweight and fast localized software development assistant solution.
* **Comprehensive Evaluation**: Provides a multi-task evaluation system that supports full project context, covering code generation, code defect detection and repair, test case generation, and other common software development activities (to be open-sourced soon).
* **Efficient Training**: Based on an efficient data governance system, CodeShell achieved outstanding performance after training on only 500 billion tokens from a complete cold start.
## Quickstart
Codeshell offers a model in the Hugging Face format. Developers can load and use it with the following code.
```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device('cuda:0')
model = AutoModelForCausalLM.from_pretrained('WisdomShell/CodeShell-7B-Chat', trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained('WisdomShell/CodeShell-7B-Chat')
history = []
query = '你是谁?'
response = model.chat(query, history, tokenizer)
print(response)
history.append((query, response))
query = '用Python写一个HTTP server'
response = model.chat(query, history, tokenizer)
print(response)
history.append((query, response))
```
Developers can also interact with CodeShell-7B-Chat through the VS Code and JetBrains plugins. For details, please refer to the [VSCode Plugin Repository](https://github.com/WisdomShell/codeshell-vscode) and the [IntelliJ Plugin Repository](https://github.com/WisdomShell/codeshell-intellij).
## Model Details
CodeShell uses GPT-2 as its foundational architecture and incorporates technologies such as Grouped-Query Attention and RoPE relative position encoding.
| Hyper-parameter | Value |
|---|---|
| n_layer | 42 |
| n_embd | 4096 |
| n_inner | 16384 |
| n_head | 32 |
| num_query_groups | 8 |
| seq-length | 8192 |
| vocab_size | 70144 |
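As a quick, hedged sanity check (not from the original card), the hyper-parameters should be visible on the remote-code config object; the attribute names below are assumed to mirror the table above:
```python
from transformers import AutoConfig

# attribute names assumed to match the hyper-parameter table above
config = AutoConfig.from_pretrained("WisdomShell/CodeShell-7B-Chat", trust_remote_code=True)
print(config.n_layer, config.n_embd, config.n_head, config.num_query_groups, config.vocab_size)
```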
## Evaluation
We selected the two most popular code evaluation datasets currently available (HumanEval and MBPP) to assess the model. Compared to the two most advanced 7B code LLMs, CodeLlama and StarCoder, CodeShell achieved the best results. The specific evaluation results are as follows.
### Pass@1
| Task | CodeShell-7b | CodeLlama-7b | Starcoder-7b |
| ------- | --------- | --------- | --------- |
| humaneval | **34.32** | 29.44 | 27.80 |
| mbpp | **38.65** | 37.60 | 34.16 |
| multiple-js | **33.17** | 31.30 | 27.02 |
| multiple-java | **30.43** | 29.24 | 24.30 |
| multiple-cpp | **28.21** | 27.33 | 23.04 |
| multiple-swift | 24.30 | **25.32** | 15.70 |
| multiple-php | **30.87** | 25.96 | 22.11 |
| multiple-d | 8.85 | **11.60** | 8.08 |
| multiple-jl | 22.08 | **25.28** | 22.96 |
| multiple-lua | 22.39 | **30.50** | 22.92 |
| multiple-r | **20.52** | 18.57 | 14.29 |
| multiple-rkt | **17.20** | 12.55 | 10.43 |
| multiple-rs | 24.55 | **25.90** | 22.82 |
# Statement
We hereby declare that our development team has developed intelligent coding assistant plugins for VS Code and IntelliJ based on the CodeShell model, both of which have been open-sourced. Beyond this, whether for iOS, Android, HarmonyOS, the Web, or any other platform, our development team has not developed any applications based on the CodeShell model. We strongly urge all users not to use the CodeShell model for activities that endanger national or social security or are illegal. At the same time, we request that users not use the CodeShell model in internet services that have not undergone proper security review and registration. We hope all users will adhere to this principle to ensure that technology develops in a compliant and legal environment.
Despite our significant efforts to ensure the compliance of the data used during model training, unforeseen issues may still arise due to the complexity of the model and data. Therefore, we accept no responsibility for any issues arising from the use of the open-source CodeShell model, including but not limited to data security issues, public opinion risks, or risks and problems caused by the model being misused, abused, disseminated, or exploited improperly.
# License
Community use of the CodeShell model requires adherence to the ["CodeShell Model License Agreement"](https://huggingface.co/WisdomShell/CodeShell-7B/resolve/main/CodeShell%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) and the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). The CodeShell model may be used commercially, but if you plan to use the CodeShell model or its derivatives for commercial purposes, you must ensure that the entity meets the following conditions:
1. The daily active users (DAU) of your or your affiliates' services or products must, in principle, not exceed 1 million.
2. Neither you nor your affiliates may be a software service provider or cloud service provider targeting individual users.
3. Neither you nor your affiliates may sub-license the granted commercial license to any other third party without permission.
Under the aforementioned conditions, you need to submit the application materials required by the "CodeShell Model License Agreement" by sending an email to codeshell.opensource@gmail.com. After approval, you will be granted a global, non-exclusive, non-transferable, non-sublicensable commercial copyright license.
| 8,537 | [
[
-0.0263214111328125,
-0.036529541015625,
0.01107025146484375,
0.020599365234375,
-0.02813720703125,
0.00763702392578125,
-0.0217132568359375,
-0.04522705078125,
0.0171356201171875,
0.03436279296875,
-0.03887939453125,
-0.06793212890625,
-0.040924072265625,
0... |
timm/convnextv2_femto.fcmae_ft_in1k | 2023-03-31T23:07:15.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2301.00808",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | timm | null | null | timm/convnextv2_femto.fcmae_ft_in1k | 0 | 1,601 | timm | 2023-01-05T01:39:46 | ---
tags:
- image-classification
- timm
library_name: timm
license: cc-by-nc-4.0
datasets:
- imagenet-1k
---
# Model card for convnextv2_femto.fcmae_ft_in1k
A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.2
- GMACs: 0.8
- Activations (M): 4.6
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808
- **Original:** https://github.com/facebookresearch/ConvNeXt-V2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnextv2_femto.fcmae_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnextv2_femto.fcmae_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 48, 56, 56])
# torch.Size([1, 96, 28, 28])
# torch.Size([1, 192, 14, 14])
# torch.Size([1, 384, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnextv2_femto.fcmae_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{Woo2023ConvNeXtV2,
title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders},
author={Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie},
year={2023},
journal={arXiv preprint arXiv:2301.00808},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,788 | [
[
-0.0689697265625,
-0.0310516357421875,
-0.00514984130859375,
0.03851318359375,
-0.031982421875,
-0.015869140625,
-0.01226806640625,
-0.035797119140625,
0.064208984375,
0.01806640625,
-0.0450439453125,
-0.039154052734375,
-0.052459716796875,
-0.00350761413574... |
speechbrain/tts-tacotron2-ljspeech | 2022-06-26T23:19:09.000Z | [
"speechbrain",
"text-to-speech",
"TTS",
"speech-synthesis",
"Tacotron2",
"en",
"dataset:LJSpeech",
"arxiv:1712.05884",
"arxiv:2106.04624",
"license:apache-2.0",
"has_space",
"region:us"
] | text-to-speech | speechbrain | null | null | speechbrain/tts-tacotron2-ljspeech | 94 | 1,600 | speechbrain | 2022-05-28T21:09:37 | ---
language: "en"
tags:
- text-to-speech
- TTS
- speech-synthesis
- Tacotron2
- speechbrain
license: "apache-2.0"
datasets:
- LJSpeech
metrics:
- mos
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Text-to-Speech (TTS) with Tacotron2 trained on LJSpeech
This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a [Tacotron2](https://arxiv.org/abs/1712.05884) pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).
The pre-trained model takes a short text as input and produces a spectrogram as output. One can obtain the final waveform by applying a vocoder (e.g., HiFi-GAN) on top of the generated spectrogram.
## Install SpeechBrain
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Text-to-Speech (TTS)
```
import torchaudio
from speechbrain.pretrained import Tacotron2
from speechbrain.pretrained import HIFIGAN
# Initialize TTS (Tacotron2) and Vocoder (HiFi-GAN)
tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder")
# Running the TTS
mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)
# Save the waveform
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
```
If you want to generate multiple sentences in one shot, you can do it this way:
```
from speechbrain.pretrained import Tacotron2
tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir")
items = [
"A quick brown fox jumped over the lazy dog",
"How much wood would a woodchuck chuck?",
"Never odd or even"
]
mel_outputs, mel_lengths, alignments = tacotron2.encode_batch(items)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
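For example, the two `from_hparams` calls from the first snippet become the following (a minimal sketch, assuming a CUDA device is available):
```
from speechbrain.pretrained import Tacotron2
from speechbrain.pretrained import HIFIGAN

# Load both models directly onto the GPU
tacotron2 = Tacotron2.from_hparams(
    source="speechbrain/tts-tacotron2-ljspeech",
    savedir="tmpdir_tts",
    run_opts={"device": "cuda"},
)
hifi_gan = HIFIGAN.from_hparams(
    source="speechbrain/tts-hifigan-ljspeech",
    savedir="tmpdir_vocoder",
    run_opts={"device": "cuda"},
)
```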
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LJSpeech/TTS/tacotron2/
python train.py --device=cuda:0 --max_grad_norm=1.0 --data_folder=/your_folder/LJSpeech-1.1 hparams/train.yaml
```
You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1PKju-_Nal3DQqd-n0PsaHK-bVIOlbf26?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
| 3,881 | [
[
-0.027862548828125,
-0.06243896484375,
0.01617431640625,
0.00688934326171875,
-0.0223541259765625,
0.006855010986328125,
-0.034820556640625,
-0.0325927734375,
0.036041259765625,
0.0202789306640625,
-0.036285400390625,
-0.04351806640625,
-0.038421630859375,
0... |
Habana/roberta-large | 2023-08-18T16:54:10.000Z | [
"optimum_habana",
"license:apache-2.0",
"region:us"
] | null | Habana | null | null | Habana/roberta-large | 0 | 1,598 | null | 2022-04-22T18:03:10 | ---
license: apache-2.0
---
[Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU).
It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks.
Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana).
## RoBERTa Large model HPU configuration
This model only contains the `GaudiConfig` file for running the [roberta-large](https://huggingface.co/roberta-large) model on Habana's Gaudi processors (HPU).
**This model contains no model weights, only a GaudiConfig.**
This enables you to specify:
- `use_torch_autocast`: whether to use PyTorch's autocast mixed precision
- `use_fused_adam`: whether to use Habana's custom AdamW implementation
- `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
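As a quick illustration, the configuration can be loaded and inspected with the `GaudiConfig` class from `optimum-habana` (a minimal sketch; the exact attribute set may vary across `optimum-habana` versions):
```python
from optimum.habana import GaudiConfig

# Download and parse the GaudiConfig stored in this repository
gaudi_config = GaudiConfig.from_pretrained("Habana/roberta-large")

# Inspect the HPU-specific switches described above
print(gaudi_config.use_torch_autocast)   # PyTorch autocast mixed precision
print(gaudi_config.use_fused_adam)       # Habana's custom AdamW
print(gaudi_config.use_fused_clip_norm)  # fused gradient-norm clipping
```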
## Usage
The model is instantiated the same way as in the Transformers library.
The only difference is that there are a few new training arguments specific to HPUs.
[Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with RoBERTa Large with the following command:
```bash
python run_qa.py \
--model_name_or_path roberta-large \
--gaudi_config_name Habana/roberta-large \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--per_device_eval_batch_size 8 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--output_dir /tmp/squad/ \
--use_habana \
--use_lazy_mode \
--throughput_warmup_steps 3 \
--bf16
```
Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
| 2,021 | [
[
-0.058441162109375,
-0.066162109375,
0.0224151611328125,
0.01447296142578125,
-0.0096282958984375,
0.0009832382202148438,
-0.004970550537109375,
-0.0311279296875,
0.020416259765625,
0.0229644775390625,
-0.04449462890625,
-0.00606536865234375,
-0.025665283203125,... |
stabilityai/japanese-instructblip-alpha | 2023-09-04T04:41:56.000Z | [
"transformers",
"pytorch",
"instructblip",
"feature-extraction",
"vision",
"image-captioning",
"japanese-stablelm",
"image-to-text",
"custom_code",
"ja",
"arxiv:2305.06500",
"license:other",
"has_space",
"region:us"
] | image-to-text | stabilityai | null | null | stabilityai/japanese-instructblip-alpha | 44 | 1,598 | transformers | 2023-08-15T12:07:00 | ---
language:
- ja
tags:
- instructblip
- vision
- image-captioning
- japanese-stablelm
pipeline_tag: image-to-text
license:
- other
extra_gated_heading: Access Japanese StableLM Instruct Alpha
extra_gated_description: This repository is publicly accessible, but you have to accept the conditions to access its files and content.
extra_gated_button_content: Access repository
extra_gated_fields:
Name: text
Email: text
Organization: text
I agree to accept the conditions and share above info with Stability AI: checkbox
extra_gated_prompt: |
### JAPANESE STABLELM RESEARCH LICENSE AGREEMENT
Dated: August 7, 2023
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Software Products set forth herein.
“Documentation” means any specifications, manuals, documentation, and other written information provided by Stability AI related to the Software.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person’s or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
"Stability AI" or "we" means Stability AI Ltd.
"Software" means, collectively, Stability AI’s proprietary Japanese StableLM made available under this Agreement.
“Software Products” means Software and Documentation.
By using or distributing any portion or element of the Software Products, you agree to be bound by this Agreement.
- License Rights and Redistribution.
- Subject to your compliance with this Agreement and the Documentation, Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Stability AI’s intellectual property or other rights owned by Stability AI embodied in the Software Products to reproduce, distribute, and create derivative works of the Software Products for purposes other than commercial or production use.
- You will not, and will not permit, assist or cause any third party to use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for any commercial or production purposes.
- If you distribute or make the Software Products, or any derivative works thereof, available to a third party, you shall (i) provide a copy of this Agreement to such third party, and (ii) retain the following attribution notice within a "Notice" text file distributed as a part of such copies: "Japanese StableLM is licensed under the Japanese StableLM Research License, Copyright (c) Stability AI Ltd. All Rights Reserved.”
- The licenses granted to you under this Agreement are conditioned upon your compliance with the Documentation and this Agreement, including the Acceptable Use Policy below and as may be updated from time to time in the future on stability.ai, which is hereby incorporated by reference into this Agreement.
- Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE SOFTWARE PRODUCTS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE SOFTWARE PRODUCTS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE SOFTWARE PRODUCTS AND ANY OUTPUT AND RESULTS.
- Limitation of Liability. IN NO EVENT WILL STABILITY AI OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF STABILITY AI OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
- Intellectual Property.
- No trademark licenses are granted under this Agreement, and in connection with the Software Products, neither Stability AI nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Software Products.
- Subject to Stability AI’s ownership of the Software Products and derivatives made by or for Stability AI, with respect to any derivative works and modifications of the Software Products that are made by you, as between you and Stability AI, you are and will be the owner of such derivative works and modifications.
- If you institute litigation or other proceedings against Stability AI (including a cross-claim or counterclaim in a lawsuit) alleging that the Software Products or associated outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Stability AI from and against any claim by any third party arising out of or related to your use or distribution of the Software Products in violation of this Agreement.
- Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Software Products and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Stability AI may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Software Products. Sections 2-4 shall survive the termination of this Agreement.
—----------
### Japanese StableLM Acceptable Use Policy
If you access, use, or distribute any Stability AI models, software, or other materials (“Stability Technology”) you agree to this Acceptable Use Policy (“Policy”).
We want everyone to use Stability Technology safely and responsibly. You agree you will not use, or allow others to use, Stability Technology to:
- To violate the law or others’ rights (including intellectual property rights and the rights of data privacy and protection), nor will you promote, contribute to, encourage, facilitate, plan, incite, or further anyone else’s violation of the law or others’ rights;
- To commit, promote, contribute to, facilitate, encourage, plan, incite, or further any of the following:
- Violence or terrorism;
- Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content;
- Human trafficking, exploitation, and sexual violence;
- Harassment, abuse, threatening, stalking, or bullying of individuals or groups of individuals;
- Discrimination in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services on the basis of race, color, caste, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, or genetic information (including family medical history) except as may be required by applicable law (such as the provision of social security benefits solely to people who meet certain age requirements under the law);
- Creation of malicious code, malware, computer viruses or any activity that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system;
- For purposes of or for the performance of:
- Fully automated decision-making, including profiling, with respect to an individual or group of individuals which produces legal effects concerning such individual(s) or similarly significantly affects such individual(s);
- Systematic or automated scraping, mining, extraction, or harvesting of personally identifiable data, or similar activity, from the output of any Stability Technology except with respect to data that you have provided as input to the Stability Technology and which you are legally entitled to process, for so long as you retain such entitlement;
- Development, improvement, or manufacture of any weapons of mass destruction (such as nuclear, chemical, or biologic weapons), weapons of war (such as missiles or landmines), or any gain of function-related activities with respect to any pathogens;
- Mission critical applications or systems where best industry practices require fail-safe controls or performance, including operation of nuclear facilities, aircraft navigation, electrical grids, communication systems, water treatment facilities, air traffic control, life support, weapons systems, or emergency locator or other emergency services;
- To intentionally deceive or mislead others, including use of Japanese StableLM related to the following:
- Generating, promoting, or furthering fraud or the creation or promotion of disinformation;
- Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content;
- Generating, promoting, or further distributing spam;
- Impersonating another individual without consent, authorization, or legal right
- Representing or misleading people into believing that the use of Japanese StableLM or outputs are human-generated;
- Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement;
- Generating or facilitating large-scale political advertisements, propaganda, or influence campaigns;
- Fail to appropriately disclose to end users any known dangers of your AI system or misrepresent or mislead with respect to its abilities.
Nothing in this AUP is intended to prevent or impede any good faith research, testing, or evaluation of Japanese StableLM, or publication related to any of the foregoing. If you discover any flaws in Japanese StableLM that may be harmful to people in any way, we encourage you to notify us and give us a chance to remedy such flaws before others can exploit them. If you have questions about this AUP, contact us at legal@stability.ai.
---
# Japanese InstructBLIP Alpha

## Model Details
Japanese InstructBLIP Alpha is a vision-language instruction-following model that generates Japanese descriptions for input images and, optionally, for input texts such as questions.
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install sentencepiece einops
```
```python
import torch
from transformers import LlamaTokenizer, AutoModelForVision2Seq, BlipImageProcessor
from PIL import Image
import requests
# helper function to format input prompts
def build_prompt(prompt="", sep="\n\n### "):
sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
p = sys_msg
roles = ["指示", "応答"]
user_query = "与えられた画像について、詳細に述べてください。"
msgs = [": \n" + user_query, ": "]
if prompt:
roles.insert(1, "入力")
msgs.insert(1, ": \n" + prompt)
for role, msg in zip(roles, msgs):
p += sep + role + msg
return p
# load model
model = AutoModelForVision2Seq.from_pretrained("stabilityai/japanese-instructblip-alpha", trust_remote_code=True)
processor = BlipImageProcessor.from_pretrained("stabilityai/japanese-instructblip-alpha")
tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁'])
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# prepare inputs
url = "https://images.unsplash.com/photo-1582538885592-e70a5d7ab3d3?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=1770&q=80"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "" # input empty string for image captioning. You can also input questions as prompts
prompt = build_prompt(prompt)
inputs = processor(images=image, return_tensors="pt")
text_encoding = tokenizer(prompt, add_special_tokens=False, return_tensors="pt")
text_encoding["qformer_input_ids"] = text_encoding["input_ids"].clone()
text_encoding["qformer_attention_mask"] = text_encoding["attention_mask"].clone()
inputs.update(text_encoding)
# generate
outputs = model.generate(
**inputs.to(device, dtype=model.dtype),
num_beams=5,
max_new_tokens=32,
min_length=1,
)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
# 桜と東京スカイツリー (cherry blossoms and the Tokyo Skytree)
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: [InstructBLIP](https://arxiv.org/abs/2305.06500)
* **Language(s)**: Japanese
* **License**: [JAPANESE STABLELM RESEARCH LICENSE AGREEMENT](./LICENSE).
### Training
Japanese InstructBLIP Alpha leverages the [InstructBLIP](https://arxiv.org/abs/2305.06500) architecture. It consists of 3 components: a frozen vision image encoder, a Q-Former, and a frozen LLM. The vision encoder and the Q-Former were initialized with [Salesforce/instructblip-vicuna-7b](https://huggingface.co/Salesforce/instructblip-vicuna-7b). For the frozen LLM, [Japanese-StableLM-Instruct-Alpha-7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-alpha-7b) model was used. During training, only Q-Former was trained.
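For intuition, "only Q-Former was trained" corresponds to freezing the other two components before fine-tuning. Below is a minimal sketch of that scheme, reusing the `model` loaded in the Usage section above and assuming the standard `vision_model` / `qformer` / `language_model` attribute names that InstructBLIP models expose in `transformers`:
```python
# Freeze the vision encoder and the LLM; leave only the Q-Former trainable.
for param in model.vision_model.parameters():
    param.requires_grad = False
for param in model.language_model.parameters():
    param.requires_grad = False
for param in model.qformer.parameters():
    param.requires_grad = True  # only these weights receive gradients
```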
### Training Dataset
The training dataset includes the following public datasets:
- [CC12M](https://github.com/google-research-datasets/conceptual-12m) with captions translated into Japanese
- [MS-COCO](https://cocodataset.org/#home) with [STAIR Captions](http://captions.stair.center/)
- [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)
## Use and Limitations
### Intended Use
This model is intended to be used by the open-source community in chat-like applications in adherence with the research license.
### Limitations and bias
Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
## How to cite
```bibtex
@misc{JapaneseInstructBLIPAlpha,
url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)},
title = {Japanese InstructBLIP Alpha},
author = {Shing, Makoto and Akiba, Takuya}
}
```
## Citations
```bibtex
@misc{dai2023instructblip,
title = {InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning},
author = {Wenliang Dai and Junnan Li and Dongxu Li and Anthony Meng Huat Tiong and Junqi Zhao and Weisheng Wang and Boyang Li and Pascale Fung and Steven Hoi},
year = {2023},
eprint = {2305.06500},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
| 15,680 | [
[
-0.0267791748046875,
-0.051513671875,
0.0130462646484375,
0.00962066650390625,
-0.03179931640625,
-0.00678253173828125,
-0.00743865966796875,
-0.037872314453125,
0.00384521484375,
0.0225982666015625,
-0.049591064453125,
-0.038299560546875,
-0.0369873046875,
... |
artificialguybr/StoryBookRedmond | 2023-10-07T19:18:12.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | artificialguybr | null | null | artificialguybr/StoryBookRedmond | 3 | 1,597 | diffusers | 2023-08-21T06:50:17 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: KidsRedmAF
widget:
- text: KidsRedmAF
---
# StoryBook.Redmond

StoryBook.Redmond is here!
DOWNLOAD V2 HERE: https://huggingface.co/artificialguybr/StoryBookRedmond-V2
Test all my lora here: https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora
Introducing StorybookRedmond, the ultimate LORA for creating stunning children's book images!
I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA! If you need GPU, then you need the great services from Redmond.AI.
It is based on SD XL 1.0 and fine-tuned on a large dataset.
The LORA has a high capacity to generate Storybook images.
You can use detailed, minimalist, colorful, or black and white as tags to control the results.
The tag for the model: KidsRedmAF
The LORA is not perfect and sometimes needs more than one generation to create good images.
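A rough usage sketch with `diffusers` (not an official example from the author; the trigger word and base model come from the metadata above):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SD XL 1.0 base model this LORA was trained on
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LORA weights, then trigger the style with the KidsRedmAF tag
pipe.load_lora_weights("artificialguybr/StoryBookRedmond")
image = pipe("KidsRedmAF, colorful storybook illustration of a fox in a forest").images[0]
image.save("storybook.png")
```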
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me in my twitter to know before all about new models:
https://twitter.com/artificialguybr/ | 1,274 | [
[
-0.028411865234375,
-0.061981201171875,
-0.00720977783203125,
0.02508544921875,
-0.028533935546875,
0.010284423828125,
0.016571044921875,
-0.06964111328125,
0.045684814453125,
0.03619384765625,
-0.05267333984375,
-0.033233642578125,
-0.01953125,
-0.020568847... |
digiplay/AM-mix1 | 2023-11-02T20:18:50.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | digiplay | null | null | digiplay/AM-mix1 | 0 | 1,597 | diffusers | 2023-11-02T18:51:41 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
in test...
Sample image I made generated by huggingface's API :
 | 297 | [
[
-0.048095703125,
-0.06475830078125,
0.0246734619140625,
0.02008056640625,
-0.029449462890625,
-0.00395965576171875,
0.013580322265625,
-0.045654296875,
0.06890869140625,
0.03094482421875,
-0.07537841796875,
-0.03924560546875,
-0.0325927734375,
0.008308410644... |
surdan/LaBSE_ner_nerel | 2022-04-12T13:17:34.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ru",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | surdan | null | null | surdan/LaBSE_ner_nerel | 7 | 1,595 | transformers | 2022-04-11T14:45:16 | ---
language: ["ru", "en"]
tasks:
- token-classification
---
## About model
This model is based on [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru)
and was trained on the [surdan/nerel_short](https://huggingface.co/datasets/surdan/nerel_short) dataset.
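A generic `transformers` token-classification pipeline should work out of the box (a minimal sketch, not the authors' reference code; see the notebooks linked below for that):
```python
from transformers import pipeline

# Generic NER pipeline; aggregation merges word pieces into whole entities
ner = pipeline(
    "token-classification",
    model="surdan/LaBSE_ner_nerel",
    aggregation_strategy="simple",
)
print(ner("Илон Маск основал SpaceX в 2002 году."))  # "Elon Musk founded SpaceX in 2002."
```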
You can find more info:
- How the model was trained: [Train_model.ipynb](https://huggingface.co/surdan/LaBSE_ner_nerel/blob/main/Train_model.ipynb)
- An example of using the model: [Inference.ipynb](https://huggingface.co/surdan/LaBSE_ner_nerel/blob/main/Inference.ipynb) | 536 | [
[
-0.03399658203125,
-0.037384033203125,
0.021270751953125,
-0.0030651092529296875,
-0.006801605224609375,
0.0007061958312988281,
0.02587890625,
-0.0198974609375,
0.04083251953125,
0.05865478515625,
-0.043731689453125,
-0.04156494140625,
-0.0309600830078125,
-... |
grammarly/coedit-xxl | 2023-10-11T00:30:05.000Z | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:asset",
"dataset:wi_locness",
"dataset:GEM/wiki_auto_asset_turk",
"dataset:discofuse",
"dataset:zaemyung/IteraTeR_plus",
"dataset:jfleg",
"dataset:grammarly/coedit",
"arxiv:2305.09857",
"license:cc-by-... | text2text-generation | grammarly | null | null | grammarly/coedit-xxl | 10 | 1,594 | transformers | 2023-05-11T23:59:06 | ---
license: cc-by-nc-4.0
datasets:
- asset
- wi_locness
- GEM/wiki_auto_asset_turk
- discofuse
- zaemyung/IteraTeR_plus
- jfleg
- grammarly/coedit
language:
- en
metrics:
- sari
- bleu
- accuracy
---
# Model Card for CoEdIT-xxl
This model was obtained by fine-tuning the corresponding google/flan-t5-xxl model on the CoEdIT dataset.
**Paper:** CoEdIT: Text Editing by Task-Specific Instruction Tuning
**Authors:** Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
## Model Details
### Model Description
- **Language(s) (NLP)**: English
- **Finetuned from model:** google/flan-t5-xxl
### Model Sources
- **Repository:** https://github.com/vipulraheja/coedit
- **Paper:** https://arxiv.org/abs/2305.09857
## How to use
We make available the models presented in our paper.
<table>
<tr>
<th>Model</th>
<th>Number of parameters</th>
</tr>
<tr>
<td>CoEdIT-large</td>
<td>770M</td>
</tr>
<tr>
<td>CoEdIT-xl</td>
<td>3B</td>
</tr>
<tr>
<td>CoEdIT-xxl</td>
<td>11B</td>
</tr>
</table>
## Uses
## Text Revision Task
Given an edit instruction and an original text, our model can generate the edited version of the text.<br>

## Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-xxl")
model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-xxl")
input_text = 'Fix grammatical errors in this sentence: When I grow up, I start to understand what he said is quite right.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256)
edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
#### Software
https://github.com/vipulraheja/coedit
## Citation
**BibTeX:**
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**APA:**
Raheja, V., Kumar, D., Koo, R., & Kang, D. (2023). CoEdIT: Text Editing by Task-Specific Instruction Tuning. ArXiv. /abs/2305.09857 | 2,319 | [
[
-0.0034847259521484375,
-0.0653076171875,
0.027130126953125,
0.0140838623046875,
0.0016002655029296875,
-0.0093536376953125,
-0.03326416015625,
-0.032257080078125,
-0.004322052001953125,
0.01255035400390625,
-0.0638427734375,
-0.0345458984375,
-0.042877197265625... |
doc2query/all-with_prefix-t5-base-v1 | 2021-10-19T12:52:47.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/reddit-title-body",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space"... | text2text-generation | doc2query | null | null | doc2query/all-with_prefix-t5-base-v1 | 7 | 1,592 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- sentence-transformers/reddit-title-body
- sentence-transformers/embedding-training-data
widget:
- text: "text2reddit: Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/all-with_prefix-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini. A sketch of the expansion step is shown after this list.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
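As referenced in the first bullet, here is a minimal sketch of the document-expansion step (illustrative only: `generate_queries` is a hypothetical helper, and index-specific code is omitted):
```python
# Hypothetical helper: generate_queries(text, n) wraps the tokenizer and
# model.generate() call from the Usage section below and returns n query strings.
def expand_document(paragraph: str, n_queries: int = 20) -> str:
    queries = generate_queries("text2query: " + paragraph, n_queries)
    # Append the generated queries to the paragraph before BM25 indexing,
    # so lexical search can also match their synonyms and re-weighted terms.
    return paragraph + " " + " ".join(queries)
```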
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/all-with_prefix-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
prefix = "answer2question"
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
text = prefix+": "+text
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 575k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a large collection of datasets. For the exact datasets names and weights see the `data_config.json` in this repository. Most of the datasets are available at [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers).
The datasets include besides others:
- (title, body) pairs from [Reddit](https://huggingface.co/datasets/sentence-transformers/reddit-title-body)
- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!
- (title, review) pairs from Amazon reviews
- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ
- (question, duplicate_question) from Quora and WikiAnswers
- (title, abstract) pairs from S2ORC
## Prefix
This model was trained **with a prefix**: You start the text with a specific prefix that defines what type of output text you would like to receive. Depending on the prefix, the output is different.
E.g. the above text about Python produces the following output:
| Prefix | Output |
| --- | --- |
| answer2question | Why should I use python in my business? ; What is the difference between Python and.NET? ; what is the python design philosophy? |
| review2title | Python a powerful and useful language ; A new and improved programming language ; Object-oriented, practical and accessibl |
| abstract2title | Python: A Software Development Platform ; A Research Guide for Python X: Conceptual Approach to Programming ; Python : Language and Approach |
| text2query | is python a low level language? ; what is the primary idea of python? ; is python a programming language? |
These are all the available prefixes:
- text2reddit
- question2title
- answer2question
- abstract2title
- review2title
- news2title
- text2query
- question2question
For the datasets and weights for the different prefixes, see `data_config.json` in this repository.
| 5,206 | [
[
-0.016204833984375,
-0.064453125,
0.02587890625,
0.01003265380859375,
-0.015594482421875,
-0.0193939208984375,
-0.016998291015625,
-0.021331787109375,
0.00014078617095947266,
0.0225067138671875,
-0.03619384765625,
-0.04339599609375,
-0.052642822265625,
0.016... |
timm/coat_lite_medium.in1k | 2023-04-24T03:42:42.000Z | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.06399",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/coat_lite_medium.in1k | 0 | 1,592 | timm | 2023-04-24T03:39:58 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coat_lite_medium.in1k
A CoaT (Co-Scale Conv-Attentional Transformer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.6
- GMACs: 9.8
- Activations (M): 40.1
- Image size: 224 x 224
- **Papers:**
- Co-Scale Conv-Attentional Image Transformers: https://arxiv.org/abs/2104.06399
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mlpc-ucsd/CoaT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coat_lite_medium.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coat_lite_medium.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{Xu_2021_ICCV,
author = {Xu, Weijian and Xu, Yifan and Chang, Tyler and Tu, Zhuowen},
title = {Co-Scale Conv-Attentional Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {9981-9990}
}
```
| 2,825 | [
[
-0.036590576171875,
-0.0360107421875,
-0.003742218017578125,
0.013824462890625,
-0.0231170654296875,
-0.023193359375,
-0.0167388916015625,
-0.03125,
0.015625,
0.02850341796875,
-0.040496826171875,
-0.04541015625,
-0.050689697265625,
-0.00983428955078125,
... |
HumanF-MarkrAI/pub-llama-13b-v1 | 2023-10-19T18:44:01.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:HumanF-MarkrAI/pub_COT-2000",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | HumanF-MarkrAI | null | null | HumanF-MarkrAI/pub-llama-13b-v1 | 0 | 1,591 | transformers | 2023-10-19T08:30:48 | ---
language:
- ko
datasets: HumanF-MarkrAI/pub_COT-2000
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Markr Co., Ltd. ((주)마커).**
**The license is `cc-by-nc-sa`.**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
pub-llama-13b-v1 is an auto-regressive language model based on the LLaMA2 transformer architecture.
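Since the card does not yet ship an official snippet, here is a minimal loading sketch with `transformers` (standard LLaMA2-style usage; half precision and `device_map="auto"` are assumptions to fit a 13B model on GPU):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "HumanF-MarkrAI/pub-llama-13b-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # half precision to fit the 13B weights
    device_map="auto",
)
```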
**Repo Link**
GitHub: [pub-llama📑](Not_yet)
**Training Dataset**
More detail about dataset: [HumanF-MarkrAI/pub_COT-2000](https://huggingface.co/datasets/HumanF-MarkrAI/pub_COT-2000). | 626 | [
[
-0.002841949462890625,
-0.06072998046875,
0.006717681884765625,
0.0504150390625,
-0.0283355712890625,
0.003360748291015625,
-0.01108551025390625,
-0.0184478759765625,
0.01464080810546875,
0.047882080078125,
-0.043701171875,
-0.045135498046875,
-0.046905517578125... |
lllyasviel/sd-controlnet-hed | 2023-04-24T22:30:38.000Z | [
"diffusers",
"art",
"controlnet",
"stable-diffusion",
"image-to-image",
"arxiv:2302.05543",
"license:openrail",
"has_space",
"diffusers:ControlNetModel",
"region:us"
] | image-to-image | lllyasviel | null | null | lllyasviel/sd-controlnet-hed | 22 | 1,589 | diffusers | 2023-02-24T07:02:21 | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- image-to-image
---
# Controlnet - *HED Boundary Version*
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
This checkpoint corresponds to the ControlNet conditioned on **HED Boundary**.
It can be used in combination with [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img).

## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Released Checkpoints
The authors released 8 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|[lllyasviel/sd-controlnet_openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose)<br/> *Trained with OpenPose bone image* |A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|[lllyasviel/sd-controlnet_scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
|[lllyasviel/sd-controlnet_seg](https://huggingface.co/lllyasviel/sd-controlnet-seg)<br/>*Trained with semantic segmentation* |An [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/)'s segmentation protocol image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a> |
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install https://github.com/patrickvonplaten/controlnet_aux
```sh
$ pip install controlnet_aux
```
2. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
3. Run code:
```py
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from controlnet_aux import HEDdetector
from diffusers.utils import load_image
# load the HED (holistically-nested edge detection) annotator
hed = HEDdetector.from_pretrained('lllyasviel/ControlNet')

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-hed/resolve/main/images/man.png")
# compute the HED edge map used as the conditioning image
image = hed(image)
controlnet = ControlNetModel.from_pretrained(
"lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe("oil painting of handsome old man, masterpiece", image, num_inference_steps=20).images[0]
image.save('images/man_hed_out.png')
```
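As mentioned above, the same ControlNet can experimentally be paired with a different base checkpoint. A minimal sketch, assuming a DreamBooth-style fine-tune such as `nitrosocke/mo-di-diffusion` (an illustrative choice, not an official recommendation):

```py
# reuse the `controlnet` and `hed` objects from the example above
control_image = hed(load_image("https://huggingface.co/lllyasviel/sd-controlnet-hed/resolve/main/images/man.png"))
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "nitrosocke/mo-di-diffusion", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
image = pipe("modern disney style, oil painting of handsome old man", control_image, num_inference_steps=20).images[0]
```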



### Training
The HED Edge model was trained on 3M edge-image/caption pairs for 600 GPU-hours on Nvidia A100 80GB hardware, using Stable Diffusion 1.5 as the base model.
### Blog post
For more information, please also have a look at the [official ControlNet Blog Post](https://huggingface.co/blog/controlnet). | 11,448 | [
[
-0.04534912109375,
-0.040252685546875,
-0.005977630615234375,
0.032928466796875,
-0.0214996337890625,
-0.0219573974609375,
-0.006534576416015625,
-0.048614501953125,
0.0628662109375,
0.01334381103515625,
-0.0421142578125,
-0.03363037109375,
-0.05377197265625,
... |
GAI-LLM/ko-en-llama2-13b-mixed-v1 | 2023-10-27T00:41:10.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | GAI-LLM | null | null | GAI-LLM/ko-en-llama2-13b-mixed-v1 | 0 | 1,589 | transformers | 2023-10-18T08:45:35 | ---
license: cc-by-nc-2.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-2.0`.**
# **GAI-LLM/ko-en-llama2-13b-mixed-v1**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/ko-en-llama2-13b-mixed-v1 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy.
- Kopen-platypus + Everythinglm v2 + jojo0217/korean_rlhf_dataset + sentineg + hellaswag + copa
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/ko-en-llama2-13b-mixed-v1
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/ko-en-llama2-13b-mixed-v1"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
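A minimal generation sketch continuing from the code above (the Korean prompt and decoding settings here are illustrative, not the authors' recommendations):

```python
prompt = "대한민국의 수도는 어디인가요?"  # illustrative prompt: "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```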
--- | 1,372 | [
[
-0.02032470703125,
-0.0592041015625,
0.0230712890625,
0.051300048828125,
-0.03857421875,
0.008331298828125,
-0.005645751953125,
-0.0287322998046875,
0.00321197509765625,
0.025360107421875,
-0.05926513671875,
-0.045166015625,
-0.045074462890625,
0.00211524963... |
Yntec/526Mix | 2023-11-03T14:32:17.000Z | [
"diffusers",
"General Purpose",
"Futuristic",
"Nature",
"526christian",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/526Mix | 0 | 1,589 | diffusers | 2023-11-03T13:04:50 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General Purpose
- Futuristic
- Nature
- 526christian
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# 526 Mix v15
Original page: https://civitai.com/models/15022?modelVersionId=132011
Sample and prompt:

Pretty CUTE girl. Fashion shoes. By wlop in the style of kyoani. | 533 | [
[
-0.037811279296875,
-0.029754638671875,
0.01071929931640625,
0.03662109375,
-0.040069580078125,
0.005138397216796875,
0.0216064453125,
-0.048919677734375,
0.06689453125,
0.033447265625,
-0.06671142578125,
-0.047821044921875,
-0.0278778076171875,
-0.004234313... |
bioformers/bioformer-8L | 2023-08-02T07:45:33.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"fill-mask",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | bioformers | null | null | bioformers/bioformer-8L | 4 | 1,587 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
license: apache-2.0
pipeline_tag: fill-mask
---
**_NOTE: `bioformer-cased-v1.0` has been renamed to `bioformer-8L`. All links to `bioformer-cased-v1.0` will automatically redirect to `bioformer-8L`, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL._**
Bioformer-8L is a lightweight BERT model for biomedical text mining. Bioformer-8L uses a biomedical vocabulary and is pre-trained from scratch only on biomedical domain corpora. Our experiments show that Bioformer-8L is 3x as fast as BERT-base, and achieves comparable or even better performance than BioBERT/PubMedBERT on downstream NLP tasks.
Bioformer-8L has 8 layers (transformer blocks) with a hidden embedding size of 512, and the number of self-attention heads is 8. Its total number of parameters is 42,820,610.
**The usage of Bioformer-8L is the same as a standard BERT model. The documentation of BERT can be found [here](https://huggingface.co/docs/transformers/model_doc/bert).**
## Vocabulary of Bioformer-8L
Bioformer-8L uses a cased WordPiece vocabulary trained from a biomedical corpus, which included all PubMed abstracts (33 million, as of Feb 1, 2021) and 1 million PMC full-text articles. PMC has 3.6 million articles, but we down-sampled them to 1 million such that the total sizes of the PubMed abstracts and PMC full-text articles were approximately equal. To mitigate the out-of-vocabulary issue and include special symbols (e.g. male and female symbols) found in biomedical literature, we trained Bioformer’s vocabulary from the Unicode text of the two resources. The vocabulary size of Bioformer-8L is 32768 (2^15), which is similar to that of the original BERT.
## Pre-training of Bioformer-8L
Bioformer-8L was pre-trained from scratch on the same corpus as the vocabulary (33 million PubMed abstracts + 1 million PMC full-text articles). For the masked language modeling (MLM) objective, we used whole-word masking with a masking rate of 15%. There are debates on whether the next sentence prediction (NSP) objective could improve the performance on downstream tasks. We include it in our pre-training experiment in case the prediction of the next sentence is needed by end-users. Sentence segmentation of all training text was performed using [SciSpacy](https://allenai.github.io/scispacy/).
Pre-training of Bioformer-8L was performed on a single Cloud TPU device (TPUv2, 8 cores, 8GB memory per core). The maximum input sequence length was fixed to 512, and the batch size was set to 256. We pre-trained Bioformer-8L for 2 million steps, which took about 8.3 days.
## Usage
Prerequisites: python3, pytorch, transformers and datasets
We have tested the following commands on Python v3.9.16, PyTorch v1.13.1+cu117, Datasets v2.9.0 and Transformers v4.26.
To install pytorch, please refer to instructions [here](https://pytorch.org/get-started/locally).
To install the `transformers` and `datasets` library:
```
pip install transformers
pip install datasets
```
### Filling mask
```
from transformers import pipeline
unmasker8L = pipeline('fill-mask', model='bioformers/bioformer-8L')
unmasker8L("[MASK] refers to a group of diseases that affect how the body uses blood sugar (glucose)")
unmasker16L = pipeline('fill-mask', model='bioformers/bioformer-16L')
unmasker16L("[MASK] refers to a group of diseases that affect how the body uses blood sugar (glucose)")
```
Output of `bioformer-8L`:
```
[{'score': 0.3207533359527588,
'token': 13473,
'token_str': 'Diabetes',
'sequence': 'Diabetes refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.19234347343444824,
'token': 17740,
'token_str': 'Obesity',
'sequence': 'Obesity refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.09200277179479599,
'token': 10778,
'token_str': 'T2DM',
'sequence': 'T2DM refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.08494312316179276,
'token': 2228,
'token_str': 'It',
'sequence': 'It refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.0412776917219162,
'token': 22263,
'token_str':
'Hypertension',
'sequence': 'Hypertension refers to a group of diseases that affect how the body uses blood sugar ( glucose )'}]
```
Output of `bioformer-16L`:
```
[{'score': 0.7262957692146301,
'token': 13473,
'token_str': 'Diabetes',
'sequence': 'Diabetes refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.124954953789711,
'token': 10778,
'token_str': 'T2DM',
'sequence': 'T2DM refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.04062706232070923,
'token': 2228,
'token_str': 'It',
'sequence': 'It refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.022694870829582214,
'token': 17740,
'token_str': 'Obesity',
'sequence': 'Obesity refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.009743048809468746,
'token': 13960,
'token_str': 'T2D',
'sequence': 'T2D refers to a group of diseases that affect how the body uses blood sugar ( glucose )'}]
```
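Since the usage of Bioformer-8L is the same as a standard BERT model, it can also be used as a feature extractor. A minimal sketch (mean pooling is an illustrative choice here, not an official recommendation):

```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('bioformers/bioformer-8L')
model = AutoModel.from_pretrained('bioformers/bioformer-8L')

sentences = ['Diabetes affects how the body uses blood sugar.',
             'Insulin regulates blood glucose levels.']
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    output = model(**encoded)

# mean-pool the token embeddings, ignoring padding positions
mask = encoded['attention_mask'].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape) # (2, 512) -- Bioformer-8L's hidden size
```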
## Awards
Bioformer-8L achieved top performance (highest micro-F1 score) in the BioCreative VII COVID-19 multi-label topic classification challenge (https://doi.org/10.1093/database/baac069).
## Links
[Bioformer-16L](https://huggingface.co/bioformers/bioformer-16L)
## Acknowledgment
Training and evaluation of Bioformer-8L is supported by the Google TPU Research Cloud (TRC) program, the Intramural Research Program of the National Library of Medicine (NLM), National Institutes of Health (NIH), and NIH/NLM grants LM012895 and 1K99LM014024-01.
## Questions
If you have any questions, please submit an issue here: https://github.com/WGLab/bioformer/issues
You can also send an email to Li Fang (fangli9@mail.sysu.edu.cn, https://fangli80.github.io/).
## Citation
You can cite our preprint on arXiv:
Fang L, Chen Q, Wei C-H, Lu Z, Wang K: Bioformer: an efficient transformer language model for biomedical text mining. arXiv preprint arXiv:2302.01588 (2023). DOI: https://doi.org/10.48550/arXiv.2302.01588
BibTeX format:
```
@ARTICLE{fangli2023bioformer,
author = {{Fang}, Li and {Chen}, Qingyu and {Wei}, Chih-Hsuan and {Lu}, Zhiyong and {Wang}, Kai},
title = "{Bioformer: an efficient transformer language model for biomedical text mining}",
journal = {arXiv preprint arXiv:2302.01588},
year = {2023}
}
``` | 6,654 | [
[
-0.0028095245361328125,
-0.047454833984375,
0.02880859375,
0.0021762847900390625,
-0.020782470703125,
0.004001617431640625,
-0.01050567626953125,
-0.0269622802734375,
0.033447265625,
0.015777587890625,
-0.0189361572265625,
-0.05694580078125,
-0.060791015625,
... |
TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ | 2023-09-27T12:47:07.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2307.09288",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ | 20 | 1,587 | transformers | 2023-09-02T09:17:16 | ---
language:
- en
license: llama2
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
datasets:
- garage-bAInd/Open-Platypus
model_name: Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B
base_model: uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b
inference: false
model_creator: Jiangwen Su
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B - GPTQ
- Model creator: [Jiangwen Su](https://huggingface.co/uukuguy)
- Original model: [Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Jiangwen Su's Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF)
* [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, as are all files in non-main branches. Files in the `main` branch that were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jiangwen Su's Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B
<p><h1> speechless-llama2-hermes-orca-platypus-wizardlm-13b </h1></p>
speechless-llama2-hermes-orca-platypus-wizardlm-13b is a merge of NousResearch/Nous-Hermes-Llama2-13b, Open-Orca/OpenOrca-Platypus2-13B and WizardLM/WizardLM-13B-V1.2.
| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
| 24,845 | [
[
-0.044464111328125,
-0.05859375,
0.0079803466796875,
0.01398468017578125,
-0.026641845703125,
-0.00997161865234375,
0.0078887939453125,
-0.04351806640625,
0.021026611328125,
0.0323486328125,
-0.0504150390625,
-0.036376953125,
-0.029144287109375,
0.0051689147... |
Yntec/Hassanim | 2023-11-03T04:00:42.000Z | [
"diffusers",
"Anime",
"General",
"Photorealistic",
"Hassan",
"s6yx",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/Hassanim | 1 | 1,586 | diffusers | 2023-09-14T20:52:27 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Anime
- General
- Photorealistic
- Hassan
- s6yx
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
# Hassanim
An attempt to improve over HassanBlend with the help of ReVAnimated. This was a prompt that created a composition I loved with SD1.5, but the face always looked bad. I tried dozens of models and they didn't improve the result much, except for ReVAnimated, which improved the composition, while HassanBlend provided the best face. In the end, the model remains 95% HassanBlend.
Comparison:

(click for larger)
Prompt:
A ultradetailed beautiful painting of a stylish Pretty CUTE girl wearing streetwear standing in a convenience store, oil painting, by ilya kuvshinov, greg rutkowski and makoto shinkai in the style of ross tran
# Aniblend
The only known model that created a blue puff jacket with this prompt.

# RevAnimHassan & RevHassanimated
The first is a classic blend of the models required to create the other ones; the latter produces the best faces that ReVAnimated can create, at the cost of the image's composition.

# Recipes:
- Add Difference 1.0
Primary model:
ReVAnimated
Secondary model:
ReVAnimated
Tertiary model:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output Model:
ReVAnimatedEssense
- Super Merger Weight sum Train Difference 0.70
Model A:
ReVAnimatedEssense
Model B:
HassanBlend1.2
Output:
ReVAnimHassan
- Super Merger Weight sum Train Difference use MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,0.5,1,1,1,1,1,1,1,1,1,1,1,1
Model A:
ReVAnimHassan
Model B:
HassanBlend1.2
Output:
RevHassanimated
- Super Merger Weight sum Train Difference use MBW 1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1,0,0,0
Model A:
RevHassanimated
Model B:
ReVAnimated
Output:
AniBlend
- Super Merger Weight sum Train Difference use MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0
Model A:
HassanBlend1.2
Model B:
Unknown (I didn't record what model was used here, but it was one of ReVAnimHassan, RevHassanimated, or AniBlend. Probably AniBlend.)
Output:
Hassanim | 2,612 | [
[
-0.049560546875,
-0.020050048828125,
0.02642822265625,
0.02923583984375,
-0.0258026123046875,
-0.007236480712890625,
0.0159454345703125,
-0.03546142578125,
0.058502197265625,
0.06396484375,
-0.086669921875,
-0.022247314453125,
-0.03875732421875,
0.0051269531... |
ZinengTang/tvlt-base | 2023-03-13T12:59:44.000Z | [
"transformers",
"pytorch",
"tvlt",
"pretraining",
"arxiv:2209.14156",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | ZinengTang | null | null | ZinengTang/tvlt-base | 1 | 1,582 | transformers | 2022-12-15T05:11:40 | ---
license: mit
---
# TVLT
Textless Vision-Language Transformer (TVLT) model, pre-trained-only. It was introduced in the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Tang et al. and first released in [this repository](https://github.com/zinengtang/TVLT).
Disclaimer: The team releasing TVLT did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
TVLT is based on the [MAE model](https://huggingface.co/docs/transformers/model_doc/vit_mae), but extends it to audio-visual pre-training.
## Intended uses & limitations
It's recommended to fine-tune the model on a task that involves audio and/or video.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/tvlt).
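As a quick orientation, a minimal sketch of a forward pass with random video frames and audio, assuming the `TvltProcessor`/`TvltModel` classes added to 🤗 Transformers in v4.27 (see the linked documentation for the authoritative API):

```python
import numpy as np
from transformers import TvltProcessor, TvltModel

processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")
model = TvltModel.from_pretrained("ZinengTang/tvlt-base")

# 8 random video frames (C, H, W) and a random mono waveform stand in for real inputs
frames = list(np.random.randn(8, 3, 224, 224))
audio = list(np.random.randn(10000))

inputs = processor(frames, audio, sampling_rate=44100, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```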
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2209.14156,
doi = {10.48550/ARXIV.2209.14156},
url = {https://arxiv.org/abs/2209.14156},
author = {Tang, Zineng and Cho, Jaemin and Nie, Yixin and Bansal, Mohit},
keywords = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {TVLT: Textless Vision-Language Transformer},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` | 1,501 | [
[
-0.037506103515625,
-0.045013427734375,
0.004150390625,
0.00965118408203125,
-0.0367431640625,
-0.002285003662109375,
-0.0037708282470703125,
-0.0273284912109375,
-0.004337310791015625,
0.040679931640625,
-0.0479736328125,
-0.03436279296875,
-0.059326171875,
... |
timm/swinv2_large_window12to16_192to256.ms_in22k_ft_in1k | 2023-03-18T03:35:15.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2111.09883",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/swinv2_large_window12to16_192to256.ms_in22k_ft_in1k | 0 | 1,582 | timm | 2023-03-18T03:33:56 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for swinv2_large_window12to16_192to256.ms_in22k_ft_in1k
A Swin Transformer V2 image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 196.7
- GMACs: 47.8
- Activations (M): 121.5
- Image size: 256 x 256
- **Papers:**
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('swinv2_large_window12to16_192to256.ms_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swinv2_large_window12to16_192to256.ms_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for swin_base_patch4_window7_224 (NHWC output)
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 1024])
# e.g. for swinv2_cr_small_ns_224 (NCHW output)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swinv2_large_window12to16_192to256.ms_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,566 | [
[
-0.0325927734375,
-0.02972412109375,
-0.006664276123046875,
0.013031005859375,
-0.02471923828125,
-0.031585693359375,
-0.0218505859375,
-0.03955078125,
0.0005841255187988281,
0.0285491943359375,
-0.03863525390625,
-0.0406494140625,
-0.045928955078125,
-0.020... |
akhooli/gpt2-small-arabic | 2023-03-20T08:04:17.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"ar",
"dataset:Arabic Wikipedia",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | akhooli | null | null | akhooli/gpt2-small-arabic | 9 | 1,580 | transformers | 2022-03-02T23:29:05 | ---
language: "ar"
datasets:
- Arabic Wikipedia
metrics:
- none
---
# GPT2-Small-Arabic
## Model description
A GPT-2 model trained on the Arabic Wikipedia dataset, based on gpt2-small (using Fastai2).
## Intended uses & limitations
#### How to use
An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing).
Both text and poetry (fine-tuned model) generation are included.
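Alternatively, a minimal text-generation sketch with the `transformers` pipeline (the prompt and generation length are illustrative):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='akhooli/gpt2-small-arabic')
# illustrative Arabic prompt: "Jerusalem is a historic city"
print(generator("القدس مدينة تاريخية", max_length=50, num_return_sequences=1)[0]['generated_text'])
```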
#### Limitations and bias
GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipedia quality, no diacritics) and training performance.
Use it for demonstrations or proofs of concept, but not as production code.
## Training data
This pretrained model used the Arabic Wikipedia dump (around 900 MB).
## Training procedure
Training was done using [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using free GPU.
## Eval results
Final perplexity reached 72.19 (loss: 4.28, accuracy: 0.307).
### BibTeX entry and citation info
```bibtex
@misc{khooli2020gpt2smallarabic,
  author = {Abed Khooli},
  title = {GPT2-Small-Arabic},
  year = {2020}
}
```
| 1,100 | [
[
-0.032440185546875,
-0.05279541015625,
0.0251007080078125,
0.005336761474609375,
-0.041229248046875,
-0.02392578125,
-0.01552581787109375,
-0.034149169921875,
-0.0005688667297363281,
0.01061248779296875,
-0.0394287109375,
-0.0300140380859375,
-0.061859130859375,... |
momo/polyglot-ko-12.8b-Chat-QLoRA-Merge | 2023-10-03T09:26:08.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | momo | null | null | momo/polyglot-ko-12.8b-Chat-QLoRA-Merge | 2 | 1,578 | transformers | 2023-10-02T08:22:58 | ---
license: apache-2.0
language:
- ko
---
## Model Details
**Model Developers** Yunho Mo (momo)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
polyglot-ko-12.8b-Chat-QLoRA-Merge is an auto-regressive language model based on the polyglot-ko-12.8b transformer architecture.
**Base Model**
[polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)
**Training Dataset**
I used [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus), [ko-lima](https://huggingface.co/datasets/taeshahn/ko-lima), and [EverythingLM-data-V2-Ko](https://huggingface.co/datasets/ziozzang/EverythingLM-data-V2-Ko).
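A minimal loading-and-generation sketch (the card does not specify a chat prompt template, so a plain prompt is used here; `float16` and `device_map="auto"` are illustrative assumptions for fitting the 12.8B model on GPU):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "momo/polyglot-ko-12.8b-Chat-QLoRA-Merge"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "한국어로 자기소개를 해 주세요."  # illustrative prompt: "Please introduce yourself in Korean."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```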
| 676 | [
[
-0.0269775390625,
-0.049468994140625,
0.01348114013671875,
0.0301513671875,
-0.028717041015625,
0.01393890380859375,
-0.0064544677734375,
-0.031890869140625,
0.0175018310546875,
0.04217529296875,
-0.0430908203125,
-0.035125732421875,
-0.0517578125,
-0.010864... |
snunlp/KR-SBERT-V40K-klueNLI-augSTS | 2022-08-23T07:12:47.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | snunlp | null | null | snunlp/KR-SBERT-V40K-klueNLI-augSTS | 21 | 1,577 | sentence-transformers | 2022-05-03T03:34:16 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ko
widget:
- source_sentence: "그 식당은 파리를 날린다"
sentences:
- "그 식당에는 손님이 없다"
- "그 식당에서는 드론을 날린다"
- "파리가 식당에 날아다닌다"
example_title: "Restaurant"
- source_sentence: "잠이 옵니다"
sentences:
- "잠이 안 옵니다"
- "졸음이 옵니다"
- "기차가 옵니다"
example_title: "Sleepy"
---
# snunlp/KR-SBERT-V40K-klueNLI-augSTS
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
embeddings = model.encode(sentences)
print(embeddings)
```
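The widget examples above map directly onto a semantic-similarity check; a minimal sketch using `sentence_transformers.util`:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
source = "그 식당은 파리를 날린다"  # idiom: "that restaurant has no customers"
candidates = ["그 식당에는 손님이 없다", "그 식당에서는 드론을 날린다", "파리가 식당에 날아다닌다"]

# cosine similarity between the source sentence and each candidate
scores = util.cos_sim(model.encode(source), model.encode(candidates))
print(scores)  # the idiomatic paraphrase should score highest
```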
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
model = AutoModel.from_pretrained('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=snunlp/KR-SBERT-V40K-klueNLI-augSTS)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Application for document classification
Tutorial in Google Colab: https://colab.research.google.com/drive/1S6WSjOx9h6Wh_rX1Z2UXwx9i_uHLlOiM
|Model|Accuracy|
|-|-|
|KR-SBERT-Medium-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-augSTS|0.8511|
|KR-SBERT-V40K-klueNLI-augSTS|**0.8628**|
## Citation
```bibtex
@misc{kr-sbert,
author = {Park, Suzi and Shin, Hyopil},
title = {KR-SBERT: A Pre-trained Korean-specific Sentence-BERT model},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/snunlp/KR-SBERT}}
}
``` | 3,885 | [
[
-0.01340484619140625,
-0.05389404296875,
0.0234527587890625,
0.021881103515625,
-0.0193634033203125,
-0.0246429443359375,
-0.02972412109375,
-0.0002951622009277344,
0.01282501220703125,
0.0241851806640625,
-0.036163330078125,
-0.040771484375,
-0.055023193359375,... |
SRDdev/QABERT-small | 2023-06-21T15:00:00.000Z | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad_v2",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | SRDdev | null | null | SRDdev/QABERT-small | 0 | 1,577 | transformers | 2023-02-08T12:40:31 | ---
datasets:
- squad_v2
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: question-answering
tags:
- question-answering
---
# QA-BERT
QA-BERT is a question-answering model. It is a lightweight alternative to larger question-answering models.
## Dataset
The Stanford Question Answering Dataset (SQuAD) is a widely used benchmark dataset for the task of machine reading comprehension. It consists of over 100,000 question-answer pairs based on a set of Wikipedia articles. The goal is to train models that can answer questions based on their understanding of the given text passages. SQuAD has played a significant role in advancing the state-of-the-art in this field and remains a popular choice for researchers and practitioners alike.
Due to GPU limitations, this version is trained on `30k samples` from the Stanford Question Answering Dataset.
<details>
<summary><i>Structure of the Data Dictionary</i></summary>
<!--All you need is a blank line-->

```json
{
  "data": [
    {
      "title": "Article Title",
      "paragraphs": [
        {
          "context": "The context text of the paragraph",
          "qas": [
            {
              "question": "The question asked about the context",
              "id": "A unique identifier for the question",
              "answers": [
                {
                  "text": "The answer to the question",
                  "answer_start": "The starting index of the answer in the context"
                }
              ]
            }
          ]
        }
      ]
    }
  ],
  "version": "The version of the SQuAD dataset"
}
```
</details>
## Model
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained transformer-based model for natural language processing tasks such as question answering. BERT is fine-tuned for question answering by adding a linear layer on top of the pre-trained BERT representations to predict the start and end of the answer in the input context. BERT has achieved state-of-the-art results on multiple benchmark datasets, including the Stanford Question Answering Dataset (SQuAD). The fine-tuning process allows BERT to effectively capture the relationships between questions and answers and generate accurate answers.
<img src="https://imgs.search.brave.com/F8m-nwp6EIG5vq--OmJLrCDpIkuX6tEQ_kyFKQjlUTs/rs:fit:1200:1200:1/g:ce/aHR0cHM6Ly9ibG9n/LmdyaWRkeW5hbWlj/cy5jb20vY29udGVu/dC9pbWFnZXMvMjAy/MC8xMC9TbGljZS0x/OC5wbmc">
For more detail, read [Understanding QABERT](https://github.com/SRDdev/AnswerMind).
## Inference
_Load model_
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
QAtokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
QAmodel = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small")
```
_context_
```python
context = """Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question-answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script."""
```
_Build Pipeline_
```python
from transformers import pipeline
ask = pipeline("question-answering", model=QAmodel, tokenizer=QAtokenizer)
result = ask(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}'")
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first
to discuss what you would like to change.
Please make sure to update tests as appropriate.
## Citations
```bibtex
@misc{QA-BERT-small,
  author = {Shreyas Dixit},
  title = {QA-BERT-small},
  year = {2023},
  url = {https://huggingface.co/SRDdev/QA-BERT-small}
}
```
| 4,065 | [
[
-0.029693603515625,
-0.07373046875,
0.02191162109375,
0.00554656982421875,
-0.00032520294189453125,
0.0038127899169921875,
0.00859832763671875,
-0.016082763671875,
0.003658294677734375,
0.0205078125,
-0.0845947265625,
-0.016845703125,
-0.0161590576171875,
0.... |
benjamin/wtp-canine-s-12l | 2023-05-31T09:12:27.000Z | [
"transformers",
"pytorch",
"la-canine",
"token-classification",
"multilingual",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"... | token-classification | benjamin | null | null | benjamin/wtp-canine-s-12l | 2 | 1,575 | transformers | 2023-05-10T20:50:38 | ---
license: mit
language:
- multilingual
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- pa
- pl
- ps
- pt
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- tg
- th
- tr
- uk
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
---
# wtp-canine-s-12l
Model for [`wtpsplit`](https://github.com/bminixhofer/wtpsplit). | 552 | [
[
-0.0225830078125,
-0.02618408203125,
0.0245819091796875,
0.040802001953125,
-0.0271453857421875,
0.003387451171875,
0.009979248046875,
-0.0233154296875,
0.0283355712890625,
0.022735595703125,
-0.05767822265625,
-0.01221466064453125,
-0.030242919921875,
0.001... |
timm/lcnet_100.ra2_in1k | 2023-04-27T22:49:02.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:2109.15099",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/lcnet_100.ra2_in1k | 0 | 1,574 | timm | 2022-12-16T05:37:41 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for lcnet_100.ra2_in1k
An LCNet image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details (a sketch of these components in `timm` follows the list):
* RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
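The sketch below assembles these recipe components with `timm` utilities; all hyperparameter values are placeholders, not the exact settings used to train this checkpoint.

```python
# Illustrative assembly of the recipe components in timm; hyperparameters
# are placeholders, not the values used for this checkpoint.
import timm
from timm.optim import create_optimizer_v2
from timm.scheduler import StepLRScheduler
from timm.utils import ModelEmaV2

model = timm.create_model('lcnet_100', pretrained=False)

# RandAugment training transform (policy string is a placeholder)
train_transform = timm.data.create_transform(
    input_size=224, is_training=True, auto_augment='rand-m7-mstd0.5')

# RMSProp (TF 1.0 behaviour) optimizer
optimizer = create_optimizer_v2(model, opt='rmsproptf', lr=0.064, weight_decay=1e-5)

# EMA weight averaging
model_ema = ModelEmaV2(model, decay=0.9999)

# Step (exponential decay w/ staircase) LR schedule with warmup
scheduler = StepLRScheduler(
    optimizer,
    decay_t=2.4,       # epochs between decay steps (placeholder)
    decay_rate=0.97,   # multiplicative decay per step (placeholder)
    warmup_t=3,
    warmup_lr_init=1e-6,
)
```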
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.0
- GMACs: 0.2
- Activations (M): 2.5
- Image size: 224 x 224
- **Papers:**
- PP-LCNet: A Lightweight CPU Convolutional Neural Network: https://arxiv.org/abs/2109.15099
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('lcnet_100.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'lcnet_100.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'lcnet_100.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{cui2021pp,
title={PP-LCNet: A lightweight CPU convolutional neural network},
author={Cui, Cheng and Gao, Tingquan and Wei, Shengyu and Du, Yuning and Guo, Ruoyu and Dong, Shuilong and Lu, Bin and Zhou, Ying and Lv, Xueying and Liu, Qiwen and others},
journal={arXiv preprint arXiv:2109.15099},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
| 4,707 | [
[
-0.033935546875,
-0.03094482421875,
-0.004482269287109375,
-0.0013484954833984375,
-0.02166748046875,
-0.0255889892578125,
-0.026092529296875,
-0.031829833984375,
0.0129241943359375,
0.040435791015625,
-0.03411865234375,
-0.045196533203125,
-0.049713134765625,
... |
Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit | 2022-10-03T12:16:09.000Z | [
"sentence-transformers",
"pytorch",
"gptj",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | Muennighoff | null | null | Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit | 6 | 1,573 | sentence-transformers | 2022-03-02T23:29:04 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-5.8B-weightedmean-nli-bitfit
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 74.07462686567165
- type: ap
value: 37.44692407529112
- type: f1
value: 68.28971003916419
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 66.63811563169165
- type: ap
value: 78.57252079915924
- type: f1
value: 64.5543087846584
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 77.21889055472263
- type: ap
value: 25.663426367826712
- type: f1
value: 64.26265688503176
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.06209850107067
- type: ap
value: 14.028219107023915
- type: f1
value: 48.10387189660778
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 82.30920000000002
- type: ap
value: 76.88786578621213
- type: f1
value: 82.15455656065011
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 41.584
- type: f1
value: 41.203137944390114
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 35.288000000000004
- type: f1
value: 34.672995558518096
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 38.34
- type: f1
value: 37.608755629529455
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 37.839999999999996
- type: f1
value: 36.86898201563507
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 30.936000000000003
- type: f1
value: 30.49401738527071
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 33.75
- type: f1
value: 33.38338946025617
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 13.727
- type: map_at_10
value: 26.740000000000002
- type: map_at_100
value: 28.218
- type: map_at_1000
value: 28.246
- type: map_at_3
value: 21.728
- type: map_at_5
value: 24.371000000000002
- type: ndcg_at_1
value: 13.727
- type: ndcg_at_10
value: 35.07
- type: ndcg_at_100
value: 41.947
- type: ndcg_at_1000
value: 42.649
- type: ndcg_at_3
value: 24.484
- type: ndcg_at_5
value: 29.282999999999998
- type: precision_at_1
value: 13.727
- type: precision_at_10
value: 6.223
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 10.835
- type: precision_at_5
value: 8.848
- type: recall_at_1
value: 13.727
- type: recall_at_10
value: 62.233000000000004
- type: recall_at_100
value: 93.67
- type: recall_at_1000
value: 99.14699999999999
- type: recall_at_3
value: 32.504
- type: recall_at_5
value: 44.239
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 40.553923271901695
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 32.49323183712211
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 55.89811361443445
- type: mrr
value: 70.16235764850724
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 82.50506557805856
- type: cos_sim_spearman
value: 79.50000423261176
- type: euclidean_pearson
value: 75.76190885392926
- type: euclidean_spearman
value: 76.7330737163434
- type: manhattan_pearson
value: 75.825318036112
- type: manhattan_spearman
value: 76.7415076434559
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (de-en)
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 75.49060542797494
- type: f1
value: 75.15379262352123
- type: precision
value: 74.99391092553932
- type: recall
value: 75.49060542797494
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (fr-en)
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.4182258419546555
- type: f1
value: 0.4182258419546555
- type: precision
value: 0.4182258419546555
- type: recall
value: 0.4182258419546555
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (ru-en)
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.013855213023900243
- type: f1
value: 0.0115460108532502
- type: precision
value: 0.010391409767925183
- type: recall
value: 0.013855213023900243
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (zh-en)
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.315955766192733
- type: f1
value: 0.315955766192733
- type: precision
value: 0.315955766192733
- type: recall
value: 0.315955766192733
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 81.74025974025973
- type: f1
value: 81.66568824876
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 33.59451202614059
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 29.128241446157165
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 26.715
- type: map_at_10
value: 35.007
- type: map_at_100
value: 36.352000000000004
- type: map_at_1000
value: 36.51
- type: map_at_3
value: 32.257999999999996
- type: map_at_5
value: 33.595000000000006
- type: ndcg_at_1
value: 33.906
- type: ndcg_at_10
value: 40.353
- type: ndcg_at_100
value: 45.562999999999995
- type: ndcg_at_1000
value: 48.454
- type: ndcg_at_3
value: 36.349
- type: ndcg_at_5
value: 37.856
- type: precision_at_1
value: 33.906
- type: precision_at_10
value: 7.854
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 17.549
- type: precision_at_5
value: 12.561
- type: recall_at_1
value: 26.715
- type: recall_at_10
value: 49.508
- type: recall_at_100
value: 71.76599999999999
- type: recall_at_1000
value: 91.118
- type: recall_at_3
value: 37.356
- type: recall_at_5
value: 41.836
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 19.663
- type: map_at_10
value: 27.086
- type: map_at_100
value: 28.066999999999997
- type: map_at_1000
value: 28.18
- type: map_at_3
value: 24.819
- type: map_at_5
value: 26.332
- type: ndcg_at_1
value: 25.732
- type: ndcg_at_10
value: 31.613999999999997
- type: ndcg_at_100
value: 35.757
- type: ndcg_at_1000
value: 38.21
- type: ndcg_at_3
value: 28.332
- type: ndcg_at_5
value: 30.264000000000003
- type: precision_at_1
value: 25.732
- type: precision_at_10
value: 6.038
- type: precision_at_100
value: 1.034
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 13.864
- type: precision_at_5
value: 10.241999999999999
- type: recall_at_1
value: 19.663
- type: recall_at_10
value: 39.585
- type: recall_at_100
value: 57.718
- type: recall_at_1000
value: 74.26700000000001
- type: recall_at_3
value: 29.845
- type: recall_at_5
value: 35.105
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 30.125
- type: map_at_10
value: 39.824
- type: map_at_100
value: 40.935
- type: map_at_1000
value: 41.019
- type: map_at_3
value: 37.144
- type: map_at_5
value: 38.647999999999996
- type: ndcg_at_1
value: 34.922
- type: ndcg_at_10
value: 45.072
- type: ndcg_at_100
value: 50.046
- type: ndcg_at_1000
value: 51.895
- type: ndcg_at_3
value: 40.251
- type: ndcg_at_5
value: 42.581
- type: precision_at_1
value: 34.922
- type: precision_at_10
value: 7.303999999999999
- type: precision_at_100
value: 1.0739999999999998
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 17.994
- type: precision_at_5
value: 12.475999999999999
- type: recall_at_1
value: 30.125
- type: recall_at_10
value: 57.253
- type: recall_at_100
value: 79.35799999999999
- type: recall_at_1000
value: 92.523
- type: recall_at_3
value: 44.088
- type: recall_at_5
value: 49.893
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.298000000000002
- type: map_at_10
value: 21.479
- type: map_at_100
value: 22.387
- type: map_at_1000
value: 22.483
- type: map_at_3
value: 19.743
- type: map_at_5
value: 20.444000000000003
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 24.887
- type: ndcg_at_100
value: 29.544999999999998
- type: ndcg_at_1000
value: 32.417
- type: ndcg_at_3
value: 21.274
- type: ndcg_at_5
value: 22.399
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 3.932
- type: precision_at_100
value: 0.666
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 8.927
- type: precision_at_5
value: 6.056
- type: recall_at_1
value: 16.298000000000002
- type: recall_at_10
value: 34.031
- type: recall_at_100
value: 55.769000000000005
- type: recall_at_1000
value: 78.19500000000001
- type: recall_at_3
value: 23.799999999999997
- type: recall_at_5
value: 26.562
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 10.958
- type: map_at_10
value: 16.999
- type: map_at_100
value: 17.979
- type: map_at_1000
value: 18.112000000000002
- type: map_at_3
value: 15.010000000000002
- type: map_at_5
value: 16.256999999999998
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 20.985
- type: ndcg_at_100
value: 26.216
- type: ndcg_at_1000
value: 29.675
- type: ndcg_at_3
value: 17.28
- type: ndcg_at_5
value: 19.301
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 3.968
- type: precision_at_100
value: 0.784
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 8.541
- type: precision_at_5
value: 6.468
- type: recall_at_1
value: 10.958
- type: recall_at_10
value: 29.903000000000002
- type: recall_at_100
value: 53.413
- type: recall_at_1000
value: 78.74799999999999
- type: recall_at_3
value: 19.717000000000002
- type: recall_at_5
value: 24.817
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 21.217
- type: map_at_10
value: 29.677
- type: map_at_100
value: 30.928
- type: map_at_1000
value: 31.063000000000002
- type: map_at_3
value: 26.611
- type: map_at_5
value: 28.463
- type: ndcg_at_1
value: 26.083000000000002
- type: ndcg_at_10
value: 35.217
- type: ndcg_at_100
value: 40.715
- type: ndcg_at_1000
value: 43.559
- type: ndcg_at_3
value: 30.080000000000002
- type: ndcg_at_5
value: 32.701
- type: precision_at_1
value: 26.083000000000002
- type: precision_at_10
value: 6.622
- type: precision_at_100
value: 1.115
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 14.629
- type: precision_at_5
value: 10.837
- type: recall_at_1
value: 21.217
- type: recall_at_10
value: 47.031
- type: recall_at_100
value: 70.378
- type: recall_at_1000
value: 89.704
- type: recall_at_3
value: 32.427
- type: recall_at_5
value: 39.31
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 19.274
- type: map_at_10
value: 26.398
- type: map_at_100
value: 27.711000000000002
- type: map_at_1000
value: 27.833000000000002
- type: map_at_3
value: 24.294
- type: map_at_5
value: 25.385
- type: ndcg_at_1
value: 24.886
- type: ndcg_at_10
value: 30.909
- type: ndcg_at_100
value: 36.941
- type: ndcg_at_1000
value: 39.838
- type: ndcg_at_3
value: 27.455000000000002
- type: ndcg_at_5
value: 28.828
- type: precision_at_1
value: 24.886
- type: precision_at_10
value: 5.6739999999999995
- type: precision_at_100
value: 1.0290000000000001
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 13.242
- type: precision_at_5
value: 9.292
- type: recall_at_1
value: 19.274
- type: recall_at_10
value: 39.643
- type: recall_at_100
value: 66.091
- type: recall_at_1000
value: 86.547
- type: recall_at_3
value: 29.602
- type: recall_at_5
value: 33.561
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 18.653666666666666
- type: map_at_10
value: 25.606666666666666
- type: map_at_100
value: 26.669333333333334
- type: map_at_1000
value: 26.795833333333334
- type: map_at_3
value: 23.43433333333333
- type: map_at_5
value: 24.609666666666666
- type: ndcg_at_1
value: 22.742083333333333
- type: ndcg_at_10
value: 29.978333333333335
- type: ndcg_at_100
value: 34.89808333333333
- type: ndcg_at_1000
value: 37.806583333333336
- type: ndcg_at_3
value: 26.223666666666674
- type: ndcg_at_5
value: 27.91033333333333
- type: precision_at_1
value: 22.742083333333333
- type: precision_at_10
value: 5.397083333333334
- type: precision_at_100
value: 0.9340000000000002
- type: precision_at_1000
value: 0.13691666666666663
- type: precision_at_3
value: 12.331083333333332
- type: precision_at_5
value: 8.805499999999999
- type: recall_at_1
value: 18.653666666666666
- type: recall_at_10
value: 39.22625000000001
- type: recall_at_100
value: 61.31049999999999
- type: recall_at_1000
value: 82.19058333333334
- type: recall_at_3
value: 28.517333333333333
- type: recall_at_5
value: 32.9565
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.07
- type: map_at_10
value: 21.509
- type: map_at_100
value: 22.335
- type: map_at_1000
value: 22.437
- type: map_at_3
value: 19.717000000000002
- type: map_at_5
value: 20.574
- type: ndcg_at_1
value: 18.865000000000002
- type: ndcg_at_10
value: 25.135999999999996
- type: ndcg_at_100
value: 29.483999999999998
- type: ndcg_at_1000
value: 32.303
- type: ndcg_at_3
value: 21.719
- type: ndcg_at_5
value: 23.039
- type: precision_at_1
value: 18.865000000000002
- type: precision_at_10
value: 4.263999999999999
- type: precision_at_100
value: 0.696
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 9.866999999999999
- type: precision_at_5
value: 6.902
- type: recall_at_1
value: 16.07
- type: recall_at_10
value: 33.661
- type: recall_at_100
value: 54.001999999999995
- type: recall_at_1000
value: 75.564
- type: recall_at_3
value: 23.956
- type: recall_at_5
value: 27.264
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 10.847
- type: map_at_10
value: 15.518
- type: map_at_100
value: 16.384
- type: map_at_1000
value: 16.506
- type: map_at_3
value: 14.093
- type: map_at_5
value: 14.868
- type: ndcg_at_1
value: 13.764999999999999
- type: ndcg_at_10
value: 18.766
- type: ndcg_at_100
value: 23.076
- type: ndcg_at_1000
value: 26.344
- type: ndcg_at_3
value: 16.150000000000002
- type: ndcg_at_5
value: 17.373
- type: precision_at_1
value: 13.764999999999999
- type: precision_at_10
value: 3.572
- type: precision_at_100
value: 0.6779999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 7.88
- type: precision_at_5
value: 5.712
- type: recall_at_1
value: 10.847
- type: recall_at_10
value: 25.141999999999996
- type: recall_at_100
value: 44.847
- type: recall_at_1000
value: 68.92099999999999
- type: recall_at_3
value: 17.721999999999998
- type: recall_at_5
value: 20.968999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 18.377
- type: map_at_10
value: 26.005
- type: map_at_100
value: 26.996
- type: map_at_1000
value: 27.116
- type: map_at_3
value: 23.712
- type: map_at_5
value: 24.859
- type: ndcg_at_1
value: 22.201
- type: ndcg_at_10
value: 30.635
- type: ndcg_at_100
value: 35.623
- type: ndcg_at_1000
value: 38.551
- type: ndcg_at_3
value: 26.565
- type: ndcg_at_5
value: 28.28
- type: precision_at_1
value: 22.201
- type: precision_at_10
value: 5.41
- type: precision_at_100
value: 0.88
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 12.531
- type: precision_at_5
value: 8.806
- type: recall_at_1
value: 18.377
- type: recall_at_10
value: 40.908
- type: recall_at_100
value: 63.563
- type: recall_at_1000
value: 84.503
- type: recall_at_3
value: 29.793999999999997
- type: recall_at_5
value: 34.144999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 20.246
- type: map_at_10
value: 27.528000000000002
- type: map_at_100
value: 28.78
- type: map_at_1000
value: 29.002
- type: map_at_3
value: 25.226
- type: map_at_5
value: 26.355
- type: ndcg_at_1
value: 25.099
- type: ndcg_at_10
value: 32.421
- type: ndcg_at_100
value: 37.2
- type: ndcg_at_1000
value: 40.693
- type: ndcg_at_3
value: 28.768
- type: ndcg_at_5
value: 30.23
- type: precision_at_1
value: 25.099
- type: precision_at_10
value: 6.245
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.767999999999999
- type: precision_at_5
value: 9.881
- type: recall_at_1
value: 20.246
- type: recall_at_10
value: 41.336
- type: recall_at_100
value: 63.098
- type: recall_at_1000
value: 86.473
- type: recall_at_3
value: 30.069000000000003
- type: recall_at_5
value: 34.262
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 14.054
- type: map_at_10
value: 20.25
- type: map_at_100
value: 21.178
- type: map_at_1000
value: 21.288999999999998
- type: map_at_3
value: 18.584999999999997
- type: map_at_5
value: 19.536
- type: ndcg_at_1
value: 15.527
- type: ndcg_at_10
value: 23.745
- type: ndcg_at_100
value: 28.610999999999997
- type: ndcg_at_1000
value: 31.740000000000002
- type: ndcg_at_3
value: 20.461
- type: ndcg_at_5
value: 22.072
- type: precision_at_1
value: 15.527
- type: precision_at_10
value: 3.882
- type: precision_at_100
value: 0.6930000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 9.181000000000001
- type: precision_at_5
value: 6.433
- type: recall_at_1
value: 14.054
- type: recall_at_10
value: 32.714
- type: recall_at_100
value: 55.723
- type: recall_at_1000
value: 79.72399999999999
- type: recall_at_3
value: 23.832
- type: recall_at_5
value: 27.754
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 6.122
- type: map_at_10
value: 11.556
- type: map_at_100
value: 12.998000000000001
- type: map_at_1000
value: 13.202
- type: map_at_3
value: 9.657
- type: map_at_5
value: 10.585
- type: ndcg_at_1
value: 15.049000000000001
- type: ndcg_at_10
value: 17.574
- type: ndcg_at_100
value: 24.465999999999998
- type: ndcg_at_1000
value: 28.511999999999997
- type: ndcg_at_3
value: 13.931
- type: ndcg_at_5
value: 15.112
- type: precision_at_1
value: 15.049000000000001
- type: precision_at_10
value: 5.831
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 10.749
- type: precision_at_5
value: 8.365
- type: recall_at_1
value: 6.122
- type: recall_at_10
value: 22.207
- type: recall_at_100
value: 47.08
- type: recall_at_1000
value: 70.182
- type: recall_at_3
value: 13.416
- type: recall_at_5
value: 16.672
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 4.672
- type: map_at_10
value: 10.534
- type: map_at_100
value: 14.798
- type: map_at_1000
value: 15.927
- type: map_at_3
value: 7.317
- type: map_at_5
value: 8.726
- type: ndcg_at_1
value: 36.5
- type: ndcg_at_10
value: 26.098
- type: ndcg_at_100
value: 29.215999999999998
- type: ndcg_at_1000
value: 36.254999999999995
- type: ndcg_at_3
value: 29.247
- type: ndcg_at_5
value: 27.692
- type: precision_at_1
value: 47.25
- type: precision_at_10
value: 22.625
- type: precision_at_100
value: 7.042
- type: precision_at_1000
value: 1.6129999999999998
- type: precision_at_3
value: 34.083000000000006
- type: precision_at_5
value: 29.5
- type: recall_at_1
value: 4.672
- type: recall_at_10
value: 15.638
- type: recall_at_100
value: 36.228
- type: recall_at_1000
value: 58.831
- type: recall_at_3
value: 8.578
- type: recall_at_5
value: 11.18
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 49.919999999999995
- type: f1
value: 45.37973678791632
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 25.801000000000002
- type: map_at_10
value: 33.941
- type: map_at_100
value: 34.73
- type: map_at_1000
value: 34.793
- type: map_at_3
value: 31.705
- type: map_at_5
value: 33.047
- type: ndcg_at_1
value: 27.933000000000003
- type: ndcg_at_10
value: 38.644
- type: ndcg_at_100
value: 42.594
- type: ndcg_at_1000
value: 44.352000000000004
- type: ndcg_at_3
value: 34.199
- type: ndcg_at_5
value: 36.573
- type: precision_at_1
value: 27.933000000000003
- type: precision_at_10
value: 5.603000000000001
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 14.171
- type: precision_at_5
value: 9.786999999999999
- type: recall_at_1
value: 25.801000000000002
- type: recall_at_10
value: 50.876
- type: recall_at_100
value: 69.253
- type: recall_at_1000
value: 82.907
- type: recall_at_3
value: 38.879000000000005
- type: recall_at_5
value: 44.651999999999994
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 9.142
- type: map_at_10
value: 13.841999999999999
- type: map_at_100
value: 14.960999999999999
- type: map_at_1000
value: 15.187000000000001
- type: map_at_3
value: 11.966000000000001
- type: map_at_5
value: 12.921
- type: ndcg_at_1
value: 18.364
- type: ndcg_at_10
value: 18.590999999999998
- type: ndcg_at_100
value: 24.153
- type: ndcg_at_1000
value: 29.104000000000003
- type: ndcg_at_3
value: 16.323
- type: ndcg_at_5
value: 17.000999999999998
- type: precision_at_1
value: 18.364
- type: precision_at_10
value: 5.216
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 10.751
- type: precision_at_5
value: 7.932
- type: recall_at_1
value: 9.142
- type: recall_at_10
value: 22.747
- type: recall_at_100
value: 44.585
- type: recall_at_1000
value: 75.481
- type: recall_at_3
value: 14.602
- type: recall_at_5
value: 17.957
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 18.677
- type: map_at_10
value: 26.616
- type: map_at_100
value: 27.605
- type: map_at_1000
value: 27.711999999999996
- type: map_at_3
value: 24.396
- type: map_at_5
value: 25.627
- type: ndcg_at_1
value: 37.352999999999994
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 38.423
- type: ndcg_at_1000
value: 40.947
- type: ndcg_at_3
value: 29.885
- type: ndcg_at_5
value: 31.874999999999996
- type: precision_at_1
value: 37.352999999999994
- type: precision_at_10
value: 7.539999999999999
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.938
- type: precision_at_5
value: 12.943
- type: recall_at_1
value: 18.677
- type: recall_at_10
value: 37.698
- type: recall_at_100
value: 55.354000000000006
- type: recall_at_1000
value: 72.255
- type: recall_at_3
value: 28.406
- type: recall_at_5
value: 32.357
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 74.3292
- type: ap
value: 68.30186110189658
- type: f1
value: 74.20709636944783
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 6.889000000000001
- type: map_at_10
value: 12.321
- type: map_at_100
value: 13.416
- type: map_at_1000
value: 13.525
- type: map_at_3
value: 10.205
- type: map_at_5
value: 11.342
- type: ndcg_at_1
value: 7.092
- type: ndcg_at_10
value: 15.827
- type: ndcg_at_100
value: 21.72
- type: ndcg_at_1000
value: 24.836
- type: ndcg_at_3
value: 11.393
- type: ndcg_at_5
value: 13.462
- type: precision_at_1
value: 7.092
- type: precision_at_10
value: 2.7969999999999997
- type: precision_at_100
value: 0.583
- type: precision_at_1000
value: 0.08499999999999999
- type: precision_at_3
value: 5.019
- type: precision_at_5
value: 4.06
- type: recall_at_1
value: 6.889000000000001
- type: recall_at_10
value: 26.791999999999998
- type: recall_at_100
value: 55.371
- type: recall_at_1000
value: 80.12899999999999
- type: recall_at_3
value: 14.573
- type: recall_at_5
value: 19.557
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 89.6374829001368
- type: f1
value: 89.20878379358307
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 84.54212454212454
- type: f1
value: 82.81080100037023
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 86.46430953969313
- type: f1
value: 86.00019824223267
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 81.31850923896022
- type: f1
value: 81.07860454762863
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 58.23234134098243
- type: f1
value: 56.63845098081841
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 72.28571428571429
- type: f1
value: 70.95796714592039
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 70.68171454628363
- type: f1
value: 52.57188062729139
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 60.521273598196665
- type: f1
value: 42.70492970339204
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 64.32288192128087
- type: f1
value: 45.97360620220273
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 58.67209520826808
- type: f1
value: 42.82844991304579
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 41.95769092864826
- type: f1
value: 28.914127631431263
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 55.28390596745027
- type: f1
value: 38.33899250561289
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 70.00336247478144
- type: f1
value: 68.72041942191649
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.0268997982515
- type: f1
value: 75.29844481506652
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 30.327566856300813
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 28.01650210863619
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.11041256752524
- type: mrr
value: 32.14172939750204
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 3.527
- type: map_at_10
value: 9.283
- type: map_at_100
value: 11.995000000000001
- type: map_at_1000
value: 13.33
- type: map_at_3
value: 6.223
- type: map_at_5
value: 7.68
- type: ndcg_at_1
value: 36.223
- type: ndcg_at_10
value: 28.255999999999997
- type: ndcg_at_100
value: 26.355
- type: ndcg_at_1000
value: 35.536
- type: ndcg_at_3
value: 31.962000000000003
- type: ndcg_at_5
value: 30.61
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 21.889
- type: precision_at_100
value: 7.1080000000000005
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 30.857
- type: precision_at_5
value: 27.307
- type: recall_at_1
value: 3.527
- type: recall_at_10
value: 14.015
- type: recall_at_100
value: 28.402
- type: recall_at_1000
value: 59.795
- type: recall_at_3
value: 7.5969999999999995
- type: recall_at_5
value: 10.641
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 11.631
- type: map_at_10
value: 19.532
- type: map_at_100
value: 20.821
- type: map_at_1000
value: 20.910999999999998
- type: map_at_3
value: 16.597
- type: map_at_5
value: 18.197
- type: ndcg_at_1
value: 13.413
- type: ndcg_at_10
value: 24.628
- type: ndcg_at_100
value: 30.883
- type: ndcg_at_1000
value: 33.216
- type: ndcg_at_3
value: 18.697
- type: ndcg_at_5
value: 21.501
- type: precision_at_1
value: 13.413
- type: precision_at_10
value: 4.571
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 8.845
- type: precision_at_5
value: 6.889000000000001
- type: recall_at_1
value: 11.631
- type: recall_at_10
value: 38.429
- type: recall_at_100
value: 67.009
- type: recall_at_1000
value: 84.796
- type: recall_at_3
value: 22.74
- type: recall_at_5
value: 29.266
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 66.64
- type: map_at_10
value: 80.394
- type: map_at_100
value: 81.099
- type: map_at_1000
value: 81.122
- type: map_at_3
value: 77.289
- type: map_at_5
value: 79.25999999999999
- type: ndcg_at_1
value: 76.85
- type: ndcg_at_10
value: 84.68
- type: ndcg_at_100
value: 86.311
- type: ndcg_at_1000
value: 86.49900000000001
- type: ndcg_at_3
value: 81.295
- type: ndcg_at_5
value: 83.199
- type: precision_at_1
value: 76.85
- type: precision_at_10
value: 12.928999999999998
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.557
- type: precision_at_5
value: 23.576
- type: recall_at_1
value: 66.64
- type: recall_at_10
value: 93.059
- type: recall_at_100
value: 98.922
- type: recall_at_1000
value: 99.883
- type: recall_at_3
value: 83.49499999999999
- type: recall_at_5
value: 88.729
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 42.17131361041068
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 48.01815621479994
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.198
- type: map_at_10
value: 7.550999999999999
- type: map_at_100
value: 9.232
- type: map_at_1000
value: 9.51
- type: map_at_3
value: 5.2940000000000005
- type: map_at_5
value: 6.343999999999999
- type: ndcg_at_1
value: 15.8
- type: ndcg_at_10
value: 13.553999999999998
- type: ndcg_at_100
value: 20.776
- type: ndcg_at_1000
value: 26.204
- type: ndcg_at_3
value: 12.306000000000001
- type: ndcg_at_5
value: 10.952
- type: precision_at_1
value: 15.8
- type: precision_at_10
value: 7.180000000000001
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.307
- type: precision_at_3
value: 11.333
- type: precision_at_5
value: 9.62
- type: recall_at_1
value: 3.198
- type: recall_at_10
value: 14.575
- type: recall_at_100
value: 35.758
- type: recall_at_1000
value: 62.317
- type: recall_at_3
value: 6.922000000000001
- type: recall_at_5
value: 9.767000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 84.5217161312271
- type: cos_sim_spearman
value: 79.58562467776268
- type: euclidean_pearson
value: 76.69364353942403
- type: euclidean_spearman
value: 74.68959282070473
- type: manhattan_pearson
value: 76.81159265133732
- type: manhattan_spearman
value: 74.7519444048176
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 83.70403706922605
- type: cos_sim_spearman
value: 74.28502198729447
- type: euclidean_pearson
value: 83.32719404608066
- type: euclidean_spearman
value: 75.92189433460788
- type: manhattan_pearson
value: 83.35841543005293
- type: manhattan_spearman
value: 75.94458615451978
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 84.94127878986795
- type: cos_sim_spearman
value: 85.35148434923192
- type: euclidean_pearson
value: 81.71127467071571
- type: euclidean_spearman
value: 82.88240481546771
- type: manhattan_pearson
value: 81.72826221967252
- type: manhattan_spearman
value: 82.90725064625128
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 83.1474704168523
- type: cos_sim_spearman
value: 79.20612995350827
- type: euclidean_pearson
value: 78.85993329596555
- type: euclidean_spearman
value: 78.91956572744715
- type: manhattan_pearson
value: 78.89999720522347
- type: manhattan_spearman
value: 78.93956842550107
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 84.81255514055894
- type: cos_sim_spearman
value: 85.5217140762934
- type: euclidean_pearson
value: 82.15024353784499
- type: euclidean_spearman
value: 83.04155334389833
- type: manhattan_pearson
value: 82.18598945053624
- type: manhattan_spearman
value: 83.07248357693301
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 80.63248465157822
- type: cos_sim_spearman
value: 82.53853238521991
- type: euclidean_pearson
value: 78.33936863828221
- type: euclidean_spearman
value: 79.16305579487414
- type: manhattan_pearson
value: 78.3888359870894
- type: manhattan_spearman
value: 79.18504473136467
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 90.09066290639687
- type: cos_sim_spearman
value: 90.43893699357069
- type: euclidean_pearson
value: 82.39520777222396
- type: euclidean_spearman
value: 81.23948185395952
- type: manhattan_pearson
value: 82.35529784653383
- type: manhattan_spearman
value: 81.12681522483975
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 63.52752323046846
- type: cos_sim_spearman
value: 63.19719780439462
- type: euclidean_pearson
value: 58.29085490641428
- type: euclidean_spearman
value: 58.975178656335046
- type: manhattan_pearson
value: 58.183542772416985
- type: manhattan_spearman
value: 59.190630462178994
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 85.45100366635687
- type: cos_sim_spearman
value: 85.66816193002651
- type: euclidean_pearson
value: 81.87976731329091
- type: euclidean_spearman
value: 82.01382867690964
- type: manhattan_pearson
value: 81.88260155706726
- type: manhattan_spearman
value: 82.05258597906492
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 77.53549990038017
- type: mrr
value: 93.37474163454556
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 31.167
- type: map_at_10
value: 40.778
- type: map_at_100
value: 42.063
- type: map_at_1000
value: 42.103
- type: map_at_3
value: 37.12
- type: map_at_5
value: 39.205
- type: ndcg_at_1
value: 33.667
- type: ndcg_at_10
value: 46.662
- type: ndcg_at_100
value: 51.995999999999995
- type: ndcg_at_1000
value: 53.254999999999995
- type: ndcg_at_3
value: 39.397999999999996
- type: ndcg_at_5
value: 42.934
- type: precision_at_1
value: 33.667
- type: precision_at_10
value: 7.1
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 16.111
- type: precision_at_5
value: 11.600000000000001
- type: recall_at_1
value: 31.167
- type: recall_at_10
value: 63.744
- type: recall_at_100
value: 87.156
- type: recall_at_1000
value: 97.556
- type: recall_at_3
value: 44.0
- type: recall_at_5
value: 52.556000000000004
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.55148514851486
- type: cos_sim_ap
value: 80.535236573428
- type: cos_sim_f1
value: 75.01331912626532
- type: cos_sim_precision
value: 80.27366020524515
- type: cos_sim_recall
value: 70.39999999999999
- type: dot_accuracy
value: 99.04851485148515
- type: dot_ap
value: 28.505358821499726
- type: dot_f1
value: 36.36363636363637
- type: dot_precision
value: 37.160751565762006
- type: dot_recall
value: 35.6
- type: euclidean_accuracy
value: 99.4990099009901
- type: euclidean_ap
value: 74.95819047075476
- type: euclidean_f1
value: 71.15489874110564
- type: euclidean_precision
value: 78.59733978234583
- type: euclidean_recall
value: 65.0
- type: manhattan_accuracy
value: 99.50198019801981
- type: manhattan_ap
value: 75.02070096015086
- type: manhattan_f1
value: 71.20535714285712
- type: manhattan_precision
value: 80.55555555555556
- type: manhattan_recall
value: 63.800000000000004
- type: max_accuracy
value: 99.55148514851486
- type: max_ap
value: 80.535236573428
- type: max_f1
value: 75.01331912626532
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 54.13314692311623
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 31.115181648287145
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 44.771112666694336
- type: mrr
value: 45.30415764790765
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 30.849429597669374
- type: cos_sim_spearman
value: 30.384175038360194
- type: dot_pearson
value: 29.030383429536823
- type: dot_spearman
value: 28.03273624951732
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.19499999999999998
- type: map_at_10
value: 1.0959999999999999
- type: map_at_100
value: 5.726
- type: map_at_1000
value: 13.611999999999998
- type: map_at_3
value: 0.45399999999999996
- type: map_at_5
value: 0.67
- type: ndcg_at_1
value: 71.0
- type: ndcg_at_10
value: 55.352999999999994
- type: ndcg_at_100
value: 40.797
- type: ndcg_at_1000
value: 35.955999999999996
- type: ndcg_at_3
value: 63.263000000000005
- type: ndcg_at_5
value: 60.14000000000001
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 56.99999999999999
- type: precision_at_100
value: 41.199999999999996
- type: precision_at_1000
value: 16.154
- type: precision_at_3
value: 66.667
- type: precision_at_5
value: 62.8
- type: recall_at_1
value: 0.19499999999999998
- type: recall_at_10
value: 1.3639999999999999
- type: recall_at_100
value: 9.317
- type: recall_at_1000
value: 33.629999999999995
- type: recall_at_3
value: 0.49300000000000005
- type: recall_at_5
value: 0.756
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 1.335
- type: map_at_10
value: 6.293
- type: map_at_100
value: 10.928
- type: map_at_1000
value: 12.359
- type: map_at_3
value: 3.472
- type: map_at_5
value: 4.935
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 16.178
- type: ndcg_at_100
value: 28.149
- type: ndcg_at_1000
value: 39.845000000000006
- type: ndcg_at_3
value: 19.171
- type: ndcg_at_5
value: 17.864
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 14.49
- type: precision_at_100
value: 6.306000000000001
- type: precision_at_1000
value: 1.3860000000000001
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 18.367
- type: recall_at_1
value: 1.335
- type: recall_at_10
value: 10.825999999999999
- type: recall_at_100
value: 39.251000000000005
- type: recall_at_1000
value: 74.952
- type: recall_at_3
value: 4.9110000000000005
- type: recall_at_5
value: 7.312
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 69.93339999999999
- type: ap
value: 13.87476602492533
- type: f1
value: 53.867357615848555
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 62.43916242218449
- type: f1
value: 62.870386304954685
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 37.202082549859796
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.65023544137807
- type: cos_sim_ap
value: 65.99787692764193
- type: cos_sim_f1
value: 62.10650887573965
- type: cos_sim_precision
value: 56.30901287553648
- type: cos_sim_recall
value: 69.23482849604221
- type: dot_accuracy
value: 79.10830303391549
- type: dot_ap
value: 48.80109642320246
- type: dot_f1
value: 51.418744625967314
- type: dot_precision
value: 40.30253107683091
- type: dot_recall
value: 71.00263852242745
- type: euclidean_accuracy
value: 82.45812719794957
- type: euclidean_ap
value: 60.09969493259607
- type: euclidean_f1
value: 57.658573789246226
- type: euclidean_precision
value: 55.62913907284768
- type: euclidean_recall
value: 59.84168865435356
- type: manhattan_accuracy
value: 82.46408773916671
- type: manhattan_ap
value: 60.116199786815116
- type: manhattan_f1
value: 57.683903860160235
- type: manhattan_precision
value: 53.41726618705036
- type: manhattan_recall
value: 62.69129287598945
- type: max_accuracy
value: 83.65023544137807
- type: max_ap
value: 65.99787692764193
- type: max_f1
value: 62.10650887573965
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34943920518494
- type: cos_sim_ap
value: 84.5428891020442
- type: cos_sim_f1
value: 77.09709933923172
- type: cos_sim_precision
value: 74.83150952967607
- type: cos_sim_recall
value: 79.50415768401602
- type: dot_accuracy
value: 84.53448208949432
- type: dot_ap
value: 73.96328242371995
- type: dot_f1
value: 70.00553786515299
- type: dot_precision
value: 63.58777665995976
- type: dot_recall
value: 77.86418232214352
- type: euclidean_accuracy
value: 86.87662514068381
- type: euclidean_ap
value: 81.45499631520235
- type: euclidean_f1
value: 73.46567109816063
- type: euclidean_precision
value: 69.71037533697381
- type: euclidean_recall
value: 77.6485987064983
- type: manhattan_accuracy
value: 86.88244654014825
- type: manhattan_ap
value: 81.47180273946366
- type: manhattan_f1
value: 73.44624393136418
- type: manhattan_precision
value: 70.80385852090032
- type: manhattan_recall
value: 76.29350169387126
- type: max_accuracy
value: 88.34943920518494
- type: max_ap
value: 84.5428891020442
- type: max_f1
value: 77.09709933923172
---
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
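As a minimal loading sketch (assuming the standard `sentence-transformers` API and the `Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit` repository id inferred from the card title; the codebase above documents the special query/document tokens this checkpoint expects for asymmetric search):
```python
from sentence_transformers import SentenceTransformer

# repository id assumed from the card title
model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")
embeddings = model.encode(["Example query", "Example document"])
print(embeddings.shape)  # (2, 4096), matching the pooling config below
```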
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
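Illustratively, these settings map onto a `sentence_transformers` training call roughly as follows (a sketch, not the actual training script; `model` and `train_dataloader` stand for the objects described above):
```python
from sentence_transformers import losses

# MultipleNegativesRankingLoss with the scale listed above; cos_sim is its default similarity
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 5e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```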
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
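The `pooling_mode_weightedmean_tokens` flag corresponds to SGPT's position-weighted mean pooling. A rough sketch of the idea, not the library's exact implementation (token weights grow linearly with position, so later tokens contribute more):
```python
import torch

def weighted_mean_pooling(hidden_states, attention_mask):
    # positions 1..L weight later tokens more heavily, as in SGPT
    weights = torch.arange(1, hidden_states.size(1) + 1,
                           device=hidden_states.device, dtype=hidden_states.dtype)
    weights = weights[None, :, None] * attention_mask[:, :, None]
    return (hidden_states * weights).sum(dim=1) / weights.sum(dim=1)
```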
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| 65,889 | [
[
-0.020416259765625,
-0.0389404296875,
0.0302276611328125,
0.016510009765625,
-0.033966064453125,
-0.0261383056640625,
-0.023529052734375,
0.0031833648681640625,
0.0184173583984375,
0.0164794921875,
-0.050567626953125,
-0.026763916015625,
-0.060089111328125,
... |
AkshatSurolia/ICD-10-Code-Prediction | 2023-05-05T15:24:14.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | AkshatSurolia | null | null | AkshatSurolia/ICD-10-Code-Prediction | 21 | 1,572 | transformers | 2022-03-02T23:29:04 | ---
license: apache-2.0
tags:
- text-classification
---
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = model.config
```
Run the model with clinical diagnosis text:
```python
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
Return the top-5 predicted ICD-10 codes:
```python
results = output.logits.detach().cpu().numpy()[0].argsort()[::-1][:5]
top5_codes = [config.id2label[ids] for ids in results]  # map label ids to ICD-10 codes
print(top5_codes)
``` | 1,136 | [
[
-0.03387451171875,
-0.0238189697265625,
0.046417236328125,
0.034515380859375,
-0.031402587890625,
0.00260162353515625,
0.01091766357421875,
-0.0283660888671875,
0.028167724609375,
0.033477783203125,
-0.031829833984375,
-0.06341552734375,
-0.05108642578125,
0... |
TencentGameMate/chinese-hubert-base | 2022-06-24T01:52:57.000Z | [
"transformers",
"pytorch",
"hubert",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | TencentGameMate | null | null | TencentGameMate/chinese-hubert-base | 12 | 1,572 | transformers | 2022-06-02T06:21:23 | ---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
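A hedged sketch of that fine-tuning setup (not from the original card; `vocab.json` is a hypothetical character vocabulary you would build from your labeled transcripts, and `HubertForCTC` is the standard `transformers` CTC head for this architecture):
```python
from transformers import HubertForCTC, Wav2Vec2CTCTokenizer

# hypothetical vocabulary file built from your labeled data
tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]",
                                 pad_token="[PAD]", word_delimiter_token="|")
model = HubertForCTC.from_pretrained(
    "TencentGameMate/chinese-hubert-base",
    vocab_size=len(tokenizer),
    ctc_loss_reduction="mean",
    pad_token_id=tokenizer.pad_token_id,
)
# fine-tune on labeled speech/text pairs, e.g. with the transformers Trainer
```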
Required python package: `transformers==4.16.2`
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from transformers import (
Wav2Vec2FeatureExtractor,
HubertModel,
)
model_path=""
wav_path=""

# select the device the model will run on
device = "cuda" if torch.cuda.is_available() else "cpu"

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = HubertModel.from_pretrained(model_path)

# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)

model = model.to(device)
model = model.half()
model.eval()
wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)
with torch.no_grad():
outputs = model(input_values)
last_hidden_state = outputs.last_hidden_state
``` | 1,190 | [
[
-0.00858306884765625,
-0.0204620361328125,
0.019744873046875,
0.024505615234375,
-0.0225372314453125,
-0.00675201416015625,
-0.0247039794921875,
-0.0284271240234375,
-0.005397796630859375,
0.01282501220703125,
-0.058135986328125,
-0.0335693359375,
-0.03448486328... |
mirav/newmoon | 2023-08-24T16:03:08.000Z | [
"diffusers",
"text-to-image",
"en",
"license:cc",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | mirav | null | null | mirav/newmoon | 10 | 1,572 | diffusers | 2023-06-21T02:36:31 | ---
license: cc
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
thumbnail: https://huggingface.co/mirav/newmoon/resolve/main/examples/thumbnail.png
---
If you like my work, consider [supporting me](https://ko-fi.com/mira6).<br>
NewMoon: Soft, bright colors. <br>
ChamomileTea: Darker, moodier colors. <br>
TestMoon: Alternate style. Very cute, but a bit flawed. <br>
NewMoonTest2A: Alternate style. Soft, flat, and cute, but harder to work with. <br>
newmoon.yaml: sample prompt for animatediff
<img src="https://huggingface.co/mirav/newmoon/resolve/main/examples/fox2.gif">
<img src="https://huggingface.co/mirav/newmoon/resolve/main/examples/goofy.gif">
<img src="https://huggingface.co/mirav/newmoon/resolve/main/examples/headtilt2.gif">
<img src="https://huggingface.co/mirav/newmoon/resolve/main/examples/newfox.gif">
<img src="https://huggingface.co/mirav/newmoon/resolve/main/examples/really%20odd.gif">
<img src="https://huggingface.co/mirav/newmoon/resolve/main/examples/wag.gif"> | 1,008 | [
[
-0.051849365234375,
-0.0279693603515625,
0.029449462890625,
0.049530029296875,
-0.042694091796875,
-0.0148468017578125,
-0.0083465576171875,
-0.0193634033203125,
0.054840087890625,
0.022186279296875,
-0.08251953125,
-0.0195770263671875,
-0.05108642578125,
0.... |
kmariunas/uncased-bert-triplet-40 | 2023-07-11T13:02:44.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | kmariunas | null | null | kmariunas/uncased-bert-triplet-40 | 0 | 1,572 | sentence-transformers | 2023-07-11T13:01:06 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kmariunas/uncased-bert-triplet-40
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kmariunas/uncased-bert-triplet-40')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kmariunas/uncased-bert-triplet-40')
model = AutoModel.from_pretrained('kmariunas/uncased-bert-triplet-40')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kmariunas/uncased-bert-triplet-40)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 108 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 40,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 429.20000000000005,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,714 | [
[
-0.0199127197265625,
-0.0614013671875,
0.0208587646484375,
0.0225372314453125,
-0.020416259765625,
-0.032806396484375,
-0.018646240234375,
0.0009484291076660156,
0.016693115234375,
0.0266571044921875,
-0.048431396484375,
-0.045928955078125,
-0.051910400390625,
... |
bhadresh-savani/bert-base-go-emotion | 2021-11-29T10:43:10.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"go-emotion",
"en",
"dataset:go_emotions",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | bhadresh-savani | null | null | bhadresh-savani/bert-base-go-emotion | 27 | 1,571 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- go-emotion
- pytorch
license: apache-2.0
datasets:
- go_emotions
metrics:
- Accuracy
---
# Bert-Base-Uncased-Go-Emotion
## Model description:
[bert-base-uncased](https://huggingface.co/bert-base-uncased) finetuned on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset for multi-label emotion classification.
## Training Parameters:
```
Num examples = 169208
Num Epochs = 3
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 31728
```
## TrainOutput:
```
'train_loss': 0.12085497042373672,
```
## Evaluation Output:
```
'eval_accuracy_thresh': 0.9614765048027039,
'eval_loss': 0.1164659634232521
```
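A minimal usage sketch (not from the original card; it assumes a recent `transformers` version where `top_k=None` returns scores for every label, which suits this multi-label setup):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/bert-base-go-emotion",
    top_k=None,  # return a score for each of the 28 go_emotions labels
)

results = classifier("Thanks for the lovely surprise, I am thrilled!")
print(results)  # keep labels whose score clears your chosen threshold
```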
## Colab Notebook:
[Notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb) | 884 | [
[
-0.03350830078125,
-0.036163330078125,
0.01418304443359375,
0.0445556640625,
-0.027435302734375,
-0.0120391845703125,
-0.0297698974609375,
0.004390716552734375,
0.0087432861328125,
-0.006130218505859375,
-0.06256103515625,
-0.0294342041015625,
-0.052642822265625... |
yahyasmt/brain-tumor-3 | 2023-10-08T16:30:23.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | yahyasmt | null | null | yahyasmt/brain-tumor-3 | 0 | 1,571 | diffusers | 2023-10-08T16:17:36 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### brain_tumor_3 Dreambooth model trained by yahyasmt with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 503 | [
[
-0.01287841796875,
-0.06646728515625,
0.06390380859375,
0.029083251953125,
-0.0254058837890625,
0.00872039794921875,
0.0165252685546875,
-0.0218963623046875,
0.050079345703125,
0.025665283203125,
-0.025604248046875,
-0.0386962890625,
-0.0443115234375,
-0.026... |
jonatasgrosman/wav2vec2-large-xlsr-53-german | 2022-12-14T01:59:09.000Z | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | automatic-speech-recognition | jonatasgrosman | null | null | jonatasgrosman/wav2vec2-large-xlsr-53-german | 7 | 1,570 | transformers | 2022-03-02T23:29:05 | ---
language: de
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- de
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 German by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 12.06
- name: Test CER
type: cer
value: 2.92
- name: Test WER (+LM)
type: wer
value: 8.74
- name: Test CER (+LM)
type: cer
value: 2.28
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: de
metrics:
- name: Dev WER
type: wer
value: 32.75
- name: Dev CER
type: cer
value: 13.64
- name: Dev WER (+LM)
type: wer
value: 26.6
- name: Dev CER (+LM)
type: cer
value: 12.58
---
# Fine-tuned XLSR-53 large model for speech recognition in German
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-german")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "de"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-german"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS. | ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS |
| ES KOMMT ZUM SHOWDOWN IN GSTAAD. | ES KOMMT ZUG STUNDEDAUTENESTERKT |
| IHRE FOTOSTRECKEN ERSCHIENEN IN MODEMAGAZINEN WIE DER VOGUE, HARPER’S BAZAAR UND MARIE CLAIRE. | IHRE FOTELSTRECKEN ERSCHIENEN MIT MODEMAGAZINEN WIE DER VALG AT DAS BASIN MA RIQUAIR |
| FELIPE HAT EINE AUCH FÜR MONARCHEN UNGEWÖHNLICH LANGE TITELLISTE. | FELIPPE HAT EINE AUCH FÜR MONACHEN UNGEWÖHNLICH LANGE TITELLISTE |
| ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET. | ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET M |
| WAS SOLLS, ICH BIN BEREIT. | WAS SOLL'S ICH BIN BEREIT |
| DAS INTERNET BESTEHT AUS VIELEN COMPUTERN, DIE MITEINANDER VERBUNDEN SIND. | DAS INTERNET BESTEHT AUS VIELEN COMPUTERN DIE MITEINANDER VERBUNDEN SIND |
| DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM. | DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM |
| DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND. | DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND |
| SIE WAR DIE COUSINE VON CARL MARIA VON WEBER. | SIE WAR DIE COUSINE VON KARL-MARIA VON WEBER |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset mozilla-foundation/common_voice_6_0 --config de --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-german,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {G}erman},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german}},
year={2021}
}
``` | 5,708 | [
[
-0.036224365234375,
-0.043060302734375,
0.0236053466796875,
0.00968170166015625,
-0.0157012939453125,
-0.0205230712890625,
-0.0266571044921875,
-0.04046630859375,
0.0228424072265625,
0.022430419921875,
-0.051666259765625,
-0.043548583984375,
-0.0311737060546875,... |
wavymulder/lomo-diffusion | 2023-02-17T01:21:58.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"safetensors",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | wavymulder | null | null | wavymulder/lomo-diffusion | 22 | 1,570 | diffusers | 2023-02-04T19:41:30 | ---
language:
- en
thumbnail: "https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page1.jpg"
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- diffusers
inference: true
---
**Lomo Diffusion**

[*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.ckpt) - - - [*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.safetensors)
This is a dreambooth model trained on a diverse set of stylized photographs.
Use the activation token **lomo style** in your prompt (I recommend at the start)
This model is inspired by the Lomography movement, which embraces the imperfections and style of old LOMO cameras. The model excels at producing bright saturated colors as well as a variety of film artifacts that add to the illusion of a real photograph.
When using most models, I typically use **blur haze** in my negative prompt. I encourage you to experiment and see what works well for you.
Trained from 1.5 with VAE.
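A minimal `diffusers` sketch (assuming the repository loads with the standard `StableDiffusionPipeline`, as its tags indicate; the prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wavymulder/lomo-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "lomo style photograph of a woman at the beach",  # activation token at the start
    negative_prompt="blur haze",
    num_inference_steps=30,
).images[0]
image.save("lomo.png")
```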
Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/paramets_for_samples.txt)
You can [see a non-cherrypicked batch of 49 images here.](https://i.imgur.com/cfIj3iq.jpg)
And you can [see here a direct comparison between Analog Style and Lomo Style.](https://i.imgur.com/ugdFzPI.jpg)

| 1,696 | [
[
-0.048065185546875,
-0.08001708984375,
0.046966552734375,
0.015167236328125,
-0.0455322265625,
-0.0139617919921875,
0.024444580078125,
-0.047332763671875,
0.050933837890625,
0.057281494140625,
-0.036224365234375,
-0.046295166015625,
-0.042083740234375,
-0.01... |
h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2 | 2023-07-13T03:12:11.000Z | [
"transformers",
"pytorch",
"RefinedWeb",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"custom_code",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | h2oai | null | null | h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2 | 17 | 1,569 | transformers | 2023-06-23T07:35:02 | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install bitsandbytes==0.39.0
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import pipeline, BitsAndBytesConfig, AutoTokenizer
model_kwargs = {}
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
model_kwargs["quantization_config"] = quantization_config
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
model_kwargs=model_kwargs,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
quantization_config=quantization_config
).eval()
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline yourself from the loaded model and tokenizer, taking care of the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
quantization_config=quantization_config
).eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 8192)
(h): ModuleList(
(0-59): 60 x DecoderLayer(
(ln_attn): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
(ln_mlp): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=8192, out_features=9216, bias=False)
(dense): Linear(in_features=8192, out_features=8192, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=8192, out_features=32768, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=32768, out_features=8192, bias=False)
)
)
)
(ln_f): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=8192, out_features=65024, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. | 8,574 | [
[
-0.016387939453125,
-0.05718994140625,
0.0240478515625,
0.01219940185546875,
-0.02349853515625,
-0.005985260009765625,
-0.013214111328125,
-0.01739501953125,
0.00447845458984375,
0.022796630859375,
-0.03228759765625,
-0.04144287109375,
-0.049835205078125,
-0... |
luodian/OTTER-MPT1B-RPJama-Init | 2023-07-19T02:19:09.000Z | [
"transformers",
"pytorch",
"otter",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | luodian | null | null | luodian/OTTER-MPT1B-RPJama-Init | 1 | 1,569 | transformers | 2023-07-18T14:14:50 | ---
license: mit
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/MKmyP9wH/new-banner.png" width="80%" height="80%">
</p>
<div>
<div align="center">
<a href='https://brianboli.com/' target='_blank'>Bo Li*<sup>1</sup></a> 
<a href='https://zhangyuanhan-ai.github.io/' target='_blank'>Yuanhan Zhang*<sup>,1</sup></a> 
<a href='https://cliangyu.com/' target='_blank'>Liangyu Chen*<sup>,1</sup></a> 
<a href='https://king159.github.io/' target='_blank'>Jinghao Wang*<sup>,1</sup></a> 
<a href='https://pufanyi.github.io/' target='_blank'>Fanyi Pu*<sup>,1</sup></a> 
</br>
<a href='https://jingkang50.github.io/' target='_blank'>Jingkang Yang<sup>1</sup></a> 
<a href='https://chunyuan.li/' target='_blank'>Chunyuan Li<sup>2</sup></a> 
<a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu<sup>1</sup></a>
</div>
<div>
<div align="center">
<sup>1</sup>S-Lab, Nanyang Technological University 
<sup>2</sup>Microsoft Research, Redmond
</div>
This weight is for **initializing training for Otter-MPT1B**.
It's directly converted from [openflamingo/OpenFlamingo-3B-vitl-mpt1b-langinstruct](https://huggingface.co/openflamingo/OpenFlamingo-3B-vitl-mpt1b-langinstruct).
You can load and try this model using
```python
import transformers
# OtterForConditionalGeneration is provided by the Otter codebase: https://github.com/Luodian/Otter
model = OtterForConditionalGeneration.from_pretrained("luodian/OTTER-MPT1B-RPJama-Init", device_map="sequential")
model.text_tokenizer.padding_side = "left"
tokenizer = model.text_tokenizer
image_processor = transformers.CLIPImageProcessor()
model.eval()
```
You can also start training Otter via the following command:
```bash
python -m accelerate.commands.launch --config_file=./pipeline/accelerate_configs/accelerate_config_fsdp.yaml \
pipeline/train/instruction_following.py \
--pretrained_model_name_or_path=luodian/OTTER-MPT1B-RPJama-Init \
--mimicit_path=/data/azure_storage/otter/mimicit/xx/xx_instructions.json \
--images_path=/data/azure_storage/otter/mimicit/xx/xx.json \
--batch_size=4 --num_epochs=1 --report_to_wandb \
--wandb_entity=ntu-slab \
--external_save_dir=/data/bli/checkpoints \
--save_hf_model \
--run_name=OTTER-MPT1B \
--wandb_project=OTTER-MPT1B \
--workers=4 \
--lr_scheduler=cosine \
--learning_rate=1e-5 \
--warmup_steps_ratio=0.01
```
If you wish to initialize video instruction tuning, you should add
```json
"max_num_frames": 128
```
to `config.json` inside the folder.
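A small sketch of that edit (the local checkpoint folder path is illustrative):
```python
import json
from pathlib import Path

config_path = Path("OTTER-MPT1B-RPJama-Init/config.json")  # illustrative local path
config = json.loads(config_path.read_text())
config["max_num_frames"] = 128  # enable video instruction tuning
config_path.write_text(json.dumps(config, indent=2))
```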
Leave us a message if you run into any errors or have questions. You can follow the [Otter code](https://github.com/Luodian/Otter) (see the training section) to further tune your model on top of it.
[
-0.0382080078125,
-0.0260009765625,
0.001983642578125,
0.02777099609375,
-0.021087646484375,
-0.005390167236328125,
0.01114654541015625,
-0.031463623046875,
0.0222930908203125,
-0.0028095245361328125,
-0.05364990234375,
-0.0223846435546875,
-0.041046142578125,
... |
facebook/dino-vitb8 | 2023-05-22T07:04:47.000Z | [
"transformers",
"pytorch",
"vit",
"feature-extraction",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2104.14294",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | facebook | null | null | facebook/dino-vitb8 | 8 | 1,567 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (base-sized model, patch size 8) trained using DINO
Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino).
Disclaimer: The team releasing DINO did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = ViTImageProcessor.from_pretrained('facebook/dino-vitb8')
model = ViTModel.from_pretrained('facebook/dino-vitb8')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
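For the linear-probing setup described above, a classifier can be placed on the [CLS] token of the final hidden state. A minimal sketch continuing the snippet (the 1000-class head is illustrative):
```python
import torch.nn as nn

cls_embedding = last_hidden_states[:, 0]                 # (batch, hidden) [CLS] representation
linear_head = nn.Linear(cls_embedding.shape[-1], 1000)   # illustrative 1000-class linear probe
logits = linear_head(cls_embedding)
```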
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-14294,
author = {Mathilde Caron and
Hugo Touvron and
Ishan Misra and
Herv{\'{e}} J{\'{e}}gou and
Julien Mairal and
Piotr Bojanowski and
Armand Joulin},
title = {Emerging Properties in Self-Supervised Vision Transformers},
journal = {CoRR},
volume = {abs/2104.14294},
year = {2021},
url = {https://arxiv.org/abs/2104.14294},
archivePrefix = {arXiv},
eprint = {2104.14294},
timestamp = {Tue, 04 May 2021 15:12:43 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 3,254 | [
[
-0.0380859375,
-0.0185546875,
0.00858306884765625,
-0.007343292236328125,
-0.0303192138671875,
-0.0009360313415527344,
0.00730133056640625,
-0.0384521484375,
0.0252227783203125,
0.036895751953125,
-0.031402587890625,
-0.0171051025390625,
-0.04547119140625,
-... |
eenzeenee/t5-base-korean-summarization | 2023-05-21T03:49:27.000Z | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"T5",
"summarization",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | eenzeenee | null | null | eenzeenee/t5-base-korean-summarization | 8 | 1,567 | transformers | 2023-01-14T13:28:32 | ---
pipeline_tag: summarization
language:
- ko
tags:
- T5
---
# t5-base-korean-summarization
This is a [T5](https://huggingface.co/docs/transformers/model_doc/t5) model for Korean text summarization.
- Finetuned from the ['paust/pko-t5-base'](https://huggingface.co/paust/pko-t5-base) model.
- Finetuned with 3 datasets, listed below:
- [Korean Paper Summarization Dataset(논문자료 요약)](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=90)
- [Korean Book Summarization Dataset(도서자료 요약)](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=93)
- [Korean Summary statement and Report Generation Dataset(요약문 및 레포트 생성 데이터)](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=90)
# Usage (HuggingFace Transformers)
```python
import nltk
nltk.download('punkt')
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained('eenzeenee/t5-base-korean-summarization')
tokenizer = AutoTokenizer.from_pretrained('eenzeenee/t5-base-korean-summarization')
prefix = "summarize: "
sample = """
안녕하세요? 우리 (2학년)/(이 학년) 친구들 우리 친구들 학교에 가서 진짜 (2학년)/(이 학년) 이 되고 싶었는데 학교에 못 가고 있어서 답답하죠?
그래도 우리 친구들의 안전과 건강이 최우선이니까요 오늘부터 선생님이랑 매일 매일 국어 여행을 떠나보도록 해요.
어/ 시간이 벌써 이렇게 됐나요? 늦었어요. 늦었어요. 빨리 국어 여행을 떠나야 돼요.
그런데 어/ 국어여행을 떠나기 전에 우리가 준비물을 챙겨야 되겠죠? 국어 여행을 떠날 준비물, 교안을 어떻게 받을 수 있는지 선생님이 설명을 해줄게요.
(EBS)/(이비에스) 초등을 검색해서 들어가면요 첫화면이 이렇게 나와요.
자/ 그러면요 여기 (X)/(엑스) 눌러주(고요)/(구요). 저기 (동그라미)/(똥그라미) (EBS)/(이비에스) (2주)/(이 주) 라이브특강이라고 되어있죠?
거기를 바로 가기를 누릅니다. 자/ (누르면요)/(눌르면요). 어떻게 되냐? b/ 밑으로 내려요 내려요 내려요 쭉 내려요.
우리 몇 학년이죠? 아/ (2학년)/(이 학년) 이죠 (2학년)/(이 학년)의 무슨 과목? 국어.
이번주는 (1주)/(일 주) 차니까요 여기 교안. 다음주는 여기서 다운을 받으면 돼요.
이 교안을 클릭을 하면, 짜잔/. 이렇게 교재가 나옵니다 .이 교안을 (다운)/(따운)받아서 우리 국어여행을 떠날 수가 있어요.
그럼 우리 진짜로 국어 여행을 한번 떠나보도록 해요? 국어여행 출발. 자/ (1단원)/(일 단원) 제목이 뭔가요? 한번 찾아봐요.
시를 즐겨요 에요. 그냥 시를 읽어요 가 아니에요. 시를 즐겨야 돼요 즐겨야 돼. 어떻게 즐길까? 일단은 내내 시를 즐기는 방법에 대해서 공부를 할 건데요.
그럼 오늘은요 어떻게 즐길까요? 오늘 공부할 내용은요 시를 여러 가지 방법으로 읽기를 공부할겁니다.
어떻게 여러가지 방법으로 읽을까 우리 공부해 보도록 해요. 오늘의 시 나와라 짜잔/! 시가 나왔습니다 시의 제목이 뭔가요? 다툰 날이에요 다툰 날.
누구랑 다퉜나 동생이랑 다퉜나 언니랑 친구랑? 누구랑 다퉜는지 선생님이 시를 읽어 줄 테니까 한번 생각을 해보도록 해요."""
inputs = [prefix + sample]
inputs = tokenizer(inputs, max_length=512, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=3, do_sample=True, min_length=10, max_length=64)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
result = nltk.sent_tokenize(decoded_output.strip())[0]
print('RESULT >>', result)
RESULT >> 국어 여행을 떠나기 전에 국어 여행을 떠날 준비물과 교안을 어떻게 받을 수 있는지 선생님이 설명해 준다.
```
# Evaluation Results
- Korean Paper Summarization Dataset(논문자료 요약)
```
ROUGE-2-R 0.09868624890432466
ROUGE-2-P 0.9666714545849712
ROUGE-2-F 0.17250881441169427
```
- Korean Book Summarization Dataset(도서자료 요약)
```
ROUGE-2-R 0.1575686156943213
ROUGE-2-P 0.9718318136896944
ROUGE-2-F 0.26548116834852586
```
- Korean Summary statement and Report Generation Dataset(요약문 및 레포트 생성 데이터)
```
ROUGE-2-R 0.0987891733555808
ROUGE-2-P 0.9276946867981899
ROUGE-2-F 0.17726493110448185
```
# Training
The model was trained with the parameters:
- training arguments
```
Seq2SeqTrainingArguments(
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
auto_find_batch_size=False,
weight_decay=0.01,
learning_rate=4e-05,
lr_scheduler_type=linear,
num_train_epochs=3,
fp16=True)
```
# Model Architecture
```
T5ForConditionalGeneration(
(shared): Embedding(50358, 768)
(encoder): T5Stack(
(embed_tokens): Embedding(50358, 768)
(block): ModuleList(
(0): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
(relative_attention_bias): Embedding(32, 12)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=768, out_features=2048, bias=False)
(wi_1): Linear(in_features=768, out_features=2048, bias=False)
(wo): Linear(in_features=2048, out_features=768, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1~11): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=768, out_features=2048, bias=False)
(wi_1): Linear(in_features=768, out_features=2048, bias=False)
(wo): Linear(in_features=2048, out_features=768, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(final_layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(decoder): T5Stack(
(embed_tokens): Embedding(50358, 768)
(block): ModuleList(
(0): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
(relative_attention_bias): Embedding(32, 12)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=768, out_features=2048, bias=False)
(wi_1): Linear(in_features=768, out_features=2048, bias=False)
(wo): Linear(in_features=2048, out_features=768, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1~11): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=768, out_features=2048, bias=False)
(wi_1): Linear(in_features=768, out_features=2048, bias=False)
(wo): Linear(in_features=2048, out_features=768, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(final_layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(lm_head): Linear(in_features=768, out_features=50358, bias=False)
)
```
## Citation
- Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." J. Mach. Learn. Res. 21.140 (2020): 1-67.
| 11,221 | [
[
-0.0335693359375,
-0.041595458984375,
0.01447296142578125,
0.0294647216796875,
-0.021820068359375,
0.003902435302734375,
0.0031070709228515625,
-0.0186004638671875,
0.044769287109375,
0.0196685791015625,
-0.03863525390625,
-0.05303955078125,
-0.044525146484375,
... |
google/umt5-base | 2023-07-03T05:37:52.000Z | [
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
... | text2text-generation | google | null | null | google/umt5-base | 9 | 1,567 | transformers | 2023-07-02T01:49:59 | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
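A minimal load-and-generate sketch (illustrative only; outputs are not meaningful until the checkpoint is fine-tuned):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/umt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-base")

# span-corruption style input with a sentinel token, as in T5-family pretraining
inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```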
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.* | 3,348 | [
[
-0.049163818359375,
-0.01233673095703125,
0.0155487060546875,
0.0312347412109375,
-0.0162811279296875,
0.025665283203125,
-0.04022216796875,
-0.0210723876953125,
0.002597808837890625,
0.03533935546875,
-0.029266357421875,
-0.045806884765625,
-0.036956787109375,
... |
speechbrain/sepformer-dns4-16k-enhancement | 2023-08-06T10:01:17.000Z | [
"speechbrain",
"audio-to-audio",
"Speech Enhancement",
"DNS-4",
"SepFormer",
"Transformer",
"pytorch",
"Microsoft DNS Challenge",
"Deep Noise Suppression Challenge – ICASSP 2022",
"en",
"de",
"ru",
"fr",
"it",
"es",
"dataset:DNS-4",
"arxiv:2010.13154",
"arxiv:2106.04624",
"licens... | audio-to-audio | speechbrain | null | null | speechbrain/sepformer-dns4-16k-enhancement | 10 | 1,565 | speechbrain | 2023-08-06T07:52:45 | ---
language:
- "en"
- "de"
- "ru"
- "fr"
- "it"
- "es"
thumbnail:
tags:
- audio-to-audio
- Speech Enhancement
- DNS-4
- SepFormer
- Transformer
- pytorch
- speechbrain
- Microsoft DNS Challenge
- Deep Noise Suppression Challenge – ICASSP 2022
license: "apache-2.0"
datasets:
- DNS-4
metrics:
- SI-SNR
- PESQ
- SIG
- BAK
- OVRL
model-index:
- name: sepformer-dns4-16k-enhancement
results:
- task:
name: Speech Enhancement
type: speech-enhancement
dataset:
name: DNS-4
type: https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/
split: baseline-dev-set
args:
language: de
metrics:
- name: DNSMOS SIG
type: sig
value: '2.999'
- name: DNSMOS BAK
type: bak
value: '3.076'
- name: DNSMOS OVRL
type: ovrl
value: '2.437'
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on Microsoft DNS-4 (Deep Noise Suppression Challenge 4 – ICASSP 2022) for speech enhancement (16k sampling frequency)
This repository provides all the necessary tools to perform speech enhancement (denoising) with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain. The model is trained on 1300 hours of the Microsoft DNS-4 dataset at a 16 kHz sampling frequency. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). Evaluation on the DNS-4 2022 baseline dev set using DNSMOS:
| Release | SIG | BAK | OVRL |
|:-------------:|:--------------:|:--------------:|:--------------:|
| 08-01-23 | 2.999 | 3.076 | 2.437 |
DNSMOS (deep noise suppression mean opinion score) is a non-intrusive evaluation metric. It computes three scores: SIG (speech quality), BAK (background noise quality), and OVRL (overall quality), each on a scale of 1 to 5, with 5 being the best quality.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Perform speech enhancement on your own audio file
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-dns4-16k-enhancement", savedir='pretrained_models/sepformer-dns4-16k-enhancement')
# for custom file, change path
est_sources = model.separate_file(path='speechbrain/sepformer-dns4-16k-enhancement/example_dns4-16k.wav')
torchaudio.save("enhanced_dns4-16k.wav", est_sources[:, :, 0].detach().cpu(), 16000)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
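For example, the same call as above with the extra argument:
```python
model = separator.from_hparams(
    source="speechbrain/sepformer-dns4-16k-enhancement",
    savedir="pretrained_models/sepformer-dns4-16k-enhancement",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```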
You can find our training results (models, logs, etc) [here](https://www.dropbox.com/sh/02c3wesc65402f6/AAApoxBApft-JwqHK-bddedBa?dl=0).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
```
### Referencing ICASSP 2022 Deep Noise Suppression Challenge
```bibtex
@inproceedings{dubey2022icassp,
title={ICASSP 2022 Deep Noise Suppression Challenge},
author={Dubey, Harishchandra and Gopal, Vishak and Cutler, Ross and Matusevych, Sergiy and Braun, Sebastian and Eskimez, Emre Sefik and Thakker, Manthan and Yoshioka, Takuya and Gamper, Hannes and Aichner, Robert},
booktitle={ICASSP},
year={2022}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ | 4,779 | [
[
-0.0526123046875,
-0.04901123046875,
0.0009899139404296875,
0.00909423828125,
-0.02008056640625,
0.01209259033203125,
-0.0291748046875,
-0.04266357421875,
0.0214080810546875,
0.01200103759765625,
-0.047454833984375,
-0.048248291015625,
-0.043731689453125,
-0... |
Yntec/Chik2 | 2023-07-28T06:48:11.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"xxxholic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/Chik2 | 0 | 1,564 | diffusers | 2023-07-28T06:04:50 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- xxxholic
---
Why? This time I'm not going to explain it to you, I'm going to show you:

Any questions?
# Chikmix2
Original page:
https://civitai.com/api/download/models/20663 | 464 | [
[
-0.0711669921875,
-0.0440673828125,
0.05108642578125,
0.0198211669921875,
-0.06573486328125,
-0.0032787322998046875,
0.01427459716796875,
-0.041290283203125,
0.048858642578125,
0.02886962890625,
-0.07073974609375,
-0.02410888671875,
-0.04327392578125,
0.0081... |
AIARTCHAN/AbyssHellVer3 | 2023-03-14T02:04:11.000Z | [
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | AIARTCHAN | null | null | AIARTCHAN/AbyssHellVer3 | 20 | 1,562 | diffusers | 2023-02-24T06:52:07 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
---
# AbyssHellVer3 (Abyss Hell Hero variation)
[Original post](https://arca.live/b/aiart/70498939)
[huggingface](https://huggingface.co/KMAZ/AbyssHell-AbyssMaple)
# Download
- [original 5.98GB](https://huggingface.co/KMAZ/TestSamples/resolve/main/AbyssHellVer3.ckpt)
- [safetensors 4.27GB](https://huggingface.co/AIARTCHAN/AbyssHellVer3/resolve/main/AbyssHellVer3-no-ema.safetensors)
- [safetensors fp16 2.13GB](https://huggingface.co/AIARTCHAN/AbyssHellVer3/resolve/main/AbyssHellVer3-fp16.safetensors)
An AbyssHellHero variation merged from AbyssOrangeMix2 + JK Style 0.27 + Helltaker 0.2 + HeroAcademia 0.2.




| 1,046 | [
[
-0.048980712890625,
-0.0206298828125,
0.0255889892578125,
0.046630859375,
-0.039520263671875,
-0.00540924072265625,
0.01334381103515625,
-0.04449462890625,
0.04779052734375,
0.038787841796875,
-0.052703857421875,
-0.047607421875,
-0.032318115234375,
0.032257... |
TheBloke/Xwin-MLewd-13B-v0.2-GPTQ | 2023-10-15T10:14:09.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Xwin-MLewd-13B-v0.2-GPTQ | 20 | 1,562 | transformers | 2023-10-15T09:07:12 | ---
base_model: Undi95/Xwin-MLewd-13B-V0.2
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Xwin MLewd 13B v0.2
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- not-for-all-audiences
- nsfw
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Xwin MLewd 13B v0.2 - GPTQ
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Xwin MLewd 13B v0.2](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Undi's Xwin MLewd 13B v0.2](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Xwin MLewd 13B v0.2](https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2).
<!-- licensing end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Xwin-MLewd-13B-v0.2-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Xwin-MLewd-13B-v0.2-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Xwin-MLewd-13B-v0.2-GPTQ`:
```shell
mkdir Xwin-MLewd-13B-v0.2-GPTQ
huggingface-cli download TheBloke/Xwin-MLewd-13B-v0.2-GPTQ --local-dir Xwin-MLewd-13B-v0.2-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Xwin-MLewd-13B-v0.2-GPTQ
huggingface-cli download TheBloke/Xwin-MLewd-13B-v0.2-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Xwin-MLewd-13B-v0.2-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Xwin-MLewd-13B-v0.2-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Xwin-MLewd-13B-v0.2-GPTQ --local-dir Xwin-MLewd-13B-v0.2-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Xwin-MLewd-13B-v0.2-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Xwin-MLewd-13B-v0.2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Xwin-MLewd-13B-v0.2-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Xwin-MLewd-13B-v0.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Xwin-MLewd-13B-v0.2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Xwin-MLewd-13B-v0.2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Undi's Xwin MLewd 13B v0.2

THIS MODEL IS MADE FOR LEWD
SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED
This is MLewd merged with [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2)
<!-- description start -->
## Description
This repo contains fp16 files of Xwin-MLewd-13B-V0.2, a very hot and lewd model based on Xwin 0.2 13B.
<!-- description end -->
<!-- description start -->
## Models and loras used
- Undi95/ReMM-S-Light (base/private)
- Undi95/CreativeEngine
- Brouz/Slerpeno
- The-Face-Of-Goonery/Huginn-v3-13b
- zattio770/120-Days-of-LORA-v2-13B
- PygmalionAI/pygmalion-2-13b
- Undi95/StoryTelling
- TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- nRuaif/Kimiko-v2-13B
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Xwin-LM/Xwin-LM-13B-V0.2
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## The secret sauce
```
slices:
- sources:
- model: Xwin-LM/Xwin-LM-13B-V0.2
layer_range: [0, 40]
- model: Undi95/MLewd-v2.4-13B
layer_range: [0, 40]
merge_method: slerp
base_model: Xwin-LM/Xwin-LM-13B-V0.2
parameters:
t:
- filter: lm_head
value: [0.55]
- filter: embed_tokens
value: [0.7]
- filter: self_attn
value: [0.65, 0.35]
- filter: mlp
value: [0.35, 0.65]
- filter: layernorm
value: [0.4, 0.6]
- filter: modelnorm
value: [0.6]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
Special thanks to Sushi and Shena ♥
If you want to support me, you can [here](https://ko-fi.com/undiai).
| 21,682 | [
[
-0.040924072265625,
-0.05157470703125,
0.01068115234375,
0.022613525390625,
-0.0217437744140625,
-0.01366424560546875,
0.005321502685546875,
-0.04827880859375,
0.0136260986328125,
0.031280517578125,
-0.052490234375,
-0.038238525390625,
-0.02838134765625,
-0.... |
facebook/regnet-y-040 | 2023-03-26T11:21:20.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | facebook | null | null | facebook/regnet-y-040 | 1 | 1,561 | transformers | 2022-03-18T15:36:08 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). | 2,318 | [
[
-0.043426513671875,
-0.027587890625,
-0.01406097412109375,
0.009490966796875,
-0.01090240478515625,
-0.0259857177734375,
0.0037288665771484375,
-0.0439453125,
0.037567138671875,
0.0236358642578125,
-0.04638671875,
-0.02996826171875,
-0.033111572265625,
0.000... |
uw-madison/nystromformer-512 | 2022-01-11T14:13:39.000Z | [
"transformers",
"pytorch",
"nystromformer",
"fill-mask",
"arxiv:2102.03902",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | uw-madison | null | null | uw-madison/nystromformer-512 | 1 | 1,558 | transformers | 2022-03-02T23:29:05 | # Nyströmformer
Nyströmformer model for masked language modeling (MLM) pretrained on BookCorpus and English Wikipedia for sequence length 512.
## About Nyströmformer
The Nyströmformer model was proposed in [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences — a topic being actively studied in the community. To address this limitation, we propose Nyströmformer — a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL.
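To make the core idea concrete, here is a toy sketch of landmark-based (Nyström-style) attention. This is an illustrative assumption, not the model's exact implementation: the paper pairs segment-mean landmarks with an iterative pseudo-inverse approximation, whereas this sketch uses an exact pseudo-inverse and assumes the sequence length is divisible by the number of landmarks.
```python
import torch

def nystrom_attention(q, k, v, num_landmarks=32):
    # q, k, v: (batch, seq_len, dim); seq_len must be divisible by num_landmarks here
    b, n, d = q.shape
    scale = d ** -0.5
    # Landmarks: segment means of queries and keys
    q_l = q.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)
    k_l = k.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)
    # Three small kernels stand in for the full (n x n) attention matrix
    kernel1 = torch.softmax(q @ k_l.transpose(-1, -2) * scale, dim=-1)    # (b, n, m)
    kernel2 = torch.softmax(q_l @ k_l.transpose(-1, -2) * scale, dim=-1)  # (b, m, m)
    kernel3 = torch.softmax(q_l @ k.transpose(-1, -2) * scale, dim=-1)    # (b, m, n)
    # Exact pinv for simplicity; the paper approximates it iteratively
    return kernel1 @ torch.linalg.pinv(kernel2) @ (kernel3 @ v)
```
Each kernel is at most n x m, so memory and compute scale as O(n) in sequence length when the number of landmarks m is fixed.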
## Usage
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uw-madison/nystromformer-512')
>>> unmasker("Paris is the [MASK] of France.")
[{'score': 0.829957902431488,
'token': 1030,
'token_str': 'capital',
'sequence': 'paris is the capital of france.'},
{'score': 0.022157637402415276,
'token': 16081,
'token_str': 'birthplace',
'sequence': 'paris is the birthplace of france.'},
{'score': 0.01904447190463543,
'token': 197,
'token_str': 'name',
'sequence': 'paris is the name of france.'},
{'score': 0.017583081498742104,
'token': 1107,
'token_str': 'kingdom',
'sequence': 'paris is the kingdom of france.'},
{'score': 0.005948934704065323,
'token': 148,
'token_str': 'city',
'sequence': 'paris is the city of france.'}]
``` | 2,531 | [
[
-0.030426025390625,
-0.032501220703125,
0.0298004150390625,
0.0296478271484375,
-0.003543853759765625,
0.01025390625,
-0.0096282958984375,
-0.007556915283203125,
0.033050537109375,
0.0264739990234375,
-0.05517578125,
-0.0262298583984375,
-0.057037353515625,
... |
teomotun/finetuning-sentiment-model-for-c2er | 2022-10-21T05:15:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | teomotun | null | null | teomotun/finetuning-sentiment-model-for-c2er | 0 | 1,558 | transformers | 2022-10-20T04:31:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-for-c2er
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-for-c2er
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1404
- Accuracy: 0.9523
- F1: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
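A minimal sketch, not the author's actual training script: it maps the listed hyperparameters onto `transformers.TrainingArguments` (the output directory name is an assumption; the listed Adam settings are the Trainer defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-for-c2er",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default Trainer optimizer
)
```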
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| 1,200 | [
[
-0.043670654296875,
-0.049560546875,
0.0166473388671875,
0.0303192138671875,
-0.038787841796875,
-0.02398681640625,
-0.0283355712890625,
-0.00513458251953125,
0.0059967041015625,
0.0126953125,
-0.051422119140625,
-0.05657958984375,
-0.06304931640625,
-0.0067... |
cyberagent/open-calm-medium | 2023-05-18T01:10:54.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:mc4",
"license:cc-by-sa-4.0",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | cyberagent | null | null | cyberagent/open-calm-medium | 4 | 1,558 | transformers | 2023-05-15T06:44:47 | ---
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- mc4
language:
- ja
tags:
- japanese
- causal-lm
inference: false
---
# OpenCALM-Medium
## Model Description
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-medium", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-medium")
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=64,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
|Model|Params|Layers|Dim|Heads|Dev ppl|
|:---:|:---: |:---:|:---:|:---:|:---:|
|[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7|
|[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8|
|[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3|
|[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3|
|[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7|
|[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2|
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **Model type**: Transformer-based Language Model
* **Language**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc.
* Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/
* Example (ja): 本モデルは、株式会社サイバーエージェントによるOpenCALM-XXをファインチューニングしたものです。元のモデルはCC BY-SA 4.0ライセンスのもとで公開されており、本モデルも同じくCC BY-SA 4.0ライセンスで公開します。詳しくはこちらをご覧ください: https://creativecommons.org/licenses/by-sa/4.0/
## Training Dataset
* Wikipedia (ja)
* Common Crawl (ja)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```bibtex
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
``` | 3,378 | [
[
-0.02978515625,
-0.05487060546875,
0.0193328857421875,
0.00716400146484375,
-0.01222991943359375,
-0.0225677490234375,
-0.0321044921875,
-0.0321044921875,
0.015899658203125,
0.0389404296875,
-0.0377197265625,
-0.05633544921875,
-0.035919189453125,
0.00399398... |
Minej/bert-base-personality | 2023-07-13T13:11:50.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:1810.04805",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Minej | null | null | Minej/bert-base-personality | 3 | 1,558 | transformers | 2023-06-06T19:17:08 | ---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-classification
---
## How to Get Started with the Model
To use the model through Hosted inference API, follow the code snippet provided below:
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

def personality_detection(text):
    tokenizer = BertTokenizer.from_pretrained("Minej/bert-base-personality")
    model = BertForSequenceClassification.from_pretrained("Minej/bert-base-personality")
    inputs = tokenizer(text, truncation=True, padding=True, return_tensors="pt")
    outputs = model(**inputs)
    # Apply a sigmoid so each trait score lies in the 0-1 range described below
    predictions = torch.sigmoid(outputs.logits).squeeze().detach().numpy()

    label_names = ['Extroversion', 'Neuroticism', 'Agreeableness', 'Conscientiousness', 'Openness']
    result = {label_names[i]: predictions[i] for i in range(len(label_names))}
    return result
```
#### Result Format
The personality_detection function returns a dictionary containing the predicted personality traits based on the given input text.
The dictionary contains the following personality traits with their corresponding predicted values:
- **Extroversion:** A value between 0 and 1 representing the predicted extroversion trait.
- **Neuroticism:** A value between 0 and 1 representing the predicted neuroticism trait.
- **Agreeableness:** A value between 0 and 1 representing the predicted agreeableness trait.
- **Conscientiousness:** A value between 0 and 1 representing the predicted conscientiousness trait.
- **Openness:** A value between 0 and 1 representing the predicted openness trait.
```python
text_input = "I am feeling excited about the upcoming event."
personality_prediction = personality_detection(text_input)
print(personality_prediction)
```
###### Output:
```python
{
"Extroversion": 0.535,
"Neuroticism": 0.576,
"Agreeableness": 0.399,
"Conscientiousness": 0.253,
"Openness": 0.563
}
```
Note: The values in the example output are just placeholders and may not reflect the actual predictions.
You can modify the example code and the result format to match your specific use case and desired output format.
### Model Description
Transfer Learning for Big Five Personality Prediction
In machine learning, training accurate models can be challenging when labeled data is limited. Transfer learning offers a solution by leveraging pre-existing labeled data from a similar task or domain. By transferring knowledge learned from one task to another, we can overcome data scarcity and train more effective models.
In this project, we used transfer learning with the BERT BASE UNCASED model to predict Big Five personality traits. The model was fine-tuned on a curated dataset for personality traits, learning patterns between input text and personality characteristics. By applying transfer learning, we improved the accuracy of personality trait predictions.
By leveraging transfer learning and fine-tuning BERT BASE UNCASED, we accurately predict an individual's Big Five personality traits based on their input text. This approach addresses the challenges of limited labeled data in personality prediction, providing insights into individuals' personalities.
This project showcases the power of transfer learning in machine learning and highlights the effectiveness of BERT BASE UNCASED for predicting Big Five personality traits.
- **Model type:** BERT BASE UNCASED
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** https://huggingface.co/bert-base-uncased
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
The personality prediction model can be used directly by individuals who are interested in gaining insights into their own personality traits based on their input text. Users can input text and receive predictions for the Big Five personality traits.
### Downstream Use
This model is not intended for downstream use or fine-tuning for specific tasks. It is designed as a standalone personality prediction model.
### Out-of-Scope Use
This model is not suitable for uses beyond personality prediction. It should not be used for making critical decisions or judgments about individuals in areas such as employment, education, or legal matters.
## Bias, Risks, and Limitations
The personality prediction model, like any machine learning model, has certain limitations and potential biases that should be taken into account:
- **Limited Context:** The model makes predictions based on input text alone and may not capture the full context of an individual's personality. It is important to consider that personality traits are influenced by various factors beyond textual expression.
- **Generalization:** The model predicts personality traits based on patterns learned from a specific dataset. Its performance may vary when applied to individuals from different demographic or cultural backgrounds not well represented in the training data.
- **Ethical Considerations:** Personality prediction models should be used responsibly, with an understanding that personality traits do not determine a person's worth or abilities. It is important to avoid making unfair judgments or discriminating against individuals based on predicted personality traits.
- **Privacy Concerns:** The model relies on user-provided input text, which may contain sensitive or personal information. Users should exercise caution when sharing personal details and ensure the security of their data.
- **False Positives/Negatives:** The model's predictions may not always align perfectly with an individual's actual personality traits. It is possible for the model to generate false positives (predicting a trait that is not present) or false negatives (missing a trait that is present).
### Recommendations
To mitigate risks and limitations associated with personality prediction models, the following recommendations are suggested:
- **Awareness and Education:** Users should be informed about the limitations and potential biases of the model. Promote understanding that personality traits are complex and cannot be fully captured by a single model or text analysis.
- **Avoid Stereotyping and Discrimination:** Users should be cautious about making judgments or decisions solely based on predicted personality traits. Personality predictions should not be used to discriminate against individuals or perpetuate stereotypes.
- **Interpret with Context:** Interpret the model's predictions in the appropriate context and consider additional information about an individual beyond their input text.
- **Data Privacy and Security:** Ensure that user data is handled securely and with respect to privacy regulations. Users should be aware of the information they provide and exercise caution when sharing personal details.
- **Promote Ethical Use:** Encourage responsible use of personality prediction models and discourage misuse or harmful applications.
It is important to note that the above recommendations are general guidelines, and further context-specific recommendations should be developed based on the particular use case and ethical considerations.
## How to Download the Model
If you would like to download the model files and use them instead of the Hosted inference API, then you can follow the code snippet provided below:
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch
# Initialization of the model values
model = BertForSequenceClassification.from_pretrained(".", num_labels=5)
tokenizer = BertTokenizer.from_pretrained('.', do_lower_case=True)
model.config.label2id = {
"Extroversion": 0,
"Neuroticism": 1,
"Agreeableness": 2,
"Conscientiousness": 3,
"Openness": 4,
}
model.config.id2label = {
"0": "Extroversion",
"1": "Neuroticism",
"2": "Agreeableness",
"3": "Conscientiousness",
"4": "Openness",
}
def personality_detection(model_input: str) -> dict:
'''
Performs personality prediction on the given input text
Args:
model_input (str): The text conversation
Returns:
        dict: A dictionary where keys are personality trait names and values are their predicted scores
'''
if len(model_input) == 0:
ret = {
"Extroversion": float(0),
"Neuroticism": float(0),
"Agreeableness": float(0),
"Conscientiousness": float(0),
"Openness": float(0),
}
return ret
else:
dict_custom = {}
preprocess_part1 = model_input[:len(model_input)]
dict1 = tokenizer.encode_plus(preprocess_part1, max_length=1024, padding=True, truncation=True)
dict_custom['input_ids'] = [dict1['input_ids'], dict1['input_ids']]
dict_custom['token_type_ids'] = [dict1['token_type_ids'], dict1['token_type_ids']]
dict_custom['attention_mask'] = [dict1['attention_mask'], dict1['attention_mask']]
outs = model(torch.tensor(dict_custom['input_ids']), token_type_ids=None, attention_mask=torch.tensor(dict_custom['attention_mask']))
b_logit_pred = outs[0]
pred_label = torch.sigmoid(b_logit_pred)
ret = {
"Extroversion": float(pred_label[0][0]),
"Neuroticism": float(pred_label[0][1]),
"Agreeableness": float(pred_label[0][2]),
"Conscientiousness": float(pred_label[0][3]),
"Openness": float(pred_label[0][4]),
}
return ret
personality_prediction = personality_detection(text_input)
```
Make sure you have the required dependencies installed (transformers and torch). This code snippet initializes the model, tokenizer, and configuration. It then defines the personality_detection function, which takes a text conversation as input and returns a dictionary with personality predictions for each speaker.
You can call the personality_detection function with your input text to obtain the personality predictions. The personality_prediction variable will hold the resulting dictionary.
Please note that this code assumes you have already downloaded the necessary model files (config.json, pytorch_model.bin, special_tokens_map.json, tokenizer_config.json, vocab.txt) and placed them in the current directory (indicated by "."). Adjust the paths and filenames accordingly if needed.
## Citation
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## More Information
TBA
| 11,242 | [
[
-0.0390625,
-0.02447509765625,
0.037628173828125,
0.028961181640625,
-0.00589752197265625,
-0.021026611328125,
-0.0195159912109375,
-0.047515869140625,
0.00782012939453125,
0.041778564453125,
-0.06817626953125,
-0.043426513671875,
-0.05743408203125,
-0.00264... |
MarkrAI/kyujin-CoTy-platypus-ko-12.8b | 2023-10-19T13:31:19.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"dataset:kyujinpy/KoCoT_2000",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | MarkrAI | null | null | MarkrAI/kyujin-CoTy-platypus-ko-12.8b | 2 | 1,558 | transformers | 2023-10-03T17:56:43 | ---
language:
- ko
datasets:
- kyujinpy/KoCoT_2000
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of Media Group Saram-gwa-Soop Co., Ltd. and Markr Co., Ltd.**
**The license is `cc-by-nc-sa-4.0`.**
# **CoTy-platypus-ko**

**Poly-platypus-ko + CoT = CoTy-platypus-ko**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
CoTy-platypus-ko is an auto-regressive language model based on the polyglot-ko transformer architecture.
**Repo Link**
Github CoTy-platypus-ko: [CoTy-platypus-ko](https://github.com/KyujinHan/Poly-platypus-ko)
**Base Model**
[Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)
**Fine-tuning method**
Methodology by [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus)+[CoT-llama2-ko](https://github.com/Marker-Inc-Korea/CoT-llama2)
**Training Dataset**
I used [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000).
Training was done on an A100 40GB GPU in Colab.
---
# **Model Bechmark1**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| CoTy-platypus-ko-12.8b(ours) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |
| [DopeorNope/COLA3-7B](https://huggingface.co/DopeorNope/COLA3-7B) | 45.61 | 39.16 | 50.98 | 35.21 | 37.81 | 64.91 |
> Comparison with the top 4 SOTA models. (updated: 10/03)
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "MarkrAI/kyujin-CoTy-platypus-ko-12.8b"
# Hyphens are not valid in Python identifiers, so use underscored names
cot_platypus = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
cot_platypus_tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [kyujinpy/KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B)
--- | 2,617 | [
[
-0.037872314453125,
-0.04644775390625,
0.020782470703125,
0.034637451171875,
-0.041595458984375,
0.0112457275390625,
-0.0161285400390625,
-0.0310821533203125,
0.022705078125,
0.0216064453125,
-0.03973388671875,
-0.051025390625,
-0.051666259765625,
0.00245094... |
ktrapeznikov/albert-xlarge-v2-squad-v2 | 2020-12-11T21:48:41.000Z | [
"transformers",
"pytorch",
"albert",
"question-answering",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | ktrapeznikov | null | null | ktrapeznikov/albert-xlarge-v2-squad-v2 | 2 | 1,557 | transformers | 2022-03-02T23:29:05 | ### Model
**[`albert-xlarge-v2`](https://huggingface.co/albert-xlarge-v2)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=albert-xlarge-v2
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 3 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 814 \
--gradient_accumulation_steps 4 \
--fp16 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 84.41842836688285 |
| f1 | 87.4628460501696 |
| total | 11873.0 |
| HasAns_exact | 80.68488529014844 |
| HasAns_f1 | 86.78245127423482 |
| HasAns_total | 5928.0 |
| NoAns_exact | 88.1412952060555 |
| NoAns_f1 | 88.1412952060555 |
| NoAns_total | 5945.0 |
| best_exact | 84.41842836688285 |
| best_exact_thresh | 0.0 |
| best_f1 | 87.46284605016956 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
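A fuller end-to-end sketch of the same idea, assuming a recent transformers version (the question and context strings are hypothetical):
```python
import torch
from transformers import AlbertTokenizerFast, AlbertForQuestionAnswering

tokenizer = AlbertTokenizerFast.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
model = AlbertForQuestionAnswering.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")

inputs = tokenizer(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
start_scores, end_scores = outputs.start_logits, outputs.end_logits

# The (start=0, end=0) span points at [CLS], which SQuAD V2 training uses as "no answer"
span_scores = start_scores.softmax(dim=1).log()[:, :, None] + end_scores.softmax(dim=1).log()[:, None, :]
ignore_score = span_scores[:, 0, 0]
```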
| 2,087 | [
[
-0.05242919921875,
-0.052093505859375,
0.021453857421875,
0.040191650390625,
0.005458831787109375,
0.00884246826171875,
-0.01531219482421875,
-0.017791748046875,
0.00858306884765625,
0.00716400146484375,
-0.07269287109375,
-0.034698486328125,
-0.0550537109375,
... |
shiowo/shiowo-flora-mix | 2023-03-06T10:20:10.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"stable-diffusion-diffusers",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | shiowo | null | null | shiowo/shiowo-flora-mix | 0 | 1,556 | diffusers | 2023-03-05T12:47:45 | ---
license: creativeml-openrail-m
language: en
tags :
- stable-diffusion
- text-to-image
- stable-diffusion-diffusers
- diffusers
---
# Welcome to shiowo-flora-mix
This is my first ever model released publicly.
# Image and model coming soon (± 3 days)
---
# safetensors coming soon (± 1 week)
### Recipe:
https://huggingface.co/SweetLuna/Kenshi/resolve/main/KENSHI%2001/KENSHI01_Pruned.safetensors
https://huggingface.co/mindplayer/mindplayer-floralboys/resolve/main/mindplayer-floralboys.ckpt
KENSHI01_Pruned.safetensors (fp 32 as base 60%) + mindplayer-floralboys.ckpt(40%) = shiowomix
mindplayer-floralboys.ckpt(60% as base) + KENSHI01_Pruned.safetensors (fp 32 40%) = Nekomix
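For illustration, here is a rough sketch of the 60/40 weighted merge described above. This is an assumption about how such a mix is produced (a simple weighted sum, as in the A1111 checkpoint merger), not the author's actual script, and it presumes both checkpoints share compatible state_dict keys:
```python
import torch
from safetensors.torch import load_file

# 60% KENSHI (fp32 base) + 40% floralboys, per the shiowomix recipe above
base = load_file("KENSHI01_Pruned.safetensors")
other = torch.load("mindplayer-floralboys.ckpt", map_location="cpu")["state_dict"]

# Interpolate matching tensors; keep base weights where the other model has no key
merged = {k: 0.6 * v + 0.4 * other.get(k, v) for k, v in base.items()}
torch.save({"state_dict": merged}, "shiowomix.ckpt")
```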
# For the VAE, please choose between:
https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt
## Web UI users: you can use a VAE by renaming its `.ckpt` extension to `.vae.pt` (example: rename `kl-f8-anime2.ckpt`).
---
# Have FUN
### I am not responsible for any of the output
---
[
-0.0305938720703125,
-0.035858154296875,
0.02996826171875,
0.04766845703125,
-0.0234222412109375,
-0.0291595458984375,
0.0166168212890625,
-0.029632568359375,
0.039215087890625,
0.04132080078125,
-0.0657958984375,
-0.035369873046875,
-0.04638671875,
0.003593... |
TheLastBen/William_Eggleston_Style_SDXL | 2023-08-08T15:02:40.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"region:us",
"has_space"
] | text-to-image | TheLastBen | null | null | TheLastBen/William_Eggleston_Style_SDXL | 8 | 1,556 | diffusers | 2023-07-30T19:13:11 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: william eggleston
widget:
- text: by william eggleston
---
### William Eggleston Photography Style
#### SDXL LoRA by TheLastBen
#### Prompts to start with:
a house by william eggleston, sunrays, sunlight, beautiful
closeup portrait of a woman in a kitchen by william eggleston, beautiful, sunrays, sunlight
a beautiful view through a kitchen window, car, by william eggleston, sunlight
---
Trained using https://github.com/TheLastBen/fast-stable-diffusion SDXL trainer.
ComfyUI seems to give better results than A1111, but that's just me.
#### Sample pictures:
| 3,080 | [
[
-0.056243896484375,
-0.030242919921875,
0.028778076171875,
0.0275115966796875,
-0.0172271728515625,
-0.00963592529296875,
0.0097503662109375,
-0.06268310546875,
0.09698486328125,
0.018890380859375,
-0.0618896484375,
-0.0328369140625,
-0.04443359375,
0.016784... |
ptx0/pseudo-journey-v2 | 2023-06-26T03:03:57.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | ptx0 | null | null | ptx0/pseudo-journey-v2 | 9 | 1,552 | diffusers | 2023-05-22T01:32:36 | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- text-to-image
---
# Capabilities
This model is "adventure" and "fantasy" focused.
With certain inference configurations, it is capable of producing very high quality results.
This model functions better without negative prompts than most fine-tunes.
# Inference parameters
Diffusers should "just work" with the config in this repository; a minimal sketch follows the resolution list below.
For A1111 users:
Scheduler: DDIM, 15-50 steps
Generally acceptable resolutions:
- 768x768
- 1024x1024
- 1152x768
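A minimal sketch of the Diffusers path (the repo's pipeline class is `StableDiffusionPipeline`, per its tags; the prompt is illustrative):
```python
# Minimal sketch: repo config + DDIM, sampled at one of the listed resolutions.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "ptx0/pseudo-journey-v2", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a lone adventurer overlooking a misty fantasy valley",
    num_inference_steps=30, height=768, width=768,
).images[0]
image.save("pseudo-journey.png")
```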
# Limitations
- This model contains a heavily tuned text encoder that has lost many original Stable Diffusion 2.1 concepts.
- This model is even less reliable at producing real people than the base 2.1-v model is.
- Training data included only 768x768 downsampled 1:1 ratio images; all other aspect ratios were discarded. Ergo, this model struggles with high-resolution native generations.
- This model may have "burnt" outputs at higher CFG.
# Checkpoints
This model contains multiple revisions:
`02b28ff` (latest/main checkpoint)
30000 steps (approx 4 epochs) with terminal SNR on 22k Midjourney 5.1 images plus 7200 real photographs as balance data with complete BLIP captions on all data. BS=4, LR=4e-7 to 1e-8
`6d3949c` (retrained from ptx0/pseudo-journey)
[retrained: based on ptx0/pseudo-journey @ 4000 steps from stable-diffusion-2-1 baseline on 3300 images] + 9500 steps on 22,400 images, polynomial learning rate scheduler, batch size 4, 64 gradient accumulations, FROZEN text encoder, 8bit ADAM, ZERO PLW (no regularization data), followed by 550 steps with unfrozen text encoder and constant LR 1e-8
`9135a79` (original ckpt test)
13000 steps: trained from ptx0/pseudo-journey, polynomial learning rate scheduler, batch size 3, text encoder, 8bit ADAM, ZERO PLW (no regularization data)
| 1,838 | [
[
-0.041473388671875,
-0.040374755859375,
0.049957275390625,
0.0225677490234375,
-0.0296478271484375,
-0.022430419921875,
0.005126953125,
-0.034881591796875,
-0.0016536712646484375,
0.046783447265625,
-0.0523681640625,
-0.027191162109375,
-0.059600830078125,
0... |
IlyaGusev/rubert_ext_sum_gazeta | 2022-07-13T15:35:22.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"summarization",
"t5",
"ru",
"dataset:IlyaGusev/gazeta",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | token-classification | IlyaGusev | null | null | IlyaGusev/rubert_ext_sum_gazeta | 1 | 1,551 | transformers | 2022-03-02T23:29:04 | ---
language:
- ru
tags:
- summarization
- token-classification
- t5
datasets:
- IlyaGusev/gazeta
license: apache-2.0
inference: false
widget:
- text: "С 1 сентября в России вступают в силу поправки в закон «О банкротстве» — теперь должники смогут освобождаться от непосильных обязательств во внесудебном порядке, если сумма задолженности составляет не менее 50 тыс. рублей и не превышает 500 тыс. рублей без учета штрафов, пени, процентов за просрочку платежа и прочих имущественных или финансовых санкций.[SEP]У физлиц и индивидуальных предпринимателей появилась возможность пройти процедуру банкротства без участия суда и финансового управляющего — достаточно подать соответствующее заявление через МФЦ.[SEP]Сумму задолженности и список всех известных заявителю кредиторов нужно предоставить самостоятельно.[SEP]Если все условия соблюдены, сведения внесут в Единый федеральный реестр в течение трех рабочих дней.[SEP]При этом на момент подачи заявления в отношении заявителя должно быть окончено исполнительное производство с возвращением исполнительного документа взыскателю.[SEP]Это значит, что у потенциального банкрота не должно быть имущества, которое можно взыскать.[SEP]Кроме того, в отношении гражданина не должно быть возбуждено другое исполнительное производство.[SEP]В период всей процедуры заявитель не сможет брать займы, кредиты, выдавать поручительства, совершать иные обеспечительные сделки.[SEP]Внесудебное банкротство будет длиться шесть месяцев, в течение которых также будет действовать мораторий на удовлетворение требований кредиторов, отмеченных в заявлении должника, и мораторий об уплате обязательных платежей.[SEP]Кроме того, прекращается начисление неустоек и иных финансовых санкций; имущественные взыскания (кроме алиментов) также будут приостановлены.[SEP]По завершению процедуры заявителя освободят от дальнейшего выполнения требований кредиторов, указанных в заявлении о признании его банкротом, а эта задолженность признается безнадежной.[SEP]В прошлом месяце стало известно, что за первое полугодие 2020 года российские суды признали банкротами 42,7 тыс. граждан (в том числе индивидуальных предпринимателей) — по данным единого реестра «Федресурс», это на 47,2% больше показателя аналогичного периода 2019 года.[SEP]Рост числа обанкротившихся граждан во втором квартале по сравнению с первым замедлился — такая динамика обусловлена тем, что в период ограничений с 19 марта по 11 мая суды редко рассматривали банкротные дела компаний и меньше, чем обычно, в отношении граждан, объяснял руководитель проекта «Федресурс» Алексей Юхнин.[SEP]"
example_title: "Новости"
---
# RuBERTExtSumGazeta
## Model description
Model for extractive summarization based on [rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased)
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1Q8_v3H-kxdJhZIiyLYat7Kj02qDq7M1L)
```python
import razdel
import torch
from transformers import AutoTokenizer, BertForTokenClassification
model_name = "IlyaGusev/rubert_ext_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
sep_token = tokenizer.sep_token
sep_token_id = tokenizer.sep_token_id
model = BertForTokenClassification.from_pretrained(model_name)
article_text = "..."
sentences = [s.text for s in razdel.sentenize(article_text)]
article_text = sep_token.join(sentences)
inputs = tokenizer(
[article_text],
max_length=500,
padding="max_length",
truncation=True,
return_tensors="pt",
)
sep_mask = inputs["input_ids"][0] == sep_token_id
# Fix token_type_ids
current_token_type_id = 0
for pos, input_id in enumerate(inputs["input_ids"][0]):
inputs["token_type_ids"][0][pos] = current_token_type_id
if input_id == sep_token_id:
current_token_type_id = 1 - current_token_type_id
# Infer model
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits[0, :, 1]
# Choose sentences
logits = logits[sep_mask]
logits, indices = logits.sort(descending=True)
logits, indices = logits.cpu().tolist(), indices.cpu().tolist()
pairs = list(zip(logits, indices))
pairs = pairs[:3]
indices = list(sorted([idx for _, idx in pairs]))
summary = " ".join([sentences[idx] for idx in indices])
print(summary)
```
#### Limitations and bias
- The model should work well with Gazeta.ru articles, but for any other agencies it can suffer from domain shift
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
TBD
## Eval results
TBD
Evaluation: https://github.com/IlyaGusev/summarus/blob/master/evaluate.py
Flags: --language ru --tokenize-after --lower
| 4,625 | [
[
-0.005390167236328125,
-0.0557861328125,
0.01496124267578125,
0.0282135009765625,
-0.019439697265625,
0.0020904541015625,
-0.0151214599609375,
-0.004238128662109375,
0.012786865234375,
0.023468017578125,
-0.02801513671875,
-0.039703369140625,
-0.056884765625,
... |
dreamlike-art/dreamlike-photoreal-1.0 | 2023-03-13T01:04:59.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"photorealistic",
"photoreal",
"en",
"license:other",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | dreamlike-art | null | null | dreamlike-art/dreamlike-photoreal-1.0 | 99 | 1,551 | diffusers | 2022-11-27T03:37:42 | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- photorealistic
- photoreal
- diffusers
inference: false
---
# This 1.0 model is OBSOLETE. We've released a new much better 2.0 version!
**Check it out here: [https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0)**
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview1.jpg" style="max-width: 800px;" width="100%"/>
Dreamlike Photoreal 1.0 is a photorealistic Stable Diffusion 1.5 model fine-tuned on high-quality photos, made by [dreamlike.art](https://dreamlike.art/).
Use the same prompts as you would for photorealistic SD 1.5 gens. You can also use danbooru style tags for characters (1girl, brown hair, etc.).
Non-square aspect ratios work better for some prompts. If you want a portrait photo, try using a 3:4 or a 9:16 aspect ratio. If you want a landscape photo, try using a 16:9 aspect ratio.
### Examples
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/resolve/main/preview.jpg" style="max-width: 800px;" width="100%"/>
### dreamlike.art
You can use this model for free on [dreamlike.art](https://dreamlike.art/)!
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/resolve/main/dreamlike.jpg" style="max-width: 1000px;" width="100%"/>
### CompVis
[Download dreamlike-photoreal-1.0.ckpt (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/resolve/main/dreamlike-photoreal-1.0.ckpt)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "dreamlike-art/dreamlike-photoreal-1.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "caucasian creative man wearing a sweater, sitting, on an icelandic beach"
image = pipe(prompt).images[0]
image.save("./result.jpg")
```
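For the non-square ratios suggested above, pass explicit dimensions (an illustrative addition, not from the original card; SD 1.5 sizes should stay multiples of 64):
```python
# Portrait (3:4) and landscape (16:9) variants of the example above.
portrait = pipe(prompt, width=576, height=768).images[0]
landscape = pipe(prompt, width=1024, height=576).images[0]
```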
# License
This model is licensed under a **modified** CreativeML OpenRAIL-M license.
- **You can't host or use the model or its derivatives on websites/apps/etc., from which you earn, will earn, or plan to earn revenue or donations. If you want to, please email us at contact@dreamlike.art**
- **You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Photoreal 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0)**
- **You are free to host the model or its derivatives on completely non-commercial websites/apps/etc (Meaning you are not getting ANY revenue or donations). Please state the full model name (Dreamlike Photoreal 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0)**
- **You are free to use the outputs of the model or the outputs of the model's derivatives for commercial purposes in teams of 10 or less**
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the **modified** CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/blob/main/LICENSE.md
| 3,916 | [
[
-0.0338134765625,
-0.046844482421875,
0.021636962890625,
0.0260162353515625,
-0.04052734375,
-0.01529693603515625,
0.0005245208740234375,
-0.05328369140625,
0.035430908203125,
0.04461669921875,
-0.050567626953125,
-0.049072265625,
-0.03179931640625,
-0.01123... |
Yntec/LunarLuma | 2023-07-29T13:38:01.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"sadxzero",
"mooncryptowow",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/LunarLuma | 0 | 1,551 | diffusers | 2023-07-29T13:04:04 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- sadxzero
- mooncryptowow
---
# Lunar Luma
A mix of the Luma and Lunar Diffusion models, only because I think this name is hilarious! XD
Original pages:
https://civitai.com/models/26870?modelVersionId=44901
https://civitai.com/models/25831?modelVersionId=68200 | 435 | [
[
-0.0286712646484375,
-0.0518798828125,
0.05682373046875,
0.03509521484375,
-0.02642822265625,
0.01214599609375,
0.03436279296875,
-0.0178985595703125,
0.057647705078125,
0.03741455078125,
-0.047027587890625,
-0.0290374755859375,
-0.0148162841796875,
-0.03442... |
artificialguybr/ToyRedmond-ToyLoraForSDXL10 | 2023-08-08T23:09:59.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | artificialguybr | null | null | artificialguybr/ToyRedmond-ToyLoraForSDXL10 | 3 | 1,551 | diffusers | 2023-08-08T23:04:08 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: FnkRedmAF
widget:
- text: FnkRedmAF
---
# Toy.Redmond

Toy.Redmond is here!
I'm grateful for the GPU time from Redmond.AI that allowed me to finish this LORA!
This is a TOY LORA fine-tuned on SD XL 1.0.
The LORA has a high capacity to generate toys (especially in one style) in a wide variety of themes. It's a versatile LORA.
I recommend generating at 1024x1024.
The trigger tag for the model: FnkRedmAF
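A minimal Diffusers sketch using that tag (assumed usage, not from the card; the LoRA scale knob is optional):
```python
# Minimal sketch: SDXL base + this LoRA, triggered by the FnkRedmAF tag.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("artificialguybr/ToyRedmond-ToyLoraForSDXL10")

image = pipe(
    "FnkRedmAF, toy figure of an astronaut, studio lighting",
    width=1024, height=1024,
    cross_attention_kwargs={"scale": 0.8},  # optional LoRA strength
).images[0]
image.save("toy.png")
```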
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me on Twitter to be the first to know about new models:
https://twitter.com/artificialguybr/ | 832 | [
[
-0.049530029296875,
-0.06842041015625,
0.0181121826171875,
0.034698486328125,
-0.05462646484375,
0.004734039306640625,
-0.00670623779296875,
-0.04071044921875,
0.08294677734375,
0.0255126953125,
-0.056732177734375,
-0.01508331298828125,
-0.0182952880859375,
... |
dbmdz/convbert-base-turkish-mc4-uncased | 2023-09-10T18:39:57.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"convbert",
"fill-mask",
"tr",
"dataset:allenai/c4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | dbmdz | null | null | dbmdz/convbert-base-turkish-mc4-uncased | 2 | 1,550 | transformers | 2022-03-02T23:29:05 | ---
language: tr
license: mit
datasets:
- allenai/c4
---
# 🇹🇷 Turkish ConvBERT model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've trained an (uncased) ConvBERT model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ConvBERT
In addition to the ELEC**TR**A base model, we also trained a ConvBERT model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-mc4-uncased")
model = AutoModel.from_pretrained("dbmdz/convbert-base-turkish-mc4-uncased")
```
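Since the checkpoint is a masked language model, it can also be queried through the fill-mask pipeline (a small sketch, not from the original card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/convbert-base-turkish-mc4-uncased")
# Turkish: "The capital of Turkey is [MASK]."
print(fill_mask("türkiye'nin başkenti [MASK]."))
```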
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️ | 2,575 | [
[
-0.03875732421875,
-0.03509521484375,
-0.0019664764404296875,
0.00298309326171875,
-0.03680419921875,
-0.0235595703125,
-0.032562255859375,
-0.0496826171875,
0.01131439208984375,
0.0286102294921875,
-0.0382080078125,
-0.03955078125,
-0.045654296875,
0.017288... |
gijs/aces-roberta-10 | 2023-03-09T15:47:40.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | gijs | null | null | gijs/aces-roberta-10 | 0 | 1,550 | transformers | 2023-03-09T15:43:48 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: aces-roberta-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aces-roberta-10
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6188
- Precision: 0.8040
- Recall: 0.8198
- F1: 0.8097
- Accuracy: 0.8198
- F1 Who: 0.7939
- F1 What: 0.7929
- F1 Where: 0.7769
- F1 How: 0.8905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | F1 Who | F1 What | F1 Where | F1 How |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:------:|:-------:|:--------:|:------:|
| 1.6596 | 0.15 | 20 | 1.2172 | 0.5510 | 0.6640 | 0.5906 | 0.6640 | 0.0 | 0.6409 | 0.3258 | 0.7719 |
| 1.0566 | 0.31 | 40 | 0.9097 | 0.6534 | 0.7087 | 0.6590 | 0.7087 | 0.3855 | 0.7020 | 0.5620 | 0.8086 |
| 0.8056 | 0.46 | 60 | 0.7640 | 0.7092 | 0.7570 | 0.7196 | 0.7570 | 0.6857 | 0.7709 | 0.6696 | 0.8114 |
| 0.6996 | 0.61 | 80 | 0.6706 | 0.7601 | 0.7931 | 0.7687 | 0.7931 | 0.8103 | 0.7743 | 0.7471 | 0.8499 |
| 0.6346 | 0.76 | 100 | 0.6471 | 0.7763 | 0.8032 | 0.7852 | 0.8032 | 0.7874 | 0.7813 | 0.7490 | 0.8665 |
| 0.523 | 0.92 | 120 | 0.6635 | 0.7872 | 0.8061 | 0.7865 | 0.8061 | 0.8244 | 0.7718 | 0.7692 | 0.8771 |
| 0.5324 | 1.07 | 140 | 0.6162 | 0.8045 | 0.8212 | 0.8110 | 0.8212 | 0.8197 | 0.8008 | 0.8033 | 0.8852 |
| 0.4734 | 1.22 | 160 | 0.6147 | 0.7935 | 0.8097 | 0.7978 | 0.8097 | 0.7939 | 0.7861 | 0.7698 | 0.8911 |
| 0.5111 | 1.37 | 180 | 0.6142 | 0.8022 | 0.8154 | 0.8051 | 0.8154 | 0.8244 | 0.8047 | 0.768 | 0.8909 |
| 0.4416 | 1.53 | 200 | 0.6204 | 0.8006 | 0.8190 | 0.8079 | 0.8190 | 0.8271 | 0.7984 | 0.7773 | 0.8886 |
| 0.5249 | 1.68 | 220 | 0.6239 | 0.7907 | 0.8133 | 0.8006 | 0.8133 | 0.8182 | 0.7969 | 0.7739 | 0.8776 |
| 0.4599 | 1.83 | 240 | 0.6458 | 0.7989 | 0.8082 | 0.7967 | 0.8082 | 0.8244 | 0.7953 | 0.7751 | 0.8853 |
| 0.4979 | 1.98 | 260 | 0.6390 | 0.8071 | 0.8183 | 0.8051 | 0.8183 | 0.7869 | 0.8000 | 0.7583 | 0.8871 |
| 0.393 | 2.14 | 280 | 0.6348 | 0.7994 | 0.8125 | 0.8021 | 0.8125 | 0.8271 | 0.7904 | 0.7653 | 0.8812 |
| 0.4079 | 2.29 | 300 | 0.6227 | 0.8002 | 0.8140 | 0.8040 | 0.8140 | 0.8182 | 0.7908 | 0.7668 | 0.8784 |
| 0.3731 | 2.44 | 320 | 0.6319 | 0.7887 | 0.8075 | 0.7965 | 0.8075 | 0.8030 | 0.7814 | 0.7692 | 0.8702 |
| 0.3987 | 2.6 | 340 | 0.6171 | 0.7922 | 0.8140 | 0.8015 | 0.8140 | 0.7907 | 0.7813 | 0.7968 | 0.8759 |
| 0.3865 | 2.75 | 360 | 0.6161 | 0.7968 | 0.8118 | 0.8032 | 0.8118 | 0.7846 | 0.7824 | 0.7692 | 0.8851 |
| 0.4222 | 2.9 | 380 | 0.6137 | 0.7955 | 0.8140 | 0.8033 | 0.8140 | 0.8060 | 0.7897 | 0.7874 | 0.8746 |
| 0.4164 | 3.05 | 400 | 0.6016 | 0.8017 | 0.8176 | 0.8079 | 0.8176 | 0.7846 | 0.7954 | 0.7843 | 0.8832 |
| 0.3505 | 3.21 | 420 | 0.6239 | 0.7912 | 0.8075 | 0.7949 | 0.8075 | 0.7846 | 0.7930 | 0.7786 | 0.8556 |
| 0.3834 | 3.36 | 440 | 0.6038 | 0.8022 | 0.8169 | 0.8082 | 0.8169 | 0.7907 | 0.7976 | 0.7757 | 0.8835 |
| 0.3139 | 3.51 | 460 | 0.6068 | 0.7978 | 0.8161 | 0.8052 | 0.8161 | 0.7970 | 0.7904 | 0.7846 | 0.8870 |
| 0.3679 | 3.66 | 480 | 0.6070 | 0.8026 | 0.8183 | 0.8063 | 0.8183 | 0.7907 | 0.7953 | 0.7799 | 0.8835 |
| 0.3387 | 3.82 | 500 | 0.6059 | 0.8025 | 0.8205 | 0.8094 | 0.8205 | 0.7879 | 0.7977 | 0.7937 | 0.8879 |
| 0.3208 | 3.97 | 520 | 0.6064 | 0.8015 | 0.8183 | 0.8082 | 0.8183 | 0.7970 | 0.7900 | 0.7782 | 0.8854 |
| 0.3008 | 4.12 | 540 | 0.6088 | 0.8020 | 0.8205 | 0.8107 | 0.8205 | 0.7970 | 0.7946 | 0.7813 | 0.8883 |
| 0.3014 | 4.27 | 560 | 0.6093 | 0.8032 | 0.8212 | 0.8114 | 0.8212 | 0.8120 | 0.7961 | 0.7813 | 0.8867 |
| 0.3486 | 4.43 | 580 | 0.6112 | 0.8042 | 0.8205 | 0.8107 | 0.8205 | 0.7939 | 0.7961 | 0.7829 | 0.8873 |
| 0.2793 | 4.58 | 600 | 0.6156 | 0.8047 | 0.8183 | 0.8088 | 0.8183 | 0.7846 | 0.7945 | 0.7769 | 0.8905 |
| 0.2943 | 4.73 | 620 | 0.6170 | 0.8044 | 0.8212 | 0.8107 | 0.8212 | 0.7846 | 0.7992 | 0.7843 | 0.8895 |
| 0.3314 | 4.89 | 640 | 0.6188 | 0.8040 | 0.8198 | 0.8097 | 0.8198 | 0.7939 | 0.7929 | 0.7769 | 0.8905 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
| 5,722 | [
[
-0.0439453125,
-0.041961669921875,
0.0162506103515625,
0.00386810302734375,
-0.0016107559204101562,
0.01058197021484375,
0.002597808837890625,
0.00196075439453125,
0.05133056640625,
0.0267333984375,
-0.0439453125,
-0.041259765625,
-0.042694091796875,
-0.0098... |
digiplay/ya3p_VAE | 2023-11-02T07:41:37.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/ya3p_VAE | 4 | 1,549 | diffusers | 2023-07-03T15:46:22 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
in test ...



See other images I generated via Hugging Face's API:
https://huggingface.co/digiplay/ya3p_VAE/discussions/2
***Silver Angel***

---
CG style sample prompt:
masterpiece, best quality, ultra-detailed, glistening shiny, glowing light, ray tracing, HDR, deph of field, (perfect face, detailed face, detailed eyes),8k,HD,ultra realistic face,ray tracing,perfect lighting,best quality, ultra-detailed, shiny eyes,

In heaven class room,8k,1girl,(photo realistic:2),happy ,solo,perfect lighting ,intricate ,


| 1,716 | [
[
-0.06414794921875,
-0.0548095703125,
0.0217132568359375,
0.02325439453125,
-0.0159759521484375,
0.01091766357421875,
0.019256591796875,
-0.043609619140625,
0.047393798828125,
0.03228759765625,
-0.047515869140625,
-0.050079345703125,
-0.048828125,
0.015289306... |
FlagAlpha/Llama2-Chinese-13b-Chat-4bit | 2023-09-11T13:24:58.000Z | [
"transformers",
"llama",
"text-generation",
"question-answering",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | question-answering | FlagAlpha | null | null | FlagAlpha/Llama2-Chinese-13b-Chat-4bit | 57 | 1,549 | transformers | 2023-07-26T09:44:35 | ---
developers: [https://huggingface.co/FlagAlphaAI]
license: apache-2.0
language:
- zh
- en
pipeline_tag: question-answering
library_name: transformers
---
# Llama2 Chinese Community
---
## Llama2 Chinese fine-tuned weights
Because Llama2's own Chinese alignment is weak, we fine-tuned meta-llama/Llama-2-13b-chat-hf with LoRA on a Chinese instruction set, giving it strong Chinese conversational ability.
🎯 **This version is a 4-bit quantization of the Chinese fine-tuned weights FlagAlpha/Llama2-Chinese-13b-Chat and can be used directly.**
---
## 🚀 Community links:
Github: [**Llama2-Chinese**](https://github.com/FlagAlpha/Llama2-Chinese)
Online demo: [**llama.family**](https://llama.family/)
## 🔥 About the community
Welcome to the Llama2 Chinese community!
We are an advanced technical community focused on optimizing the Llama2 model for Chinese and building applications on top of it.
**Starting from pretraining on large-scale Chinese data, we continuously iterate on and upgrade Llama2's Chinese capabilities.**
We warmly welcome developers and researchers who are passionate about large language models (LLMs) to join us.
## 🐼 Community resources
- Online Llama2 demo at [**llama.family**](https://llama.family/), covering both the original Meta versions and the Chinese fine-tuned versions!
- [Chinese question-answering evaluation](https://github.com/FlagAlpha/Llama2-Chinese/tree/main#-%E6%A8%A1%E5%9E%8B%E8%AF%84%E6%B5%8B) of the Llama2 Chat models!
- [Community Feishu knowledge base](https://chinesellama.feishu.cn/wiki/space/7257824476874768388?ccm_open_type=lark_wiki_spaceLink); everyone is welcome to help build it!
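The card ships no loading snippet. Since the repo name indicates pre-quantized 4-bit weights, one plausible path is AutoGPTQ; this is an assumption, not documented here, so verify against the community GitHub first:
```python
# Assumed GPTQ-style loading; prompt format follows the Llama2-Chinese repo docs.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "FlagAlpha/Llama2-Chinese-13b-Chat-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

prompt = "<s>Human: 介绍一下中国\n</s><s>Assistant: "
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```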
| 965 | [
[
-0.0283660888671875,
-0.0440673828125,
0.016021728515625,
0.054473876953125,
-0.0596923828125,
0.0228118896484375,
0.006610870361328125,
-0.051971435546875,
0.0340576171875,
0.02801513671875,
-0.04193115234375,
-0.048583984375,
-0.038299560546875,
0.00472259... |
THABASSUM/my-pet-dog | 2023-10-18T10:41:27.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | THABASSUM | null | null | THABASSUM/my-pet-dog | 0 | 1,547 | diffusers | 2023-10-18T10:36:02 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by THABASSUM following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
| 392 | [
[
-0.0595703125,
-0.0205841064453125,
0.03094482421875,
-0.0004267692565917969,
-0.0165557861328125,
0.034912109375,
0.0272216796875,
-0.035919189453125,
0.051788330078125,
0.0269775390625,
-0.04132080078125,
-0.022735595703125,
-0.014892578125,
0.002492904663... |
vertxlabs/controlnet_qrcode-control_v11p_v1 | 2023-07-13T05:04:14.000Z | [
"diffusers",
"stable-diffusion",
"controlnet",
"image-to-image",
"en",
"license:openrail++",
"endpoints_compatible",
"diffusers:ControlNetModel",
"region:us"
] | image-to-image | vertxlabs | null | null | vertxlabs/controlnet_qrcode-control_v11p_v1 | 0 | 1,546 | diffusers | 2023-07-13T03:45:24 | ---
tags:
- stable-diffusion
- controlnet
- image-to-image
license: openrail++
language:
- en
pipeline_tag: image-to-image
---
# QR Code Conditioned ControlNet Models for Stable Diffusion 2.1

## Model Description
This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v2.1.
The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, a 1.5 version model was also trained on the same dataset for those who are using the older version.
## How to use with diffusers
```bash
pip -q install diffusers transformers accelerate torch xformers
```
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v11p_sd21",
torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
)
pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
def resize_for_condition_image(input_image: Image, resolution: int):
input_image = input_image.convert("RGB")
W, H = input_image.size
k = float(resolution) / min(H, W)
H *= k
W *= k
H = int(round(H / 64.0)) * 64
W = int(round(W / 64.0)) * 64
img = input_image.resize((W, H), resample=Image.LANCZOS)
return img
# play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image
# qr code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")
condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)
image = pipe(prompt="a bilboard in NYC with a qrcode",
negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
image=init_image,
control_image=condition_image,
width=768,
height=768,
guidance_scale=20,
controlnet_conditioning_scale=1.5,
generator=generator,
strength=0.9,
num_inference_steps=150,
)
image.images[0]
```
## Performance and Limitations
These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).**
To balance between style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, as well as the correct prompt. Some prompts do not work until you increase the weight by a lot. The process of finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork.
## Installation
The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other controlnet models are installed, which varies per application.
For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded using the ControlNet WebUI extension, which you can install through the extensions tab in the WebUI (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your ControlNet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version, depending on your base Stable Diffusion model; otherwise it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail.
Make sure to look up additional info on how to use ControlNet if you get stuck; once you have the WebUI up and running, it's really easy to install the ControlNet extension as well.
[
-0.02392578125,
-0.00765228271484375,
0.00371551513671875,
0.0265655517578125,
-0.03466796875,
-0.010772705078125,
0.0164794921875,
-0.0200958251953125,
0.0160980224609375,
0.039459228515625,
-0.00853729248046875,
-0.0270843505859375,
-0.04681396484375,
0.00... |
livingbox/model-test-10-oct | 2023-10-10T06:15:48.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | livingbox | null | null | livingbox/model-test-10-oct | 1 | 1,546 | diffusers | 2023-10-10T06:03:03 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Model-test-10-oct Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 508 | [
[
-0.0347900390625,
-0.074462890625,
0.0340576171875,
0.0355224609375,
-0.0260772705078125,
0.0322265625,
0.03082275390625,
-0.0295257568359375,
0.046905517578125,
0.00968170166015625,
-0.02471923828125,
-0.01861572265625,
-0.024017333984375,
0.000245332717895... |
vinesmsuic/magicbrush-jul7 | 2023-07-09T22:04:54.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | vinesmsuic | null | null | vinesmsuic/magicbrush-jul7 | 1 | 1,545 | diffusers | 2023-07-08T02:50:03 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
Diffusers port of https://huggingface.co/osunlp/InstructPix2Pix-MagicBrush.
Diffusers version of the `MagicBrush-epoch-52-step-4999.ckpt` checkpoint.
```python
from PIL import Image, ImageOps
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
def download_image(url):
image = Image.open(requests.get(url, stream=True).raw)
image = ImageOps.exif_transpose(image)
image = image.convert("RGB")
return image
image = download_image(url)
prompt = "make the mountains snowy"
class MagicBrush():
def __init__(self, weight="vinesmsuic/magicbrush-jul7"):
self.pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
weight,
torch_dtype=torch.float16
).to("cuda")
self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config)
def infer_one_image(self, src_image, instruct_prompt, seed):
generator = torch.manual_seed(seed)
image = self.pipe(instruct_prompt, image=src_image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7, generator=generator).images[0]
return image
model = MagicBrush()
image_output = model.infer_one_image(image, prompt, 42)
image_output
```

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| 2,468 | [
[
-0.02593994140625,
-0.0302581787109375,
0.013641357421875,
0.031951904296875,
-0.018096923828125,
-0.03436279296875,
0.005176544189453125,
-0.0155029296875,
-0.01264190673828125,
0.037109375,
-0.054718017578125,
-0.01209259033203125,
-0.044281005859375,
-0.0... |
anakin87/zephyr-7b-alpha-sharded | 2023-10-18T10:58:04.000Z | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"arxiv:2305.18290",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | text-generation | anakin87 | null | null | anakin87/zephyr-7b-alpha-sharded | 11 | 1,545 | transformers | 2023-10-14T12:48:51 | ---
license: mit
language:
- en
---
<img src="https://huggingface.co/anakin87/zephyr-7b-alpha-sharded/resolve/main/zephyr_sharded.png" alt="Zephyr Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
# Zephyr 7B Alpha - Sharded
**UPDATE**
The original model ([Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)) was recently sharded.
You can use the original model.
---
🧩🧩🧩 Just a sharded version of [Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha).
💻 Using this version, you can smoothly load the model on Colab and play with it!
From the [original model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha):
> Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
## Usage
This version of the model is meant primarily to run smoothly on **Colab**.
I suggest loading the model with **8-bit quantization**, so that you have some free GPU to perform inference.
*However, it is perfectly fine to load the model in half-precision or with stronger quantization (4-bit).*
```python
! pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = AutoModelForCausalLM.from_pretrained("anakin87/zephyr-7b-alpha-sharded", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("anakin87/zephyr-7b-alpha-sharded")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a rapper",
},
{"role": "user", "content": "What is GPU?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|system|>
#You are a friendly chatbot who always responds in the style of a rapper</s>
#<|user|>
#What is GPU?</s>
#<|assistant|>
#Yo, what's up fam, you askin' 'bout the GPU?
#Well, let me break it down for you, it's a pretty sick dud
#It stands for Graphics Processing Unit, a tech that's quite rude
#This bad boy's the one that's in charge of all the graphics you see
#On your computer screen or your high-tech TV
#It's a powerful tool that can handle intense 3D games and movies
#And it's built to handle multiple tasks with ease
#So if you're looking to take your gaming or video editing to the next level
#Just make sure you've got a top-notch GPU to make it happen.
#Peace out!
``` | 3,384 | [
[
-0.039947509765625,
-0.07733154296875,
0.00662994384765625,
-0.0017452239990234375,
-0.0242462158203125,
0.00010764598846435547,
0.001964569091796875,
-0.0237884521484375,
0.0311737060546875,
0.0227508544921875,
-0.0293426513671875,
-0.01031494140625,
-0.0471191... |
mwiki/sd-xl-colab | 2023-10-01T04:22:40.000Z | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"license:openrail++",
"has_space",
"region:us"
] | text-to-image | mwiki | null | null | mwiki/sd-xl-colab | 1 | 1,544 | diffusers | 2023-10-01T03:44:18 |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - mwiki/sd-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
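A minimal loading sketch (assumed, not from the card) pairing the adapter with the training VAE noted above:
```python
# Minimal sketch: SDXL base + fp16-fix VAE + these DreamBooth LoRA weights.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mwiki/sd-xl-colab")

image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```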
| 613 | [
[
-0.0179290771484375,
-0.01641845703125,
0.021209716796875,
0.0191497802734375,
-0.04339599609375,
0.0174407958984375,
0.02392578125,
-0.01268768310546875,
0.06829833984375,
0.029541015625,
-0.0369873046875,
-0.024383544921875,
-0.040130615234375,
-0.01028442... |
superb/hubert-base-superb-sid | 2021-11-04T16:03:27.000Z | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"speech",
"audio",
"en",
"dataset:superb",
"arxiv:2105.01051",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | audio-classification | superb | null | null | superb/hubert-base-superb-sid | 0 | 1,542 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- hubert
- audio-classification
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
license: apache-2.0
---
# Hubert-Base for Speaker Identification
## Model description
This is a ported version of
[S3PRL's Hubert for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1).
The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
classification, where speakers are in the same predefined set for both training and testing. The widely
used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-base-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-sid")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.8142` | `0.8071` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` | 3,394 | [
[
-0.032867431640625,
-0.036346435546875,
0.0129547119140625,
0.0138092041015625,
-0.005603790283203125,
-0.00319671630859375,
-0.019805908203125,
-0.0280303955078125,
-0.00377655029296875,
0.0259552001953125,
-0.044219970703125,
-0.036468505859375,
-0.04061889648... |
SweetLuna/Kenshi | 2023-04-20T06:19:23.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"en",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | SweetLuna | null | null | SweetLuna/Kenshi | 140 | 1,542 | diffusers | 2023-01-04T13:12:33 | ---
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
inference: true
thumbnail: "https://i2.lensdump.com/i/TAxjOD.png"
license: creativeml-openrail-m
---
<center><h1><b><a href="https://huggingface.co/SweetLuna/Aurora"> Be sure to Check out Aurora 💛 - Luna </a></b></h1></center>
# <h1 style="font-size: 4em; text-align: center; color:black; font-family: Segoe UI"> <a href="https://huggingface.co/SweetLuna/Kenshi/blob/main/README.md" style="text-decoration: none; background-color: transparent;">Kenshi</a> </h1>
<a href="https://lensdump.com/i/RL8CTQ"><img src="https://i1.lensdump.com/i/RXYEm2.png" alt="RXYEm2.png" onclick="window.open('https://i1.lensdump.com/i/RXYEm2.png', '_blank')"></a>
<h4 style="font-size: 1em; text-align: center;"><p style="color: black;">“Do I hide or do I roam? That indecision… Now the world has changed and I’ve missed it all.”</p></h1>
---
### <h1 style="font-size: 1.75em; font-family: Segoe UI">[FULLSCREEN](https://huggingface.co/SweetLuna/Kenshi/blob/main/README.md) | [Demo (Discord Server)](https://discord.gg/pD9MKyBgNp)</h1>
<hr>
### <h1 style="font-size: 1.75em; font-family: Segoe UI">[CivitAI](https://civitai.com/models/3850) | [Download](https://huggingface.co/SweetLuna/Kenshi/tree/main/KENSHI%2001) | [Changelog](https://huggingface.co/SweetLuna/Kenshi/blob/main/Changelog.md)</h1>
<hr>
<style>#▼-preamble {
font-size: 2em;
}</style>
<details id="#contents">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🧧 Contents</strong></summary>
<hr>
# <h1 style="font-size: 1.5em;"><strong>
- [🏮 Preamble](#▼-preamble)<p>
- [⚙️ Usage](#▼-usage)<p>
- [🎨 Versatility](#▼-versatility)<p>
- [🥢 VAE [ IMPORTANT ! ]](#▼-vae)<p>
- [🏔️ Examples Images ](#▼-sample)
- [The Celestial ☄️](#▼-celestial)
- [ChatGPT Prompt ⚙️](#▼-chatgpt)
- [Vivid 🌈](#▼-vivid)
- [Moon 🌙](#▼-moon)<p>
- [🍣 Merge Recipes](#▼-merge)<p>
- [💡 Suggestions](#▼-suggestions)
- [Trigger Words](#trigger-words)
- [WebUI](#webui)
- [VAE](#vae)
- [Embeddings](#embeddings)<p>
- [💛 Donate](#▼-donation)<p>
- [License](#license)<p>
- [Disclaimer](#disclaimer)
</strong>
</h1>
</details>
<hr>
<details id="▼-preamble">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🏮 What is Kenshi?</strong></summary>
<hr>
<h1>
**Kenshi** is my personal merges which created by combining different models together. ***This includes models such as Nixeu, WLOP, Guweiz, BoChen, and many others.***
```TypeScript
My goal is to archive my own feelings towards styles I want for Semi-realistic artstyle.
Through this process, I hope not only to gain a deeper understanding of my own preferences, but also to inform and refine the capabilities of my personal skills,
and AI Art as it generates artwork that reflects my desired style.
```
Kenshi because it represents strength, resilience, and the ability to adapt and overcome challenges. Just like AI.
</h1>
</details>
<hr>
<details id="▼-usage">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>⚙️ Usage</strong></summary>
<hr>
<h1>
## <h1 style="font-size: 1.5em; text-align: center; color:black; font-family: Segoe UI"> These are the settings I always use it is recommended but not essential;
| Settings | Value |
| ----------------- | ------------------------------------------------------------------ |
| Steps | 20+ |
| Sampler | DPM++ 2M Karras |
| CFG scale | 2-7 |
| Size | 600x800 |
| Clip skip | 2 |
| ENSD | 31337 |
| Hires Fix | Enabled |
| Upscale by | 1.5 |
| Upscaler Fix | https://de-next.owncube.com/index.php/s/x99pKzS7TNaErrC |
Kenshi is not recommended for new users since it requires a lot of prompting to work with. If you still want to use the model, I suggest installing this tag-autocomplete extension for the AUTOMATIC1111 WebUI: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
</h1>
</h1>
<center><a href="https://i2.lensdump.com/i/TAbhx1.png"><img src="https://i2.lensdump.com/i/TAbhx1.png" alt="TAbhx1.png" onclick="window.open('https://i2.lensdump.com/i/TAbhx1.png', '_blank')"></a></center>
</details>
<hr>
<details id="▼-versatility">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🎨 Versatility</strong></summary>
<hr>
<h1>
## Unlike most models, Kenshi is known for its versatility, able to perform various styles with remarkable results. I've undergone testing with over 30 to 50 styles and most of the time I get remarkable results. I recommend using Lora and Embedding to improve this even further.
<center><a href="https://i2.lensdump.com/i/TAxjOD.png"><img src="https://i2.lensdump.com/i/TAxjOD.png" alt="TAxjOD.png" onclick="window.open('https://i2.lensdump.com/i/TAxjOD.png', '_blank')"></a></center>
</details>
<hr>
<details id="▼-vae">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🥢 VAE ⚠️</strong></summary>
<hr>
<h1>
## I recommend <a href="https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt" >**kl-f8-anime2.ckpt**</a> VAE from waifu-diffusion-v1-4 <a href="https://huggingface.co/hakurei">which is made by hakurei.</a>
</h1>
<a href="https://i2.lensdump.com/i/RbBe37.png"><img src="https://i2.lensdump.com/i/RbBe37.png" alt="RbBe37.png" onclick="window.open('https://i2.lensdump.com/i/RbBe37.png', '_blank')"></a>
# <h1 style="font-size: 2.5em;"><a href="https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt" >**VAE is important, please download it.**</h1></a>
</details>
<hr>
<details id="▼-sample">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🏔️ Examples Images</strong></summary><hr>
<details id="▼-celestial">
<summary style="font-size: 1.75em; font-family: monospace"><strong>The Celestial ☄️</strong></summary>
<img src="https://i3.lensdump.com/i/RLEz8M.png" alt="1">
<h1>
```c#
1girl, highly detailed face, bleak and dangerous atmosphere, moody, (dynamic pose:1.6), cataclysmic magic, dark blue wavy long hair,
(glowing eyes:0.85), (reaching through a magic circle:1.35), extremely detailed 8k wallpaper, (highly detailed:1.1), [anime:Impasto:0.5],
intricate, fantasy, clear sky, wind, beautiful sky, (nightsky), (galaxy), (huge blood moon in the background:1.05)
```
# **KENSHI 00**
</details>
<hr>
<details id="▼-chatgpt">
<summary style="font-size: 1.75em; font-family: monospace"><strong>ChatGPT Prompt ⚙️</strong></summary>
<img src="https://i.lensdump.com/i/RLkz3v.png" alt="2">
<img src="https://i1.lensdump.com/i/RLkFND.png" alt="3">
<img src="https://i3.lensdump.com/i/RLkulr.png" alt="4">
```c#
(A cursed knight, clad in black armor,) must journey through a desolate,
haunted land to reach the Elden Ring and lift the (curse that plagues their soul.)Along the way,
they encounter other travelers, (each struggling with their own demons and secrets), As they draw closer to the Elden Ring,
they are confronted with visions of their past mistakes, (all tinged with a red hue,)
looking at viewer, highres, superb, 8k wallpaper, extremely detailed, intricate, unreal engine 5, volumetric lighting, realistic, realistic lighting,
cinematic, 4k, cinematic lighting, 8k, depth of field, 3d, perfect, award-winning, hyper-detailed, photorealistic, ultra realistic, realistic light,
hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp focus, hyper detailed, detailed eyes, eyes focus, (illustration:1.1),
highres, (extremely detailed CG unity 8k wallpaper:1.1), (beautiful face:1.15), (cowboy_shot:1.5)
(nixeu_soft:0.7), (nixeu_white:0.7),
```
# **KENSHI 00**
</details>
<hr>
<details id="▼-vivid">
<summary style="font-size: 1.75em; font-family: monospace"><strong>Vivid 🌈</strong></summary>
<img src="https://i.lensdump.com/i/RXY1Fo.png" alt="5">
```c#
close POV, young adult woman, blue purple green color palette, black hair with dark green shine, two symmetrical antennae on head,
big blue eyes sparkling, rings around eyes, two-tone black and red, smiling at the camera, elegant pose, looking at the viewer,
vivid stained glass window background, oil painting, character portrait, drawn in medibang paint, 4k wallpaper, aesthetic, masterpiece,
award-winning photography, macro photography vivid colors, photorealistic, atmospheric, cinematic, moody, rule of thirds, majestic, detailed, perfect anatomy
cowboy shot, contrapposto, looking at viewer, highres, superb, 8k wallpaper, extremely detailed, intricate, unreal engine 5, volumetric lighting,
realistic, realistic lighting, cinematic, 4k, cinematic lighting, 8k, depth of field, 3d, masterpiece, perfect, award-winning, hyper-detailed,
photorealistic, ultra realistic, realistic light, hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp focus, hyper detailed,
detailed eyes, eyes focus, (illustration:1.1), highres, (extremely detailed CG unity 8k wallpaper:1.1), (mid shot1.25), (portrait:1.25), (solo:1.2), 1girl,
(beautiful face:1.15),
(nixeu_soft:0.7), (nixeu_white:0.7),
```
# **KENSHI 01**
</details>
<hr>
<details id="▼-moon">
<summary style="font-size: 1.75em; font-family: monospace"><strong>Moon 🌙</strong></summary>
<img src="https://i2.lensdump.com/i/RXYt7i.png" alt="6">
```c#
(on the moon, space, looking back into earth), white hair, black tank top, volumetric lighting, white jacket, glowing headphone, cyberpunk, futuristic,
multi-color eyes, detailed eyes, hyper detailed,light smile,
highly detailed, beautiful, small details, ultra detailed, best quality, intricate, hyperrealism, sharp, digital illustration, detailed, realism, intricate,
4k, 8k, trending on artstation, good anatomy, beautiful lighting, award-winning, photorealistic, realistic shadows, realistic lighting, beautiful lighting,
raytracing, intricate details, moody, rule of thirds, masterpiece, (illustration:1.1), highres, (extremely detailed CG, unity, 8k wallpaper:1.1), beautiful face,
highly detailed face, ultra realistic, masterpiece, bokeh, extremely detailed, intricate, zoomout,
colorful, vibrant colors, red nail polish, side view,
```
# **KENSHI 01**
</details>
</details>
<hr>
</h1>
<details id="▼-merge">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🍣 Merge Recipes</strong></summary>
<hr>
<h1><strong>
<a href="
https://www.figma.com/file/aESyZAxHxBJjE63gog5ExZ/KENSHI?node-id=0%3A1&t=2ULQWeLUSIWhk1aE-0" class="no-underline" style="font-size: 1.75em;">Here is my Cookbook that you can check out.
<img src="https://i2.lensdump.com/i/RLCJIH.png" alt="COOKBOOK"></strong>
</h1>
</a>
</details>
<hr>
<details id="▼-donation">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>💛 Donate</strong></summary>
<hr>
<h1><strong>
I've been working hard to complete my college education. The thing is, paying for college is no joke, and I've been feeling the pressure of the mounting bills.
I know times are tough for everyone, but if you're able to help in any way, I would be forever grateful.
Consider supporting me on <a href="https://www.patreon.com/thesweetluna">Patreon</a>.
</strong></h1>
</details>
<hr>
<details id="▼-suggestions">
<summary style="font-size: 2.25em; font-family: Segoe UI"><strong>💡 Suggestions</strong></summary>
<hr>
## <h1 style="font-size: 1.75em;">Trigger Words</h1>
<hr>
<h1 style="font-size: 1.5em;">
**Trigger Words are not required** but are meant to **enhance the effectiveness of the prompt** and improve the overall outcome.
```c#
WLOP, Nixeu, Guweiz
```
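For instance, a hypothetical prompt that leads with one of these trigger words (the prompt itself is invented for illustration):
```c#
WLOP, 1girl, portrait, cinematic lighting, sharp focus, (illustration:1.1), masterpiece
```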
</h1>
<hr>
## <h1 style="font-size: 1.75em;">WebUI</h1>
<hr>
<h1 style="font-size: 1.5em;">
Grab <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">AUTOMATIC1111</a>; it's a must-have. It has all the features you want and is easy to use.
<hr>
</h1>
## <h1 style="font-size: 1.75em;">Embeddings</h1>
<hr>
<h1 style="font-size: 1.5em;">
I recommend grabbing ***all*** of <a href="https://huggingface.co/Nerfgun3">Nerfgun3</a>'s embeddings ***and*** SirVeggie's <a href="https://huggingface.co/SirVeggie/nixeu_embeddings">nixeu_embeddings</a>.
</h1>
</details>
<hr>
# License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
```
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
```
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
<hr>
# Disclaimer
```c#
The use of this learning model is entirely at the discretion of the user, and they have the freedom to choose whether or not to create NSFW content.
It is important to note that the model itself does not contain any explicit or inappropriate imagery that can be easily accessed with a single click.
The purpose of sharing this model is not to showcase obscene material in a public forum, but rather to provide a tool for users to utilize as they see fit.
The decision of whether to engage with SFW or NSFW content lies with the user and their own personal preferences.
```
radames/stable-diffusion-2-depth-img2img | 2023-05-16T20:29:18.000Z | ["diffusers", "stable-diffusion", "image-to-image", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "has_space", "diffusers:StableDiffusionDepth2ImgPipeline", "region:us"] | image-to-image | radames | null | null | radames/stable-diffusion-2-depth-img2img | 6 | 1,542 | diffusers | 2023-05-16T20:19:33 |
---
license: openrail++
tags:
- stable-diffusion
- image-to-image
duplicated_from: stabilityai/stable-diffusion-2-depth
pipeline_tag: image-to-image
---
# Stable Diffusion v2 Model Card
This model card focuses on the model associated with the Stable Diffusion v2 model, available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-depth` model is resumed from [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) and finetuned for 200k steps. It adds an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`), which is used as additional conditioning.

- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-depth-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/512-depth-ema.ckpt).
- Use it with 🧨 [`diffusers`](#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
        @InProceedings{Rombach_2022_CVPR,
            author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
            title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
            booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
            month     = {June},
            year      = {2022},
            pages     = {10684-10695}
        }
## Examples
Use 🤗's [Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.
```bash
pip install -U git+https://github.com/huggingface/transformers.git
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (the example below keeps the default DDIM scheduler; the sketch after the code shows how to swap in `EulerDiscreteScheduler`):
```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-depth",
torch_dtype=torch.float16,
).to("cuda")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)
prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
```
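A minimal sketch of the scheduler swap mentioned above, using the standard diffusers pattern (this snippet is an addition, not part of the original example):
```python
from diffusers import EulerDiscreteScheduler

# Swap the default scheduler for EulerDiscreteScheduler, reusing its configuration
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```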
**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed); see the sketch below.
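A minimal sketch of those two memory optimizations (both are standard diffusers calls; the second requires the xformers package to be installed):
```python
# Trade some inference speed for lower VRAM usage
pipe.enable_attention_slicing()

# Use xformers' memory-efficient attention (only if xformers is installed)
pipe.enable_xformers_memory_efficient_attention()
```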
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the worked example after this list)
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.
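A quick worked example of that latent mapping (illustrative, not from the original card): with f = 8, a 512 x 512 x 3 image becomes a 64 x 64 x 4 latent.
```python
# Worked example: latent shape for a 512x512 RGB image with downsampling factor f = 8
H, W, f = 512, 512, 8
latent_shape = (H // f, W // f, 4)
print(latent_shape)  # (64, 64, 4)
```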
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://huggingface.co/runwayml/stable-diffusion-inpainting).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints:

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
    @InProceedings{Rombach_2022_CVPR,
        author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
        title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {10684-10695}
    }
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
WarriorMama777/AbyssOrangeMix2 | 2023-01-30T08:59:01.000Z | ["diffusers", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | WarriorMama777 | null | null | WarriorMama777/AbyssOrangeMix2 | 32 | 1,541 | diffusers | 2023-01-30T08:35:39 |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
## AbyssOrangeMix2
See https://huggingface.co/WarriorMama777/OrangeMixs for more information
VarunRaj/my-pet-dog-jkl | 2023-10-24T10:44:41.000Z | ["diffusers", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space"] | text-to-image | VarunRaj | null | null | VarunRaj/my-pet-dog-jkl | 0 | 1,540 | diffusers | 2023-10-24T10:40:57 |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-jkl Dreambooth model trained by VarunRaj following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: BITS-309
Sample pictures of this concept:
(sample image links truncated)
KoboldAI/fairseq-dense-6.7B-Shinen | 2022-04-13T08:19:31.000Z | ["transformers", "pytorch", "xglm", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us"] | text-generation | KoboldAI | null | null | KoboldAI/fairseq-dense-6.7B-Shinen | 0 | 1,539 | transformers | 2022-04-07T18:30:40 |
---
language: en
license: mit
---
# Fairseq-dense 6.7B - Shinen
## Model Description
Fairseq-dense 6.7B-Shinen is a finetune created using Fairseq's MoE dense model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way (an invented sample follows the format):
```
[Theme: <theme1>, <theme2> ,<theme3>]
<Story goes here>
```
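For instance, a hypothetical sample in that format (both the theme tags and the story text here are invented for illustration):
```
[Theme: romance, vacation]
The summer evening was warm as she finally turned to face me...
```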
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-6.7B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
### Limitations and Biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
### BibTeX entry and citation info
```
Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
```
komfysach/groow-tokens-5 | 2023-10-31T15:43:51.000Z | ["diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space"] | text-to-image | komfysach | null | null | komfysach/groow-tokens-5 | 0 | 1,539 | diffusers | 2023-10-31T15:39:37 |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### groow_tokens_5 Dreambooth model trained by komfysach with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
ml6team/mt5-small-german-query-generation | 2022-04-27T06:24:37.000Z | ["transformers", "pytorch", "mt5", "text2text-generation", "query-generation", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"] | text2text-generation | ml6team | null | null | ml6team/mt5-small-german-query-generation | 1 | 1,538 | transformers | 2022-04-26T13:51:02 |
---
language:
- de
tags:
- pytorch
- query-generation
widget:
- text: "Das Lama (Lama glama) ist eine Art der Kamele. Es ist in den südamerikanischen Anden verbreitet und eine vom Guanako abstammende Haustierform."
example_title: "Article 1"
license: apache-2.0
metrics:
- Rouge-Score
---
# mt5-small-german-query-generation
## Model description:
This model was created to generate possible queries for a German input article.
For this model, we finetuned a multilingual T5 model, [mt5-small](https://huggingface.co/google/mt5-small), on the [MMARCO dataset](https://huggingface.co/datasets/unicamp-dl/mmarco), the machine-translated version of the MS MARCO dataset.
The model was trained for 1 epoch on 200,000 unique queries of the dataset. We trained the model on one K80 GPU for 25,000 iterations with the following parameters:
- learning rate: 1e-3
- train batch size: 8
- max input sequence length: 512
- max target sequence length: 64
## Model Performance:
Model evaluation was done on 2000 evaluation paragraphs of the dataset. Mean [f1 ROUGE scores](https://github.com/pltrdy/rouge) were calculated for the model.
| Rouge-1 | Rouge-2 | Rouge-L |
|---|---|---|
|0.162 | 0.052 | 0.161 |
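The card itself ships no inference snippet; below is a minimal usage sketch with the standard transformers text2text pipeline (only the model id and the example article come from this card; the generation settings are assumptions):
```python
from transformers import pipeline

# Hypothetical usage sketch; generation parameters are illustrative assumptions
generator = pipeline("text2text-generation", model="ml6team/mt5-small-german-query-generation")
article = ("Das Lama (Lama glama) ist eine Art der Kamele. Es ist in den "
           "südamerikanischen Anden verbreitet und eine vom Guanako abstammende Haustierform.")
queries = generator(article, max_length=64, num_return_sequences=3, do_sample=True)
print([q["generated_text"] for q in queries])
```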
Salesforce/xgen-7b-8k-base | 2023-10-24T17:36:54.000Z | ["transformers", "pytorch", "llama", "text-generation", "arxiv:2309.03450", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us"] | text-generation | Salesforce | null | null | Salesforce/xgen-7b-8k-base | 299 | 1,538 | transformers | 2023-06-28T00:57:54 |
---
license: apache-2.0
---
# XGen-7B-8K-Base
Official research release for the family of **XGen** models (`7B`) by Salesforce AI Research:
*Title*: [Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length](https://arxiv.org/abs/2309.03450)
*Authors*: [Erik Nijkamp](https://eriknijkamp.com)\*, Tian Xie\*, [Hiroaki Hayashi](https://hiroakih.me/)\*, [Bo Pang](https://scholar.google.com/citations?user=s9fNEVEAAAAJ&hl=en)\*, Congying Xia\*, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryscinski, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, [Chien-Sheng Wu](https://jasonwu0731.github.io/), Silvio Savarese, [Yingbo Zhou](https://scholar.google.com/citations?user=H_6RQ7oAAAAJ&hl=en), [Shafiq Rayhan Joty](https://raihanjoty.github.io/), [Caiming Xiong](http://cmxiong.com/).
(* indicates equal contribution)
Correspondence to: [Shafiq Rayhan Joty](mailto:sjoty@salesforce.com), [Caiming Xiong](mailto:cxiong@salesforce.com)
## Models
### Base models
* [XGen-7B-4K-Base](https://huggingface.co/Salesforce/xgen-7b-4k-base): XGen-7B model pre-trained under 4K sequence length.
* License: Apache-2.0
* [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base): XGen-7B model pre-trained under 8K sequence length.
* License: Apache-2.0
### Instruction-finetuned models
Supervised finetuned model on public-domain instructional data. Released for ***research purposes*** only.
* [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst)
## How to run
The training data for the models was tokenized with the OpenAI Tiktoken library.
To use this model, install the package via `pip`:
```sh
pip install tiktoken
```
The models can be used as auto-regressive samplers as follows:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/xgen-7b-8k-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/xgen-7b-8k-base", torch_dtype=torch.bfloat16)
inputs = tokenizer("The world is", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Citation
```bibtex
@misc{XGen,
title={Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length},
author={Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryscinski, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Rayhan Joty, Caiming Xiong},
howpublished={ArXiv},
year={2023},
url={https://arxiv.org/abs/2309.03450}
}
```