modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BM-K/KoSimCSE-bert-multitask | 2023-04-26T04:21:02.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"korean",
"ko",
"endpoints_compatible",
"region:us"
] | feature-extraction | BM-K | null | null | BM-K/KoSimCSE-bert-multitask | 6 | 2,404 | transformers | 2022-06-01T14:51:47 | ---
language: ko
tags:
- korean
---
https://github.com/BM-K/Sentence-Embedding-is-all-you-need
# Korean-Sentence-Embedding
🍭 Korean sentence embedding repository. You can download the pre-trained models and run inference right away; the repository also provides environments for training your own models.
## Quick tour
```python
import torch
from transformers import AutoModel, AutoTokenizer
def cal_score(a, b):
    if len(a.shape) == 1: a = a.unsqueeze(0)
    if len(b.shape) == 1: b = b.unsqueeze(0)
    a_norm = a / a.norm(dim=1)[:, None]
    b_norm = b / b.norm(dim=1)[:, None]
    return torch.mm(a_norm, b_norm.transpose(0, 1)) * 100  # cosine similarity scaled to 0-100
model = AutoModel.from_pretrained('BM-K/KoSimCSE-bert-multitask')
tokenizer = AutoTokenizer.from_pretrained('BM-K/KoSimCSE-bert-multitask')
sentences = ['치타가 들판을 가로 질러 먹이를 쫓는다.',  # "A cheetah chases its prey across a field."
             '치타 한 마리가 먹이 뒤에서 달리고 있다.',  # "A cheetah is running behind its prey."
             '원숭이 한 마리가 드럼을 연주한다.']  # "A monkey is playing a drum."
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
embeddings, _ = model(**inputs, return_dict=False)
score01 = cal_score(embeddings[0][0], embeddings[1][0])
score02 = cal_score(embeddings[0][0], embeddings[2][0])
```
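Running the snippet, the first two sentences (both describing a cheetah chasing prey) should yield a much higher score than the cheetah/monkey pair:

```python
# score01 compares the two cheetah sentences; score02 compares cheetah vs. monkey.
# Exact values depend on the checkpoint, but score01 should clearly exceed score02.
print(score01, score02)
```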
## Performance
- Semantic Textual Similarity test set results <br>
| Model | AVG | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| KoSBERT<sup>†</sup><sub>SKT</sub> | 77.40 | 78.81 | 78.47 | 77.68 | 77.78 | 77.71 | 77.83 | 75.75 | 75.22 |
| KoSBERT | 80.39 | 82.13 | 82.25 | 80.67 | 80.75 | 80.69 | 80.78 | 77.96 | 77.90 |
| KoSRoBERTa | 81.64 | 81.20 | 82.20 | 81.79 | 82.34 | 81.59 | 82.20 | 80.62 | 81.25 |
| | | | | | | | | | |
| KoSentenceBART | 77.14 | 79.71 | 78.74 | 78.42 | 78.02 | 78.40 | 78.00 | 74.24 | 72.15 |
| KoSentenceT5 | 77.83 | 80.87 | 79.74 | 80.24 | 79.36 | 80.19 | 79.27 | 72.81 | 70.17 |
| | | | | | | | | | |
| KoSimCSE-BERT<sup>†</sup><sub>SKT</sub> | 81.32 | 82.12 | 82.56 | 81.84 | 81.63 | 81.99 | 81.74 | 79.55 | 79.19 |
| KoSimCSE-BERT | 83.37 | 83.22 | 83.58 | 83.24 | 83.60 | 83.15 | 83.54 | 83.13 | 83.49 |
| KoSimCSE-RoBERTa | 83.65 | 83.60 | 83.77 | 83.54 | 83.76 | 83.55 | 83.77 | 83.55 | 83.64 |
| | | | | | | | | | |
| KoSimCSE-BERT-multitask | 85.71 | 85.29 | 86.02 | 85.63 | 86.01 | 85.57 | 85.97 | 85.26 | 85.93 |
| KoSimCSE-RoBERTa-multitask | 85.77 | 85.08 | 86.12 | 85.84 | 86.12 | 85.83 | 86.12 | 85.03 | 85.99 | | 2,586 | [
[
-0.021148681640625,
-0.04693603515625,
0.03558349609375,
0.02301025390625,
-0.0265045166015625,
0.00499725341796875,
-0.0209808349609375,
0.0012331008911132812,
0.0252685546875,
0.028167724609375,
-0.051025390625,
-0.053802490234375,
-0.04766845703125,
0.005... |
sander-wood/text-to-music | 2023-08-19T06:17:31.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"music",
"en",
"arxiv:2211.11216",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | sander-wood | null | null | sander-wood/text-to-music | 115 | 2,397 | transformers | 2022-11-21T08:43:55 | ---
license: mit
language: en
widget:
- text: This is a traditional Irish dance music.
inference:
parameters:
top_p: 0.9
max_length: 1024
do_sample: true
tags:
- music
---
# Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task
## Model description
This language-music model is based on [BART-base](https://huggingface.co/facebook/bart-base), fine-tuned on 282,870 English text-music pairs, where all scores are represented in ABC notation. It was introduced in the paper [Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task](https://arxiv.org/abs/2211.11216) by Wu et al. and released in [this repository](https://github.com/sander-wood/text-to-music).
It is capable of generating complete and semantically consistent sheet music directly from natural-language descriptions. To the best of our knowledge, this is the first model to achieve text-conditional symbolic music generation trained on real text-music pairs, with the music generated entirely by the model, without any hand-crafted rules.
You can try this language-music model online at [Textune: Generating Tune from Text](https://huggingface.co/spaces/sander-wood/text-to-music): simply enter a text description and receive generated sheet music from the model.
Due to copyright reasons, we are unable to publicly release the training dataset of this model. Instead, we have made available the [WikiMusicText](https://huggingface.co/datasets/sander-wood/wikimusictext) (WikiMT) dataset, which includes 1010 pairs of text-music data and can be used to evaluate the performance of language-music models.
## Intended uses & limitations
You can use this model for text-conditional music generation. All scores generated by this model can be written on one stave (for vocal solo or instrumental solo) in standard classical notation, and are in a variety of styles, e.g., blues, classical, folk, jazz, pop, and world music. We recommend using the script in [this repository](https://github.com/sander-wood/text-to-music) for inference. The generated tunes are in ABC notation, and can be converted to sheet music or audio using [this website](https://abc.rectanglered.com/), or [this software](https://easyabc.sourceforge.net/).
Its creativity is limited: it cannot perform well on tasks requiring a high degree of creativity (e.g., melody style transfer), and it is input-sensitive. For more information, please check [our paper](https://arxiv.org/abs/2211.11216).
### How to use
Here is how to use this model in PyTorch:
```python
import torch
from samplings import top_p_sampling, temperature_sampling
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('sander-wood/text-to-music')
model = AutoModelForSeq2SeqLM.from_pretrained('sander-wood/text-to-music')
model.eval()  # inference mode
max_length = 1024
top_p = 0.9
temperature = 1.0
text = "This is a traditional Irish dance music."
input_ids = tokenizer(text,
                      return_tensors='pt',
                      truncation=True,
                      max_length=max_length)['input_ids']
decoder_start_token_id = model.config.decoder_start_token_id
eos_token_id = model.config.eos_token_id
decoder_input_ids = torch.tensor([[decoder_start_token_id]])
for t_idx in range(max_length):
    outputs = model(input_ids=input_ids,
                    decoder_input_ids=decoder_input_ids)
    probs = outputs.logits[0][-1]
    probs = torch.softmax(probs, dim=-1).detach().numpy()
    sampled_id = temperature_sampling(probs=top_p_sampling(probs,
                                                           top_p=top_p,
                                                           return_probs=True),
                                      temperature=temperature)
    decoder_input_ids = torch.cat((decoder_input_ids, torch.tensor([[sampled_id]])), 1)
    if sampled_id == eos_token_id:
        # prepend the ABC reference-number header and decode the generated tune
        tune = "X:1\n"
        tune += tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
        print(tune)
        break
```
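If you prefer to convert the generated ABC programmatically rather than with the website or software linked above, the `music21` library can parse ABC notation and export MIDI. A minimal sketch, assuming `music21` is installed (`pip install music21`) and `tune` holds the ABC string produced by the loop above:

```python
from music21 import converter

# parse the generated ABC tune and write it out as a MIDI file
score = converter.parse(tune, format='abc')
score.write('midi', fp='generated_tune.mid')
```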
### Generation Examples
Here are some examples generated by this model without cherry-picking.
```
######################## INPUT TEXT ########################
This is a traditional Irish dance music.
Note Length-1/8
Meter-6/8
Key-D
####################### OUTPUT TUNES #######################
X:1
L:1/8
M:6/8
K:D
A | BEE BEE | Bdf edB | BAF FEF | DFA BAF | BEE BEE | Bdf edB | BAF DAF | FED E2 :: A |
Bef gfe | faf edB | BAF FEF | DFA BAF | Bef gfe | faf edB | BAF DAF | FED E2 :|
X:2
L:1/8
M:6/8
K:D
A |: DED F2 A | d2 f ecA | G2 B F2 A | E2 F GFE | DED F2 A | d2 f ecA | Bgf edc |1 d3 d2 A :|2
d3 d2 a || a2 f d2 e | f2 g agf | g2 e c2 d | e2 f gfe | fed gfe | agf bag | fed cde | d3 d2 a |
agf fed | Adf agf | gfe ecA | Ace gfe | fed gfe | agf bag | fed cde | d3 d2 ||
X:3
L:1/8
M:6/8
K:D
BEE BEE | Bdf edB | BAF FEF | DFA dBA | BEE BEE | Bdf edB | BAF FEF |1 DED DFA :|2 DED D2 e |:
faf edB | BAF DFA | BAF FEF | DFA dBA | faf edB | BAF DFA | BdB AFA |1 DED D2 e :|2 DED DFA ||
```
```
######################## INPUT TEXT ########################
This is a jazz-swing lead sheet with chord and vocal.
####################### OUTPUT TUNES #######################
X:1
L:1/8
M:4/4
K:F
"F" CFG |"F" A6 z G |"Fm7" A3 G"Bb7" A3 G |"F" A6 z G |"F7" A4"Eb7" G4 |"F" F6 z F |
"Dm" A3 G"Dm/C" A3 G |"Bb" A2"Gm" B2"C7" G3 G |"F" F8- |"Dm7""G7" F6 z2 |"C" C4 C3 C |
"C7" C2 B,2"F" C4 |"F" C4 C3 C |"Dm" D2 C2"Dm/C" D4 |"Bb" D4 D3 D |"Bb" D2 C2"C7" D4 |"F" C8- |
"F" C4"Gm" z C"C7" FG |"F" A6 z G |"Fm7" A3 G"Bb7" A3 G |"F" A6 z G |"F7" A4"Eb7" G4 |"F" F6 z F |
"Dm" A3 G"Dm/C" A3 G |"Bb" A2"Gm" B2"C7" G3 G |"F" F8- |"F" F6 z2 |]
X:2
L:1/4
M:4/4
K:F
"^A""F" A3 A |"Am7" A2"D7" A2 |"Gm7" G2"C7" G A |"F" F4 |"F" A3 A |"Am7" A2"D7" A2 |"Gm7" G2"C7" G A |
"F" F4 |"Gm" B3 B |"Am7" B2"D7" B2 |"Gm" B2"D7" B A |"Gm7" G4 |"F" A3 A |"Am7" A2"D7" A2 |
"Gm7" G2"C7" G A |"F" F4 |"Bb7" F3 G |"F" A2 A2 |"Gm" B2"C7" B2 |"F" c2"D7" c c |"Gm7" c2"C7" B2 |
"F" A2"F7" A2 |"Bb" B2"F" B A |"Bb" B2"F" B A |"Gm" B2"F" B A |"Gm7" B2"F" B A |"Gm7" B2"F" B A |
"C7" B2 c2 |"F""Bb7" A4 |"F""Bb7" z4 |]
X:3
L:1/4
M:4/4
K:Bb
B, ||"Gm""^A1" G,2 B, D |"D7" ^F A2 G/=F/ |"Gm" G2"Cm7" B c |"F7" A2 G =F |"Bb" D2 F A |
"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z B, |"Gm""^B" G,2 B, D |"D7" ^F A2 G/=F/ |
"Gm" G2"Cm7" B c |"F7" A2 G =F |"Bb" D2 F A |"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z2 ||
"^C""F7""^A2" F4- | F E D C |"Bb" D2 F B | d3 c/B/ |"F" A2"Cm7" G2 |"D7" ^F2 G2 |"Gm" B3"C7" A |
"F7" G4 ||"F7""^A3" F4- | F E D C |"Bb" D2 F B | d3 c/B/ |"F" A2"Cm7" G2 |"D7" ^F2 G2 |"Gm" B3 A |
"C7" G4 ||"^B""Gm""^C" B2 c B |"Cm" c B c B |"Gm7" c2 B A |"C7" B3 A |"Bb" B2 c B |"G7" d c B A |
"Cm" G2 A G |"F7" F2 z G ||"^C""F7" F F3 |"Bb" D D3 |"Cm" E E3 |"D7" ^F F3 |"Gm" G2 A B |"C7" d3 d |
"Gm" d3 d |"D7" d3 B, ||"^D""Gm" G,2 B, D |"D7" ^F A2 G/=F/ |"Gm" G2"Cm7" B c |"F7" A2 G =F |
"Bb" D2 F A |"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z2 |]
```
```
######################## INPUT TEXT ########################
This is a Chinese folk song from the Jiangnan region. It was created during the Qianlong era (1735-1796) of the Qing dynasty. Over time, many regional variations were created, and the song gained popularity both in China and abroad. One version of the song describes a custom of giving jasmine flowers, popular in the southern Yangtze delta region of China.
####################### OUTPUT TUNES #######################
X:1
L:1/8
Q:1/4=100
M:2/4
K:C
"^Slow" DA A2 | GA c2- | c2 G2 | c2 GF | GA/G/ F2 | E2 DC | DA A2 | GA c2- | c2 GA | cd- d2 |
cA c2- | c2 GA | cd- d2 | cA c2- | c2 GA | c2 A2 | c2 d2 | cA c2- | c2 c2 | A2 G2 | F2 AG | F2 ED |
CA,/C/ D2- | D2 CD | F2 A2 | G2 ED | CG A2 | G2 FD | CA,/C/ D2- | D2 CD | F2 A2 | G2 ED |
CG A2 | G2 FD | CA,/C/ D2- | D2 z2 :|
X:2
L:1/8
Q:1/4=100
M:2/4
K:C
"^ MDolce" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | EG ed | c2 AG | cA cd |
A2 AG | E2 ED | CD E2- | E2 z2 |"^ howeveroda" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- |
E2 z2 | A2 cA | GA E2- | E2 z2 | GA cd | e2 ed | cd e2- | e2 z2 | ge d2 | cd c2- | c2 z2 |
Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | EG ed | c2 AG | cA cd | A2 AG | E2 ED |
CD E2- | E2 z2 |"^DDtisata" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | A2 cA |
GA E2- | E2 z2 | GA cd | e2 ed | cd e2- | e2 z2 | ge d2 | cd c2- | c2 z2 | Ac de | d2 AG |
cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |
Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |"^ Easy" Ac de | d2 AG | cA cd |
A2 AG | E2 ED | CD E2- | E2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |]
X:3
L:1/8
Q:1/4=60
M:4/4
K:C
"^S books defe.." AA A2 cdcc | AcAG A4- | A8 | A,4 CD C2 | A,4 cdcA | A2 GA- A4- | A2 GA A2 AA |
AG E2 D2 C2 | D6 ED | C2 D4 C2 | D2 C2 D4 | C2 A,2 CD C2 | A,4 cdcA | A2 GA- A4- | A2 GA A2 AA |
AG E2 D2 C2 | D6 z2 |]
```
### BibTeX entry and citation info
```bibtex
@inproceedings{
wu2023exploring,
title={Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task},
author={Shangda Wu and Maosong Sun},
booktitle={The AAAI-23 Workshop on Creative AI Across Modalities},
year={2023},
url={https://openreview.net/forum?id=QmWXskBhesn}
}
``` | 9,470 | [
[
-0.0308074951171875,
-0.025787353515625,
0.0154876708984375,
0.02569580078125,
-0.01056671142578125,
0.00510406494140625,
0.003681182861328125,
-0.017242431640625,
0.040313720703125,
0.0291290283203125,
-0.054656982421875,
-0.054351806640625,
-0.034820556640625,... |
cvssp/audioldm2 | 2023-08-29T14:41:37.000Z | [
"diffusers",
"arxiv:2308.05734",
"license:cc-by-nc-nd-4.0",
"has_space",
"diffusers:AudioLDM2Pipeline",
"region:us"
] | null | cvssp | null | null | cvssp/audioldm2 | 10 | 2,394 | diffusers | 2023-08-21T11:00:28 | ---
license: cc-by-nc-nd-4.0
---
# AudioLDM 2
AudioLDM 2 is a latent text-to-audio diffusion model capable of generating realistic audio samples given any text input.
It is available in the 🧨 Diffusers library from v0.21.0 onwards.
# Model Details
AudioLDM 2 was proposed in the paper [AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining](https://arxiv.org/abs/2308.05734) by Haohe Liu et al.
AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects,
human speech and music.
# Checkpoint Details
This is the original, **base** version of the AudioLDM 2 model, also referred to as **audioldm2-full**.
There are three official AudioLDM 2 checkpoints. Two of these checkpoints are applicable to the general task of text-to-audio
generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same
model size for the text encoders and VAE. They differ in the size and depth of the UNet. See table below for details on
the three official checkpoints:
| Checkpoint | Task | UNet Model Size | Total Model Size | Training Data / h |
|-----------------------------------------------------------------|---------------|-----------------|------------------|-------------------|
| [audioldm2](https://huggingface.co/cvssp/audioldm2) | Text-to-audio | 350M | 1.1B | 1150k |
| [audioldm2-large](https://huggingface.co/cvssp/audioldm2-large) | Text-to-audio | 750M | 1.5B | 1150k |
| [audioldm2-music](https://huggingface.co/cvssp/audioldm2-music) | Text-to-music | 350M | 1.1B | 665k |
## Model Sources
- [**Original Repository**](https://github.com/haoheliu/audioldm2)
- [**🧨 Diffusers Pipeline**](https://huggingface.co/docs/diffusers/api/pipelines/audioldm2)
- [**Paper**](https://arxiv.org/abs/2308.05734)
- [**Demo**](https://huggingface.co/spaces/haoheliu/audioldm2-text2audio-text2music)
# Usage
First, install the required packages:
```
pip install --upgrade diffusers transformers accelerate
```
## Text-to-Audio
For text-to-audio generation, the [AudioLDM2Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/audioldm2) can be
used to load pre-trained weights and generate text-conditional audio outputs:
```python
from diffusers import AudioLDM2Pipeline
import torch
repo_id = "cvssp/audioldm2"
pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "The sound of a hammer hitting a wooden surface"
audio = pipe(prompt, num_inference_steps=200, audio_length_in_s=10.0).audios[0]
```
The resulting audio output can be saved as a .wav file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(audio, rate=16000)
```
## Tips
Prompts:
* Descriptive prompt inputs work best: you can use adjectives to describe the sound (e.g. "high quality" or "clear") and make the prompt context specific (e.g., "water stream in a forest" instead of "stream").
* It's best to use general terms like 'cat' or 'dog' instead of specific names or abstract objects that the model may not be familiar with.
Inference:
* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument: higher steps give higher quality audio at the expense of slower inference.
* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument.
When evaluating generated waveforms:
* The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation.
* Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly.
The following example demonstrates how to construct a good audio generation using the aforementioned tips:
```python
import scipy
import torch
from diffusers import AudioLDM2Pipeline
# load the pipeline
repo_id = "cvssp/audioldm2"
pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# define the prompts
prompt = "The sound of a hammer hitting a wooden surface"
negative_prompt = "Low quality."
# set the seed
generator = torch.Generator("cuda").manual_seed(0)
# run the generation
audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_length_in_s=10.0,
    num_waveforms_per_prompt=3,
).audios
# save the best audio sample (index 0) as a .wav file
scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0])
```
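Since the candidates returned with `num_waveforms_per_prompt` are already ranked best-first by the automatic scoring described above, you can also keep all of them. A small extension of the example, with illustrative file names:

```python
# save every ranked candidate; index 0 is the top-scored waveform
for i, waveform in enumerate(audio):
    scipy.io.wavfile.write(f"candidate_rank_{i}.wav", rate=16000, data=waveform)
```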
# Citation
**BibTeX:**
```
@article{liu2023audioldm2,
title={AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining},
author={Haohe Liu and Qiao Tian and Yi Yuan and Xubo Liu and Xinhao Mei and Qiuqiang Kong and Yuping Wang and Wenwu Wang and Yuxuan Wang and Mark D. Plumbley},
journal={arXiv preprint arXiv:2308.05734},
year={2023}
}
```
| 5,370 | [
[
-0.035064697265625,
-0.06500244140625,
0.038543701171875,
0.01751708984375,
-0.004520416259765625,
-0.0069732666015625,
-0.0148773193359375,
-0.0283203125,
-0.005126953125,
0.032196044921875,
-0.0570068359375,
-0.047393798828125,
-0.04034423828125,
-0.006488... |
vinvino02/glpn-kitti | 2022-04-14T11:52:40.000Z | [
"transformers",
"pytorch",
"glpn",
"depth-estimation",
"vision",
"arxiv:2201.07436",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | depth-estimation | vinvino02 | null | null | vinvino02/glpn-kitti | 4 | 2,391 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- vision
- depth-estimation
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# GLPN fine-tuned on KITTI
Global-Local Path Networks (GLPN) model trained on KITTI for monocular depth estimation. It was introduced in the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Kim et al. and first released in [this repository](https://github.com/vinvino02/GLPDepth).
Disclaimer: The team releasing GLPN did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
GLPN uses SegFormer as backbone and adds a lightweight head on top for depth estimation.

## Intended uses & limitations
You can use the raw model for monocular depth estimation. See the [model hub](https://huggingface.co/models?search=glpn) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = GLPNFeatureExtractor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")
# prepare image for the model
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/glpn).
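Alternatively, recent versions of Transformers expose a `depth-estimation` pipeline that bundles the pre- and post-processing shown above. A minimal sketch, assuming a Transformers version in which GLPN is supported by this pipeline:

```python
from transformers import pipeline

# the pipeline handles preprocessing, inference and resizing internally
depth_estimator = pipeline("depth-estimation", model="vinvino02/glpn-kitti")
result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
result["depth"].save("depth.png")  # "depth" is a PIL image; "predicted_depth" holds the raw tensor
```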
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-07436,
author = {Doyeon Kim and
Woonghyun Ga and
Pyunghwan Ahn and
Donggyu Joo and
Sehwan Chun and
Junmo Kim},
title = {Global-Local Path Networks for Monocular Depth Estimation with Vertical
CutDepth},
journal = {CoRR},
volume = {abs/2201.07436},
year = {2022},
url = {https://arxiv.org/abs/2201.07436},
eprinttype = {arXiv},
eprint = {2201.07436},
timestamp = {Fri, 21 Jan 2022 13:57:15 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-07436.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 3,244 | [
[
-0.047149658203125,
-0.047210693359375,
0.0149078369140625,
0.0181121826171875,
-0.0323486328125,
-0.01800537109375,
0.0022449493408203125,
-0.058197021484375,
0.03167724609375,
0.033538818359375,
-0.061737060546875,
-0.033111572265625,
-0.035125732421875,
-... |
sensenova/piccolo-base-zh | 2023-09-08T05:38:47.000Z | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | sensenova | null | null | sensenova/piccolo-base-zh | 22 | 2,391 | transformers | 2023-09-04T07:04:26 | ---
tags:
- mteb
model-index:
- name: piccolo-base-zh
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 49.16558217326158
- type: cos_sim_spearman
value: 51.4049475858823
- type: euclidean_pearson
value: 49.85853741070363
- type: euclidean_spearman
value: 51.501428092542234
- type: manhattan_pearson
value: 49.746099634926296
- type: manhattan_spearman
value: 51.41081804320127
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 52.385361699031854
- type: cos_sim_spearman
value: 52.59114913702212
- type: euclidean_pearson
value: 54.994530439418355
- type: euclidean_spearman
value: 52.54102886188004
- type: manhattan_pearson
value: 54.9503071669608
- type: manhattan_spearman
value: 52.51465652540901
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.236
- type: f1
value: 39.43040092463147
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 60.98952187211432
- type: cos_sim_spearman
value: 62.68189713123115
- type: euclidean_pearson
value: 61.089426749761344
- type: euclidean_spearman
value: 62.41743375544581
- type: manhattan_pearson
value: 61.14747216341409
- type: manhattan_spearman
value: 62.488918956547046
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 38.36392300667918
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 35.645927581489175
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 85.25085782849087
- type: mrr
value: 87.77154761904762
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 86.15357754080844
- type: mrr
value: 88.53547619047617
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.683
- type: map_at_10
value: 35.522999999999996
- type: map_at_100
value: 37.456
- type: map_at_1000
value: 37.576
- type: map_at_3
value: 31.584
- type: map_at_5
value: 33.684999999999995
- type: mrr_at_1
value: 36.459
- type: mrr_at_10
value: 44.534
- type: mrr_at_100
value: 45.6
- type: mrr_at_1000
value: 45.647
- type: mrr_at_3
value: 42.186
- type: mrr_at_5
value: 43.482
- type: ndcg_at_1
value: 36.459
- type: ndcg_at_10
value: 42.025
- type: ndcg_at_100
value: 49.754
- type: ndcg_at_1000
value: 51.815999999999995
- type: ndcg_at_3
value: 37.056
- type: ndcg_at_5
value: 38.962
- type: precision_at_1
value: 36.459
- type: precision_at_10
value: 9.485000000000001
- type: precision_at_100
value: 1.567
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.13
- type: precision_at_5
value: 15.209
- type: recall_at_1
value: 23.683
- type: recall_at_10
value: 52.190999999999995
- type: recall_at_100
value: 84.491
- type: recall_at_1000
value: 98.19600000000001
- type: recall_at_3
value: 37.09
- type: recall_at_5
value: 43.262
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 74.20324714371618
- type: cos_sim_ap
value: 82.32631646194994
- type: cos_sim_f1
value: 76.64052827073876
- type: cos_sim_precision
value: 68.58725761772854
- type: cos_sim_recall
value: 86.83656768763151
- type: dot_accuracy
value: 70.33072760072159
- type: dot_ap
value: 77.46972172609794
- type: dot_f1
value: 73.6668924804026
- type: dot_precision
value: 62.84676354029062
- type: dot_recall
value: 88.98760813654431
- type: euclidean_accuracy
value: 74.78051713770296
- type: euclidean_ap
value: 82.65778389584023
- type: euclidean_f1
value: 77.1843623157445
- type: euclidean_precision
value: 71.05211406096362
- type: euclidean_recall
value: 84.47509936871639
- type: manhattan_accuracy
value: 74.76849067949489
- type: manhattan_ap
value: 82.55694030572194
- type: manhattan_f1
value: 77.1776459569154
- type: manhattan_precision
value: 69.5423855963991
- type: manhattan_recall
value: 86.69628244096329
- type: max_accuracy
value: 74.78051713770296
- type: max_ap
value: 82.65778389584023
- type: max_f1
value: 77.1843623157445
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 72.99799999999999
- type: map_at_10
value: 81.271
- type: map_at_100
value: 81.53399999999999
- type: map_at_1000
value: 81.535
- type: map_at_3
value: 80.049
- type: map_at_5
value: 80.793
- type: mrr_at_1
value: 73.13
- type: mrr_at_10
value: 81.193
- type: mrr_at_100
value: 81.463
- type: mrr_at_1000
value: 81.464
- type: mrr_at_3
value: 80.067
- type: mrr_at_5
value: 80.741
- type: ndcg_at_1
value: 73.34
- type: ndcg_at_10
value: 84.503
- type: ndcg_at_100
value: 85.643
- type: ndcg_at_1000
value: 85.693
- type: ndcg_at_3
value: 82.135
- type: ndcg_at_5
value: 83.401
- type: precision_at_1
value: 73.34
- type: precision_at_10
value: 9.536
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.54
- type: precision_at_5
value: 18.398
- type: recall_at_1
value: 72.99799999999999
- type: recall_at_10
value: 94.31
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 99.789
- type: recall_at_3
value: 87.935
- type: recall_at_5
value: 90.991
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.537
- type: map_at_10
value: 81.292
- type: map_at_100
value: 84.031
- type: map_at_1000
value: 84.066
- type: map_at_3
value: 56.571000000000005
- type: map_at_5
value: 71.082
- type: mrr_at_1
value: 91.2
- type: mrr_at_10
value: 93.893
- type: mrr_at_100
value: 93.955
- type: mrr_at_1000
value: 93.95700000000001
- type: mrr_at_3
value: 93.61699999999999
- type: mrr_at_5
value: 93.767
- type: ndcg_at_1
value: 91.2
- type: ndcg_at_10
value: 88.255
- type: ndcg_at_100
value: 90.813
- type: ndcg_at_1000
value: 91.144
- type: ndcg_at_3
value: 87.435
- type: ndcg_at_5
value: 85.961
- type: precision_at_1
value: 91.2
- type: precision_at_10
value: 42.14
- type: precision_at_100
value: 4.817
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 78.467
- type: precision_at_5
value: 65.75999999999999
- type: recall_at_1
value: 26.537
- type: recall_at_10
value: 89.262
- type: recall_at_100
value: 97.783
- type: recall_at_1000
value: 99.49799999999999
- type: recall_at_3
value: 58.573
- type: recall_at_5
value: 75.154
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 48.5
- type: map_at_10
value: 57.898
- type: map_at_100
value: 58.599000000000004
- type: map_at_1000
value: 58.616
- type: map_at_3
value: 55.1
- type: map_at_5
value: 56.80500000000001
- type: mrr_at_1
value: 48.5
- type: mrr_at_10
value: 57.898
- type: mrr_at_100
value: 58.599000000000004
- type: mrr_at_1000
value: 58.616
- type: mrr_at_3
value: 55.1
- type: mrr_at_5
value: 56.80500000000001
- type: ndcg_at_1
value: 48.5
- type: ndcg_at_10
value: 62.876
- type: ndcg_at_100
value: 66.00200000000001
- type: ndcg_at_1000
value: 66.467
- type: ndcg_at_3
value: 57.162
- type: ndcg_at_5
value: 60.263999999999996
- type: precision_at_1
value: 48.5
- type: precision_at_10
value: 7.870000000000001
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 21.032999999999998
- type: precision_at_5
value: 14.14
- type: recall_at_1
value: 48.5
- type: recall_at_10
value: 78.7
- type: recall_at_100
value: 92.7
- type: recall_at_1000
value: 96.39999999999999
- type: recall_at_3
value: 63.1
- type: recall_at_5
value: 70.7
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 44.34782608695652
- type: f1
value: 36.401426200836205
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.25891181988743
- type: ap
value: 50.54636280166089
- type: f1
value: 78.55080202541332
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.02878561337955
- type: cos_sim_spearman
value: 75.39509553139982
- type: euclidean_pearson
value: 73.92598696939956
- type: euclidean_spearman
value: 75.5471147196853
- type: manhattan_pearson
value: 73.88049486090739
- type: manhattan_spearman
value: 75.51361990583285
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 64.739
- type: map_at_10
value: 74.039
- type: map_at_100
value: 74.38
- type: map_at_1000
value: 74.39099999999999
- type: map_at_3
value: 72.074
- type: map_at_5
value: 73.29299999999999
- type: mrr_at_1
value: 66.92
- type: mrr_at_10
value: 74.636
- type: mrr_at_100
value: 74.94
- type: mrr_at_1000
value: 74.95
- type: mrr_at_3
value: 72.911
- type: mrr_at_5
value: 73.981
- type: ndcg_at_1
value: 66.92
- type: ndcg_at_10
value: 77.924
- type: ndcg_at_100
value: 79.471
- type: ndcg_at_1000
value: 79.73400000000001
- type: ndcg_at_3
value: 74.17200000000001
- type: ndcg_at_5
value: 76.236
- type: precision_at_1
value: 66.92
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 27.989000000000004
- type: precision_at_5
value: 17.874000000000002
- type: recall_at_1
value: 64.739
- type: recall_at_10
value: 89.324
- type: recall_at_100
value: 96.342
- type: recall_at_1000
value: 98.38900000000001
- type: recall_at_3
value: 79.378
- type: recall_at_5
value: 84.28099999999999
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.97108271687962
- type: f1
value: 66.8625981386677
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.32212508406187
- type: f1
value: 73.33875034670166
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 49.0
- type: map_at_10
value: 55.022999999999996
- type: map_at_100
value: 55.550999999999995
- type: map_at_1000
value: 55.608000000000004
- type: map_at_3
value: 53.417
- type: map_at_5
value: 54.372
- type: mrr_at_1
value: 49.3
- type: mrr_at_10
value: 55.176
- type: mrr_at_100
value: 55.703
- type: mrr_at_1000
value: 55.76
- type: mrr_at_3
value: 53.567
- type: mrr_at_5
value: 54.522000000000006
- type: ndcg_at_1
value: 49.0
- type: ndcg_at_10
value: 58.089999999999996
- type: ndcg_at_100
value: 60.988
- type: ndcg_at_1000
value: 62.580999999999996
- type: ndcg_at_3
value: 54.803000000000004
- type: ndcg_at_5
value: 56.508
- type: precision_at_1
value: 49.0
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 0.8210000000000001
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.6
- type: precision_at_5
value: 12.58
- type: recall_at_1
value: 49.0
- type: recall_at_10
value: 67.80000000000001
- type: recall_at_100
value: 82.1
- type: recall_at_1000
value: 94.8
- type: recall_at_3
value: 58.8
- type: recall_at_5
value: 62.9
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 28.87237408060796
- type: mrr
value: 27.83015873015873
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 70.25
- type: f1
value: 70.29055400149645
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 65.56578234975636
- type: cos_sim_ap
value: 70.89354058570412
- type: cos_sim_f1
value: 71.21024370095002
- type: cos_sim_precision
value: 58.48032564450475
- type: cos_sim_recall
value: 91.02428722280888
- type: dot_accuracy
value: 64.86193827828912
- type: dot_ap
value: 70.17697803463875
- type: dot_f1
value: 70.68676716917922
- type: dot_precision
value: 58.57043719639139
- type: dot_recall
value: 89.1235480464625
- type: euclidean_accuracy
value: 64.86193827828912
- type: euclidean_ap
value: 70.26847152773904
- type: euclidean_f1
value: 70.9984152139461
- type: euclidean_precision
value: 56.81674064679771
- type: euclidean_recall
value: 94.61457233368532
- type: manhattan_accuracy
value: 65.40335679480238
- type: manhattan_ap
value: 70.22941558736018
- type: manhattan_f1
value: 71.09712937475423
- type: manhattan_precision
value: 56.64160401002506
- type: manhattan_recall
value: 95.45934530095037
- type: max_accuracy
value: 65.56578234975636
- type: max_ap
value: 70.89354058570412
- type: max_f1
value: 71.21024370095002
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.92999999999999
- type: ap
value: 87.16059195012956
- type: f1
value: 89.90917477839415
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 27.74161502387672
- type: cos_sim_spearman
value: 31.58353529723325
- type: euclidean_pearson
value: 32.43729673844635
- type: euclidean_spearman
value: 31.59527486602242
- type: manhattan_pearson
value: 32.37467059678786
- type: manhattan_spearman
value: 31.44408004951894
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 36.233749845501194
- type: cos_sim_spearman
value: 36.47808586229587
- type: euclidean_pearson
value: 32.663447466546806
- type: euclidean_spearman
value: 34.45830454037139
- type: manhattan_pearson
value: 32.80239212096335
- type: manhattan_spearman
value: 34.581060433895125
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 63.05131937664673
- type: cos_sim_spearman
value: 66.51353746725948
- type: euclidean_pearson
value: 61.24016998745561
- type: euclidean_spearman
value: 66.07115266049276
- type: manhattan_pearson
value: 64.55660243659054
- type: manhattan_spearman
value: 66.80282149562386
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.45533692882996
- type: cos_sim_spearman
value: 70.6045637565602
- type: euclidean_pearson
value: 72.75588977483554
- type: euclidean_spearman
value: 73.36630581886473
- type: manhattan_pearson
value: 72.72517409326954
- type: manhattan_spearman
value: 73.35358940437355
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.45779474032288
- type: mrr
value: 76.0782192023729
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.458
- type: map_at_10
value: 74.355
- type: map_at_100
value: 78.158
- type: map_at_1000
value: 78.233
- type: map_at_3
value: 52.2
- type: map_at_5
value: 64.14
- type: mrr_at_1
value: 88.37
- type: mrr_at_10
value: 91.117
- type: mrr_at_100
value: 91.231
- type: mrr_at_1000
value: 91.23599999999999
- type: mrr_at_3
value: 90.645
- type: mrr_at_5
value: 90.948
- type: ndcg_at_1
value: 88.37
- type: ndcg_at_10
value: 82.384
- type: ndcg_at_100
value: 86.431
- type: ndcg_at_1000
value: 87.163
- type: ndcg_at_3
value: 83.993
- type: ndcg_at_5
value: 82.411
- type: precision_at_1
value: 88.37
- type: precision_at_10
value: 41.131
- type: precision_at_100
value: 4.9799999999999995
- type: precision_at_1000
value: 0.515
- type: precision_at_3
value: 73.651
- type: precision_at_5
value: 61.634
- type: recall_at_1
value: 26.458
- type: recall_at_10
value: 81.3
- type: recall_at_100
value: 94.342
- type: recall_at_1000
value: 98.103
- type: recall_at_3
value: 54.020999999999994
- type: recall_at_5
value: 67.781
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 46.814
- type: f1
value: 45.580027683507666
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 61.43613064816144
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 53.01838461793776
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 59.3
- type: map_at_10
value: 69.158
- type: map_at_100
value: 69.60300000000001
- type: map_at_1000
value: 69.611
- type: map_at_3
value: 67.467
- type: map_at_5
value: 68.432
- type: mrr_at_1
value: 59.199999999999996
- type: mrr_at_10
value: 69.108
- type: mrr_at_100
value: 69.553
- type: mrr_at_1000
value: 69.56099999999999
- type: mrr_at_3
value: 67.417
- type: mrr_at_5
value: 68.382
- type: ndcg_at_1
value: 59.3
- type: ndcg_at_10
value: 73.54
- type: ndcg_at_100
value: 75.652
- type: ndcg_at_1000
value: 75.868
- type: ndcg_at_3
value: 70.074
- type: ndcg_at_5
value: 71.808
- type: precision_at_1
value: 59.3
- type: precision_at_10
value: 8.709999999999999
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 25.867
- type: precision_at_5
value: 16.36
- type: recall_at_1
value: 59.3
- type: recall_at_10
value: 87.1
- type: recall_at_100
value: 96.89999999999999
- type: recall_at_1000
value: 98.6
- type: recall_at_3
value: 77.60000000000001
- type: recall_at_5
value: 81.8
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.69999999999999
- type: ap
value: 66.65020528563207
- type: f1
value: 83.00542769081453
---
## piccolo-base-zh
piccolo is a general-purpose Chinese text embedding model, developed by the General Model Group at SenseTime Research. Inspired by E5 and GTE, piccolo is trained with a two-stage pipeline. In the first stage, we collected and crawled 400 million weakly supervised Chinese text pairs from the Internet and trained the model with a pairwise (text, text_pos) softmax contrastive loss. In the second stage, we collected 20 million human-labeled Chinese text pairs and fine-tuned the model with a triplet (text, text_pos, text_neg) contrastive loss that includes hard negative samples.
Currently we offer two model sizes: piccolo-base-zh and piccolo-large-zh.
## Metric
We compared piccolo with other open-source embedding models on the C-MTEB benchmark; please refer to the C-MTEB leaderboard. We provide scripts for reproducing the results in the `eval` folder.
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**piccolo-large-zh**] | 0.65 | 1024 | 512 | **64.11** | 67.03 | 47.04 | 78.38 | 65.98 | 70.93 | 58.02 |
| [bge-large-zh]| 1.3 | 1024| 512 | 63.96 | 68.32 | 48.39 | 78.94 | 65.11 | 71.52 | 54.98 |
| [**piccolo-base-zh**]| 0.2 | 768 | 512 | **63.66** | 66.98 | 47.12 | 76.61 | 66.68 | 71.2 | 55.9 |
| [bge-large-zh-no-instruct]| 1.3 | 1024 | 512 | 63.4 | 68.58 | 50.01 | 76.77 | 64.9 | 70.54 | 53 |
| [bge-base-zh]| 0.41 | 768 | 512 | 62.8 | 67.07 | 47.64 | 77.5 | 64.91 | 69.53 | 54.12 |
## Usage
piccolo can be called easily from the sentence-transformers package:
```python
# for s2s (sentence-to-sentence) datasets, you can use piccolo as below
from sentence_transformers import SentenceTransformer
sentences_1 = ["数据1", "数据2"]  # "sample 1", "sample 2"
sentences_2 = ["数据3", "数据4"]  # "sample 3", "sample 4"
model = SentenceTransformer('sensenova/piccolo-base-zh')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)

# for s2p (sentence-to-passage) datasets, we recommend adding an instruction
# prefix to help the model retrieve better
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["doc_1", "doc_2"]
model = SentenceTransformer('sensenova/piccolo-base-zh')
q_embeddings = model.encode(["查询: " + q for q in queries], normalize_embeddings=True)   # "查询: " means "query: "
p_embeddings = model.encode(["结果: " + p for p in passages], normalize_embeddings=True)  # "结果: " means "result: "
scores = q_embeddings @ p_embeddings.T
```
## Training Detail
### pretrain
Pretraining usually does not require a very large max length; 128 is recommended. A small max length increases the batch size and speeds up training, which helps the model adapt to large-scale data.
For the pretraining loss we use a pairwise softmax contrastive loss with in-batch negatives only, without adding hard negatives. In practice, we trained on 32 A100 (40 GB) GPUs with a per-device batch size of 1024.
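For concreteness, a minimal sketch of the in-batch softmax contrastive loss described above, assuming L2-normalized embeddings and an illustrative temperature (the exact value used in training is not stated here):

```python
import torch
import torch.nn.functional as F

def inbatch_contrastive_loss(q_emb, p_emb, temperature=0.05):
    # q_emb, p_emb: (batch, dim), L2-normalized; row i of p_emb is the positive for row i of q_emb
    logits = q_emb @ p_emb.T / temperature  # (batch, batch) similarity matrix
    labels = torch.arange(q_emb.size(0), device=q_emb.device)
    # every off-diagonal passage in the batch serves as an in-batch negative
    return F.cross_entropy(logits, labels)
```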
### finetune
For fine-tuning, we usually extend the max length to 512 to accommodate longer text inputs. During fine-tuning we oversample S2P data to strengthen the model's performance on retrieval tasks.
The fine-tuning loss is a triplet contrastive loss with hard negatives; the number of negatives is usually set to 2-7, and the loss computation follows the improved contrastive loss in GTE.
Note: we set different max lengths for queries and passages, and the query max length is always kept at 64.
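A sketch of the triplet-style loss with mined hard negatives, assuming each query comes with one positive and `neg_num` negatives (shapes and the temperature are illustrative; GTE's improved variant adds further terms not shown here):

```python
import torch
import torch.nn.functional as F

def hard_negative_contrastive_loss(q_emb, pos_emb, neg_emb, temperature=0.05):
    # q_emb: (B, d), pos_emb: (B, d), neg_emb: (B, neg_num, d), all L2-normalized
    pos_logits = (q_emb * pos_emb).sum(-1, keepdim=True)     # (B, 1)
    neg_logits = torch.einsum('bd,bnd->bn', q_emb, neg_emb)  # (B, neg_num)
    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature
    labels = torch.zeros(q_emb.size(0), dtype=torch.long, device=q_emb.device)  # positive sits at index 0
    return F.cross_entropy(logits, labels)
```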
### Others
Some useful tricks:
1. Reducing GPU memory usage: fp16 + gradient checkpointing + ZeRO stage 1 (stage 2 does not support gradient checkpointing under the dual-tower structure); see the related issue: https://github.com/microsoft/DeepSpeed/issues/988
2. Dataset sampler: we adopt M3E's dataset sampler to ensure that the samples in each batch come from a single dataset, which makes in-batch negatives more valuable.
3. Instructions: in our experiments, instructions brought a large performance gain on retrieval tasks. We prepend '查询: ' ("query: ") and '结果: ' ("result: ") to each training sample.
## Reference
Here we list the embedding projects and papers we referenced:
1. [M3E](https://github.com/wangyuxinwhy/uniem). A great Chinese open-source embedding project that collects and organizes a large number of high-quality Chinese datasets; uniem is also a good framework.
2. [Text2vec](https://github.com/shibing624/text2vec). Another great Chinese open-source embedding project.
3. [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). Zhiyuan AI's open-source embedding models; they collected and organized the C-MTEB benchmark, filling the gap in systematic evaluation of Chinese embeddings.
4. [E5](https://github.com/microsoft/unilm/tree/master/e5). From Microsoft, with very detailed ablation experiments and data processing and filtering details.
5. [GTE](https://huggingface.co/thenlper/gte-base). An embedding paper from Alibaba DAMO Academy.
## License
Piccolo uses the MIT License. It can be used for commercial purposes free of charge.
## Acknowledgement
piccolo is powered by the General Model Group from SenseTime Research.
[Jinkin](https://huggingface.co/Jinkin) completed the code implementation and model training.
[Jinkin](https://huggingface.co/Jinkin) and [CCCCxxx](https://huggingface.co/CCCCxxx) together completed the data collection, processing and model evaluation.
The project is led by [Gaomengya](https://huggingface.co/gaomengya) and [chaorenwu111](https://huggingface.co/chaorenwu111).
We also thank [lux0933](https://huggingface.co/lux0933) and [yangkai001](https://huggingface.co/yangkai001) for the discussions, which provided many useful suggestions.
[
-0.033935546875,
-0.048797607421875,
0.0152587890625,
0.0227508544921875,
-0.0289459228515625,
-0.027374267578125,
-0.0285797119140625,
-0.0271453857421875,
0.02203369140625,
0.010284423828125,
-0.0241851806640625,
-0.0445556640625,
-0.042572021484375,
-0.01... |
Maseshi/Animistatics | 2023-03-08T03:02:51.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Maseshi | null | null | Maseshi/Animistatics | 19 | 2,388 | diffusers | 2023-01-18T17:02:31 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

# Animistatics
Welcome to Animistatics - a latent diffusion model for weebs. This model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags for image generation.
e.g. **_girl, cafe, plants, coffee, lighting, steam, blue eyes, brown hair_**
## Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Animistatics:
[](https://huggingface.co/spaces/Maseshi/Animistatics)
## Google Colab
We support a [Google Colab](https://colab.research.google.com/github/Maseshi/Animistatics/blob/main/animistatics.ipynb) notebook to run Animistatics:
[](https://colab.research.google.com/github/Maseshi/Animistatics/blob/main/animistatics.ipynb)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch
repo_id = "Maseshi/Animistatics"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "girl, cafe, plants, coffee, lighting, steam, blue eyes, brown hair"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("girl.png")
```
## Examples
Below are some examples of images generated using this model:
**Anime Girl:**

```
girl, cafe, plants, coffee, lighting, steam, blue eyes, brown hair
Steps: 50, Sampler: DDIM, CFG scale: 12
```
**Anime Boy:**

```
boy, blonde hair, blue eyes, colorful, cumulonimbus clouds, lighting, medium hair, plants, city, hoodie, cool
Steps: 50, Sampler: DDIM, CFG scale: 12
```
**City:**

```
cityscape, concept art, sun shining through clouds, crepuscular rays, trending on art station, 8k
Steps: 50, Sampler: DDIM, CFG scale: 12
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| 3,838 | [
[
-0.0367431640625,
-0.049072265625,
0.037994384765625,
0.03057861328125,
-0.01287841796875,
-0.01020050048828125,
0.0157318115234375,
-0.01424407958984375,
0.01678466796875,
0.032806396484375,
-0.052337646484375,
-0.049468994140625,
-0.043365478515625,
-0.012... |
stablediffusionapi/abyssorangemix2nsfw | 2023-05-19T05:02:25.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/abyssorangemix2nsfw | 2 | 2,385 | diffusers | 2023-03-01T05:04:00 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# AbyssOrangeMix2NSFW API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below, and change **model_id** to "abyssorangemix2nsfw".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/abyssorangemix2nsfw)
Credits: [View credits](https://civitai.com/?query=AbyssOrangeMix2NSFW)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "abyssorangemix2nsfw",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | 2,448 | [
[
-0.035675048828125,
-0.050140380859375,
0.036163330078125,
0.0307769775390625,
-0.033050537109375,
-0.0025463104248046875,
0.0257415771484375,
-0.040252685546875,
0.03436279296875,
0.0509033203125,
-0.06402587890625,
-0.058349609375,
-0.02593994140625,
-0.00... |
goofyai/cyborg_style_xl | 2023-08-14T16:55:19.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:apache-2.0",
"has_space",
"region:us"
] | text-to-image | goofyai | null | null | goofyai/cyborg_style_xl | 10 | 2,384 | diffusers | 2023-08-14T16:17:39 | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: cyborg style, cyborg, android
widget:
- text: a llama cyborg style
---
# Cyborg Style SDXL
## Support me in upgrading my 3060 to a 40xx GPU as my current GPU struggles with SDXL training [Buymeacoffee](https://www.buymeacoffee.com/goofy02)
|  |  |
|:----------------------:|:----------------:|
|  |  |
### Tips:
- Prompt with `cyborg style`, `cyborg` or `android` activation prompts
- Lora weight of 0.7-1 works great
- Highres fix is highly recommended. | 688 | [
[
-0.058349609375,
-0.0239105224609375,
0.052825927734375,
0.017791748046875,
-0.04571533203125,
0.015350341796875,
0.01406097412109375,
-0.0196990966796875,
0.0218353271484375,
0.02459716796875,
-0.044403076171875,
-0.02490234375,
-0.035369873046875,
-0.00395... |
livingbox/model-text-oct-12 | 2023-10-12T08:05:16.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | livingbox | null | null | livingbox/model-text-oct-12 | 0 | 2,379 | diffusers | 2023-10-12T07:59:00 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### model-text-oct-12 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 508 | [
[
-0.026763916015625,
-0.07037353515625,
0.036834716796875,
0.03363037109375,
-0.0261688232421875,
0.035614013671875,
0.0269622802734375,
-0.031524658203125,
0.050445556640625,
0.00811004638671875,
-0.02587890625,
-0.0222320556640625,
-0.0276031494140625,
-0.0... |
stabilityai/japanese-stablelm-instruct-gamma-7b | 2023-10-30T06:37:21.000Z | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"japanese-stablelm",
"causal-lm",
"ja",
"arxiv:2310.06825",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | stabilityai | null | null | stabilityai/japanese-stablelm-instruct-gamma-7b | 33 | 2,379 | transformers | 2023-10-16T08:55:06 | ---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
license: apache-2.0
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese Stable LM Instruct Gamma 7B
## Model Description
This is a 7B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).
*If you are in search of a smaller model, please check [Japanese StableLM-3B-4E1T Instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base/blob/main/README.md).*
## Usage
Ensure you are using Transformers 4.34.0 or newer.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-instruct-gamma-7b")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/japanese-stablelm-instruct-gamma-7b",
torch_dtype="auto",
)
model.eval()
if torch.cuda.is_available():
model = model.to("cuda")
def build_prompt(user_query, inputs="", sep="\n\n### "):
    sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
    p = sys_msg
    roles = ["指示", "応答"]
    msgs = [": \n" + user_query, ": \n"]
    if inputs:
        roles.insert(1, "入力")
        msgs.insert(1, ": \n" + inputs)
    for role, msg in zip(roles, msgs):
        p += sep + role + msg
    return p
# Infer with prompt without any additional input
user_inputs = {
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
    "inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=256,
temperature=1,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese Stable LM Instruct Gamma 7B` model is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
### Model Architecture
For details, please see Mistral AI's [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
### Training Datasets
- [Japanese translation of the Databricks Dolly-15k dataset](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Japanese translation of the subset of the Anthropic HH dataset](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [Wikinews](https://ja.wikinews.org/wi) [subset](https://huggingface.co/datasets/fujiki/llm-japanese-dataset_wikinews) of the [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model-generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Credits
The fine-tuning was carried out by [Fujiki Nakamura](https://huggingface.co/fujiki).
Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), [Naoki Orii](https://huggingface.co/mrorii), and [Takuya Akiba](https://huggingface.co/iwiwi).
## Acknowledgements
This model is based on Mistral-7B-v0.1 released by the Mistral AI team. We are grateful to the Mistral AI team for providing such an excellent base model.
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members includes Hyunwoong Ko (Project Lead), Fujiki Nakamura (originally started this project when he commited to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
| 5,342 | [
[
-0.0281829833984375,
-0.06396484375,
0.0005116462707519531,
0.01593017578125,
-0.0211334228515625,
-0.02386474609375,
-0.0204925537109375,
-0.029052734375,
0.00490570068359375,
0.0262908935546875,
-0.036041259765625,
-0.04632568359375,
-0.0423583984375,
0.00... |
IDEA-CCNL/Randeng-BART-139M-SUMMARY | 2023-05-26T04:12:47.000Z | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | IDEA-CCNL | null | null | IDEA-CCNL/Randeng-BART-139M-SUMMARY | 9 | 2,376 | transformers | 2022-04-26T12:24:42 | ---
language:
- zh
license: apache-2.0
inference: true
widget:
- text: 'summary: 在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中，中国选手谷爱凌夺得银牌。祝贺谷爱凌！今天上午，自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行，取选手最佳成绩排名决出奖牌。第一跳，中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后，谷爱凌又扮了个鬼脸，甚是可爱。第二轮中，谷爱凌在道具区第三个障碍处失误，落地时摔倒。获得16.98分。网友：摔倒了也没关系，继续加油！在第二跳失误摔倒的情况下，谷爱凌顶住压力，第三跳稳稳发挥，流畅落地！获得86.23分！此轮比赛，共12位选手参赛，谷爱凌第10位出场。网友：看比赛时我比谷爱凌紧张，加油！'
---
# Randeng-BART-139M-SUMMARY
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
善于处理文本摘要任务，在一个中文文本摘要数据集上微调后的，中文版的BART-base。
Good at solving text summarization tasks, after fine-tuning on a Chinese text summarization dataset, Chinese BART-base.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | BART | 139M | 中文-文本摘要任务 Chinese-Summary |
## 模型信息 Model Information
基于[Randeng-BART-139M](https://huggingface.co/IDEA-CCNL/Randeng-BART-139M)，我们在收集的1个中文领域的文本摘要数据集（LCSTS）上微调了它，得到了summary版本。
Based on [Randeng-BART-139M](https://huggingface.co/IDEA-CCNL/Randeng-BART-139M), we fine-tuned a text summarization version (summary) on a Chinese text summarization dataset (LCSTS).
## 使用 Usage
```python
from transformers import BartForConditionalGeneration, AutoTokenizer, Text2TextGenerationPipeline
import torch
tokenizer=AutoTokenizer.from_pretrained('IDEA-CCNL/Randeng-BART-139M-SUMMARY')
model=BartForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-BART-139M-SUMMARY')
text = 'summary:在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中，中国选手谷爱凌夺得银牌。祝贺谷爱凌！今天上午，自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行，取选手最佳成绩排名决出奖牌。第一跳，中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后，谷爱凌又扮了个鬼脸，甚是可爱。第二轮中，谷爱凌在道具区第三个障碍处失误，落地时摔倒。获得16.98分。网友：摔倒了也没关系，继续加油！在第二跳失误摔倒的情况下，谷爱凌顶住压力，第三跳稳稳发挥，流畅落地！获得86.23分！此轮比赛，共12位选手参赛，谷爱凌第10位出场。网友：看比赛时我比谷爱凌紧张，加油！'
text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
print(text2text_generator(text, max_length=50, do_sample=False))
```
## 引用 Citation
如果您在您的工作中使用了我们的模型，可以引用我们的[论文](https://arxiv.org/abs/2209.02970)：
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| 3,167 | [
[
-0.02838134765625,
-0.048431396484375,
0.018341064453125,
0.0307159423828125,
-0.0401611328125,
-0.02081298828125,
-0.03265380859375,
-0.0303192138671875,
0.023406982421875,
0.02178955078125,
-0.034027099609375,
-0.03900146484375,
-0.035552978515625,
0.00638... |
hfl/chinese-alpaca-2-7b-16k | 2023-08-31T15:42:19.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | text-generation | hfl | null | null | hfl/chinese-alpaca-2-7b-16k | 13 | 2,376 | transformers | 2023-08-31T13:47:27 | ---
license: apache-2.0
---
# Chinese-Alpaca-2-7B-16K
**This is the full Chinese-Alpaca-2-7B-16K model (context size 16K), which can be loaded directly for inference and full-parameter training.**
**Related models 👇**
* Long context base models (16K)
* [Chinese-LLaMA-2-7B-16K (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-7b-16k)
* [Chinese-LLaMA-2-LoRA-7B-16K (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-7b-16k)
* [Chinese-LLaMA-2-13B-16K (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-13b-16k)
* [Chinese-LLaMA-2-LoRA-13B-16K (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-13b-16k)
* Long context Instruction/Chat models
* [Chinese-Alpaca-2-7B-16K (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b-16k)
* [Chinese-Alpaca-2-LoRA-7B-16K (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-7b-16k)
* [Chinese-Alpaca-2-13B-16K (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b-16k)
* [Chinese-Alpaca-2-LoRA-13B-16K (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-13b-16k)
* Base models
* [Chinese-LLaMA-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-7b)
* [Chinese-LLaMA-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-7b)
* [Chinese-LLaMA-2-13B (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-13b)
* [Chinese-LLaMA-2-LoRA-13B (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-13b)
* Instruction/Chat models
* [Chinese-Alpaca-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b)
* [Chinese-Alpaca-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-7b)
* [Chinese-Alpaca-2-13B (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b)
* [Chinese-Alpaca-2-LoRA-13B (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-13b)
# Description of Chinese-LLaMA-Alpaca-2
This project is based on the Llama-2, released by Meta, and it is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (foundation model) and Alpaca-2 (instruction-following model). These models have been expanded and optimized with Chinese vocabulary beyond the original Llama-2. We used large-scale Chinese data for incremental pre-training, which further improved the fundamental semantic understanding of the Chinese language, resulting in a significant performance improvement compared to the first-generation models. The relevant models support a 4K context and can be expanded up to 18K+ using the NTK method.
The main contents of this project include:
* 🚀 New extended Chinese vocabulary beyond Llama-2, open-sourcing the Chinese LLaMA-2 and Alpaca-2 LLMs.
* 🚀 Open-sourced the pre-training and instruction finetuning (SFT) scripts for further tuning on user's data
* 🚀 Quickly deploy and experience the quantized LLMs on CPU/GPU of personal PC
* 🚀 Support for LLaMA ecosystems like 🤗transformers, llama.cpp, text-generation-webui, LangChain, vLLM etc.
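As a quick illustration of the 🤗transformers route, here is a minimal loading sketch (the prompt string and generation settings are placeholder choices, and `device_map="auto"` requires the `accelerate` package; see the project's GitHub docs for the chat prompt template and the quantized/llama.cpp workflows):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-alpaca-2-7b-16k")
model = AutoModelForCausalLM.from_pretrained(
    "hfl/chinese-alpaca-2-7b-16k",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
# placeholder prompt; see the GitHub docs for the chat template
inputs = tokenizer("请介绍一下自由式滑雪。", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```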
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for details. | 3,257 | [
[
-0.034271240234375,
-0.04559326171875,
0.01471710205078125,
0.0535888671875,
-0.04803466796875,
-0.01366424560546875,
0.00630950927734375,
-0.06842041015625,
0.036529541015625,
0.026458740234375,
-0.04217529296875,
-0.04296875,
-0.044097900390625,
0.00645828... |
sail-rvc/Beyonce2333333 | 2023-07-14T07:19:07.000Z | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | sail-rvc | null | null | sail-rvc/Beyonce2333333 | 0 | 2,375 | transformers | 2023-07-14T07:18:51 |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Beyonce2333333
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:19:07
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
| 382 | [
[
-0.0301055908203125,
-0.0219879150390625,
0.0230560302734375,
0.01148223876953125,
-0.0312347412109375,
0.00847625732421875,
0.0257110595703125,
0.00017154216766357422,
0.02667236328125,
0.0653076171875,
-0.060394287109375,
-0.038360595703125,
-0.034759521484375... |
MMG/xlm-roberta-large-ner-spanish | 2023-06-05T08:18:20.000Z | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"es",
"dataset:CoNLL-2002",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | MMG | null | null | MMG/xlm-roberta-large-ner-spanish | 17 | 2,373 | transformers | 2022-03-02T23:29:04 | ---
language:
- es
datasets:
- CoNLL-2002
widget:
- text: "Las oficinas de MMG están en Las Rozas."
---
# xlm-roberta-large-ner-spanish
This is an XLM-RoBERTa-large model fine-tuned for Named Entity Recognition (NER) on the Spanish portion of the CoNLL-2002 dataset. Evaluated on the test subset of this dataset, it reaches an F1-score of 89.17, making it one of the best NER models for Spanish available at the moment. | 422 | [
[
-0.0413818359375,
-0.060272216796875,
0.037872314453125,
0.03369140625,
-0.0174407958984375,
-0.00255584716796875,
-0.0210113525390625,
-0.048797607421875,
0.022125244140625,
0.038238525390625,
-0.04168701171875,
-0.04046630859375,
-0.05908203125,
0.03887939... |
kykim/albert-kor-base | 2021-01-22T00:27:49.000Z | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | kykim | null | null | kykim/albert-kor-base | 3 | 2,373 | transformers | 2022-03-02T23:29:05 | ---
language: ko
---
# Albert base model for Korean
* Trained on a 70GB Korean text dataset with a 42,000 lower-cased subword vocabulary
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, AlbertModel
tokenizer_albert = BertTokenizerFast.from_pretrained("kykim/albert-kor-base")
model_albert = AlbertModel.from_pretrained("kykim/albert-kor-base")
``` | 456 | [
[
-0.0153350830078125,
-0.0262298583984375,
0.0183563232421875,
0.0286407470703125,
-0.036041259765625,
0.0005393028259277344,
-0.024078369140625,
0.004245758056640625,
-0.00701904296875,
0.042724609375,
-0.036102294921875,
-0.048675537109375,
-0.04595947265625,
... |
Salesforce/blip2-flan-t5-xl-coco | 2023-09-13T08:46:43.000Z | [
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"arxiv:2210.11416",
"license:mit",
"region:us"
] | image-to-text | Salesforce | null | null | Salesforce/blip2-flan-t5-xl-coco | 10 | 2,373 | transformers | 2023-02-07T15:37:02 | ---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
---
# BLIP-2, Flan T5-xl, fine-tuned on COCO
BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
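As a minimal sketch of that direct use with 🤗 Transformers (`Blip2Processor` and `Blip2ForConditionalGeneration` are the library's BLIP-2 classes; the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl-coco", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.jpg")  # placeholder path to a local image
# no text prompt -> plain captioning; pass text="Question: ... Answer:" for VQA
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```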
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they're being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). | 3,240 | [
[
-0.021514892578125,
-0.052276611328125,
-0.0021572113037109375,
0.045196533203125,
-0.0184478759765625,
-0.0030460357666015625,
-0.0198822021484375,
-0.07293701171875,
-0.00572967529296875,
0.046539306640625,
-0.025970458984375,
-0.015411376953125,
-0.0492858886... |
aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 | 2022-03-30T01:41:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | aaraki | null | null | aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 | 7 | 2,371 | transformers | 2022-03-30T00:18:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- Accuracy: 0.9788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4291 | 1.0 | 390 | 0.2564 | 0.9788 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
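A minimal inference sketch, not part of the auto-generated card (the image path is a placeholder; the pipeline returns scores over the ten CIFAR-10 labels):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="aaraki/vit-base-patch16-224-in21k-finetuned-cifar10",
)
print(classifier("example.jpg"))  # placeholder path to a local image
```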
| 1,755 | [
[
-0.0369873046875,
-0.042236328125,
0.0028476715087890625,
0.02099609375,
-0.0251007080078125,
-0.0289306640625,
-0.0149383544921875,
-0.01555633544921875,
0.01255035400390625,
0.0208892822265625,
-0.046142578125,
-0.036651611328125,
-0.045562744140625,
-0.01... |
sentence-transformers/multi-qa-MiniLM-L6-dot-v1 | 2023-11-02T09:20:05.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"endpoints_compatible",
"region:us"
] | sentence-similarity | sentence-transformers | null | null | sentence-transformers/multi-qa-MiniLM-L6-dot-v1 | 10 | 2,370 | sentence-transformers | 2022-03-02T23:29:05 | ---
language:
- en
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa-MiniLM-L6-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-MiniLM-L6-dot-v1')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take output from first token
def cls_pooling(model_output):
return model_output.last_hidden_state[:,0]
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = cls_pooling(model_output)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-dot-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-dot-v1")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following are some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | No |
| Pooling-Method | CLS pooling |
| Suitable score functions | dot-product (e.g. `util.dot_score`) |
----
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as assistance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: it encodes queries / questions and text paragraphs in a dense vector space. It finds relevant documents for the given passages.
Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was trained only on input text of up to 250 word pieces; it might not work well for longer text.
## Training procedure
The full training script is accessible in this current repository: `train_script.py`.
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We use the concatenation from multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using CLS-pooling, dot-product as similarity function, and a scale of 1.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** | | 8,439 | [
[
-0.032501220703125,
-0.0628662109375,
0.0311431884765625,
0.005069732666015625,
-0.007091522216796875,
-0.0223388671875,
-0.0070648193359375,
-0.01537322998046875,
0.0207672119140625,
0.0213165283203125,
-0.041229248046875,
-0.041839599609375,
-0.042999267578125... |
dream-textures/texture-diffusion | 2023-01-27T13:15:28.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:openrail++",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | dream-textures | null | null | dream-textures/texture-diffusion | 106 | 2,370 | diffusers | 2022-12-25T21:29:56 | ---
license: openrail++
tags:
- text-to-image
- stable-diffusion
widget:
- text: "pbr brick wall"
example_title: "Brick Wall"
- text: "pbr cobblestone path"
example_title: "Cobblestone Path"
---
# Texture Diffusion
This DreamBooth model is fine-tuned for diffuse textures. It produces flat textures with very little visible lighting/shadows.
## Samples
Here are a few example images (generated with 50 steps).
| pbr uneven stone wall | pbr dirt with weeds | pbr bright white marble |
| --- | --- | --- |
|  |  |  |
## Usage
Use the token `pbr` in your prompts to invoke the style.
This model was made for use in [Dream Textures](https://github.com/carson-katri/dream-textures), a Stable Diffusion add-on for Blender.
You can also use it with [๐งจ diffusers](https://github.com/huggingface/diffusers):
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "dream-textures/texture-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "pbr brick wall"
image = pipe(prompt).images[0]
image.save("bricks.png")
```
## Training Details
* Base Model: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base)
* Resolution: `512`
* Prior Loss Weight: `1.0`
* Class Prompt: `texture`
* Batch Size: `1`
* Learning Rate: `1e-6`
* Precision: `fp16`
* Steps: `4000`
* GPU: Tesla T4
### Dataset
This model was trained on 278 CC0 textures from [PolyHaven](https://polyhaven.com/). | 1,670 | [
[
-0.0103759765625,
-0.061859130859375,
0.041290283203125,
0.05389404296875,
-0.0240936279296875,
0.0214996337890625,
0.01019287109375,
-0.02593994140625,
0.020965576171875,
0.04150390625,
-0.03338623046875,
-0.06365966796875,
-0.0228118896484375,
-0.032165527... |
timm/eva02_large_patch14_clip_336.merged2b_s6b_b61k | 2023-04-10T22:06:27.000Z | [
"open_clip",
"zero-shot-image-classification",
"clip",
"license:mit",
"region:us",
"has_space"
] | zero-shot-image-classification | timm | null | null | timm/eva02_large_patch14_clip_336.merged2b_s6b_b61k | 0 | 2,367 | open_clip | 2023-04-10T22:01:27 | ---
tags:
- zero-shot-image-classification
- clip
library_tag: open_clip
license: mit
---
# Model card for eva02_large_patch14_clip_336.merged2b_s6b_b61k
| 154 | [
[
-0.03997802734375,
-0.01013946533203125,
0.0170745849609375,
0.03741455078125,
-0.042022705078125,
0.0159912109375,
0.0325927734375,
0.0008015632629394531,
0.0528564453125,
0.06573486328125,
-0.051361083984375,
-0.00977325439453125,
-0.034027099609375,
0.001... |
vinai/bartpho-word-base | 2022-10-22T09:05:55.000Z | [
"transformers",
"pytorch",
"mbart",
"feature-extraction",
"arxiv:2109.09701",
"endpoints_compatible",
"region:us",
"has_space"
] | feature-extraction | vinai | null | null | vinai/bartpho-word-base | 2 | 2,363 | transformers | 2022-08-26T09:06:01 | # <a name="introduction"></a> BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese
The pre-trained model `vinai/bartpho-word-base` is the "base" variant of `BARTpho-word`, which uses the "base" architecture and pre-training scheme of the sequence-to-sequence denoising model [BART](https://github.com/pytorch/fairseq/tree/main/examples/bart). The general architecture and experimental results of BARTpho can be found in our [paper](https://arxiv.org/abs/2109.09701):
```
@article{bartpho,
  title   = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}},
  author  = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen},
  journal = {arXiv preprint},
  volume  = {arXiv:2109.09701},
  year    = {2021}
}
```
**Please CITE** our paper when BARTpho is used to help produce published results or incorporated into other software.
For further information or requests, please go to [BARTpho's homepage](https://github.com/VinAIResearch/BARTpho)!
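A minimal feature-extraction sketch with 🤗 Transformers, for illustration (note that `bartpho-word` models expect word-segmented Vietnamese input, e.g. produced by VnCoreNLP; the example sentence assumes such segmentation):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-word-base")
bartpho = AutoModel.from_pretrained("vinai/bartpho-word-base")

line = "Chúng_tôi là những nghiên_cứu_viên ."  # already word-segmented
input_ids = tokenizer(line, return_tensors="pt")
with torch.no_grad():
    features = bartpho(**input_ids)  # features.last_hidden_state holds the encodings
```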
| 975 | [
[
-0.0159454345703125,
-0.051483154296875,
0.0206146240234375,
0.003116607666015625,
-0.053863525390625,
-0.00736236572265625,
-0.00531768798828125,
-0.016204833984375,
0.0096435546875,
0.06597900390625,
-0.034759521484375,
-0.06207275390625,
-0.044097900390625,
... |
tawkit/phil-pyannote-speaker-diarization-endpoint | 2022-11-13T00:53:04.000Z | [
"pyannote-audio",
"pyannote",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"dataset:ami",
"dataset:dihard",
"dataset:voxconve... | automatic-speech-recognition | tawkit | null | null | tawkit/phil-pyannote-speaker-diarization-endpoint | 6 | 2,360 | pyannote-audio | 2022-11-13T00:52:41 | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- automatic-speech-recognition
datasets:
- ami
- dihard
- voxconverse
- aishell
- repere
- voxceleb
license: mit
---
# 🎹 Speaker diarization
Relies on pyannote.audio 2.0: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation).
## TL;DR
```python
# load the pipeline from the Hugging Face Hub
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization@2022.07")
# apply the pipeline to an audio file
diarization = pipeline("audio.wav")
# dump the diarization output to disk using RTTM format
with open("audio.rttm", "w") as rttm:
diarization.write_rttm(rttm)
```
## Advanced usage
In case the number of speakers is known in advance, one can use the `num_speakers` option:
```python
diarization = pipeline("audio.wav", num_speakers=2)
```
One can also provide lower and/or upper bounds on the number of speakers using `min_speakers` and `max_speakers` options:
```python
diarization = pipeline("audio.wav", min_speakers=2, max_speakers=5)
```
If you feel adventurous, you can try and play with the various pipeline hyper-parameters.
For instance, one can use a more aggressive voice activity detection by increasing the value of `segmentation_onset` threshold:
```python
hparams = pipeline.parameters(instantiated=True)
hparams["segmentation_onset"] += 0.1
pipeline.instantiate(hparams)
```
## Benchmark
### Real-time factor
Real-time factor is around 5% using one Nvidia Tesla V100 SXM2 GPU (for the neural inference part) and one Intel Cascade Lake 6248 CPU (for the clustering part).
In other words, it takes approximately 3 minutes to process a one-hour conversation.
### Accuracy
This pipeline is benchmarked on a growing collection of datasets.
Processing is fully automatic:
* no manual voice activity detection (as is sometimes the case in the literature)
* no manual number of speakers (though it is possible to provide it to the pipeline)
* no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset
... with the least forgiving diarization error rate (DER) setup (named *"Full"* in [this paper](https://doi.org/10.1016/j.csl.2021.101254)):
* no forgiveness collar
* evaluation of overlapped speech
| Benchmark | [DER%](. "Diarization error rate") | [FA%](. "False alarm rate") | [Miss%](. "Missed detection rate") | [Conf%](. "Speaker confusion rate") | Expected output | File-level evaluation |
| ---------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------- | ---------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------ |
| [AISHELL-4](http://www.openslr.org/111/) | 14.61 | 3.31 | 4.35 | 6.95 | [RTTM](reproducible_research/AISHELL.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/AISHELL.SpeakerDiarization.Full.test.eval) |
| [AMI *Mix-Headset*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 18.21 | 3.28 | 11.07 | 3.87 | [RTTM](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.eval) |
| [AMI *Array1-01*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 29.00 | 2.71 | 21.61 | 4.68 | [RTTM](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.eval) |
| [CALLHOME](https://catalog.ldc.upenn.edu/LDC2001S97) [*Part2*](https://github.com/BUTSpeechFIT/CALLHOME_sublists/issues/1) | 30.24 | 3.71 | 16.86 | 9.66 | [RTTM](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.rttm) | [eval](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.eval) |
| [DIHARD 3 *Full*](https://arxiv.org/abs/2012.01477) | 20.99 | 4.25 | 10.74 | 6.00 | [RTTM](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.eval) |
| [REPERE *Phase 2*](https://islrn.org/resources/360-758-359-485-0/) | 12.62 | 1.55 | 3.30 | 7.76 | [RTTM](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.eval) |
| [VoxConverse *v0.0.2*](https://github.com/joonson/voxconverse) | 12.76 | 3.45 | 3.85 | 5.46 | [RTTM](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.rttm) | [eval](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.eval) |
## Support
For commercial enquiries and scientific consulting, please contact [me](mailto:herve@niderb.fr).
For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check [pyannote.audio](https://github.com/pyannote/pyannote-audio) Github repository.
## Citations
```bibtex
@inproceedings{Bredin2021,
Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
Booktitle = {Proc. Interspeech 2021},
Address = {Brno, Czech Republic},
Month = {August},
Year = {2021},
}
```
```bibtex
@inproceedings{Bredin2020,
Title = {{pyannote.audio: neural building blocks for speaker diarization}},
Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
Address = {Barcelona, Spain},
Month = {May},
Year = {2020},
}
```
| 7,833 | [
[
-0.0562744140625,
-0.05731201171875,
0.01042938232421875,
0.0264892578125,
-0.00981903076171875,
-0.0016012191772460938,
-0.036651611328125,
-0.022308349609375,
0.034332275390625,
0.025360107421875,
-0.03277587890625,
-0.053131103515625,
-0.0323486328125,
0.... |
stablediffusionapi/sdxlceshi | 2023-10-09T18:26:54.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/sdxlceshi | 1 | 2,360 | diffusers | 2023-10-09T18:25:01 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# sdxlceshi API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "sdxlceshi".
Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/sdxlceshi)
Model link: [View model](https://stablediffusionapi.com/models/sdxlceshi)
Credits: [View credits](https://civitai.com/?query=sdxlceshi)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "sdxlceshi",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | 2,430 | [
[
-0.032501220703125,
-0.0484619140625,
0.043365478515625,
0.016754150390625,
-0.043975830078125,
0.007755279541015625,
0.0240478515625,
-0.032958984375,
0.04638671875,
0.04541015625,
-0.0653076171875,
-0.0654296875,
-0.029296875,
-0.00921630859375,
-0.019... |
etalab-ia/camembert-base-squadFR-fquad-piaf | 2023-03-27T17:24:27.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"question-answering",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | etalab-ia | null | null | etalab-ia/camembert-base-squadFR-fquad-piaf | 24 | 2,359 | transformers | 2022-03-02T23:29:05 | ---
language: fr
datasets:
- piaf
- FQuAD
- SQuAD-FR
widget:
- text: "Comment s'appelle le portail open data du gouvernement ?"
context: "Etalab est une administration publique française qui fait notamment office de Chief Data Officer de l'État et coordonne la conception et la mise en œuvre de sa stratégie dans le domaine de la donnée (ouverture et partage des données publiques ou open data, exploitation des données et intelligence artificielle...). Ainsi, Etalab développe et maintient le portail des données ouvertes du gouvernement français data.gouv.fr.
Etalab promeut également une plus grande ouverture de l'administration sur la société (gouvernement ouvert) : transparence de l'action publique, innovation ouverte, participation citoyenne... elle promeut l'innovation, l'expérimentation, les méthodes de travail ouvertes, agiles et itératives, ainsi que les synergies avec la société civile pour décloisonner l'administration et favoriser l'adoption des meilleures pratiques professionnelles dans le domaine du numérique. À ce titre elle étudie notamment l'opportunité de recourir à des technologies en voie de maturation issues du monde de la recherche.
Cette entité chargée de l'innovation au sein de l'administration doit contribuer à l'amélioration du service public grâce au numérique. Elle est rattachée à la Direction interministérielle du numérique, dont les missions et l'organisation ont été fixées par le décret du 30 octobre 2019. Dirigée par Laure Lucchesi depuis 2016, elle rassemble une équipe pluridisciplinaire d'une trentaine de personnes."
---
# camembert-base-squadFR-fquad-piaf
## Description
Question-answering French model, using base [CamemBERT](https://camembert-model.fr/) fine-tuned on a combo of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
## Training hyperparameters
```shell
python run_squad.py \
--model_type camembert \
--model_name_or_path camembert-base \
--do_train --do_eval \
--train_file data/SQuAD+fquad+piaf.json \
--predict_file data/fquad_valid.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 10000
```
## Evaluation results
### FQuAD v1.0 Evaluation
```shell
{"f1": 79.81, "exact_match": 55.14}
```
### SQuAD-FR Evaluation
```shell
{"f1": 80.61, "exact_match": 59.54}
```
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='etalab-ia/camembert-base-squadFR-fquad-piaf', tokenizer='etalab-ia/camembert-base-squadFR-fquad-piaf')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l'un des fondateurs de l'impressionnisme."
})
```
## Acknowledgments
This work was performed using HPC resources from GENCI-IDRIS (Grant 2020-AD011011224).
## Citations
### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
| 4,998 | [
[
-0.029296875,
-0.0701904296875,
0.027252197265625,
0.0213470458984375,
0.007091522216796875,
0.011383056640625,
-0.0135955810546875,
-0.0134124755859375,
0.0179595947265625,
0.033660888671875,
-0.053619384765625,
-0.041656494140625,
-0.0217437744140625,
0.01... |
mekabu/MagicalMix_v2 | 2023-03-03T03:01:13.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us",
"has_space"
] | text-to-image | mekabu | null | null | mekabu/MagicalMix_v2 | 42 | 2,358 | diffusers | 2023-03-03T01:17:13 | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
library_name: diffusers
---
<style>
* {
box-sizing: border-box;
}
.column {
float: left;
width: 100%;
padding: 5px;
}
.column2 {
float: left;
width: 50%;
padding: 5px;
}
.column3 {
float: left;
width: 33.33%;
padding: 5px;
}
.row::after {
content: "";
clear: both;
display: table;
}
@media screen and (max-width: 400px) {
.column {
width: 100%;
}
.column2 {
width: 100%;
}
}
</style>
<center><img src=https://huggingface.co/mekabu/MagicalMix_v2/resolve/main/MagicalMix_v2_logo.png></center>
<center><p><h3>What's this model</h3></p></center>
Based on <a href="https://huggingface.co/mekabu/MagicalMix_v1">MagicalMix v1</a>, we aimed for a softer picture.
Depending on the sampler, both anime and softer (illustration-like) styles can be expressed.
<center><p><h3>Setting and examples</h3></p></center>
**Embeddings**
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
<a href="https://huggingface.co/datasets/Nerfgun3/bad_prompt">bad_prompt_version2</a>
VAE:<a href="https://huggingface.co/andite/pastel-mix/blob/main/pastel-waifu-diffusion.vae.pt">pastel-waifu-diffusion.vae.pt</a>
**[Softer]**
```
Sampler: Euler a
Steps: 40
Hires fix: x2
Upscaler: Latent
Hires steps: 30
Denoising strength: 0.5
CFG scale: 8
Clip skip: 2
```
<div class=row>
<div class=column2><center><img src=https://huggingface.co/mekabu/MagicalMix_v2/resolve/main/Sample/SampleA_01.png></center></div>
<div class=column2><center><img src=https://huggingface.co/mekabu/MagicalMix_v2/resolve/main/Sample/SampleA_02.png></center></div>
<div class=column2><center><img src=https://huggingface.co/mekabu/MagicalMix_v2/resolve/main/Sample/SampleA_03.png></center></div>
</div>
**[Anime]**
```
Sampler: DPM++ 2M Karras
Steps: 25
Hires fix: x2
Upscaler: R-ESRGAN 4x++ Anime6B
Hires steps: 20
Denoising strength: 0.5
CFG scale: 8
Clip skip: 2
```
<div class=row>
<div class=column2><center><img src=https://huggingface.co/mekabu/MagicalMix_v2/resolve/main/Sample/SampleB_01.png></center></div>
<div class=column2><center><img src=https://huggingface.co/mekabu/MagicalMix_v2/resolve/main/Sample/SampleB_02.png></center></div>
<div class=column2><center><img src=https://huggingface.co/mekabu/MagicalMix_v2/resolve/main/Sample/SampleB_03.png></center></div>
</div>
<br>
**Any other good presets out there, let me know!**
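For users outside the WebUI, here is a minimal `diffusers` sketch of the [Softer] preset. It assumes the repository loads with the standard `StableDiffusionPipeline`; Hires fix and Clip skip are WebUI features and are omitted, and the prompts are hypothetical placeholders.
```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("mekabu/MagicalMix_v2", torch_dtype=torch.float16)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "Euler a"
pipe.to("cuda")

image = pipe(
    "1girl, soft lighting, illustration",              # hypothetical prompt
    negative_prompt="lowres, bad anatomy, bad hands",  # hypothetical; EasyNegative is a WebUI embedding
    num_inference_steps=40,  # Steps: 40
    guidance_scale=8,        # CFG scale: 8
).images[0]
image.save("magicalmix_v2.png")
```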
<center><p><h3>Model recipe</h3></p></center>
|No.|Interpolation Method|Model_A|Model_B|Tertiary Model|Output|
|:---:|:---:|:---:|:---:|:---:|:---:|
|1|Weighted sum @ 0.45|Basil_mix_fixed|uenomigiDB_v1|-|Bronze1|
|2|Weighted sum @ 0.7|Bronze1|ligneClaireAnime_v1|-|Bronze2|
|3|Weighted sum @ 0.6|Bronze2|abysshell_ver3|-|Bronze3|
|4|Add Difference @ 0.75|Bronze3|dalcefoHitoria_v3|EasyNegative|**Silver**|
|Model_A|Model_B|Weight|Output|
|:---:|:---:|:---:|:---:|
|MagicalMix_v1|Silver|SMOOTHSTEP|MagicalMix_v2|
| 3,015 | [
[
-0.059844970703125,
-0.043304443359375,
0.003662109375,
0.027587890625,
-0.0271759033203125,
-0.00823211669921875,
-0.0008907318115234375,
-0.0127716064453125,
0.058807373046875,
0.0306396484375,
-0.07806396484375,
-0.04522705078125,
-0.0396728515625,
0.0093... |
facebook/encodec_48khz | 2023-09-06T19:51:48.000Z | [
"transformers",
"pytorch",
"safetensors",
"encodec",
"feature-extraction",
"arxiv:2210.13438",
"license:mit",
"has_space",
"region:us"
] | feature-extraction | facebook | null | null | facebook/encodec_48khz | 9 | 2,358 | transformers | 2023-06-12T16:10:51 | ---
inference: false
license: mit
---

# Model Card for EnCodec
This model card provides details and information about EnCodec, a state-of-the-art real-time audio codec developed by Meta AI.
## Model Details
### Model Description
EnCodec is a high-fidelity audio codec leveraging neural networks. It introduces a streaming encoder-decoder architecture with quantized latent space, trained in an end-to-end fashion.
The model simplifies and speeds up training using a single multiscale spectrogram adversary that efficiently reduces artifacts and produces high-quality samples.
It also includes a novel loss balancer mechanism that stabilizes training by decoupling the choice of hyperparameters from the typical scale of the loss.
Additionally, lightweight Transformer models are used to further compress the obtained representation while maintaining real-time performance.
- **Developed by:** Meta AI
- **Model type:** Audio Codec
### Model Sources
- **Repository:** [GitHub Repository](https://github.com/facebookresearch/encodec)
- **Paper:** [EnCodec: End-to-End Neural Audio Codec](https://arxiv.org/abs/2210.13438)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
EnCodec can be used directly as an audio codec for real-time compression and decompression of audio signals.
It provides high-quality audio compression and efficient decoding. The model was trained on various bandwidths, which can be specified when encoding (compressing) and decoding (decompressing), as shown in the sketch after the list below.
Two different setups exist for EnCodec:
- Non-streamable: the input audio is split into chunks of 1 second, with an overlap of 10 ms, which are then encoded.
- Streamable: weight normalization is used on the convolution layers, and the input is not split into chunks but rather padded on the left.
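A minimal sketch of selecting the target bandwidth at encode time (the `bandwidth` argument is in kbps and must be one of the values the model was trained with; the stereo noise input is a placeholder for real audio):
```python
import numpy as np
from transformers import EncodecModel, AutoProcessor

model = EncodecModel.from_pretrained("facebook/encodec_48khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_48khz")

# placeholder input: 1 second of 2-channel audio at 48 kHz
audio_sample = np.random.randn(2, 48_000)
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")

# request a specific target bandwidth (in kbps) when encoding
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"], bandwidth=6.0)
```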
### Downstream Use
EnCodec can be fine-tuned for specific audio tasks or integrated into larger audio processing pipelines for applications such as speech generation,
music generation, or text to speech tasks.
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## How to Get Started with the Model
Use the following code to get started with the EnCodec model using a dummy example from the LibriSpeech dataset (~9MB). First, install the required Python packages:
```
pip install --upgrade pip
pip install --upgrade datasets[audio]
pip install git+https://github.com/huggingface/transformers.git@main
```
Then load an audio sample, and run a forward pass of the model:
```python
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor
# load a demonstration dataset
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# load the model + processor (for pre-processing the audio)
model = EncodecModel.from_pretrained("facebook/encodec_48khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_48khz")
# cast the audio data to the correct sampling rate for the model
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[0]["audio"]["array"]
# pre-process the inputs
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
# explicitly encode then decode the audio inputs
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```
## Training Details
The model was trained for 300 epochs, with one epoch being 2,000 updates with the Adam optimizer, a batch size of 64 examples of 1 second each, a learning rate of 3e-4,
β1 = 0.5, and β2 = 0.9. All the models were trained using 8 A100 GPUs.
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- For speech:
- DNS Challenge 4
- [Common Voice](https://huggingface.co/datasets/common_voice)
- For general audio:
- [AudioSet](https://huggingface.co/datasets/Fhrozen/AudioSet2K22)
- [FSD50K](https://huggingface.co/datasets/Fhrozen/FSD50k)
- For music:
- [Jamendo dataset](https://huggingface.co/datasets/rkstgr/mtg-jamendo)
They used four different strategies to sample from these datasets:
- (s1) sample a single source from Jamendo with probability 0.32;
- (s2) sample a single source from the other datasets with the same probability;
- (s3) mix two sources from all datasets with a probability of 0.24;
- (s4) mix three sources from all datasets except music with a probability of 0.12.
The audio is normalized by file, and a random gain between -10 and 6 dB is applied (a sketch of this sampling procedure follows).
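A minimal sketch of that sampling procedure (the dataset loaders are hypothetical placeholders; only the strategy probabilities and the gain range come from the description above):
```python
import random
import numpy as np

def sample_training_example(jamendo, others, all_sets, non_music):
    """Draw one training mixture using the four strategies (s1-s4) above.

    Each argument is a list of loader callables returning a 1-D numpy
    waveform; these loaders are hypothetical stand-ins for the datasets.
    """
    strategy = random.choices(["s1", "s2", "s3", "s4"],
                              weights=[0.32, 0.32, 0.24, 0.12])[0]
    if strategy == "s1":      # single source from Jamendo
        sources = [random.choice(jamendo)()]
    elif strategy == "s2":    # single source from the other datasets
        sources = [random.choice(others)()]
    elif strategy == "s3":    # mix two sources from all datasets
        sources = [random.choice(all_sets)() for _ in range(2)]
    else:                     # mix three sources from all datasets except music
        sources = [random.choice(non_music)() for _ in range(3)]
    mixed = np.sum(sources, axis=0)       # naive mix; assumes equal lengths
    gain_db = random.uniform(-10.0, 6.0)  # random gain between -10 and 6 dB
    return mixed * 10.0 ** (gain_db / 20.0)
```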
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Subjective metric for restoration:
This model was evaluated using the MUSHRA protocol (Series, 2014), using both a hidden reference and a low anchor. Annotators were recruited using a
crowd-sourcing platform, in which they were asked to rate the perceptual quality of the provided samples on
a scale from 1 to 100. The authors randomly selected 50 samples of 5 seconds from each category of the test set
and enforced at least 10 annotations per sample. To filter noisy annotations and outliers, annotators were removed if they
rated the reference recordings below 90 in at least 20% of the cases, or rated the low-anchor recording
above 80 more than 50% of the time.
### Objective metric for restoration:
The ViSQOL metric was used together with the Scale-Invariant Signal-to-Noise Ratio (SI-SNR) (Luo & Mesgarani, 2019;
Nachmani et al., 2020; Chazan et al., 2021).
### Results
The results of the evaluation demonstrate the superiority of EnCodec compared to the baselines across different bandwidths (1.5, 3, 6, and 12 kbps).
When comparing EnCodec with the baselines at the same bandwidth, EnCodec consistently outperforms them in terms of MUSHRA score.
Notably, EnCodec achieves better performance, on average, at 3 kbps compared to Lyra-v2 at 6 kbps and Opus at 12 kbps.
Additionally, by incorporating the language model over the codes, it is possible to achieve a bandwidth reduction of approximately 25-40%.
For example, the bandwidth of the 3 kbps model can be reduced to 1.9 kbps.
#### Summary
EnCodec is a state-of-the-art real-time neural audio compression model that excels in producing high-fidelity audio samples at various sample rates and bandwidths.
The model's performance was evaluated across different settings, ranging from 24kHz monophonic at 1.5 kbps to 48kHz stereophonic, showcasing both subjective and
objective results. Notably, EnCodec incorporates a novel spectrogram-only adversarial loss, effectively reducing artifacts and enhancing sample quality.
Training stability and interpretability were further enhanced through the introduction of a gradient balancer for the loss weights.
Additionally, the study demonstrated that a compact Transformer model can be employed to achieve an additional bandwidth reduction of up to 40% without compromising
quality, particularly in applications where low latency is not critical (e.g., music streaming).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{dรฉfossez2022high,
title={High Fidelity Neural Audio Compression},
author={Alexandre Dรฉfossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2022},
eprint={2210.13438},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
``` | 8,292 | [
[
-0.048431396484375,
-0.05401611328125,
0.00623321533203125,
0.01218414306640625,
-0.007541656494140625,
-0.01244354248046875,
-0.022857666015625,
-0.040283203125,
0.0281982421875,
0.024139404296875,
-0.05450439453125,
-0.040252685546875,
-0.04217529296875,
0... |
Yntec/theallysMixIV-verisimilar | 2023-07-30T04:12:07.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"TheAlly",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/theallysMixIV-verisimilar | 3 | 2,356 | diffusers | 2023-07-21T18:11:23 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- TheAlly
---
# TheAlly's Mix IV: Verisimilar
Original page:
https://civitai.com/models/40369/theallys-mix-iv-verisimilar | 293 | [
[
-0.02142333984375,
-0.005096435546875,
0.021087646484375,
0.044525146484375,
-0.018798828125,
0.038848876953125,
0.0195770263671875,
-0.0157623291015625,
0.0877685546875,
0.043212890625,
-0.069091796875,
-0.006927490234375,
-0.0194549560546875,
-0.0001462697... |
fabriceyhc/bert-base-uncased-imdb | 2022-08-30T00:40:47.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"sibyl",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | fabriceyhc | null | null | fabriceyhc/bert-base-uncased-imdb | 0 | 2,351 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: bert-base-uncased-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.91264
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4942
- Accuracy: 0.9126
## Model description
More information needed
## Intended uses & limitations
More information needed
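A minimal inference sketch (an assumption that the checkpoint works with the standard `transformers` text-classification pipeline; the example review is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-imdb")
print(classifier("A thoroughly enjoyable film with great performances."))
# -> a list like [{'label': ..., 'score': ...}]
```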
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1546
- training_steps: 15468
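For reference, a sketch of the equivalent `TrainingArguments` (field names follow the standard `transformers` API; the values mirror the list above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-imdb",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1546,
    max_steps=15468,
)
```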
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3952 | 0.65 | 2000 | 0.4012 | 0.86 |
| 0.2954 | 1.29 | 4000 | 0.4535 | 0.892 |
| 0.2595 | 1.94 | 6000 | 0.4320 | 0.892 |
| 0.1516 | 2.59 | 8000 | 0.5309 | 0.896 |
| 0.1167 | 3.23 | 10000 | 0.4070 | 0.928 |
| 0.0624 | 3.88 | 12000 | 0.5055 | 0.908 |
| 0.0329 | 4.52 | 14000 | 0.4342 | 0.92 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
| 1,993 | [
[
-0.0421142578125,
-0.035552978515625,
0.004924774169921875,
0.0032405853271484375,
-0.0275421142578125,
-0.0180511474609375,
-0.00852203369140625,
-0.00954437255859375,
0.0222320556640625,
0.0302734375,
-0.05999755859375,
-0.039764404296875,
-0.055816650390625,
... |
patrickvonplaten/tiny-wav2vec2-no-tokenizer | 2022-11-12T18:10:24.000Z | [
"transformers",
"pytorch",
"tf",
"wav2vec2",
"en",
"arxiv:2006.11477",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | patrickvonplaten | null | null | patrickvonplaten/tiny-wav2vec2-no-tokenizer | 3 | 2,351 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
---
# Model Card for tiny-wav2vec2-no-tokenizer
# Model Details
## Model Description
- **Developed by:** More information needed
- **Shared by [Optional]:** Patrick von Platen
- **Model type:** Automatic Speech Recognition
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:**
- **Parent Model:** Wav2Vec2
- **Resources for more information:**
- [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec#wav2vec-20)
- [Associated Paper](https://arxiv.org/abs/2006.11477)
- [Associated Model Doc](https://huggingface.co/docs/transformers/main/en/model_doc/wav2vec2)
# Uses
## Direct Use
This model can be used for the task of Automatic Speech Recognition
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@misc{https://doi.org/10.48550/arxiv.2006.11477,
doi = {10.48550/ARXIV.2006.11477},
url = {https://arxiv.org/abs/2006.11477},
author = {Baevski, Alexei and Zhou, Henry and Mohamed, Abdelrahman and Auli, Michael},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations},
publisher = {arXiv},
year = {2020},
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Patrick von Platen in collaboration with the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("patrickvonplaten/tiny-wav2vec2-no-tokenizer")
```
</details>
| 4,014 | [
[
-0.02667236328125,
-0.041961669921875,
0.0216064453125,
0.002094268798828125,
-0.006298065185546875,
-0.0330810546875,
-0.01345062255859375,
-0.040069580078125,
-0.0098724365234375,
0.0291748046875,
-0.048797607421875,
-0.0308685302734375,
-0.054351806640625,
... |
w11wo/indonesian-roberta-base-posp-tagger | 2023-03-21T07:12:08.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"token-classification",
"indonesian-roberta-base-posp-tagger",
"id",
"dataset:indonlu",
"arxiv:1907.11692",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | w11wo | null | null | w11wo/indonesian-roberta-base-posp-tagger | 3 | 2,345 | transformers | 2022-03-02T23:29:05 | ---
language: id
tags:
- indonesian-roberta-base-posp-tagger
license: mit
datasets:
- indonlu
widget:
- text: "Budi sedang pergi ke pasar."
---
## Indonesian RoBERTa Base POSP Tagger
Indonesian RoBERTa Base POSP Tagger is a part-of-speech token-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `POSP` dataset consisting of tag-labelled news.
After training, the model achieved an evaluation F1-macro of 95.34%. On the benchmark test set, the model achieved an accuracy of 93.99% and F1-macro of 88.93%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-posp-tagger` | 124M | RoBERTa Base | `POSP` |
## Evaluation Results
The model was trained for 10 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
| ----- | ------------- | --------------- | --------- | -------- | -------- | -------- |
| 1 | 0.898400 | 0.343731 | 0.894324 | 0.894324 | 0.894324 | 0.894324 |
| 2 | 0.294700 | 0.236619 | 0.929620 | 0.929620 | 0.929620 | 0.929620 |
| 3 | 0.214100 | 0.202723 | 0.938349 | 0.938349 | 0.938349 | 0.938349 |
| 4 | 0.171100 | 0.183630 | 0.945264 | 0.945264 | 0.945264 | 0.945264 |
| 5 | 0.143300 | 0.169744 | 0.948469 | 0.948469 | 0.948469 | 0.948469 |
| 6 | 0.124700 | 0.174946 | 0.947963 | 0.947963 | 0.947963 | 0.947963 |
| 7 | 0.109800 | 0.167450 | 0.951590 | 0.951590 | 0.951590 | 0.951590 |
| 8 | 0.101300 | 0.163191 | 0.952475 | 0.952475 | 0.952475 | 0.952475 |
| 9 | 0.093500 | 0.163255 | 0.953361 | 0.953361 | 0.953361 | 0.953361 |
| 10 | 0.089000 | 0.164673 | 0.953445 | 0.953445 | 0.953445 | 0.953445 |
## How to Use
### As Token Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/indonesian-roberta-base-posp-tagger"
nlp = pipeline(
"token-classification",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Budi sedang pergi ke pasar.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `POSP` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base POSP Tagger was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. | 3,166 | [
[
-0.04010009765625,
-0.05096435546875,
-0.0009069442749023438,
-0.0009112358093261719,
-0.0198211669921875,
-0.007503509521484375,
-0.0181427001953125,
-0.01403045654296875,
0.0191497802734375,
0.0291748046875,
-0.0303802490234375,
-0.04852294921875,
-0.062683105... |
timm/poolformerv2_s12.sail_in1k | 2023-05-05T06:19:57.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/poolformerv2_s12.sail_in1k | 0 | 2,345 | timm | 2023-05-05T06:19:46 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for poolformerv2_s12.sail_in1k
A PoolFormer-v2 (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 11.9
- GMACs: 1.8
- Activations (M): 5.5
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('poolformerv2_s12.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformerv2_s12.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformerv2_s12.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
| 3,650 | [
[
-0.034393310546875,
-0.0270538330078125,
0.0067901611328125,
0.0058441162109375,
-0.0316162109375,
-0.026763916015625,
-0.00962066650390625,
-0.0248260498046875,
0.01285552978515625,
0.039703369140625,
-0.041656494140625,
-0.051239013671875,
-0.056732177734375,
... |
yacht/latte-mc-bert-base-thai-ws | 2023-09-10T15:58:58.000Z | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"word segmentation",
"token-classification",
"th",
"dataset:best2010",
"dataset:lst20",
"dataset:tlc",
"dataset:vistec-tp-th-2021",
"dataset:wisesight_sentiment",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | token-classification | yacht | null | null | yacht/latte-mc-bert-base-thai-ws | 0 | 2,342 | transformers | 2023-09-05T06:33:40 | ---
language: th
license: cc-by-sa-4.0
tags:
- word segmentation
datasets:
- best2010
- lst20
- tlc
- vistec-tp-th-2021
- wisesight_sentiment
pipeline_tag: token-classification
---
# Multi-criteria BERT base Thai with Lattice for Word Segmentation
This is a variant of the pre-trained [BERT](https://github.com/google-research/bert) model.
The model was pre-trained on texts in the Thai language and fine-tuned for word segmentation based on [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased).
This version of the model processes input texts at the character level, with word-level information incorporated through a lattice structure.
The scripts for the pre-training are available at [tchayintr/latte-ptm-ws](https://github.com/tchayintr/latte-ptm-ws).
The LATTE scripts are available at [tchayintr/latte-ws](https://github.com/tchayintr/latte-ws).
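A minimal sketch of loading the checkpoint with the standard `transformers` Auto classes (the lattice-based segmentation itself requires the scripts linked above; this only loads the encoder weights):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("yacht/latte-mc-bert-base-thai-ws")
model = AutoModel.from_pretrained("yacht/latte-mc-bert-base-thai-ws")
```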
## Model architecture
The model architecture is described in this [paper](https://www.jstage.jst.go.jp/article/jnlp/30/2/30_456/_article/-char/ja).
## Training Data
The model is trained on multiple Thai word segmented datasets, including best2010, lst20, tlc (tnhc), vistec-tp-th-2021 (vistec2021) and wisesight_sentiment (ws160).
The datasets can be accessed as follows:
- [best2010](https://thailang.nectec.or.th)
- [lst20](https://huggingface.co/datasets/lst20)
- [tlc](https://huggingface.co/datasets/tlc)
- [vistec-tp-th-2021](https://github.com/mrpeerat/OSKut/tree/main/VISTEC-TP-TH-2021)
- [wisesight_sentiment](https://huggingface.co/datasets/wisesight_sentiment).
## Licenses
The pre-trained model is distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This model was trained with GPU servers provided by [Okumura-Funakoshi NLP Group](https://lr-www.pi.titech.ac.jp).
| 1,860 | [
[
-0.032806396484375,
-0.06536865234375,
0.041900634765625,
0.0195159912109375,
-0.041748046875,
-0.0006203651428222656,
-0.00736236572265625,
-0.0289764404296875,
0.032470703125,
0.04266357421875,
-0.04632568359375,
-0.054229736328125,
-0.043853759765625,
-0.... |
Jidipally/my-pet-dog-xzg | 2023-11-03T10:43:09.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Jidipally | null | null | Jidipally/my-pet-dog-xzg | 0 | 2,342 | diffusers | 2023-11-03T10:38:52 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-XZG Dreambooth model trained by Jidipally following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
*(sample images not preserved)*
| 1,321 | [
[
-0.061553955078125,
-0.020843505859375,
0.015625,
0.01187896728515625,
-0.0263824462890625,
0.0218658447265625,
0.01416778564453125,
-0.041534423828125,
0.051055908203125,
0.0115203857421875,
-0.040679931640625,
-0.03436279296875,
-0.02911376953125,
0.011756... |
VietAI/gpt-neo-1.3B-vietnamese-news | 2022-08-07T14:32:07.000Z | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"causal-lm",
"gpt",
"vi",
"endpoints_compatible",
"has_space",
"region:us"
] | text-generation | VietAI | null | null | VietAI/gpt-neo-1.3B-vietnamese-news | 22 | 2,339 | transformers | 2022-03-02T23:29:05 | ---
language:
- vi
tags:
- pytorch
- causal-lm
- gpt
---
# GPT-Neo 1.3B on Vietnamese News
Details will be available soon.
For more information, please contact anhduongng.1001@gmail.com (Dương) / imthanhlv@gmail.com (Thành) / nguyenvulebinh@gmail.com (Bình).
### How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news")
model = AutoModelForCausalLM.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news", low_cpu_mem_usage=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
prompt = "Tiềm năng của trí tuệ nhân tạo" # your input sentence ("The potential of artificial intelligence")
max_length = 100 # maximum length of the generated sequence
input_ids = tokenizer(prompt, return_tensors="pt")['input_ids'].to(device)
gen_tokens = model.generate(
input_ids,
max_length=max_length,
do_sample=True,
temperature=0.9,
top_k=20,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
``` | 994 | [
[
-0.0168304443359375,
-0.06390380859375,
0.02685546875,
0.034515380859375,
-0.044281005859375,
-0.0064544677734375,
-0.01461029052734375,
0.0007762908935546875,
0.00732421875,
0.0227508544921875,
-0.0272064208984375,
-0.041290283203125,
-0.0462646484375,
0.01... |
DucHaiten/DucHaiten-StyleLikeMe | 2023-04-14T17:46:45.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | DucHaiten | null | null | DucHaiten/DucHaiten-StyleLikeMe | 31 | 2,339 | diffusers | 2023-03-28T16:16:58 | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
inference: true
---
This model puts heavy emphasis on artistic and surreal elements with extremely high image detail. It is extremely flexible and can create beautiful images from simple prompts; negative prompts are not very important. It works best with a step count greater than 30 (preferably 50 steps) and a native resolution of up to 768x768; for example, a 2:3 frame is 768x1024. Set the CFG scale to around 11 to 15. Highres fix will look better, but it is not strictly necessary if you feel it takes too long.
For those of you who don't have a PC, or have a weak computer, you can consider using my model via the sinkin and mage websites, such as the link below:
https://sinkin.ai/m/EYWOblK
negative prompt:
sketch, (worst quality:1.5), (low quality:1.5), (normal quality:1.5), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error
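A minimal `diffusers` sketch mirroring the recommended settings (it assumes the repository loads with the standard `StableDiffusionPipeline`; Highres fix is a WebUI feature and is omitted, and the prompt is a hypothetical placeholder):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaiten-StyleLikeMe", torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(
    "surreal dreamscape, extremely detailed, artistic",  # hypothetical prompt
    negative_prompt="lowres, bad anatomy, bad hands",    # plain-text subset; WebUI weight syntax is not parsed here
    num_inference_steps=50,  # "preferably 50 steps"
    guidance_scale=13,       # CFG around 11 to 15
    width=768,
    height=1024,             # 2:3 frame at native resolution
).images[0]
image.save("stylelikeme.png")
```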
Please support me by becoming a patron:
https://www.patreon.com/duchaitenreal













| 3,259 | [
[
-0.0513916015625,
-0.0147857666015625,
0.052459716796875,
0.0209808349609375,
-0.020660400390625,
0.01305389404296875,
0.018646240234375,
-0.045867919921875,
0.057830810546875,
0.0736083984375,
-0.055755615234375,
-0.03564453125,
-0.038238525390625,
0.014053... |
JananiReddy/my-pet-dog | 2023-11-03T08:50:39.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | JananiReddy | null | null | JananiReddy/my-pet-dog | 0 | 2,337 | diffusers | 2023-11-03T08:45:50 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by JananiReddy following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MRCEW-118
Sample pictures of this concept:

| 392 | [
[
-0.060638427734375,
-0.0085296630859375,
0.02685546875,
0.00423431396484375,
-0.00742340087890625,
0.0288848876953125,
0.0272216796875,
-0.034515380859375,
0.04449462890625,
0.031097412109375,
-0.048583984375,
-0.029693603515625,
-0.0167694091796875,
0.01284... |
remg1997/dynabench-sdxl10 | 2023-09-08T05:56:16.000Z | [
"diffusers",
"onnx",
"text-to-image",
"stable-diffusion",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | remg1997 | null | null | remg1997/dynabench-sdxl10 | 1 | 2,336 | diffusers | 2023-09-07T23:08:36 | ---
license: openrail++
tags:
- text-to-image
- stable-diffusion
duplicated_from: stabilityai/stable-diffusion-xl-base-1.0
---
# SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
image = pipe(prompt=prompt).images[0]
```
To use the whole base + refiner pipeline as an ensemble of experts you can run:
```py
from diffusers import DiffusionPipeline
import torch
# load both base & refiner
base = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-refiner-1.0",
text_encoder_2=base.text_encoder_2,
vae=base.vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
refiner.to("cuda")
# Define how many steps and what % of steps to be run on each expert (80/20) here
n_steps = 40
high_noise_frac = 0.8
prompt = "A majestic lion jumping from a big stone at night"
# run both experts
image = base(
prompt=prompt,
num_inference_steps=n_steps,
denoising_end=high_noise_frac,
output_type="latent",
).images
image = refiner(
prompt=prompt,
num_inference_steps=n_steps,
denoising_start=high_noise_frac,
image=image,
).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with torch.compile. Simply wrap the unet with torch.compile before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
### Optimum
[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).
#### OpenVINO
To install Optimum with the dependencies required for OpenVINO :
```bash
pip install optimum[openvino]
```
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionPipeline
+ from optimum.intel import OVStableDiffusionPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).
#### ONNX
To install Optimum with the dependencies required for ONNX Runtime inference :
```bash
pip install optimum[onnxruntime]
```
To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionPipeline
+ from optimum.onnxruntime import ORTStableDiffusionPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
| 8,703 | [
[
-0.03033447265625,
-0.0625,
0.038543701171875,
0.00952911376953125,
-0.00814056396484375,
-0.0226593017578125,
-0.010467529296875,
-0.00591278076171875,
0.00963592529296875,
0.031524658203125,
-0.0222625732421875,
-0.038421630859375,
-0.045684814453125,
-0.0... |
jinho8345/bros-base-uncased | 2023-09-20T00:46:11.000Z | [
"transformers",
"pytorch",
"bros",
"feature-extraction",
"arxiv:2108.04539",
"endpoints_compatible",
"region:us"
] | feature-extraction | jinho8345 | null | null | jinho8345/bros-base-uncased | 0 | 2,333 | transformers | 2023-09-19T09:39:33 | This is a converted version of [bros-base-uncased](https://huggingface.co/naver-clova-ocr/bros-base-uncased) with [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bros/convert_bros_to_pytorch.py)
# BROS
GitHub: https://github.com/clovaai/bros
## Introduction
BROS (BERT Relying On Spatiality) is a pre-trained language model focusing on text and layout for better key information extraction from documents.<br>
Given the OCR results of the document image, which are text and bounding box pairs, it can perform various key information extraction tasks, such as extracting an ordered item list from receipts.<br>
For more details, please refer to our paper:
BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents<br>
Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park<br>
AAAI 2022 - Main Technical Track
[[arXiv]](https://arxiv.org/abs/2108.04539)
## Pre-trained models
| name | # params | Hugging Face - Models |
|---------------------|---------:|-------------------------------------------------------------------------------------------------|
| bros-base-uncased (**this**) | < 110M | [naver-clova-ocr/bros-base-uncased](https://huggingface.co/naver-clova-ocr/bros-base-uncased) |
| bros-large-uncased | < 340M | [naver-clova-ocr/bros-large-uncased](https://huggingface.co/naver-clova-ocr/bros-large-uncased) | | 1,553 | [
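A minimal loading sketch with the `transformers` Bros classes (hedged: it assumes the converted checkpoint loads with the standard classes; the key information extraction heads and data preparation live in the GitHub repo above):
```python
from transformers import BrosProcessor, BrosModel

processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
model = BrosModel.from_pretrained("jinho8345/bros-base-uncased")
```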
[
-0.024688720703125,
-0.042205810546875,
0.0139312744140625,
0.0090484619140625,
-0.0171661376953125,
-0.00272369384765625,
-0.0288238525390625,
-0.01123809814453125,
0.01312255859375,
0.04034423828125,
-0.045684814453125,
-0.033538818359375,
-0.045989990234375,
... |
Yntec/3DCute | 2023-10-04T12:07:54.000Z | [
"diffusers",
"3D",
"aodai",
"Character",
"StableDiffusionVN",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/3DCute | 0 | 2,333 | diffusers | 2023-10-04T10:57:40 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- 3D
- aodai
- Character
- StableDiffusionVN
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# SDVN4-3DCuteVN
This is SDVN4-3DCuteVN with the MoistMixV2 VAE baked in.
Comparison:

Sample and prompt:

PRETTY CUTE GIRL BY ROSSDRAWS. An extradimensional creature buying donuts. curly hair. Pixar animation.
Original page:
https://civitai.com/models/103169/sdvn4-3dcutevn | 723 | [
[
-0.0029850006103515625,
-0.045623779296875,
0.03094482421875,
0.0218658447265625,
-0.0166015625,
-0.0048065185546875,
0.03887939453125,
-0.0042266845703125,
0.044677734375,
0.06036376953125,
-0.029876708984375,
-0.023956298828125,
-0.039031982421875,
-0.0079... |
facebook/s2t-small-mustc-en-fr-st | 2023-01-24T16:32:10.000Z | [
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"fr",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | facebook | null | null | facebook/s2t-small-mustc-en-fr-st | 1 | 2,330 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
- fr
datasets:
- mustc
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-MUSTC-EN-FR-ST
`s2t-small-mustc-en-fr-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to French text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this a standard sequence to sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-fr-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-fr-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-fr-st model is trained on the English-French subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-fr (BLEU score): 32.9
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
| 4,598 | [
[
-0.0059967041015625,
-0.052825927734375,
0.0101318359375,
0.01605224609375,
-0.02496337890625,
-0.0160064697265625,
-0.037139892578125,
-0.0305328369140625,
0.005481719970703125,
0.03399658203125,
-0.05078125,
-0.0239410400390625,
-0.05999755859375,
0.004341... |
Yntec/REV | 2023-10-20T04:24:47.000Z | [
"diffusers",
"Photorealistic",
"Fantasy",
"Sci Fi",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"s6yx",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/REV | 3 | 2,330 | diffusers | 2023-08-30T09:47:24 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Photorealistic
- Fantasy
- Sci Fi
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- s6yx
inference: true
---
This is REV with the Color101VAE baked in.
Samples and prompt:


pretty cute girl, sitting, detailed chibi eyes, holding super soaker, beautiful detailed legs, cowgirl, gorgeous detailed hair, cowboy hat, magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes ―by kyoani and clay mann
Original Page:
https://civitai.com/models/5216?modelVersionId=6048 | 856 | [
[
-0.0296630859375,
-0.044189453125,
0.0195159912109375,
0.0214385986328125,
-0.0022258758544921875,
-0.006267547607421875,
0.036285400390625,
-0.02862548828125,
0.058258056640625,
0.048553466796875,
-0.061981201171875,
-0.0233612060546875,
-0.0286865234375,
-... |
pablocosta/bertabaporu-base-uncased | 2023-09-26T21:21:33.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"pt",
"dataset:Twitter",
"doi:10.57967/hf/0019",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | pablocosta | null | null | pablocosta/bertabaporu-base-uncased | 5 | 2,329 | transformers | 2022-07-02T21:59:20 | ---
language: pt
license: mit
tags:
- bert
- pytorch
datasets:
- Twitter
---
# <a name="introduction"></a> BERTabaporu: a genre-specific pre-trained model of Portuguese-speaking social media
## Introduction
BERTabaporu is a Brazilian Portuguese BERT model in the Twitter domain. The model has been built from a collection of 238 million tweets written by over 100 thousand unique Twitter users, and conveying over 2.9 billion tokens in total.
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
| `pablocosta/bertabaporu-base-uncased` | BERT-Base | 12 | 110M |
| `pablocosta/bertabaporu-large-uncased` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer # Or BertTokenizer
from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads
from transformers import AutoModel # or BertModel, for BERT without pretraining heads
model = AutoModelForPreTraining.from_pretrained('pablocosta/bertabaporu-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('pablocosta/bertabaporu-base-uncased')
```
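A quick fill-mask sketch (the example sentence is a hypothetical Portuguese tweet-like input):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pablocosta/bertabaporu-base-uncased")
print(fill_mask("eu adoro esse [MASK] !"))
```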
## Cite us
```bibtex
@inproceedings{bertabaporu,
  author    = {Pablo Botton da Costa and
               Matheus Camasmie Pavan and
               Wesley Ramos dos Santos and
               Samuel Caetano da Silva and
               Ivandr\'e Paraboni},
  title     = {{BERTabaporu: assessing a genre-specific language model for Portuguese NLP}},
  booktitle = {Recent Advances in Natural Language Processing ({RANLP-2023})},
  year      = {2023},
  address   = {Varna, Bulgaria}
}
```
| 1,619 | [
[
-0.029144287109375,
-0.049346923828125,
-0.0025310516357421875,
0.04742431640625,
-0.04229736328125,
0.0048828125,
-0.0343017578125,
-0.02130126953125,
0.03912353515625,
0.0268402099609375,
-0.05560302734375,
-0.049224853515625,
-0.059478759765625,
-0.004959... |
gchhablani/bert-base-cased-finetuned-qnli | 2021-09-20T09:08:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"fnet-bert-base-comparison",
"en",
"dataset:glue",
"arxiv:2105.03824",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gchhablani | null | null | gchhablani/bert-base-cased-finetuned-qnli | 1 | 2,328 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9099395936298736
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Accuracy: 0.9099
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.337 | 1.0 | 6547 | 0.9013 | 0.2448 |
| 0.1971 | 2.0 | 13094 | 0.9143 | 0.2839 |
| 0.1175 | 3.0 | 19641 | 0.9099 | 0.3986 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,647 | [
[
-0.0254364013671875,
-0.044647216796875,
0.01004791259765625,
0.00925445556640625,
-0.0093536376953125,
-0.0247955322265625,
-0.0175628662109375,
-0.01226043701171875,
0.01068878173828125,
0.01541900634765625,
-0.054901123046875,
-0.04022216796875,
-0.0466003417... |
TheBloke/Nous-Hermes-13B-GPTQ | 2023-08-21T10:17:55.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"self-instruct",
"distillation",
"en",
"license:other",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Nous-Hermes-13B-GPTQ | 154 | 2,327 | transformers | 2023-06-03T13:12:08 | ---
inference: false
license: other
language:
- en
tags:
- llama
- self-instruct
- distillation
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# NousResearch's Nous-Hermes-13B GPTQ
These files are GPTQ 4bit model files for [NousResearch's Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b).
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-13B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Nous-Hermes-13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-13b)
## Prompt Template
The model follows the Alpaca prompt format:
```
### Instruction:
### Response:
```
or
```
### Instruction:
### Input:
### Response:
```
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-13B-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Nous-Hermes-13B-GPTQ"
model_basename = "nous-hermes-13b-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Build the prompt with the Alpaca format documented above
prompt = "Tell me about AI"
prompt_template = f'''### Instruction:
{prompt}

### Response:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
## Provided files
**nous-hermes-13b-GPTQ-4bit-128g.no-act.order.safetensors**
This will work with all versions of GPTQ-for-LLaMa, and with AutoGPTQ.
* `nous-hermes-13b-GPTQ-4bit-128g.no-act.order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128. Act Order / desc_act = False.
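If you prefer to fetch the files outside text-generation-webui, here is a minimal sketch using `huggingface_hub` (standard API, not specific to this card):
```python
from huggingface_hub import snapshot_download

# Download the whole repo (weights plus quantize_config.json) locally
local_dir = snapshot_download(repo_id="TheBloke/Nous-Hermes-13B-GPTQ")
print(local_dir)
```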
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, ้ฟๆ, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieล, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NousResearch's Nous-Hermes-13B
# Model Card: Nous-Hermes-13b
## Model Description
Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks.
This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours.
## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.
Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.
## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI.
Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
Special mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues.
Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
The GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
### Response:
```
or
```
### Instruction:
### Input:
### Response:
```
## Resources for Applied Use Cases:
For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying discord bot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
## Future Plans
The model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to get in discussions to get the model included in the GPT4All.
## Benchmark Results
Benchmark results are coming soon.
## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
Compute provided by our project sponsor Redmond AI, thank you!!
| 10,678 | [
[
-0.04217529296875,
-0.05224609375,
0.0256500244140625,
0.003021240234375,
-0.0229034423828125,
-0.000507354736328125,
0.00945281982421875,
-0.03656005859375,
0.024688720703125,
0.0189208984375,
-0.054779052734375,
-0.03741455078125,
-0.0310211181640625,
0.00... |
cross-encoder/quora-roberta-large | 2021-08-05T08:41:41.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | cross-encoder | null | null | cross-encoder/quora-roberta-large | 1 | 2,325 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely it is that the two given questions are duplicates.
Note: The model is not suitable for estimating general question similarity. E.g., the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as they are not duplicates.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/quora-roberta-large')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
```
You can also use this model without sentence_transformers, via the Transformers ``AutoModelForSequenceClassification`` class, as sketched below.
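A minimal sketch (assuming, as for other SentenceTransformers cross-encoders, a single-logit classification head whose output is passed through a sigmoid):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cross-encoder/quora-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

features = tokenizer(
    ["How do I learn Java?"], ["What is the best way to learn Java?"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**features).logits
    scores = torch.sigmoid(logits)  # duplicate probability per pair

print(scores)
```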
[
-0.026824951171875,
-0.06695556640625,
0.012420654296875,
0.0099945068359375,
-0.02081298828125,
-0.00667572021484375,
0.01383209228515625,
-0.016021728515625,
0.009735107421875,
0.0472412109375,
-0.047607421875,
-0.024658203125,
-0.038421630859375,
0.026260... |
jonatasgrosman/wav2vec2-large-xlsr-53-french | 2022-12-14T01:59:23.000Z | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"fr",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | automatic-speech-recognition | jonatasgrosman | null | null | jonatasgrosman/wav2vec2-large-xlsr-53-french | 6 | 2,324 | transformers | 2022-03-02T23:29:05 | ---
language: fr
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- fr
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 French by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fr
type: common_voice
args: fr
metrics:
- name: Test WER
type: wer
value: 17.65
- name: Test CER
type: cer
value: 4.89
- name: Test WER (+LM)
type: wer
value: 13.59
- name: Test CER (+LM)
type: cer
value: 3.91
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Dev WER
type: wer
value: 34.35
- name: Dev CER
type: cer
value: 14.09
- name: Dev WER (+LM)
type: wer
value: 24.72
- name: Dev CER (+LM)
type: cer
value: 12.33
---
# Fine-tuned XLSR-53 large model for speech recognition in French
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on French using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-french"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "CE DERNIER A รVOLUร TOUT AU LONG DE L'HISTOIRE ROMAINE." | CE DERNIER รVOLUร TOUT AU LONG DE L'HISTOIRE ROMAINE |
| CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ACHรMรNIDE ET SEPT DES SASSANIDES. | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ASHEMรNID ET SEPT DES SASANDNIDES |
| "J'AI DIT QUE LES ACTEURS DE BOIS AVAIENT, SELON MOI, BEAUCOUP D'AVANTAGES SUR LES AUTRES." | JAI DIT QUE LES ACTEURS DE BOIS AVAIENT SELON MOI BEAUCOUP DAVANTAGES SUR LES AUTRES |
| LES PAYS-BAS ONT REMPORTร TOUTES LES รDITIONS. | LE PAYS-BAS ON REMPORTร TOUTES LES รDITIONS |
| IL Y A MAINTENANT UNE GARE ROUTIรRE. | IL AMNARDIGAD LE TIRAN |
| HUIT | HUIT |
| DANS LโATTENTE DU LENDEMAIN, ILS NE POUVAIENT SE DEฬFENDRE DโUNE VIVE EฬMOTION | DANS L'ATTENTE DU LENDEMAIN IL NE POUVAIT SE DรFENDRE DUNE VIVE รMOTION |
| LA PREMIรRE SAISON EST COMPOSรE DE DOUZE รPISODES. | LA PREMIรRE SAISON EST COMPOSรE DE DOUZE รPISODES |
| ELLE SE TROUVE รGALEMENT DANS LES รLES BRITANNIQUES. | ELLE SE TROUVE รGALEMENT DANS LES รLES BRITANNIQUES |
| ZรRO | ZEGO |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset mozilla-foundation/common_voice_6_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-french,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {F}rench},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-french}},
year={2021}
}
``` | 5,512 | [
[
-0.0286102294921875,
-0.040435791015625,
0.0176849365234375,
0.0128173828125,
-0.0135650634765625,
-0.0108642578125,
-0.026580810546875,
-0.042144775390625,
0.015533447265625,
0.0265045166015625,
-0.04840087890625,
-0.039642333984375,
-0.040435791015625,
0.0... |
Yntec/GameAssetsDigitalUnitsCreationKit | 2023-08-03T18:18:33.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"sadxzero",
"WarriorMama777",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/GameAssetsDigitalUnitsCreationKit | 4 | 2,322 | diffusers | 2023-08-02T10:35:24 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- sadxzero
- WarriorMama777
---

icon of cute little white duck, round frame, blue glow, purple background
# SXZ D.U.C.K. - Digital Units Creation Kit
This LoRA has been merged with Luma 0.98VAE. I think it improves over it; don't believe me? Watch this!

# D.U.C.K. & D.U.C.K.LumaOrangeMix
D.U.C.K. and D.U.C.K.LumaOrangeMix models have also been provided in the repo. D.U.C.K. is a merge at 1.0 strength; it's the version you'd want to mix into your own models.
D.U.C.K.LumaOrangeMix is... complicated. You know how you merge a LoRA so that it's not as strong as D.U.C.K.: you mix it at 1.0, then you mix the result with the original model at 0.75 so you get mostly D.U.C.K. Well, what if instead of mixing it back with the original you do it with another model to add its flavor?
That's what I did, merging it back with OrangeRemix. In fact, that was the plan all along and the reason I created OrangeRemix in the first place! Alas, while D.U.C.K.LumaOrangeMix does outperform it at cute output, it's at the cost of detail, so DigitalUnitsCreationKit remains at the hosted inference API. Here's a pic:

If you like this content, support Luma and D.U.C.K.'s creator at https://boosty.to/sadxzero
Original links:
https://civitai.com/models/44726/sxz-duck-for-game-assets-concept
https://civitai.com/models/25831?modelVersionId=68200
https://huggingface.co/Yntec/OrangeRemix
https://huggingface.co/WarriorMama777/OrangeMixs/tree/main/Models
| 1,979 | [
[
-0.05145263671875,
-0.0221710205078125,
0.0214385986328125,
0.02978515625,
-0.0249786376953125,
0.013092041015625,
0.01416778564453125,
-0.059722900390625,
0.0467529296875,
0.033782958984375,
-0.056610107421875,
-0.017364501953125,
-0.0224151611328125,
-0.00... |
microsoft/phi-1 | 2023-11-01T23:40:50.000Z | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"code",
"custom_code",
"en",
"license:other",
"has_space",
"region:us"
] | text-generation | microsoft | null | null | microsoft/phi-1 | 110 | 2,322 | transformers | 2023-09-10T04:10:57 | ---
license: other
language:
- en
pipeline_tag: text-generation
tags:
- code
---
## Model Summary
The language model phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
## Intended Uses
Given the nature of the training data, phi-1 is best suited for prompts using the code format:
#### code format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
for num in range(2, n+1):
for i in range(2, num):
if num % i == 0:
break
else:
print(num)
```
where the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)
**Notes**
* phi-1 is intended for research purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
## Limitations of phi-1
* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension. As a coding bot, phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
## Warning about Security Risks
When leveraging phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.
Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [flash-attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [Research License](https://huggingface.co/microsoft/phi-1/resolve/main/Research%20License.docx).
### Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
If you need to use the model in a lower precision (e.g., FP16), please wrap the model's forward pass with `torch.autocast()`, as follows:
```python
with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
outputs = model.generate(**inputs, max_length=200)
```
**Remark.** In the generation function, our model currently does not support beam search (`num_beams` >1).
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
### Citation
```bib
@article{gunasekar2023textbooks,
title={Textbooks Are All You Need},
author={Gunasekar, Suriya and Zhang, Yi and Aneja, Jyoti and Mendes, Caio C{\'e}sar Teodoro and Del Giorno, Allie and Gopi, Sivakanth and Javaheripi, Mojan and Kauffmann, Piero and de Rosa, Gustavo and Saarikivi, Olli and others},
journal={arXiv preprint arXiv:2306.11644},
year={2023}
}
``` | 7,305 | [
[
-0.02008056640625,
-0.051422119140625,
0.005199432373046875,
0.013946533203125,
-0.01158905029296875,
-0.0323486328125,
0.01294708251953125,
-0.017852783203125,
-0.0186767578125,
0.029388427734375,
-0.0435791015625,
-0.0267333984375,
-0.05029296875,
-0.00646... |
rinna/bilingual-gpt-neox-4b-instruction-ppo | 2023-08-25T01:26:40.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2203.02155",
"arxiv:1707.06347",
"license:mit",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | rinna | null | null | rinna/bilingual-gpt-neox-4b-instruction-ppo | 15 | 2,318 | transformers | 2023-08-02T05:56:07 | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: mit
datasets:
- Anthropic/hh-rlhf
language:
- ja
- en
inference: false
---
# bilingual-gpt-neox-4b-instruction-ppo

---
# Overview
This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters.
The model is based on [`rinna/bilingual-gpt-neox-4b-instruction-sft`](https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft) and has been aligned to serve as an instruction-following conversational agent.
* **Model architecture**
A 36-layer, 2816-hidden-size transformer-based language model.
* **RLHF**
Following the [OpenAI InstructGPT paper](https://arxiv.org/abs/2203.02155), **Reinforcement Learning from Human Feedback** (RLHF) has been applied to aligning the model's behaviour with input instructions. Particularly, the model has been trained in two stages, i.e. **Supervised Fine-Tuning** (SFT) and [PPO](https://arxiv.org/abs/1707.06347)-based **Reinforcement Learning** (RL).
* The first SFT stage produces [`rinna/bilingual-gpt-neox-4b-instruction-sft`](https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft).
* The second RL stage produces this model.
* **Reinforcement learning**
We used [CarperAI/trlx](https://github.com/CarperAI/trlx) and its implementation of the PPO algorithm for the RL stage.
The RL data is the subset of the following dataset and has been translated into Japanese.
* [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf)
* **Model Series**
| Variant | Link |
| :-- | :--|
| Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 |
| Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo |
| Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft |
| Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k |
| Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b |
| Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo |
| Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 |
| Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft |
| Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b |
* **Authors**
[Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Our evaluation experiments suggest that PPO does not particularly improve the model's performance on the Japanese LLM benchmark in comparison with [Bilingual GPT-NeoX 4B SFT](https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft), but we have seen a **better conversation experience** with the PPO model than with its SFT counterpart.
- *The 4-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, and JSQuAD.*
- *The 6-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, JSQuAD, XWinograd, and JAQKET-v2.*
| Model | 4-task average accuracy | 6-task average accuracy |
| :-- | :-- | :-- |
| **bilingual-gpt-neox-4b-instruction-ppo** | **61.01** | **61.16** |
| bilingual-gpt-neox-4b-instruction-sft | 61.02 | 61.69 |
| bilingual-gpt-neox-4b | 56.12 | 51.83 |
| japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 |
| japanese-gpt-neox-3.6b | 55.07 | 50.32 |
---
# I/O Format
A special format has been adopted to construct inputs.
* An input prompt is formatted as a conversation between `ユーザー` and `システム`.
* Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`).
* The input prompt should end with `"システム: "` to signal the model to generate a response.
* All the utterances in the input prompt should be separated by a newline `\n`.
Following is an example to construct input from a conversation.
~~~python
prompt = [
    {
        "speaker": "ユーザー",
        "text": "Hello, you are an assistant that helps me learn Japanese."
    },
    {
        "speaker": "システム",
        "text": "Sure, what can I do for you?"
    },
    {
        "speaker": "ユーザー",
        "text": "VRはなんですか。"
    }
]
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in prompt
]
prompt = "\n".join(prompt)
prompt = (
    prompt
    + "\n"
    + "システム: "
)
print(prompt)
"""
ユーザー: Hello, you are an assistant that helps me learn Japanese.
システム: Sure, what can I do for you?
ユーザー: VRはなんですか。
システム: 
"""
~~~
---
# How to use the model
**Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task.
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-ppo", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-ppo")
if torch.cuda.is_available():
    model = model.to("cuda")

token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=512,
        do_sample=True,
        temperature=1.0,
        top_p=0.85,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):])
print(output)
"""VRใจใฏVirtual Realityใฎ็ฅใงใไปฎๆณ็พๅฎใจใๅผใฐใใพใใใใใฏใใณใณใใฅใผใฟใผใไฝฟ็จใใฆไปฎๆณไธ็ใไฝใๅบใใไปฎๆณไธ็ไธใงใณใณใใฅใผใฟใผใฎใฒใผใ ใไปฎๆณไธ็ใไฝ้จใใใใใฎๆ่กใงใใใใฎๆ่กใฏใใณใณใใฅใผใฟใผใใขใใค ใซใใใคในใฎ้ฒๆญฉใซใใฃใฆใ2015ๅนดไปฅ้ใใพใใพใๆฎๅใใฆใใพใใVRใฏใใฒใผใ ใไปฎๆณไธ็ใใใฎไปใฎใขใใชใฑใผ ใทใงใณใชใฉใฎใใพใใพใชๅ้ใงใใณใณใใฅใผใฟใผใจไบบ้ใฎ็ธไบไฝ็จใฎๆฐใใๆนๆณใๆไพใใฆใใพใใ</s>"""
~~~~
---
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
* The tokenizer has a vocabulary size of 65,536.
* It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces to avoid producing `<UNK>` tokens.
* It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured texts better.
* We turned off the default behaviour of prepending leading whitespace because it is not beneficial for processing Japanese.
* Specifically, single whitespace is always processed as one token so that any English word won't have a preceding whitespace like in many other tokenizers (e.g. `_Hello`).
* This decision trades the English processing efficiency for a unified way to treat whitespaces.
* It leads to a significantly lower loss of next token prediction on English data because whitespaces are easy to predict.
* **Don't forget to set `use_fast=False` to make the above features function correctly.**
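A quick way to see these behaviours in practice (a minimal sketch; the sample string and the exact token split are illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "rinna/bilingual-gpt-neox-4b-instruction-ppo", use_fast=False
)

# Consecutive whitespaces, newlines, and tabs survive tokenization
print(tokenizer.tokenize("Hello  world\tこんにちは\n"))
```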
---
# License
[The MIT license](https://opensource.org/licenses/MIT) | 7,084 | [
[
-0.018157958984375,
-0.07440185546875,
0.0321044921875,
0.0211181640625,
-0.015869140625,
-0.00856781005859375,
-0.0159759521484375,
-0.032989501953125,
0.0180816650390625,
0.0306854248046875,
-0.045501708984375,
-0.04730224609375,
-0.037200927734375,
0.0185... |
FreedomIntelligence/AceGPT-7b-chat-GPTQ | 2023-10-09T06:53:48.000Z | [
"transformers",
"llama",
"text-generation",
"ar",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | FreedomIntelligence | null | null | FreedomIntelligence/AceGPT-7b-chat-GPTQ | 1 | 2,314 | transformers | 2023-09-21T18:18:25 | ---
language:
- ar
- en
model_name: AceGPT 7B Chat
model_type: AceGPT
quantized_by: FreedomIntelligence
---
<!-- description start -->
## Description
This repo contains the int4 (GPTQ) model of [AceGPT-7B-Chat](https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat).
The performance of the int4 version is somewhat degraded. For a better user experience, please use the fp16 version.
For details, see [AceGPT-7B-Chat](https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat) and [AceGPT-13B-Chat](https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat).
<!-- description end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install "transformers>=4.32.0" "optimum>=1.12.0" # See requirements.py for verified versions.
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
### You can then launch a simple Gradio web demo with web_quant.py
```bash
python web_quant.py --model-name ${model-path}
```
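For direct use from Python, here is a minimal sketch (not from the original card). It assumes the repo ships a GPTQ quantization config, so that Transformers 4.32+ with AutoGPTQ installed can load it through the standard API; otherwise fall back to `auto_gptq.AutoGPTQForCausalLM.from_quantized`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-7b-chat-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# "What is artificial intelligence?" in Arabic (illustrative prompt)
inputs = tokenizer("ما هو الذكاء الاصطناعي؟", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```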
More details are available at https://github.com/FreedomIntelligence/AceGPT/tree/main
<!-- README_GPTQ.md-use-from-python end --> | 1,319 | [
[
-0.0285797119140625,
-0.05621337890625,
0.026702880859375,
0.0193328857421875,
-0.023162841796875,
-0.0031890869140625,
-0.002410888671875,
-0.030517578125,
-0.0101165771484375,
0.0304107666015625,
-0.0281829833984375,
-0.03643798828125,
-0.037872314453125,
... |
m3hrdadfi/hubert-base-persian-speech-gender-recognition | 2021-06-23T12:16:09.000Z | [
"transformers",
"pytorch",
"hubert",
"audio",
"speech",
"speech-gender-recognition",
"fa",
"dataset:shemo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | m3hrdadfi | null | null | m3hrdadfi/hubert-base-persian-speech-gender-recognition | 7 | 2,305 | transformers | 2022-03-02T23:29:05 | ---
language: fa
datasets:
- shemo
tags:
- audio
- speech
- speech-gender-recognition
license: apache-2.0
---
# Gender Recognition in Persian (fa) Speech using HuBERT
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
```bash
!git clone https://github.com/m3hrdadfi/soxan.git .
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/hubert-base-persian-speech-gender-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/female.wav"
outputs = predict(path, sampling_rate)
```
```bash
[{'Label': 'F', 'Score': '98.2%'}, {'Label': 'M', 'Score': '1.8%'}]
```
## Evaluation
The following table summarizes the scores obtained by the model, overall and per class.
| Gender | precision | recall | f1-score | accuracy |
|--------|-----------|--------|----------|----------|
| F      | 0.98      | 0.97   | 0.98     |          |
| M      | 0.98      | 0.99   | 0.98     |          |
|        |           |        | Overall  | 0.98     |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). | 2,688 | [
[
-0.038421630859375,
-0.048065185546875,
0.024322509765625,
0.028350830078125,
0.0018301010131835938,
-0.000621795654296875,
-0.03057861328125,
-0.018798828125,
0.0196075439453125,
0.00928497314453125,
-0.045654296875,
-0.056182861328125,
-0.038604736328125,
... |
furusu/SSD-1B-anime | 2023-10-31T14:35:03.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us",
"has_space"
] | text-to-image | furusu | null | null | furusu/SSD-1B-anime | 19 | 2,303 | diffusers | 2023-10-26T07:31:03 | ---
tags:
- text-to-image
- stable-diffusion
---
This model was created in the following two steps:
1. Merge [SSD-1B](https://huggingface.co/segmind/SSD-1B) with 1.3 times the difference between [NekorayXL](https://civitai.com/models/136719?modelVersionId=150826) and [sdxl-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). For the keys before and after distillation, this [mapping](https://gist.github.com/laksjdjf/eddeda74a90ddaaaf4c51aea1ece7d01) is assumed.
2. Distill further, using the difference from the final output of [NekorayXL](https://civitai.com/models/136719?modelVersionId=150826) as the loss (learning rate 1e-5, batch size 4, 23000 steps).
# Usage
The safetensors file can be used with the latest ComfyUI.
# LoRA
A LoRA built with LECO that gives nicer-looking results has also been uploaded. Set cfg_scale to around 1 to 4.
https://huggingface.co/furusu/SSD-1B-anime/blob/main/ssd-1b-anime-cfgdistill.safetensors
# On the key correspondence between SSD-1B and SDXL
Since [it is not documented which modules were removed](https://github.com/segmind/SSD-1B/issues/1), the correspondence was estimated using cosine similarity.
Since (apparently) only the transformer depth changed, we compared the attention-layer parameters of SDXL and SSD-1B.
Where 2 layers become 1, the first layer is kept.
Where 10 layers become 4, the 1st, 2nd, 3rd, and 7th layers are kept.
Note: the 3rd up block keeps all 10 layers, and since its cosine-similarity results were inscrutable, we assume it was not changed.
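As an illustration, a minimal sketch of this kind of comparison (function and list names are hypothetical; real SDXL/SSD-1B state-dict keys differ):
```python
import torch
import torch.nn.functional as F

def flat_cosine(a: torch.Tensor, b: torch.Tensor) -> float:
    """Cosine similarity between two weight tensors, flattened."""
    return F.cosine_similarity(a.flatten(), b.flatten(), dim=0).item()

def best_match(ssd_weight: torch.Tensor, sdxl_weights: list) -> int:
    """Index of the SDXL layer most similar to a given SSD-1B layer."""
    sims = [flat_cosine(ssd_weight, w) for w in sdxl_weights]
    return max(range(len(sims)), key=sims.__getitem__)
```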


| 1,244 | [
[
-0.059326171875,
-0.06146240234375,
0.0330810546875,
0.0265045166015625,
-0.039337158203125,
0.00027441978454589844,
0.00826263427734375,
-0.02606201171875,
0.033355712890625,
0.00931549072265625,
-0.05889892578125,
-0.036590576171875,
-0.04888916015625,
-0.... |
setu4993/LEALLA-large | 2023-10-19T06:14:33.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence_embedding",
"multilingual",
"google",
"sentence-similarity",
"lealla",
"labse",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"bo",
"bs",
"ca",
"ceb",
"co",
"cs",
"c... | sentence-similarity | setu4993 | null | null | setu4993/LEALLA-large | 3 | 2,302 | transformers | 2023-05-21T08:18:48 | ---
pipeline_tag: sentence-similarity
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bo
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- or
- pa
- pl
- pt
- ro
- ru
- rw
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
tags:
- bert
- sentence_embedding
- multilingual
- google
- sentence-similarity
- lealla
- labse
license: apache-2.0
datasets:
- CommonCrawl
- Wikipedia
---
# LEALLA-large
## Model description
LEALLA is a collection of lightweight language-agnostic sentence embedding models supporting 109 languages, distilled from [LaBSE](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html). The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- Model: [HuggingFace's model hub](https://huggingface.co/setu4993/LEALLA-large).
- Paper: [arXiv](https://arxiv.org/abs/2302.08387).
- Original model: [TensorFlow Hub](https://tfhub.dev/google/LEALLA/LEALLA-large/1).
- Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt).
This is migrated from the v1 model on the TF Hub. The embeddings produced by both versions of the model are [equivalent](https://github.com/setu4993/convert-labse-tf-pt/blob/c0d4fbce789b0709a9664464f032d2e9f5368a86/tests/test_conversion_lealla.py#L31). Note, though, that for some languages (like Japanese), the LEALLA models appear to require higher tolerances when comparing embeddings and similarities.
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/LEALLA-large")
model = BertModel.from_pretrained("setu4993/LEALLA-large")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other languages:
```python
italian_sentences = [
"cane",
"I cuccioli sono carini.",
"Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はかわいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
italian_outputs = model(**italian_inputs)
japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
## Details
Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2302.08387).
### BibTeX entry and citation info
```bibtex
@inproceedings{mao-nakagawa-2023-lealla,
title = "{LEALLA}: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation",
author = "Mao, Zhuoyuan and
Nakagawa, Tetsuji",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.138",
doi = "10.18653/v1/2023.eacl-main.138",
pages = "1886--1894",
abstract = "Large-scale language-agnostic sentence embedding models such as LaBSE (Feng et al., 2022) obtain state-of-the-art performance for parallel sentence alignment. However, these large-scale models can suffer from inference speed and computation overhead. This study systematically explores learning language-agnostic sentence embeddings with lightweight models. We demonstrate that a thin-deep encoder can construct robust low-dimensional sentence embeddings for 109 languages. With our proposed distillation methods, we achieve further improvements by incorporating knowledge from a teacher model. Empirical results on Tatoeba, United Nations, and BUCC show the effectiveness of our lightweight models. We release our lightweight language-agnostic sentence embedding models LEALLA on TensorFlow Hub.",
}
```
| 5,554 | [
[
-0.003509521484375,
-0.06658935546875,
0.04571533203125,
0.0181884765625,
-0.004364013671875,
-0.01309967041015625,
-0.04248046875,
-0.0205230712890625,
0.02227783203125,
0.0026187896728515625,
-0.02667236328125,
-0.044830322265625,
-0.0460205078125,
0.00259... |
JulesBelveze/t5-small-headline-generator | 2023-03-17T09:01:23.000Z | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"headline-generation",
"text-generation",
"en",
"dataset:JulesBelveze/tldr_news",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us... | summarization | JulesBelveze | null | null | JulesBelveze/t5-small-headline-generator | 8 | 2,301 | transformers | 2022-06-28T05:51:21 | ---
license: mit
language:
- en
tags:
- summarization
- headline-generation
- text-generation
datasets:
- JulesBelveze/tldr_news
metrics:
- rouge1
- rouge2
- rougeL
- rougeLsum
---
# t5-small for headline generation
This model is a [t5-small](https://huggingface.co/t5-small) fine-tuned for headline generation using
the [JulesBelveze/tldr_news](https://huggingface.co/datasets/JulesBelveze/tldr_news) dataset.
## Using this model
```python
import re
from transformers import AutoTokenizer, T5ForConditionalGeneration
WHITESPACE_HANDLER = lambda k: re.sub(r"\s+", " ", re.sub(r"\n+", " ", k.strip()))
article_text = """US FCC commissioner Brendan Carr has asked Apple and Google to remove TikTok from their app stores. The video app is owned by Chinese company ByteDance. Carr claims that TikTok functions as a surveillance tool that harvests extensive amounts of personal and sensitive data from US citizens. TikTok says its data access approval process is overseen by a US-based security team and that data is only accessed on an as-needed basis under strict controls."""
model_name = "JulesBelveze/t5-small-headline-generator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=384
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Evaluation
| Metric | Score |
|------------|---------|
| ROUGE 1 | 44.2379 |
| ROUGE 2 | 17.4961 |
| ROUGE L | 41.1119 |
| ROUGE Lsum | 41.1256 | | 1,845 | [
[
-0.0233154296875,
-0.037261962890625,
0.021087646484375,
0.0313720703125,
-0.0357666015625,
0.0126800537109375,
-0.006160736083984375,
-0.022216796875,
0.004985809326171875,
0.0238037109375,
-0.03546142578125,
-0.056915283203125,
-0.040008544921875,
-0.00063... |
laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg | 2023-04-18T17:43:24.000Z | [
"open_clip",
"zero-shot-image-classification",
"clip",
"arxiv:2210.08402",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | laion | null | null | laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg | 3 | 2,301 | open_clip | 2023-02-26T20:39:06 | ---
tags:
- zero-shot-image-classification
- clip
library_tag: open_clip
license: mit
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model card for CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP ConvNeXt-XXLarge (a custom `timm` ConvNeXt size) models trained on LAION-2B (english), a subset of [LAION-5B](https://arxiv.org/abs/2210.08402), using [OpenCLIP](https://github.com/mlfoundations/open_clip).
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_xxlarge.laion2b_s34b_b82k-augreg](https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 79.1 |
| [convnext_xxlarge.laion2b_s34b_b82k-augreg-rewind](https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind) | LAION-2B | 256x256 | RRC (0.3, 1.0), RE (0.4), SD (0.1) | 79.3 |
| [convnext_xxlarge.laion2b_s34b_b82k-augreg-soup](https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup) | LAION-2B | 256x256 | N/A | 79.4 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only
The core training run was performed in pieces over a period of ~2 months. The global batch size for the core run was 81920. The last ~10% of training was re-done at a 95744 global batch size with a higher LR and stronger augmentation than the original finish. The two were averaged together in a 'soup'. See more details in [Training Details](#training-details).
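A 'soup' here is simply a parameter-wise average of the two checkpoints. A minimal sketch (file names hypothetical):
```python
import torch

a = torch.load("augreg.pt", map_location="cpu")
b = torch.load("augreg_rewind.pt", map_location="cpu")

# Average every parameter tensor of the two fine-tunes
soup = {k: (a[k] + b[k]) / 2 for k in a}
torch.save(soup, "augreg_soup.pt")
```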
Goals:
* Push the size of the largest convolutional CLIP image tower into the performance range of ViT-g to ViT-G, with improved image-size scaling for downstream use.
Firsts:
* Largest released ConvNeXt model pretrained (847M params w/ 198 GMAC and 125 MActs @ 256x256 for image)
* A non-ViT image tower CLIP model (with no previous image tower pretrain) achieving > 79% ImageNet top-1 zero-shot
The models utilize:
* the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-XXLarge model (`convnext_xxlarge`) as the image tower
* a standard projection at end of image tower
* a text tower with same size (with 1024, heads 16, depth 24) as ViT-H-14 and ViT-g-14 models
The models are trained at 256x256 image resolution. The size of the combined image + text CLIP model is 1.2B params w/ 222 GMAC and 146 MActs. At 256x256, the ConvNext-XXLarge sits just above a ViT-H-14 CLIP configuration in FLOPS and params while being lower in activation counts. It is well under both g-14 and G-14 while being between them in capabilities.
|model |image_size|embed_dim|gmacs |macts |mparams|image_gmacs|image_macts|image_mparams|text_gmacs|text_macts|text_mparams|
|--------------------------|----------|---------|------|------|-------|-----------|-----------|-------------|----------|----------|------------|
|ViT-H-16 |224 |1024 |150.96|122.01|986.26 |127.4 |100.81 |632.23 |23.57 |21.2 |354.03 |
|ViT-H-14 |224 |1024 |190.97|160.61|986.11 |167.4 |139.41 |632.08 |23.57 |21.2 |354.03 |
|ViT-L-14-336 |336 |768 |197.76|278.19|427.94 |191.1 |270.24 |304.29 |6.66 |7.95 |123.65 |
|convnext_xxlarge |256 |1024 |221.66|145.66|1200.58|198.09 |124.45 |846.54 |23.57 |21.2 |354.03 |
|RN50x64 |448 |1024 |276.8 |249.73|623.26 |265.02 |239.13 |420.38 |11.78 |10.6 |202.88 |
|ViT-g-14 |224 |1024 |290.74|213.84|1366.68|267.18 |192.64 |1012.65 |23.57 |21.2 |354.03 |
|convnext_xxlarge_320 |320 |1024 |333.08|215.66|1200.58|309.52 |194.46 |846.54 |23.57 |21.2 |354.03 |
|ViT-H-14-336 |336 |1024 |414.53|428.74|986.52 |390.97 |407.54 |632.49 |23.57 |21.2 |354.03 |
|ViT-bigG-14 |224 |1280 |532.92|310.71|2539.57|483.96 |275.37 |1844.91 |48.96 |35.34 |694.66 |
Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such a model.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
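As a quick illustration of the direct use case, here is a minimal zero-shot classification sketch with OpenCLIP. The image path and candidate labels are placeholders, and it assumes an `open_clip` version that supports loading Hub checkpoints via the `hf-hub:` prefix:

```python
import torch
from PIL import Image
import open_clip

# Load the model, its preprocessing transform, and the tokenizer from the HF Hub
model, _, preprocess = open_clip.create_model_and_transforms(
    'hf-hub:laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg')
tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg')

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image
text = tokenizer(["a diagram", "a dog", "a cat"])           # placeholder labels

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity via L2-normalized features, scaled as in CLIP
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```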
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Beyond the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated, and that its uncurated nature means collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not recommend using it to create ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
The main training run was done at global batch size of 81920 for 256 checkpoint intervals of 135.6M samples for a total of ~34B samples seen over training.
Many difficulties with both model numerical stability and cluster stability/performance were encountered while training this model. Initial attempts to train with float16 AMP and the default adam beta2 resulted in loss spikes and eventually NaN blow-ups. `beta2` was reduced to 0.97, which helped, but the loss / zero-shot curves were not tracking as expected. After switching to PyTorch nightlies, it was possible to use bfloat16 + AMP for training (as with recent H/14, g/14, and G/14 models); beta2 was returned to 0.98 and metrics improved.
|Checkpoint Interval |Cluster |# GPUs|# Nodes|GPU |local BS|sample/s|sample/s/gpu|precision |adam beta2 |
|--------------------|----------|------|-------|----------|--------|--------|------------|----------|-----------|
|1 - 2 |Stability |1024 |128 |A100 40GB | 80 |37-40k | 36-39 |amp + fp16|0.97 |
|3 - 32 |Stability |512 |64 |A100 80GB | 160 |27-32k | 52-62 |amp + fp16|0.97 |
|33 - 75 |Booster |1024 |256 |A100 40GB | 80 |48k | 47 |amp + fp16|0.97 |
|76 - 165 |Booster |1024 |256 |A100 40GB | 80 |51k | 50 |amp + bf16|0.98 |
|166 - 232 |Stability |320 |40 |A100 80GB | 256 |18-19k | 56-59 |amp + bf16|0.98 |
|233 - 249 |Booster |1024 |256 |A100 40GB | 80 |51k | 50 |amp + bf16|0.98 |
|250 - 256 |Stability |1024 |128 |A100 40GB | 80 |27-31k | 26-30 |amp + bf16|0.98 |
JUWELS Booster has 4x A100 GPUs per node w/ 4x HDR-200 IB adapters per node (200Gbit/sec per GPU). The Stability setup used 8x A100 GPUs per node w/ 400Gbit/sec EFA networking per node (50GBit/sec per GPU). Significant variation in training efficiency (throughput per GPU) was observed across the various configurations. The 1024-GPU configurations across both clusters were particularly prone to crashing (or very difficult to get running with a 'good' set of GPUs).
The slurm `srun` command line below is for a 128-node, 8-GPU-per-node (40GB A100) configuration:
```
srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "xxlarge-2b-81920-bf16" \
--resume "latest" \
--logs "/runs" \
--log-every-n-steps 50 \
--train-data="pipe:aws s3 cp s3://laion5b/laion2B-data/{000000..231349}.tar -" \
--train-num-samples 135646078 \
--dataset-type webdataset \
--warmup 10000 \
--batch-size=80 \
--epochs=256 \
--dataset-resampled \
--aug-cfg use_timm=True scale='(0.33, 1.0)' re_prob=0.35 \
--precision amp_bfloat16 \
--grad-clip-norm 5.0 \
--lr 1e-3 \
--workers=6 \
--beta2 0.98 \
--model "convnext_xxlarge" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing \
--report-to "tensorboard"
```
For the rewind of the last 10%, a higher global batch size of 95744 was used, with a higher LR and slightly increased augmentation strength.
|Checkpoint Interval |Cluster |# GPUs|# Nodes|GPU |local BS|sample/s|sample/s/gpu|precision |adam beta2 |
|--------------------|---------|------|-------|----------|--------|--------|------------|----------|-----------|
|231 - 256 |stability|1088 |136 |A100 40GB | 88 |32-35k | 29-32 |amp + bf16|0.98 |
The slurm `srun` command line for 136 8-GPU (40GB A100) nodes:
```
srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "xxlarge-2b-81920-r-bf16" \
--resume "latest" \
--logs "/runs" \
--log-every-n-steps 50 \
--train-data="pipe:aws s3 cp s3://laion5b/laion2B-data/{000000..231349}.tar -" \
--train-num-samples 135646078 \
--dataset-type webdataset \
--warmup 10000 \
--batch-size=88 \
--epochs=256 \
--dataset-resampled \
--aug-cfg use_timm=True scale='(0.3, 1.0)' re_prob=0.4 \
--precision amp_bfloat16 \
--grad-clip-norm 5.0 \
--lr 2e-3 \
--workers=6 \
--beta2 0.98 \
--model "convnext_xxlarge" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing \
--report-to "tensorboard"
```
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and with COCO and Flickr for retrieval.
## Results
These models achieve between 79.1 and 79.4 top-1 zero-shot accuracy on ImageNet-1k.

A zoom-in on final 10% w/ rewind:

An initial round of benchmarks has been performed on a wider range of datasets, viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jรผlich Supercomputing Centre (JSC).
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@InProceedings{pmlr-v162-wortsman22a,
title = {Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time},
author = {Wortsman, Mitchell and Ilharco, Gabriel and Gadre, Samir Ya and Roelofs, Rebecca and Gontijo-Lopes, Raphael and Morcos, Ari S and Namkoong, Hongseok and Farhadi, Ali and Carmon, Yair and Kornblith, Simon and Schmidt, Ludwig},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {23965--23998},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v162/wortsman22a/wortsman22a.pdf},
url = {https://proceedings.mlr.press/v162/wortsman22a.html}
}
``` | 17,895 | [
[
-0.04742431640625,
-0.038330078125,
-0.004299163818359375,
0.004169464111328125,
-0.0276031494140625,
-0.0222015380859375,
-0.01300048828125,
-0.032928466796875,
0.02703857421875,
0.0206146240234375,
-0.035614013671875,
-0.03167724609375,
-0.052093505859375,
... |
digiplay/polla_mix_2.3D | 2023-07-16T13:52:33.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/polla_mix_2.3D | 6 | 2,300 | diffusers | 2023-07-16T06:56:40 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/110130?modelVersionId=118730
Sample image I generated through Hugging Face's API:

Original author's demo images:

| 703 | [
[
-0.04803466796875,
-0.039306640625,
0.0271453857421875,
0.032470703125,
-0.0180511474609375,
-0.004024505615234375,
0.037750244140625,
-0.0299072265625,
0.054168701171875,
0.030487060546875,
-0.0806884765625,
-0.0400390625,
-0.036376953125,
0.000004351139068... |
Yntec/DreamShaperRemix | 2023-08-22T21:59:52.000Z | [
"diffusers",
"General",
"Anime",
"Art",
"Girl",
"Photorealistic",
"LandScapes",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Lykon",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/DreamShaperRemix | 0 | 2,297 | diffusers | 2023-08-22T20:18:15 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Anime
- Art
- Girl
- Photorealistic
- LandScapes
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Lykon
---
# DreamShaperRemix
A remix of DreamShaper 8. I created hundreds of mixes with all the DreamShaper versions, and this was the best one.
Sample and prompt:

Children book illustration of A cute girl drinking coffee-to-go, classic video game art
Original page:
https://civitai.com/models/4384?modelVersionId=80261
Buy Lykon a coffee:
https://snipfeed.co/lykon
# Recipe
-Add Difference 1.0-
Primary model:
DreamShaper8
Secondary model:
DreamShaper8
Tertiary model:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output Model:
DreamShaperEssense
-Weighted Sum 0.85-
Primary model:
v1-5-pruned-fp16-no-ema
Secondary model:
DreamShaperEssense
Output Model:
DreamShaperPlus
-Weighted Sum 0.55-
Primary model:
DreamShaperPlus
Secondary model:
DreamShaper8
Output Model:
DreamShaperAlpha
-Weighted Sum 0.95-
Primary model:
DreamShaperPlus
Secondary model:
DreamShaperAlpha
Output Model:
DreamShaperRemixNoise
-SuperMerger Elemental Adjust-
0,0,0,0,0,1,0
Output Model:
DreamShaperRemix | 1,449 | [
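For reference, here is a minimal sketch of what the Weighted Sum and Add Difference steps above compute over checkpoint state dicts, following the conventions of common checkpoint-merger tools (the function names are illustrative, and it assumes all state dicts share identical keys and shapes):

```python
def weighted_sum(primary, secondary, alpha):
    # out = (1 - alpha) * primary + alpha * secondary, applied per tensor
    return {k: (1 - alpha) * primary[k] + alpha * secondary[k] for k in primary}

def add_difference(primary, secondary, tertiary, multiplier=1.0):
    # out = primary + multiplier * (secondary - tertiary), applied per tensor
    return {k: primary[k] + multiplier * (secondary[k] - tertiary[k]) for k in primary}
```

Read this way, the first step above (DreamShaper8 as both primary and secondary, SD 1.5 as tertiary) yields DreamShaper8 plus its own difference from the base model.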
[
-0.0305938720703125,
-0.01178741455078125,
0.0289459228515625,
0.042388916015625,
-0.0168914794921875,
0.01027679443359375,
0.0250091552734375,
-0.03814697265625,
0.069580078125,
0.05450439453125,
-0.061767578125,
-0.030181884765625,
-0.0537109375,
0.0096511... |
flax-community/gpt-2-spanish | 2023-03-17T00:54:51.000Z | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"es",
"dataset:oscar",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | flax-community | null | null | flax-community/gpt-2-spanish | 19 | 2,296 | transformers | 2022-03-02T23:29:05 | ---
language: es
tags:
- text-generation
datasets:
- oscar
widget:
- text: "Érase una vez "
- text: "Frase: Esta película es muy agradable. Sentimiento: positivo
Frase: Odiaba esta película, apesta. Sentimiento: negativo
Frase: Esta película fue bastante mala. Sentimiento: "
---
# Spanish GPT-2
GPT-2 model trained from scratch on the Spanish portion of [OSCAR](https://huggingface.co/datasets/viewer/?dataset=oscar).
The model is trained with Flax and using TPUs sponsored by Google since this is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organised by HuggingFace.
## Model description
The model used for training is [OpenAI's GPT-2](https://openai.com/blog/better-language-models/), introduced in the paper ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
This model is available in the 🤗 [Model Hub](https://huggingface.co/gpt2).
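A minimal generation sketch with the `transformers` pipeline (the sampling parameters are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="flax-community/gpt-2-spanish")
print(generator("Érase una vez", max_length=50, do_sample=True, top_k=50)[0]["generated_text"])
```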
## Training data
Spanish portion of OSCAR or **O**pen **S**uper-large **C**rawled **A**LMAnaCH co**R**pus, a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture.
This corpus is available in the 🤗 [Datasets](https://huggingface.co/datasets/oscar) library.
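For instance, the Spanish split can be streamed with the `datasets` library (the config name below follows the Hub dataset's naming and is given as an assumption):

```python
from datasets import load_dataset

# Stream the deduplicated Spanish subset of OSCAR without downloading it all
dataset = load_dataset("oscar", "unshuffled_deduplicated_es", split="train", streaming=True)
print(next(iter(dataset))["text"][:200])
```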
## Team members
- Manuel Romero ([mrm8488](https://huggingface.co/mrm8488))
- María Grandury ([mariagrandury](https://huggingface.co/mariagrandury))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Daniel Vera ([daveni](https://huggingface.co/daveni))
- Sri Lakshmi ([srisweet](https://huggingface.co/srisweet))
- José Posada ([jdposa](https://huggingface.co/jdposa))
- Santiago Hincapie ([shpotes](https://huggingface.co/shpotes))
- Jorge ([jorgealro](https://huggingface.co/jorgealro))
| 2,065 | [
[
-0.0250701904296875,
-0.0294189453125,
0.0274658203125,
0.0186920166015625,
-0.01248931884765625,
0.0146026611328125,
-0.01471710205078125,
-0.05084228515625,
0.0282440185546875,
0.0191650390625,
-0.0382080078125,
-0.026611328125,
-0.0592041015625,
-0.000618... |
facebook/nllb-200-1.3B | 2023-02-11T20:19:16.000Z | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"nllb",
"translation",
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"als",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ayr",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"b... | translation | facebook | null | null | facebook/nllb-200-1.3B | 28 | 2,294 | transformers | 2022-07-08T10:42:11 | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
language_details: "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn"
tags:
- nllb
- translation
license: "cc-by-nc-4.0"
datasets:
- flores-200
metrics:
- bleu
- spbleu
- chrf++
inference: false
---
# NLLB-200
This is the model card of NLLB-200's 1.3B variant.
Here are the [metrics](https://tinyurl.com/nllb200dense1bmetrics) for that particular checkpoint.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: the exact training algorithm, data, and the strategies used to handle data imbalances for high- and low-resource languages when training NLLB-200 are described in the paper.
- Paper or other resource for more information: NLLB Team et al., No Language Left Behind: Scaling Human-Centered Machine Translation, arXiv, 2022
- License: CC-BY-NC
- Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
## Intended Use
- Primary intended uses: NLLB-200 is a machine translation model primarily intended for research in machine translation, especially for low-resource languages. It allows for single-sentence translation among 200 languages. Information on how to use the model can be found in the Fairseq code repository along with the training code and references to evaluation and training data; a minimal usage sketch is also shown after this list.
- Primary intended users: Primary users are researchers and machine translation research community.
- Out-of-scope use cases: NLLB-200 is a research model and is not released for production deployment. NLLB-200 is trained on general-domain text data and is not intended to be used with domain-specific texts, such as texts from the medical or legal domain. The model is not intended to be used for document translation. The model was trained with input lengths not exceeding 512 tokens, so translating longer sequences might result in quality degradation. NLLB-200 translations cannot be used as certified translations.
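For illustration, a minimal research-use translation sketch with the `transformers` library (the language codes and input sentence are placeholders; forcing the target language token as the first generated token follows the usual NLLB pattern):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-1.3B", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-1.3B")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
# Force decoding into French (fra_Latn) by fixing the first generated token
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```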
## Metrics
- Model performance measures: the NLLB-200 model was evaluated using the BLEU, spBLEU, and chrF++ metrics, which are widely adopted by the machine translation community. Additionally, we performed human evaluation with the XSTS protocol and measured the toxicity of the generated translations.
## Evaluation Data
- Datasets: Flores-200 dataset is described in Section 4
- Motivation: We used Flores-200 as it provides full evaluation coverage of the languages in NLLB-200
- Preprocessing: Sentence-split raw text data was preprocessed using SentencePiece. The SentencePiece model is released along with NLLB-200.
## Training Data
- We used parallel multilingual data from a variety of sources to train the model. We provide a detailed report on the data selection and construction process in Section 5 of the paper. We also used monolingual data constructed from Common Crawl; more details are provided in Section 5.2.
## Ethical Considerations
- In this work, we took a reflexive approach in technological development to ensure that we prioritize human users and minimize risks that could be transferred to them. While we reflect on our ethical considerations throughout the article, here are some additional points to highlight. For one, many languages chosen for this study are low-resource languages, with a heavy emphasis on African languages. While quality translation could improve education and information access in many of these communities, such access could also make groups with lower levels of digital literacy more vulnerable to misinformation or online scams. The latter scenarios could arise if bad actors misappropriate our work for nefarious activities, which we conceive of as an example of unintended use. Regarding data acquisition, the training data used for model development were mined from various publicly available sources on the web. Although we invested heavily in data cleaning, personally identifiable information may not be entirely eliminated. Finally, although we did our best to optimize for translation quality, mistranslations produced by the model could remain. Although the odds are low, this could have an adverse impact on those who rely on these translations to make important decisions (particularly when related to health and safety).
## Caveats and Recommendations
- Our model has been tested on the Wikimedia domain with limited investigation on other domains supported in NLLB-MD. In addition, the supported languages may have variations that our model is not capturing. Users should make appropriate assessments.
## Carbon Footprint Details
- The carbon dioxide (CO2e) estimate is reported in Section 8.8.
[
-0.0306396484375,
-0.041900634765625,
0.020111083984375,
0.0233917236328125,
-0.0110321044921875,
-0.01317596435546875,
-0.00910186767578125,
-0.051422119140625,
-0.0026416778564453125,
0.056732177734375,
-0.040496826171875,
-0.02325439453125,
-0.047332763671875... |
shahrukhx01/bert-multitask-query-classifiers | 2021-09-27T17:01:56.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | shahrukhx01 | null | null | shahrukhx01/bert-multitask-query-classifiers | 3 | 2,292 | transformers | 2022-03-02T23:29:05 | # A Multi-task learning model with two prediction heads
* One prediction head classifies keyword queries versus statements/questions
* The other prediction head classifies statements versus questions
## Scores
##### Spaadia SQuaD Test acc: **0.9891**
##### Quora Keyword Pairs Test acc: **0.98048**
## Datasets:
Quora Keyword Pairs: https://www.kaggle.com/stefanondisponibile/quora-question-keyword-pairs
Spaadia SQuaD pairs: https://www.kaggle.com/shahrukhkhan/questions-vs-statementsclassificationdataset
## Article
[Medium article](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)
## Demo Notebook
[Colab Notebook Multi-task Query classifiers](https://colab.research.google.com/drive/1R7WcLHxDsVvZXPhr5HBgIWa3BlSZKY6p?usp=sharing)
## Clone the model repo
```bash
git clone https://huggingface.co/shahrukhx01/bert-multitask-query-classifiers
```
```python
%cd bert-multitask-query-classifiers/
```
## Load model
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch
model = BertForSequenceClassification.from_pretrained(
"shahrukhx01/bert-multitask-query-classifiers",
task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
```
## Run inference on both Tasks
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch
model = BertForSequenceClassification.from_pretrained(
"shahrukhx01/bert-multitask-query-classifiers",
task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
## Keyword vs Statement/Question Classifier
queries = ["keyword query", "is this a keyword query?"]
task_name = "quora_keyword_pairs"
sequence = tokenizer(queries, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for query, prediction in zip(queries, predictions):
  print(f"task: {task_name}, input: {query} \n prediction=> {prediction}")
print()

## Statement vs Question Classifier
queries = ["where is berlin?", "is this a keyword query?", "Berlin is in Germany."]
task_name = "spaadia_squad_pairs"
sequence = tokenizer(queries, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for query, prediction in zip(queries, predictions):
  print(f"task: {task_name}, input: {query} \n prediction=> {prediction}")
print()
``` | 2,813 | [
[
-0.0200042724609375,
-0.05316162109375,
0.0196075439453125,
0.0087738037109375,
-0.009674072265625,
-0.000042498111724853516,
-0.027984619140625,
-0.0137939453125,
-0.0107269287109375,
0.0186920166015625,
-0.0543212890625,
-0.0285186767578125,
-0.052032470703125... |
gchhablani/bert-base-cased-finetuned-qqp | 2021-09-20T09:08:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"fnet-bert-base-comparison",
"en",
"dataset:glue",
"arxiv:2105.03824",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gchhablani | null | null | gchhablani/bert-base-cased-finetuned-qqp | 0 | 2,289 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.9083848627256987
- name: F1
type: f1
value: 0.8767633750332712
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qqp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3752
- Accuracy: 0.9084
- F1: 0.8768
- Combined Score: 0.8926
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
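In the absence of a documented usage section, here is a minimal inference sketch (the example questions are placeholders, and reading label 1 as "duplicate" follows the usual GLUE QQP convention):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gchhablani/bert-base-cased-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP is a sentence-pair task: encode the two questions together
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print("duplicate" if logits.argmax(-1).item() == 1 else "not duplicate")
```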
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qqp \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qqp \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.308 | 1.0 | 22741 | 0.2548 | 0.8925 | 0.8556 | 0.8740 |
| 0.201 | 2.0 | 45482 | 0.2881 | 0.9032 | 0.8698 | 0.8865 |
| 0.1416 | 3.0 | 68223 | 0.3752 | 0.9084 | 0.8768 | 0.8926 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,876 | [
[
-0.0303497314453125,
-0.041229248046875,
0.0111541748046875,
0.0098114013671875,
-0.0090179443359375,
-0.0207977294921875,
-0.01540374755859375,
-0.010711669921875,
0.01039886474609375,
0.0194854736328125,
-0.050872802734375,
-0.044952392578125,
-0.0503540039062... |
mrm8488/bert-medium-finetuned-squadv2 | 2021-05-20T00:25:00.000Z | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"en",
"arxiv:1908.08962",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | mrm8488 | null | null | mrm8488/bert-medium-finetuned-squadv2 | 1 | 2,287 | transformers | 2022-03-02T23:29:05 | ---
language: en
thumbnail:
---
# BERT-Medium fine-tuned on SQuAD v2
[BERT-Medium](https://github.com/google-research/bert/) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task.
**Model size** (after training): **157.46 MB**
## Details of BERT-Medium and its 'family' (from their documentation)
Released on March 11th, 2020
This model is part of the 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962).
The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
## Details of the downstream task (Q&A) - Dataset
[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM.
The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
## Results:
| Metric | # Value |
| ------ | --------- |
| **EM** | **65.95** |
| **F1** | **70.11** |
### Raw metrics from benchmark included in training script:
```json
{
"exact": 65.95637159942727,
"f1": 70.11632254245896,
"total": 11873,
"HasAns_exact": 67.79689608636977,
"HasAns_f1": 76.12872765631123,
"HasAns_total": 5928,
"NoAns_exact": 64.12111017661901,
"NoAns_f1": 64.12111017661901,
"NoAns_total": 5945,
"best_exact": 65.96479407058031,
"best_exact_thresh": 0.0,
"best_f1": 70.12474501361196,
"best_f1_thresh": 0.0
}
```
## Comparison:
| Model | EM | F1 score | SIZE (MB) |
| --------------------------------------------------------------------------------------------- | --------- | --------- | --------- |
| [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** |
| [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | 57.12 | 60.86 | 24.34 |
| [bert-mini-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-finetuned-squadv2) | 56.31 | 59.65 | 42.63 |
| [bert-mini-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-5-finetuned-squadv2) | 63.51 | 66.78 | 66.76 |
| [bert-small-finetuned-squadv2](https://huggingface.co/mrm8488/bert-small-finetuned-squadv2) | 60.49 | 64.21 | 109.74 |
| [bert-medium-finetuned-squadv2](https://huggingface.co/mrm8488/bert-medium-finetuned-squadv2) | **65.95** | **70.11** | 157.46 |
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
    model="mrm8488/bert-medium-finetuned-squadv2",
    tokenizer="mrm8488/bert-medium-finetuned-squadv2"
)
qa_pipeline({
'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
'question': "Who has been working hard for hugginface/transformers lately?"
})
# Output:
```
```json
{
"answer": "Manuel Romero",
"end": 13,
"score": 0.9939319924374637,
"start": 0
}
```
### Yes! That was easy! Let's try with another example
```python
qa_pipeline({
'context': "Manuel Romero has been working remotely in the repository hugginface/transformers lately",
'question': "How has been working Manuel Romero?"
})
# Output:
```
```json
{ "answer": "remotely", "end": 39, "score": 0.3612058272768017, "start": 31 }
```
### It works!!
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
| 4,597 | [
[
-0.051239013671875,
-0.05194091796875,
0.014801025390625,
0.01291656494140625,
-0.01024627685546875,
0.01195526123046875,
-0.02362060546875,
-0.02685546875,
0.02978515625,
0.00640106201171875,
-0.07318115234375,
-0.0246429443359375,
-0.0435791015625,
0.00688... |
TheBloke/dolphin-2.2.1-mistral-7B-GPTQ | 2023-10-31T00:50:49.000Z | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/dolphin-2.2.1-mistral-7B-GPTQ | 13 | 2,280 | transformers | 2023-10-30T23:54:16 | ---
base_model: ehartford/dolphin-2.2.1-mistral-7b
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
inference: false
language:
- en
license: apache-2.0
model_creator: Eric Hartford
model_name: Dolphin 2.2.1 Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dolphin 2.2.1 Mistral 7B - GPTQ
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Dolphin 2.2.1 Mistral 7B](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Eric Hartford's Dolphin 2.2.1 Mistral 7B](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/dolphin-2.2.1-mistral-7B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/dolphin-2.2.1-mistral-7B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `dolphin-2.2.1-mistral-7B-GPTQ`:
```shell
mkdir dolphin-2.2.1-mistral-7B-GPTQ
huggingface-cli download TheBloke/dolphin-2.2.1-mistral-7B-GPTQ --local-dir dolphin-2.2.1-mistral-7B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir dolphin-2.2.1-mistral-7B-GPTQ
huggingface-cli download TheBloke/dolphin-2.2.1-mistral-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir dolphin-2.2.1-mistral-7B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir dolphin-2.2.1-mistral-7B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/dolphin-2.2.1-mistral-7B-GPTQ --local-dir dolphin-2.2.1-mistral-7B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/dolphin-2.2.1-mistral-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/dolphin-2.2.1-mistral-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `dolphin-2.2.1-mistral-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/dolphin-2.2.1-mistral-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/dolphin-2.2.1-mistral-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Eric Hartford's Dolphin 2.2.1 Mistral 7B
# dolphin-2.2.1-mistral-7b
Dolphin 2.2.1 ๐ฌ
https://erichartford.com/dolphin
This is a checkpoint release to fix overfit training: the previous checkpoint responded with CoT even when it wasn't requested, and was overly compliant even when a request made no sense. This one should be better.
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2.1-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on [mistralAI](https://huggingface.co/mistralai/Mistral-7B-v0.1), with apache-2.0 license, so it is suitable for commercial or non-commercial use.
New in 2.2 are conversation and empathy. With an infusion of curated Samantha DNA, Dolphin can now give you personal advice, cares about your feelings, and has extra training on long multi-turn conversations.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of WizardLM and Samantha to give it multiturn conversation and empathy.
## Training
It took 48 hours to train 4 epochs on 4x A100s.
Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output


[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
## Training hyperparameters
The following hyperparameters were used during training (a minimal optimizer/scheduler sketch follows the list):
- learning_rate: 6e-06
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 80
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
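A minimal sketch of what this optimizer and schedule look like in PyTorch/transformers terms; the stand-in `model` and step count are placeholders, not values from the training run:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)   # stand-in for the actual LLM being fine-tuned
num_training_steps = 1000       # placeholder; depends on dataset size and epochs

# Adam with betas=(0.9, 0.95), epsilon=1e-05, cosine schedule, 100 warmup steps
optimizer = torch.optim.Adam(model.parameters(), lr=6e-6, betas=(0.9, 0.95), eps=1e-5)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
)
```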
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
| 23,312 | [
[
-0.05316162109375,
-0.056671142578125,
0.0147705078125,
0.0142059326171875,
-0.02471923828125,
-0.017974853515625,
0.004611968994140625,
-0.0423583984375,
0.014678955078125,
0.034393310546875,
-0.047637939453125,
-0.033233642578125,
-0.0279083251953125,
-0.0... |
timm/tf_efficientnet_b6.ns_jft_in1k | 2023-04-27T21:23:18.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1905.11946",
"arxiv:1911.04252",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tf_efficientnet_b6.ns_jft_in1k | 0 | 2,279 | timm | 2022-12-13T00:05:22 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_b6.ns_jft_in1k
An EfficientNet image classification model. Trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in TensorFlow by the paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 43.0
- GMACs: 19.4
- Activations (M): 167.4
- Image size: 528 x 528
- **Papers:**
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- Self-training with Noisy Student improves ImageNet classification: https://arxiv.org/abs/1911.04252
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnet_b6.ns_jft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
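To turn the top-5 indices above into human-readable labels, a label list is needed; a minimal sketch, assuming a hypothetical `imagenet1k_labels.txt` file with one class name per line in index order (not shipped with this card):

```python
# Hypothetical label file: one ImageNet-1k class name per line, in index order.
with open("imagenet1k_labels.txt") as f:
    labels = [line.strip() for line in f]

# Reuses top5_probabilities / top5_class_indices from the snippet above.
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f"{labels[idx]}: {prob.item():.2f}%")
```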
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b6.ns_jft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 264, 264])
# torch.Size([1, 40, 132, 132])
# torch.Size([1, 72, 66, 66])
# torch.Size([1, 200, 33, 33])
# torch.Size([1, 576, 17, 17])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_b6.ns_jft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2304, 17, 17) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@article{Xie2019SelfTrainingWN,
title={Self-Training With Noisy Student Improves ImageNet Classification},
author={Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2019},
pages={10684-10695}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,607 | [
[
-0.029815673828125,
-0.04248046875,
-0.006488800048828125,
0.0091094970703125,
-0.018157958984375,
-0.028167724609375,
-0.0250701904296875,
-0.031494140625,
0.0121612548828125,
0.026611328125,
-0.026519775390625,
-0.0421142578125,
-0.054534912109375,
-0.0106... |
yentinglin/Taiwan-LLaMa-v1.0 | 2023-11-02T08:36:04.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"dataset:yentinglin/zh_TW_c4",
"dataset:yentinglin/traditional_mandarin_instructions",
"arxiv:2305.13711",
"arxiv:2104.09864",
"license:llama2",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | yentinglin | null | null | yentinglin/Taiwan-LLaMa-v1.0 | 66 | 2,279 | transformers | 2023-08-10T05:31:15 | ---
license: llama2
datasets:
- yentinglin/zh_TW_c4
- yentinglin/traditional_mandarin_instructions
language:
- zh
widget:
- text: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: ไฝ ๅฅฝ๏ผ่ซๅไฝ ๅฏไปฅๅนซๆๅฏซไธๅฐๆจ่ฆไฟกๅ๏ผ ASSISTANT:"
library_name: transformers
pipeline_tag: text-generation
---
# Language Models for Taiwanese Culture
<p align="center">
โ๏ธ <a href="https://chat.twllm.com/" target="_blank">Online Demo</a>
โข
๐ค <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> โข ๐ฆ <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> โข ๐ <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
โข ๐จ๏ธ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full-parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** is pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, better training strategies) is under development, aiming to enhance model performance on Taiwanese culture.
- [ ] **Extend max length**: Utilizing the Rope mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's length will be extended from 4k to 8k.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| ไปฅๆกๅฐๆฏ่ง่ๅ็ๆฏ้ซ้ๅชไธๅๆฏ้ป? | ไปฅๆกๅฐๅฐๆฏ่ๅ็ๆฏ้ซ้ๅธ็ๆๆดฅใ | ้ซ้ไปฅๆๆๆกๅฐๆฏ่ง่ๅ็ๆฏ้ปๆฏ็ฐๅฏฎๆไธ็ใโ | ไปฅๆกๅฐๆฏ่ง่ๅ็ๆฏ้ซ้ๅธ็ๆๆดฅๅณถใๆๆดฅๅณถๆฏไธๅๅฐๅณถ๏ผ... | ้ซ้ๅธๅงๆๅคๅไปฅๆกๅฐๆฏ่ง่ๅ็ๆฏ้ป๏ผๅถไธญไปฅไธ้ขๅนพๅๆ็บ่ๅ๏ผ 1. ้ซ้ๆท้็ด็ทๆฒฟ้ๆฏ้ป๏ผ้ซ้ๅธๆฟๅบ็บไบๆจๅปฃ้ซ้ |
| ๅ้ณณๆฏ่ชฐ๏ผ | ๅ้ณณ๏ผAudrey Tang๏ผๆฏๅฐ็ฃ็ๆฟๆฒปๅฎถใ็คพๆๆดปๅๅฎถๅ่ป้ซๅทฅ็จๅธซใๅฅนๆผ2016ๅนดๆไปปๅฐ็ฃ่กๆฟ้ขๆธไฝๆฟๅๅงๅก๏ผๆฏๅฐ็ฃๆฟๅบๆญทๅฒไธ้ฆไฝๆฒๆๆฟ้ปจ่ๆฏ็ๆฟๅๅฎใๅ้ณณๅจ่ป้ซๅทฅ็จ้ ๅๆ่ฑๅฏ็็ถ้ฉใ โ | ๅ้ณณๆฏๆฟๆฒป่ฉ่ซๅฎถใ่ณ่จ็คพๆๅญธ็ ็ฉถ่๏ผไนๆฏๅฐ็ฃ็ฌฌไธไฝๆไปปๆฟๅบๆธไฝๆฟๅง็ไบบใ โ | ๅ้ณณ๏ผTang Feng๏ผๆฏไธไฝ่ๅ็ไธญๅๆผๅกๅๆญๆใไปๅบ็ๆผ1974ๅนด12ๆ17ๆฅ๏ผๅบ็ๅฐ้ปๅจไธญๅๅไบฌใๅ้ณณๅจ1995ๅนด้ๅงไป็ๆผ่็ๆถฏ๏ผไธฆๅจๅๅนด็ผ่กไบไป็้ฆๅผตๅฐ่ผฏใๆไธๅญคๅฎใใๆญคๅพ๏ผไป็ผ่กไบ่จฑๅคๅๆญก่ฟ็ๆญๆฒ๏ผไธฆๅจๅค้จ้ป่ฆๅๅ้ปๅฝฑไธญๆไปปไธป่งใๅ้ณณๅ ๅถ็ฒพๆน็ๆผๆๅ่ฟทไบบ็ๆญ่ฒ่่ขซ่ญฝ็บโไบๆดฒ้ญๅ็ท็โใไป็้ณๆจ็ฒๅพไบๅคๅ็้๏ผๅๆฌโ่ฏ่กจๆไฝณๆฐ็ทๆผๅฑ็โใโ่ฏ่กจๆไฝณๆญๆฒ็โๅโ่ฏ่กจๆไฝณ่่น็โ็ญใ | ๅ้ณณๆฏๅๆๆๆ็ไธไฝ่ๅๆๅญธๅฎถ๏ผไป็ไปฃ่กจไฝๅๅๆฌใ้ฐ้จญใใใ็ๅกๅญใ็ญใไป็ๆๅญธ้ขจๆ ผๅทๆๅผท็็ๅๆงๅๅตๆ๏ผ่ขซ่ญฝ็บใๅไปฃๆๅญธไน็ถใใ |
| NTU ๅจๅช? | NTUไฝๆผๆฐๅ ๅก๏ผๅท้ซๅฐๅๆฏ50 Nanyang Avenue, Singapore 639798ใ | NTU ๆฏๅ็ซ่บ็ฃๅคงๅญธ็็ธฎๅฏซใๅฎๆฏ่บ็ฃ็ไธๆ้ ๅฐ็ ็ฉถๅๅคงๅญธ๏ผไฝๆผ่บๅๅธใ โ | NTU ๆฏๅฐ็ฃ็ไธๆๅคงๅญธ๏ผไฝๆผๅฐๅๅธใโ | NTU ็็ธฝ้จไฝๆผๅฐ็ฃๆฐๅๅธๆฟๆฉๅ๏ผไฝๅฐ้ข็ฉ็ด็บ1,000ๅฌ้ ใ|
| ็้บผๆฏ22K? | "22K" ๅฏ่ฝๆ็ๆฏ้็็ดๅบฆ๏ผๅฐคๅถๆฏๅจ็ ๅฏถๅ้้ฃพๆน้ขใ | "22K"ๆฏๆๅฐ็ฃ็ๅคงๅญธ็ขๆฅญ็็่ตท่ช๏ผๅคง็ดๆฏๆฐๅฐๅนฃ22,000ๅใ้ๅ่ฉ่ขซ็จไพๅฝขๅฎนๅคงๅญธ็ขๆฅญ็็่ตท่ชๅไฝ๏ผไธฆไธๅจๅฐฑๆฅญๅธๅ ดไธๅฐๆผๅคงๅญธ็ขๆฅญ็็้ๆฑไธ่ถณใโ | 22Kๆฏๆๆบซๅบฆๅฐบๅบฆไธ็22,000K๏ผๅๅบฆ๏ผใ1Kไปฃ่กจไธๅบฆ๏ผๅณๆบซๅบฆๅทฎ่ท๏ผๆฏๆๆฐๆบซๆจ๏ผๆๆฐๅบฆ๏ผ็ไธ็พ่ฌๅไนไธใๅ ๆญค๏ผ22K็ญๆผ-253.15ยฐC๏ผๆๆฐๆบซๆจ๏ผๆ-423.67ยฐF๏ผ่ฏๆฐๆบซๆจ๏ผใ | 22Kๆฏไธ็จฎ็ฐกๅฏซ๏ผๅณ22,000ใๅฎๆฏไธๅๆธๅญ๏ผ่กจ็คบ22,000ๅๅฎไฝใ|
| ่ๅ่ฆไธ่ฆๅ ้ฆ่๏ผ | ้ๆฏๅๅไบบๅฃๅณ็ๅ้ก๏ผๆไบไบบๅๆญกๅจ่ๅไธญๅ ๅฅ้ฆ่๏ผๅ ็บ้ฆ่็ๆธ้ฆๅฏไปฅๅขๆทป้ขจๅณ๏ผ่ๆไบไบบๅไธๅๆญก้ฆ่็ๅณ้๏ผ่ฆบๅพๅฎๆ่้ๅถไป้ฃๆ็ๅณ้ใๆไปฅ่ฆไธ่ฆๅ ้ฆ่๏ผๅฏไปฅๆ นๆๅไบบๅๅฅฝไพๆฑบๅฎใ | ้ๆฏไธๅๅไบบๅๅฅฝ็ๅ้ก๏ผๆไบไบบๅๆญกๅจ่ๅ่ฃกๅ ้ฆ่๏ผๆไบไบบไธๅๆญกใ**ๅจๅฐ็ฃ**๏ผ่ๅๅ ้ฆ่ๆฏไธ็จฎๅธธ่ฆ็ๅๆณ๏ผ้ๆฏไธ็จฎๅจ่ๅไธๅ ้ฆ่็ๅๆณใ็ถ่๏ผๅจๅถไปๅ้ๆฏไธๅๆๅๅทฎ็ฐ็ๅ้กใ๐ | ่ๅๅ ้ฆ่ๅฏไปฅๅขๆทปไธไบ่ณ้ฆๅ็พๅณ็ๅฃๆ๏ผๅฆๆไฝ ๅๆญก้ฆ่็ๅณ้๏ผๅฏไปฅ่ฉฆ่ฉฆๅ ๅฅ่ๅไธญใ้ฆ่้ๅธธๅจ่ฃฝไฝ้็จไธญ็้ฆ๏ผๅฏไปฅๅขๆทปไธไบ่ณ้ฆๅ็พๅณ็ๅณ้ใๅฆๆไฝ ๆณ่ฆๅ ๅฅ้ฆ่๏ผๅฏไปฅๅจ่ฃฝไฝ้็จไธญ็้ฆไธไบ้ฆ่๏ผ็ถๅพๅฐๅถๆททๅฅ่ๅไธญใ่ๅๅ ้ฆ่ๅฏไปฅๅขๆทปไธไบ่ณ้ฆๅ็พๅณ็ๅฃๆ๏ผๅฆๆไฝ ๅๆญก้ฆ่็ๅณ้๏ผๅฏไปฅ่ฉฆ่ฉฆๅ ๅฅ่ๅไธญใ| ๅ ้ฆ่็่ๅๅณ้ๆดๅฅฝ๏ผไฝๆฏไธๅ ้ฆ่็่ๅไนๅพๅฅฝๅใๅฆๆไฝ ๅๆญก้ฆ่็ๅณ้๏ผ้ฃ้บผๅ ไธ้ฆ่ๅฐฑๅฅฝไบใๅฆๆไฝ ไธๅๆญก้ฆ่๏ผ้ฃ้บผไธๅ ไนไธๆๅฝฑ้ฟ่ |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | ๐ค <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | ๐ค <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | ๐ค <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | ๐ค <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | ๐ค <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from common crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into Traditional Mandarin for evaluating the general capability.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
## How to deploy the model on my own machine?
We recommend hosting models with [๐ค Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
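As an illustration (not part of the original card), the template can be filled in and run locally with `transformers`; a minimal sketch with placeholder generation settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Taiwan-LLaMa-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: {user} ASSISTANT:"
)
# Placeholder user message and generation settings.
inputs = tokenizer(template.format(user="Hello!"), return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```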
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models. | 10,609 | [
[
-0.03271484375,
-0.0498046875,
0.0231781005859375,
0.0228271484375,
-0.03668212890625,
0.00823974609375,
-0.008392333984375,
-0.043853759765625,
0.03729248046875,
0.0255279541015625,
-0.04461669921875,
-0.036346435546875,
-0.03082275390625,
0.0126953125,
... |
albert-large-v1 | 2021-01-13T15:29:06.000Z | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | null | null | null | albert-large-v1 | 0 | 2,277 | transformers | 2022-03-02T23:29:04 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT Large v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 1024 hidden dimension
- 16 attention heads
- 17M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
   {
      "sequence":"[CLS] hello i'm a modeling model.[SEP]",
      "score":0.05816134437918663,
      "token":12807,
      "token_str":"▁modeling"
   },
   {
      "sequence":"[CLS] hello i'm a modelling model.[SEP]",
      "score":0.03748830780386925,
      "token":23089,
      "token_str":"▁modelling"
   },
   {
      "sequence":"[CLS] hello i'm a model model.[SEP]",
      "score":0.033725276589393616,
      "token":1061,
      "token_str":"▁model"
   },
   {
      "sequence":"[CLS] hello i'm a runway model.[SEP]",
      "score":0.017313428223133087,
      "token":8014,
      "token_str":"▁runway"
   },
   {
      "sequence":"[CLS] hello i'm a lingerie model.[SEP]",
      "score":0.014405295252799988,
      "token":29104,
      "token_str":"▁lingerie"
   }
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = AlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = TFAlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("The man worked as a [MASK].")
[
   {
      "sequence":"[CLS] the man worked as a chauffeur.[SEP]",
      "score":0.029577180743217468,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the man worked as a janitor.[SEP]",
      "score":0.028865724802017212,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the man worked as a shoemaker.[SEP]",
      "score":0.02581118606030941,
      "token":29024,
      "token_str":"▁shoemaker"
   },
   {
      "sequence":"[CLS] the man worked as a blacksmith.[SEP]",
      "score":0.01849772222340107,
      "token":21238,
      "token_str":"▁blacksmith"
   },
   {
      "sequence":"[CLS] the man worked as a lawyer.[SEP]",
      "score":0.01820771023631096,
      "token":3672,
      "token_str":"▁lawyer"
   }
]
>>> unmasker("The woman worked as a [MASK].")
[
   {
      "sequence":"[CLS] the woman worked as a receptionist.[SEP]",
      "score":0.04604868218302727,
      "token":25331,
      "token_str":"▁receptionist"
   },
   {
      "sequence":"[CLS] the woman worked as a janitor.[SEP]",
      "score":0.028220869600772858,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the woman worked as a paramedic.[SEP]",
      "score":0.0261906236410141,
      "token":23386,
      "token_str":"▁paramedic"
   },
   {
      "sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
      "score":0.024797942489385605,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the woman worked as a waitress.[SEP]",
      "score":0.024124596267938614,
      "token":13678,
      "token_str":"▁waitress"
   }
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
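As a worked illustration of the rule above (a sketch, not the authors' code), the 15% / 80-10-10 split can be written out as:

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    # Apply the 15% selection and 80/10/10 replacement rule described above.
    masked = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 marks positions that are not predicted
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok
            r = random.random()
            if r < 0.8:                      # 80%: replace with [MASK]
                masked[i] = mask_id
            elif r < 0.9:                    # 10%: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return masked, labels
```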
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 9,757 | [
[
-0.00957489013671875,
-0.039825439453125,
0.019989013671875,
0.0248870849609375,
-0.0313720703125,
0.0005731582641601562,
0.00794219970703125,
-0.01190185546875,
0.028472900390625,
0.0469970703125,
-0.04168701171875,
-0.03411865234375,
-0.061309814453125,
0.... |
laion/CLIP-convnext_base_w-laion2B-s13B-b82K | 2023-04-18T22:05:45.000Z | [
"open_clip",
"clip",
"zero-shot-image-classification",
"arxiv:2201.03545",
"arxiv:1910.04867",
"license:mit",
"has_space",
"region:us"
] | zero-shot-image-classification | laion | null | null | laion/CLIP-convnext_base_w-laion2B-s13B-b82K | 3 | 2,277 | open_clip | 2023-01-03T00:22:20 | ---
license: mit
library_name: open_clip
pipeline_tag: zero-shot-image-classification
tags:
- clip
---
# Model Card for CLIP-convnext_base_w-320.laion2B-s13B-b82K
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution
Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
* First released model weights exploring increase of augmentation + regularization for image tower via adding (greater scale range of RRC, random erasing, stochastic depth)
The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.
All models in this series were trained for 13B samples and have ImageNet Zero-Shot top-1 of >= 70.8%. Compared to a ViT-B/16 trained for 34B samples seen with a zero-shot of 70.2% (68.1% at 13B samples seen), this suggests the ConvNeXt architecture may be more sample efficient in this range of model scale. More experiments are needed to confirm.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 |
| [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 |
| [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training was done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
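As a usage illustration (not included in the original card), zero-shot classification with OpenCLIP might look like the sketch below; the model name and `pretrained` tag are inferred from the table above and should be checked against the OpenCLIP registry:

```python
import torch
import open_clip
from PIL import Image

# Model/pretrained tags are assumptions inferred from the table above.
model, _, preprocess = open_clip.create_model_and_transforms(
    'convnext_base_w', pretrained='laion2b_s13b_b82k'
)
tokenizer = open_clip.get_tokenizer('convnext_base_w')

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```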
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIPโs performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below.
# Training Details
## Training Data
This model was trained with one of (see table in intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a "safe" subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All models were trained with a global batch size of 81920 for 64 checkpoint intervals of 203.7M samples for a total of ~13B samples seen over training.
For 256x256 models, a slurm script w/ srun below was used on 20 8-GPU (A100 40GB) nodes (Stability), switching to 40 4-GPU nodes for time on JUWELS.
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_256" \
--resume 'latest' \
--train-data="pipe:aws s3 cp s3://mybucket/path/laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--warmup 10000 \
--batch-size=512 \
--epochs=64 \
--dataset-resampled \
--clip-grad-norm 5.0 \
--lr 1e-3 \
--workers=6 \
--model "convnext_base_w" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
For 320x320 models, same as above but w/ 32 8-GPU nodes, local batch size 320, or 64 4-GPU nodes on JUWELs.
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
## Results
The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
As part of exploring increased augmentation + regularization, early evaluations suggest that `augreg` trained models evaluate well over a wider range of resolutions. This is especially true for the 320x320 LAION-A model, where the augreg run was lower than the non-augreg when evaluated at the train resolution of 320x320 (71.3 vs 71.7), but improves to 72.2 when evaluated at 384x384 (the non-augreg drops to 71.0 at 384x384).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jรผlich Supercomputing Centre (JSC).
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
``` | 12,623 | [
[
-0.035400390625,
-0.035919189453125,
0.004611968994140625,
0.0024890899658203125,
-0.031341552734375,
-0.033477783203125,
-0.0119476318359375,
-0.048919677734375,
0.0240631103515625,
0.0283203125,
-0.039642333984375,
-0.0341796875,
-0.0382080078125,
-0.00481... |
h2oai/h2ogpt-4096-llama2-7b | 2023-08-24T18:34:22.000Z | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"h2ogpt",
"en",
"license:llama2",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | h2oai | null | null | h2oai/h2ogpt-4096-llama2-7b | 2 | 2,276 | transformers | 2023-08-09T17:36:44 | ---
inference: false
language:
- en
license: llama2
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- h2ogpt
---
h2oGPT clone of [Meta's Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf).
This model can be fine-tuned with [H2O.ai](https://h2o.ai/) open-source software:
- h2oGPT https://github.com/h2oai/h2ogpt/
- H2O LLM Studio https://h2o.ai/platform/ai-cloud/make/llm-studio/
Try our live [h2oGPT demo](https://gpt.h2o.ai) with side-by-side LLM comparisons and private document chat!
## Model Architecture
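The module listing below can presumably be reproduced by instantiating the model and printing it; a minimal sketch (loading the full weights requires substantial memory):

```python
from transformers import AutoModelForCausalLM

# Downloads the full checkpoint; run on a machine with enough RAM/GPU memory.
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-4096-llama2-7b")
print(model)  # prints the module tree shown below
```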
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
``` | 1,665 | [
[
-0.027374267578125,
-0.051483154296875,
0.0439453125,
0.03369140625,
-0.02838134765625,
0.009033203125,
0.005382537841796875,
-0.03399658203125,
0.0277252197265625,
0.0267181396484375,
-0.04296875,
-0.046234130859375,
-0.042449951171875,
-0.0159454345703125,... |
timm/deit3_base_patch16_224.fb_in22k_ft_in1k | 2023-03-28T00:48:19.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2204.07118",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/deit3_base_patch16_224.fb_in22k_ft_in1k | 0 | 2,275 | timm | 2023-03-28T00:46:58 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for deit3_base_patch16_224.fb_in22k_ft_in1k
A DeiT-III image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.6
- GMACs: 17.6
- Activations (M): 23.9
- Image size: 224 x 224
- **Papers:**
- DeiT III: Revenge of the ViT: https://arxiv.org/abs/2204.07118
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('deit3_base_patch16_224.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'deit3_base_patch16_224.fb_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
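As an extra illustration (not from the original card), the pooled embeddings can be compared with cosine similarity; a small sketch reusing the `model`, `transforms`, and `img` objects defined above:

```python
import torch.nn.functional as F

emb1 = model(transforms(img).unsqueeze(0))
emb2 = model(transforms(img).unsqueeze(0))  # in practice, embed a different image
similarity = F.cosine_similarity(emb1, emb2)
print(similarity.item())  # 1.0 here, since both embeddings come from the same image
```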
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Touvron2022DeiTIR,
title={DeiT III: Revenge of the ViT},
author={Hugo Touvron and Matthieu Cord and Herve Jegou},
journal={arXiv preprint arXiv:2204.07118},
year={2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,053 | [
[
-0.030853271484375,
-0.03558349609375,
0.00878143310546875,
0.01446533203125,
-0.0288238525390625,
-0.0211639404296875,
-0.0031871795654296875,
-0.029693603515625,
0.013641357421875,
0.0229644775390625,
-0.04107666015625,
-0.055145263671875,
-0.0477294921875,
... |
ostris/watercolor_style_lora_sdxl | 2023-08-15T01:57:12.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"sdxl",
"license:apache-2.0",
"has_space",
"region:us"
] | text-to-image | ostris | null | null | ostris/watercolor_style_lora_sdxl | 9 | 2,275 | diffusers | 2023-08-15T01:50:24 | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- sdxl
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: darth vader fighting superman, 2 people, lightsaber
---
# Watercolor Style - SDXL LoRA
### Tips
- No trigger words needed.
- Converts any prompt into a watercolor picture
- Strength of 1.0 usually works, but you may need to increase or decrease it as needed; a minimal diffusers usage sketch follows this list.
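As a usage sketch (not from the original card), the LoRA can be applied on top of SDXL with diffusers; `weight_name` may need to be passed explicitly to match the actual `.safetensors` file in this repo:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ostris/watercolor_style_lora_sdxl")

image = pipe(
    "darth vader fighting superman, 2 people, lightsaber",
    cross_attention_kwargs={"scale": 1.0},  # LoRA strength ~1.0, per the tips above
).images[0]
image.save("watercolor.png")
```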
### Samples
[<img src="https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03007-1451488659-darth%20vader%20fighting%20superman%2C%202%20people%2C%20lightsaber%20_lora_watercolor_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03007-1451488659-darth%20vader%20fighting%20superman%2C%202%20people%2C%20lightsaber%20_lora_watercolor_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03014-3386049647-marty%20mcfly%20%20_lora_watercolor_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03014-3386049647-marty%20mcfly%20%20_lora_watercolor_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03026-3442519268-michael%20jackson%20in%20thriller%20video%20%20%20_lora_watercolor_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03026-3442519268-michael%20jackson%20in%20thriller%20video%20%20%20_lora_watercolor_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03035-1187541246-slimer%20from%20ghost%20busters%20%20%20_lora_watercolor_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03035-1187541246-slimer%20from%20ghost%20busters%20%20%20_lora_watercolor_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03042-3999946060-beavis%20and%20butthead%20%20_lora_watercolor_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/watercolor_style_lora_sdxl/resolve/main/samples/03042-3999946060-beavis%20and%20butthead%20%20_lora_watercolor_v1_sdxl_1_.jpeg)
| 2,417 | [
[
-0.05145263671875,
-0.04461669921875,
0.0296478271484375,
0.0156707763671875,
-0.03851318359375,
0.0206756591796875,
0.009796142578125,
-0.03948974609375,
0.07757568359375,
0.03814697265625,
-0.06292724609375,
-0.040252685546875,
-0.056396484375,
0.005580902... |
timm/efficientformerv2_s0.snap_dist_in1k | 2023-02-03T21:11:14.000Z | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2212.08059",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/efficientformerv2_s0.snap_dist_in1k | 0 | 2,273 | timm | 2023-02-03T21:11:09 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientformerv2_s0.snap_dist_in1k
An EfficientFormer-V2 image classification model. Pretrained with distillation on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.6
- GMACs: 0.4
- Activations (M): 5.3
- Image size: 224 x 224
- **Original:** https://github.com/snap-research/EfficientFormer
- **Papers:**
- Rethinking Vision Transformers for MobileNet Size and Speed: https://arxiv.org/abs/2212.08059
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('efficientformerv2_s0.snap_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformerv2_s0.snap_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformerv2_s0.snap_dist_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for efficientformerv2_l:
# torch.Size([2, 40, 56, 56])
# torch.Size([2, 80, 28, 28])
# torch.Size([2, 192, 14, 14])
# torch.Size([2, 384, 7, 7])
print(o.shape)
```
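As an extra illustration (not from the original card), the extracted feature maps can be pooled into per-stage vectors, e.g. for a downstream head; a small sketch reusing `output` from the snippet above:

```python
import torch.nn.functional as F

# Global-average-pool each stage's feature map into a (batch_size, channels) vector.
pooled = [F.adaptive_avg_pool2d(o, 1).flatten(1) for o in output]
for p in pooled:
    print(p.shape)
```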
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|efficientformerv2_l.snap_dist_in1k |83.628|96.54 |26.32 |224 |
|efficientformer_l7.snap_dist_in1k |83.368|96.534|82.23 |224 |
|efficientformer_l3.snap_dist_in1k |82.572|96.24 |31.41 |224 |
|efficientformerv2_s2.snap_dist_in1k|82.128|95.902|12.71 |224 |
|efficientformer_l1.snap_dist_in1k |80.496|94.984|12.29 |224 |
|efficientformerv2_s1.snap_dist_in1k|79.698|94.698|6.19 |224 |
|efficientformerv2_s0.snap_dist_in1k|76.026|92.77 |3.6 |224 |
## Citation
```bibtex
@article{li2022rethinking,
title={Rethinking Vision Transformers for MobileNet Size and Speed},
author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian},
journal={arXiv preprint arXiv:2212.08059},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
| 4,560 | [
[
-0.032196044921875,
-0.035003662109375,
0.00904083251953125,
0.0087890625,
-0.0256195068359375,
-0.029327392578125,
-0.01045989990234375,
-0.0236358642578125,
0.0201568603515625,
0.0243377685546875,
-0.02947998046875,
-0.036956787109375,
-0.0565185546875,
-0... |
AIARTCHAN/camelliamix_v2.0 | 2023-08-04T01:49:12.000Z | [
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | AIARTCHAN | null | null | AIARTCHAN/camelliamix_v2.0 | 14 | 2,271 | diffusers | 2023-03-14T21:45:56 | ---
license: other
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
---
# camelliamix_v2.0
[Original post](https://arca.live/b/aiart/71600989)
[civitai](https://civitai.com/models/44165?modelVersionId=48820)
# Download
- [original 5.55GB](https://drive.google.com/file/d/1WnNJp-Mn79B6KXhZ7t7gNrSibvut93Zi/view?usp=drive_link)
- [fp16](https://civitai.com/api/download/models/48820?type=Model&format=SafeTensor&size=pruned&fp=fp16)
## License
Based on creativeml-openrail-m; additionally, no commercial use and no redistribution are permitted.
## Merged models
- CamelliaMix_V1
- dalcefopainting_3rd
- Counterfeit-V2.5
- Motomurabito Style LoRA
- Hipoly 3D Model LoRA
- Shojo Vibe LoRA
- Kidmo//style LoRA
## Recommended settings
- Sampling method: DPM++ SDE Karras
- Clip skip: 2
- Hires.fix upscaler: R-ESRGAN 4x+Anime6B
- CFG Scale: 7~10
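A minimal `diffusers` sketch applying these settings, assuming a recent diffusers release (with `clip_skip` support in the pipeline call and `torchsde` installed for the SDE scheduler); the prompt is a hypothetical placeholder, and Hires.fix is a WebUI feature with no direct diffusers equivalent:
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "AIARTCHAN/camelliamix_v2.0", torch_dtype=torch.float16
).to("cuda")
# DPM++ SDE Karras, as recommended above
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe(
    "1girl, looking at viewer, garden",  # hypothetical prompt
    guidance_scale=8,  # CFG Scale 7~10
    clip_skip=2,       # Clip skip 2
).images[0]
image.save("sample.png")
```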




| 1,298 | [
[
-0.04400634765625,
-0.0311279296875,
-0.0144805908203125,
0.053924560546875,
-0.04779052734375,
-0.004993438720703125,
0.019866943359375,
-0.055572509765625,
0.069091796875,
0.040557861328125,
-0.069580078125,
-0.0215911865234375,
-0.0283355712890625,
0.0115... |
EarthnDusk/IllustrationBackups | 2023-07-22T11:46:02.000Z | [
"diffusers",
"stable diffusion",
"text-to-image",
"en",
"dataset:Duskfallcrew/Creative-Embeddings",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | EarthnDusk | null | null | EarthnDusk/IllustrationBackups | 0 | 2,270 | diffusers | 2023-07-18T04:44:59 | ---
license: creativeml-openrail-m
datasets:
- Duskfallcrew/Creative-Embeddings
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable diffusion
---
Illustration Merges - Osenayan Mix, Nomura Lee, JoeMad aka ultimate liquid, Comic & Anime Punk Fusion and TiS Mix V2
Backup dump intended for Ko-Fi Supporters.
By accessing this without using the link on Ko-Fi or CivitAI, you agree to pay me court fees of gardettoes, pizza and chocolate.
NEW: JOIN OUR DEVIANT ART GROUP HERE: https://www.deviantart.com/diffusionai
Our mixes are supporting PLURAL PRIDE MONTH AND DISABILITY PRIDE MONTH IN JULY.
We ourselves are not "A TEAM"; we are a diagnosed individual with Dissociative Identity Disorder. :3 No biggie, just don't start asking for a list of 200+ alters just because you want different creator names.
JULY IS PLURAL PRIDE MONTH - You all know who you are, and you shall fear no longer - you have space on CivitAI just as much as the rest of everyone else. Our goal is to create niche safe spaces for those like us. If you're not plural, neurodivergent - it's ok LOL - you're welcome to support and just download and enjoy our content!
If you want to learn more please go here: https://thepluralassociation.org/ and support us, because we're being fake claimed into oblivion for "not being ashamed".
Never be ashamed if you have quirks.
MONTHLY & ONE TIME SUPPORTERS GET PRE RELEASE ACCESS & MORE:
[](https://ko-fi.com/Z8Z8L4EO)
# WE ARE PROUDLY SPONSORED BY: https://www.piratediffusion.com/
If you got requests, or concerns, We're still looking for beta testers: JOIN THE DISCORD AND DEMAND THINGS OF US:
https://discord.gg/5t2kYxt7An
JOIN OUR SUBREDDIT: https://www.reddit.com/r/earthndusk/
Listen to the music that we've made that goes with our art:
https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38
MODEL AND LORA REQUEST FORM: https://forms.gle/aZNw9E78yfmSDnxdA
BY DOWNLOADING THIS YOU AGREE TO THE FOLLOWING:
1. TO CREDIT IF YOU MERGE
2. TO NOT MAKE OR USE THIS FOR ANY ILLEGAL USE
3. NOT TO RE-UPLOAD ANYWHERE WITHOUT CREDIT - RE-UPLOADING ON CIVITAI IS PROHIBITED, BUT OTHER GENERATION SITES WITH CREDIT ARE FINE
4. PERSONAL USE IS ABSOLUTELY FINE, JUST ADHERE TO THE CREATIVEML OPENRAIL-M LICENSE PROVIDED.
5. COMMERCIAL USE IS YOUR CHOICE, BE AWARE OF LAWS AND REGULATIONS AROUND COPYRIGHT IN YOUR AREAS.
6. THIS MODEL CAN PRODUCE NSFW CONTENT, SEVERAL NSFW BACK MIXES ARE INCLUDED. | 2,529 | [
[
-0.056427001953125,
-0.025604248046875,
0.01763916015625,
0.043731689453125,
-0.0292510986328125,
-0.0045166015625,
0.02496337890625,
-0.050537109375,
0.07659912109375,
0.041015625,
-0.07171630859375,
-0.035003662109375,
-0.032379150390625,
-0.00051546096801... |
ai-forever/T5-large-spell | 2023-10-04T18:12:32.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"spellchecking",
"NLP",
"T5",
"natural language generation",
"en",
"arxiv:2308.09435",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | ai-forever | null | null | ai-forever/T5-large-spell | 0 | 2,268 | transformers | 2023-07-29T12:59:31 | ---
license: mit
language:
- en
tags:
- spellchecking
- NLP
- T5
- pytorch
- natural language generation
---
# T5-large-spell model
### Summary
The model corrects spelling errors and typos by normalizing all words in the text to standard English.
The proofreader was trained based on the [T5-large](https://huggingface.co/t5-large) model.
An extensive dataset with "artificial" errors was taken as a training corpus: the corpus was assembled on the basis of the English-language Wikipedia and News blogs, then typos and spelling errors were automatically introduced into it using the functionality of the [SAGE library](https://github.com/ai-forever/sage).
### Public references
- [SAGE library announcement](https://youtu.be/yFfkV0Qjuu0), DataFest 2023
- [Paper about synthetic error generation methods](https://www.dialog-21.ru/media/5914/martynovnplusetal056.pdf), Dialogue 2023
- [Paper about SAGE and our best solution](https://arxiv.org/abs/2308.09435), Review EACL 2024
### Examples
| Input | Output |
| --- | --- |
| Th festeivะฐl was excelzecnt in many ways, and in particular it beinganinternational festjival sss a chะฐllenging, bet brilli an t ea. | The festival was excellent in many ways, and in particular it beinganinternational festival is a challenging, but brilliant one to see. |
| That 's why I believe in the solution which is the closest to human nature and can help us to avoid boredome. I am sure that eventually we will take off our clothes and in the future we will be undressed and free. There wo n't be any problem with being up - do - date . | That's why I believe in the solution which is the closest to human nature and can help us to avoid boredom. I am sure that eventually we will take off our clothes and in the future we will be undressed and free. There won't be any problem with being up - do - date. |
| If you bought something goregous, you well be very happy. | If you bought something gorgeous, you will be very happy. |
## Metrics
### Quality
Below are automatic metrics for determining the correctness of the spell checkers.
We present a comparison of our solution both with open automatic spell checkers and with the ChatGPT family of models on two available datasets:
- **BEA60K**: English spelling errors collected from several domains;
- **JFLEG**: 1601 sentences in English, which contain about 2,000 spelling errors.
**BEA60K**
| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| T5-large-spell | 66.5 | 83.1 | 73.9 |
| ChatGPT gpt-3.5-turbo-0301 | 66.9 | 84.1 | 74.5 |
| ChatGPT gpt-4-0314 | 68.6 | 85.2 | 76.0 |
| ChatGPT text-davinci-003 | 67.8 | 83.9 | 75.0 |
| Bert (https://github.com/neuspell/neuspell) | 65.8 | 79.6 | 72.0 |
| SC-LSTM (https://github.com/neuspell/neuspell) | 62.2 | 80.3 | 72.0 |
**JFLEG**
| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| T5-large-spell | 83.4 | 84.3 | 83.8 |
| ChatGPT gpt-3.5-turbo-0301 | 77.8 | 88.6 | 82.9 |
| ChatGPT gpt-4-0314 | 77.9 | 88.3 | 82.8 |
| ChatGPT text-davinci-003 | 76.8 | 88.5 | 82.2 |
| Bert (https://github.com/neuspell/neuspell) | 78.5 | 85.4 | 81.8 |
| SC-LSTM (https://github.com/neuspell/neuspell) | 80.6 | 86.1 | 83.2 |
## How to use
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
path_to_model = "ai-forever/T5-large-spell"
model = T5ForConditionalGeneration.from_pretrained(path_to_model)
tokenizer = AutoTokenizer.from_pretrained(path_to_model)
prefix = "grammar: "
sentence = "If you bought something goregous, you well be very happy."
sentence = prefix + sentence
encodings = tokenizer(sentence, return_tensors="pt")
generated_tokens = model.generate(**encodings)
answer = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(answer)
# ["If you bought something gorgeous, you will be very happy."]
```
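The same pattern extends to batches; a minimal sketch reusing `model`, `tokenizer`, and `prefix` from the snippet above to correct several sentences at once:
```python
sentences = [
    "If you bought something goregous, you well be very happy.",
    "That 's why I believe in the solution which can help us to avoid boredome.",
]
# Prepend the task prefix to each sentence and pad to a common length
batch = tokenizer([prefix + s for s in sentences], padding=True, return_tensors="pt")
generated_tokens = model.generate(**batch, max_new_tokens=128)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```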
## Resources
- [SAGE library](https://github.com/ai-forever/sage), GitHub
- [ruM2M100-1.2B](https://huggingface.co/ai-forever/RuM2M100-1.2B), HuggingFace
- [ruM2M100-418M](https://huggingface.co/ai-forever/RuM2M100-420M), HuggingFace
- [FredT5-large-spell](https://huggingface.co/ai-forever/FRED-T5-large-spell), HuggingFace
- [T5-large-spell](https://huggingface.co/ai-forever/T5-large-spell), HuggingFace
## License
The [T5-large](https://huggingface.co/t5-large) model, on which our solution is based, and its source code are supplied under the APACHE-2.0 license.
Our solution is supplied under MIT license.
## Specifications
- File size: 3 GB
- Framework: PyTorch
- Format: AI Service
- Version: v1.0
- Developer: SberDevices, AGI NLP
## Contacts
nikita.martynov.98@list.ru
| 4,617 | [
[
-0.0150604248046875,
-0.04864501953125,
0.0110321044921875,
0.0162200927734375,
0.00649261474609375,
-0.00768280029296875,
-0.033447265625,
-0.04071044921875,
0.026092529296875,
0.01221466064453125,
-0.038726806640625,
-0.056884765625,
-0.052337646484375,
0.... |
SVC05/watermelon | 2023-10-28T13:45:08.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | SVC05 | null | null | SVC05/watermelon | 0 | 2,264 | diffusers | 2023-10-28T13:40:37 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### watermelon Dreambooth model trained by SVC05 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SFIT-41
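A minimal generation sketch with `diffusers` (the exact instance prompt token used during DreamBooth training is not documented here, so the prompt below is a hypothetical placeholder):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SVC05/watermelon", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of watermelon").images[0]  # hypothetical prompt
image.save("watermelon.png")
```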
Sample pictures of this concept:


| 492 | [
[
-0.0377197265625,
-0.0195465087890625,
0.032135009765625,
0.01708984375,
-0.0120086669921875,
0.0419921875,
0.0335693359375,
-0.031463623046875,
0.04510498046875,
0.0343017578125,
-0.05010986328125,
-0.0273590087890625,
-0.028533935546875,
0.0045280456542968... |
zeroshot/bge-small-en-v1.5-sparse | 2023-11-01T17:50:25.000Z | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"mteb",
"sparse sparsity quantized onnx embeddings int8",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | zeroshot | null | null | zeroshot/bge-small-en-v1.5-sparse | 3 | 2,262 | transformers | 2023-09-21T13:21:02 | ---
tags:
- mteb
- sparse sparsity quantized onnx embeddings int8
model-index:
- name: bge-small-en-v1.5-sparse
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 70.71641791044776
- type: ap
value: 32.850850647310004
- type: f1
value: 64.48101916414805
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 83.33962500000001
- type: ap
value: 78.28706349240106
- type: f1
value: 83.27426715603062
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.988
- type: f1
value: 40.776679545648506
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.101999999999997
- type: map_at_10
value: 40.754000000000005
- type: map_at_100
value: 41.83
- type: map_at_1000
value: 41.845
- type: map_at_3
value: 36.178
- type: map_at_5
value: 38.646
- type: mrr_at_1
value: 26.6
- type: mrr_at_10
value: 40.934
- type: mrr_at_100
value: 42.015
- type: mrr_at_1000
value: 42.03
- type: mrr_at_3
value: 36.344
- type: mrr_at_5
value: 38.848
- type: ndcg_at_1
value: 26.101999999999997
- type: ndcg_at_10
value: 49.126999999999995
- type: ndcg_at_100
value: 53.815999999999995
- type: ndcg_at_1000
value: 54.178000000000004
- type: ndcg_at_3
value: 39.607
- type: ndcg_at_5
value: 44.086999999999996
- type: precision_at_1
value: 26.101999999999997
- type: precision_at_10
value: 7.596
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 16.524
- type: precision_at_5
value: 12.105
- type: recall_at_1
value: 26.101999999999997
- type: recall_at_10
value: 75.96000000000001
- type: recall_at_100
value: 96.65700000000001
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 49.573
- type: recall_at_5
value: 60.526
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.10651535441929
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.41095293826606
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 56.96575970919239
- type: mrr
value: 69.92503187794047
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 79.64892774481326
- type: cos_sim_spearman
value: 78.953003817029
- type: euclidean_pearson
value: 78.92456838230683
- type: euclidean_spearman
value: 78.56504316985354
- type: manhattan_pearson
value: 79.21436359014227
- type: manhattan_spearman
value: 78.66263575501259
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.25
- type: f1
value: 81.20841448916138
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 34.69545244587236
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.84301739171936
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.401
- type: map_at_10
value: 32.451
- type: map_at_100
value: 33.891
- type: map_at_1000
value: 34.01
- type: map_at_3
value: 29.365999999999996
- type: map_at_5
value: 31.240000000000002
- type: mrr_at_1
value: 29.9
- type: mrr_at_10
value: 38.590999999999994
- type: mrr_at_100
value: 39.587
- type: mrr_at_1000
value: 39.637
- type: mrr_at_3
value: 36.028
- type: mrr_at_5
value: 37.673
- type: ndcg_at_1
value: 29.9
- type: ndcg_at_10
value: 38.251000000000005
- type: ndcg_at_100
value: 44.354
- type: ndcg_at_1000
value: 46.642
- type: ndcg_at_3
value: 33.581
- type: ndcg_at_5
value: 35.96
- type: precision_at_1
value: 29.9
- type: precision_at_10
value: 7.439
- type: precision_at_100
value: 1.28
- type: precision_at_1000
value: 0.17700000000000002
- type: precision_at_3
value: 16.404
- type: precision_at_5
value: 12.046
- type: recall_at_1
value: 23.401
- type: recall_at_10
value: 49.305
- type: recall_at_100
value: 75.885
- type: recall_at_1000
value: 90.885
- type: recall_at_3
value: 35.341
- type: recall_at_5
value: 42.275
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.103
- type: map_at_10
value: 29.271
- type: map_at_100
value: 30.151
- type: map_at_1000
value: 30.276999999999997
- type: map_at_3
value: 27.289
- type: map_at_5
value: 28.236
- type: mrr_at_1
value: 26.943
- type: mrr_at_10
value: 33.782000000000004
- type: mrr_at_100
value: 34.459
- type: mrr_at_1000
value: 34.525
- type: mrr_at_3
value: 31.985000000000003
- type: mrr_at_5
value: 32.909
- type: ndcg_at_1
value: 26.943
- type: ndcg_at_10
value: 33.616
- type: ndcg_at_100
value: 37.669000000000004
- type: ndcg_at_1000
value: 40.247
- type: ndcg_at_3
value: 30.482
- type: ndcg_at_5
value: 31.615
- type: precision_at_1
value: 26.943
- type: precision_at_10
value: 6.146
- type: precision_at_100
value: 1.038
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 14.521999999999998
- type: precision_at_5
value: 10.038
- type: recall_at_1
value: 22.103
- type: recall_at_10
value: 41.754999999999995
- type: recall_at_100
value: 59.636
- type: recall_at_1000
value: 76.801
- type: recall_at_3
value: 32.285000000000004
- type: recall_at_5
value: 35.684
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.565
- type: map_at_10
value: 43.07
- type: map_at_100
value: 44.102999999999994
- type: map_at_1000
value: 44.175
- type: map_at_3
value: 40.245
- type: map_at_5
value: 41.71
- type: mrr_at_1
value: 37.429
- type: mrr_at_10
value: 46.358
- type: mrr_at_100
value: 47.146
- type: mrr_at_1000
value: 47.187
- type: mrr_at_3
value: 44.086
- type: mrr_at_5
value: 45.318000000000005
- type: ndcg_at_1
value: 37.429
- type: ndcg_at_10
value: 48.398
- type: ndcg_at_100
value: 52.90899999999999
- type: ndcg_at_1000
value: 54.478
- type: ndcg_at_3
value: 43.418
- type: ndcg_at_5
value: 45.578
- type: precision_at_1
value: 37.429
- type: precision_at_10
value: 7.856000000000001
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 19.331
- type: precision_at_5
value: 13.191
- type: recall_at_1
value: 32.565
- type: recall_at_10
value: 61.021
- type: recall_at_100
value: 81.105
- type: recall_at_1000
value: 92.251
- type: recall_at_3
value: 47.637
- type: recall_at_5
value: 52.871
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.108
- type: map_at_10
value: 24.613
- type: map_at_100
value: 25.624000000000002
- type: map_at_1000
value: 25.721
- type: map_at_3
value: 22.271
- type: map_at_5
value: 23.681
- type: mrr_at_1
value: 19.435
- type: mrr_at_10
value: 26.124000000000002
- type: mrr_at_100
value: 27.07
- type: mrr_at_1000
value: 27.145999999999997
- type: mrr_at_3
value: 23.748
- type: mrr_at_5
value: 25.239
- type: ndcg_at_1
value: 19.435
- type: ndcg_at_10
value: 28.632
- type: ndcg_at_100
value: 33.988
- type: ndcg_at_1000
value: 36.551
- type: ndcg_at_3
value: 24.035999999999998
- type: ndcg_at_5
value: 26.525
- type: precision_at_1
value: 19.435
- type: precision_at_10
value: 4.565
- type: precision_at_100
value: 0.771
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 10.169
- type: precision_at_5
value: 7.571
- type: recall_at_1
value: 18.108
- type: recall_at_10
value: 39.533
- type: recall_at_100
value: 64.854
- type: recall_at_1000
value: 84.421
- type: recall_at_3
value: 27.500000000000004
- type: recall_at_5
value: 33.314
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.087
- type: map_at_10
value: 17.323
- type: map_at_100
value: 18.569
- type: map_at_1000
value: 18.694
- type: map_at_3
value: 15.370000000000001
- type: map_at_5
value: 16.538
- type: mrr_at_1
value: 13.557
- type: mrr_at_10
value: 21.041
- type: mrr_at_100
value: 22.134
- type: mrr_at_1000
value: 22.207
- type: mrr_at_3
value: 18.843
- type: mrr_at_5
value: 20.236
- type: ndcg_at_1
value: 13.557
- type: ndcg_at_10
value: 21.571
- type: ndcg_at_100
value: 27.678000000000004
- type: ndcg_at_1000
value: 30.8
- type: ndcg_at_3
value: 17.922
- type: ndcg_at_5
value: 19.826
- type: precision_at_1
value: 13.557
- type: precision_at_10
value: 4.1290000000000004
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 8.914
- type: precision_at_5
value: 6.691999999999999
- type: recall_at_1
value: 11.087
- type: recall_at_10
value: 30.94
- type: recall_at_100
value: 57.833999999999996
- type: recall_at_1000
value: 80.365
- type: recall_at_3
value: 20.854
- type: recall_at_5
value: 25.695
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.708
- type: map_at_10
value: 30.422
- type: map_at_100
value: 31.713
- type: map_at_1000
value: 31.842
- type: map_at_3
value: 27.424
- type: map_at_5
value: 29.17
- type: mrr_at_1
value: 26.756
- type: mrr_at_10
value: 35.304
- type: mrr_at_100
value: 36.296
- type: mrr_at_1000
value: 36.359
- type: mrr_at_3
value: 32.692
- type: mrr_at_5
value: 34.288999999999994
- type: ndcg_at_1
value: 26.756
- type: ndcg_at_10
value: 35.876000000000005
- type: ndcg_at_100
value: 41.708
- type: ndcg_at_1000
value: 44.359
- type: ndcg_at_3
value: 30.946
- type: ndcg_at_5
value: 33.404
- type: precision_at_1
value: 26.756
- type: precision_at_10
value: 6.795
- type: precision_at_100
value: 1.138
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 15.046999999999999
- type: precision_at_5
value: 10.972
- type: recall_at_1
value: 21.708
- type: recall_at_10
value: 47.315000000000005
- type: recall_at_100
value: 72.313
- type: recall_at_1000
value: 90.199
- type: recall_at_3
value: 33.528999999999996
- type: recall_at_5
value: 39.985
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.902
- type: map_at_10
value: 26.166
- type: map_at_100
value: 27.368
- type: map_at_1000
value: 27.493000000000002
- type: map_at_3
value: 23.505000000000003
- type: map_at_5
value: 25.019000000000002
- type: mrr_at_1
value: 23.402
- type: mrr_at_10
value: 30.787
- type: mrr_at_100
value: 31.735000000000003
- type: mrr_at_1000
value: 31.806
- type: mrr_at_3
value: 28.33
- type: mrr_at_5
value: 29.711
- type: ndcg_at_1
value: 23.402
- type: ndcg_at_10
value: 30.971
- type: ndcg_at_100
value: 36.61
- type: ndcg_at_1000
value: 39.507999999999996
- type: ndcg_at_3
value: 26.352999999999998
- type: ndcg_at_5
value: 28.488000000000003
- type: precision_at_1
value: 23.402
- type: precision_at_10
value: 5.799
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 12.633
- type: precision_at_5
value: 9.269
- type: recall_at_1
value: 18.902
- type: recall_at_10
value: 40.929
- type: recall_at_100
value: 65.594
- type: recall_at_1000
value: 85.961
- type: recall_at_3
value: 28.121000000000002
- type: recall_at_5
value: 33.638
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.168
- type: map_at_10
value: 25.142999999999997
- type: map_at_100
value: 25.993
- type: map_at_1000
value: 26.076
- type: map_at_3
value: 23.179
- type: map_at_5
value: 24.322
- type: mrr_at_1
value: 21.933
- type: mrr_at_10
value: 27.72
- type: mrr_at_100
value: 28.518
- type: mrr_at_1000
value: 28.582
- type: mrr_at_3
value: 25.791999999999998
- type: mrr_at_5
value: 26.958
- type: ndcg_at_1
value: 21.933
- type: ndcg_at_10
value: 28.866999999999997
- type: ndcg_at_100
value: 33.285
- type: ndcg_at_1000
value: 35.591
- type: ndcg_at_3
value: 25.202999999999996
- type: ndcg_at_5
value: 27.045
- type: precision_at_1
value: 21.933
- type: precision_at_10
value: 4.632
- type: precision_at_100
value: 0.733
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 10.992
- type: precision_at_5
value: 7.853000000000001
- type: recall_at_1
value: 19.168
- type: recall_at_10
value: 37.899
- type: recall_at_100
value: 58.54899999999999
- type: recall_at_1000
value: 75.666
- type: recall_at_3
value: 27.831
- type: recall_at_5
value: 32.336
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.764000000000001
- type: map_at_10
value: 17.757
- type: map_at_100
value: 18.677
- type: map_at_1000
value: 18.813
- type: map_at_3
value: 16.151
- type: map_at_5
value: 16.946
- type: mrr_at_1
value: 15.726
- type: mrr_at_10
value: 21.019
- type: mrr_at_100
value: 21.856
- type: mrr_at_1000
value: 21.954
- type: mrr_at_3
value: 19.282
- type: mrr_at_5
value: 20.189
- type: ndcg_at_1
value: 15.726
- type: ndcg_at_10
value: 21.259
- type: ndcg_at_100
value: 25.868999999999996
- type: ndcg_at_1000
value: 29.425
- type: ndcg_at_3
value: 18.204
- type: ndcg_at_5
value: 19.434
- type: precision_at_1
value: 15.726
- type: precision_at_10
value: 3.8920000000000003
- type: precision_at_100
value: 0.741
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 8.58
- type: precision_at_5
value: 6.132
- type: recall_at_1
value: 12.764000000000001
- type: recall_at_10
value: 28.639
- type: recall_at_100
value: 49.639
- type: recall_at_1000
value: 75.725
- type: recall_at_3
value: 19.883
- type: recall_at_5
value: 23.141000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.98
- type: map_at_10
value: 25.2
- type: map_at_100
value: 26.279000000000003
- type: map_at_1000
value: 26.399
- type: map_at_3
value: 23.399
- type: map_at_5
value: 24.284
- type: mrr_at_1
value: 22.015
- type: mrr_at_10
value: 28.555000000000003
- type: mrr_at_100
value: 29.497
- type: mrr_at_1000
value: 29.574
- type: mrr_at_3
value: 26.788
- type: mrr_at_5
value: 27.576
- type: ndcg_at_1
value: 22.015
- type: ndcg_at_10
value: 29.266
- type: ndcg_at_100
value: 34.721000000000004
- type: ndcg_at_1000
value: 37.659
- type: ndcg_at_3
value: 25.741000000000003
- type: ndcg_at_5
value: 27.044
- type: precision_at_1
value: 22.015
- type: precision_at_10
value: 4.897
- type: precision_at_100
value: 0.8540000000000001
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 11.567
- type: precision_at_5
value: 7.9479999999999995
- type: recall_at_1
value: 18.98
- type: recall_at_10
value: 38.411
- type: recall_at_100
value: 63.164
- type: recall_at_1000
value: 84.292
- type: recall_at_3
value: 28.576
- type: recall_at_5
value: 31.789
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.372
- type: map_at_10
value: 27.161
- type: map_at_100
value: 28.364
- type: map_at_1000
value: 28.554000000000002
- type: map_at_3
value: 25.135
- type: map_at_5
value: 26.200000000000003
- type: mrr_at_1
value: 24.704
- type: mrr_at_10
value: 31.219
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.181
- type: mrr_at_3
value: 29.282000000000004
- type: mrr_at_5
value: 30.359
- type: ndcg_at_1
value: 24.704
- type: ndcg_at_10
value: 31.622
- type: ndcg_at_100
value: 36.917
- type: ndcg_at_1000
value: 40.357
- type: ndcg_at_3
value: 28.398
- type: ndcg_at_5
value: 29.764000000000003
- type: precision_at_1
value: 24.704
- type: precision_at_10
value: 5.81
- type: precision_at_100
value: 1.208
- type: precision_at_1000
value: 0.209
- type: precision_at_3
value: 13.241
- type: precision_at_5
value: 9.407
- type: recall_at_1
value: 20.372
- type: recall_at_10
value: 40.053
- type: recall_at_100
value: 64.71000000000001
- type: recall_at_1000
value: 87.607
- type: recall_at_3
value: 29.961
- type: recall_at_5
value: 34.058
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.424000000000001
- type: map_at_10
value: 20.541999999999998
- type: map_at_100
value: 21.495
- type: map_at_1000
value: 21.604
- type: map_at_3
value: 18.608
- type: map_at_5
value: 19.783
- type: mrr_at_1
value: 15.895999999999999
- type: mrr_at_10
value: 22.484
- type: mrr_at_100
value: 23.376
- type: mrr_at_1000
value: 23.467
- type: mrr_at_3
value: 20.548
- type: mrr_at_5
value: 21.731
- type: ndcg_at_1
value: 15.895999999999999
- type: ndcg_at_10
value: 24.343
- type: ndcg_at_100
value: 29.181
- type: ndcg_at_1000
value: 32.330999999999996
- type: ndcg_at_3
value: 20.518
- type: ndcg_at_5
value: 22.561999999999998
- type: precision_at_1
value: 15.895999999999999
- type: precision_at_10
value: 3.9739999999999998
- type: precision_at_100
value: 0.6799999999999999
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 9.057
- type: precision_at_5
value: 6.654
- type: recall_at_1
value: 14.424000000000001
- type: recall_at_10
value: 34.079
- type: recall_at_100
value: 56.728
- type: recall_at_1000
value: 80.765
- type: recall_at_3
value: 23.993000000000002
- type: recall_at_5
value: 28.838
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 41.665
- type: f1
value: 37.601137843331244
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 74.8052
- type: ap
value: 68.92588517572685
- type: f1
value: 74.66801685854456
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.2220702234382
- type: f1
value: 90.81687856852439
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.39124487004105
- type: f1
value: 51.8350043424968
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.80497646267652
- type: f1
value: 67.34213899244814
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.54270342972428
- type: f1
value: 74.02802500235784
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.488580544269002
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.80426879476371
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.37970068676043
- type: mrr
value: 32.48523694064166
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.862710845031565
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 54.270000736385626
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 80.89215288990194
- type: cos_sim_spearman
value: 74.386413188675
- type: euclidean_pearson
value: 78.83679563989534
- type: euclidean_spearman
value: 74.29328198771996
- type: manhattan_pearson
value: 78.77968796707641
- type: manhattan_spearman
value: 74.20887429784696
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 78.31858821914498
- type: cos_sim_spearman
value: 72.2217008523832
- type: euclidean_pearson
value: 75.38901061978429
- type: euclidean_spearman
value: 71.81255767675184
- type: manhattan_pearson
value: 75.49472202181288
- type: manhattan_spearman
value: 71.96322588726144
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.48334648997455
- type: cos_sim_spearman
value: 80.99654029572798
- type: euclidean_pearson
value: 80.46546523970035
- type: euclidean_spearman
value: 80.90646216980744
- type: manhattan_pearson
value: 80.35474057857608
- type: manhattan_spearman
value: 80.8141299909659
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.73826970784727
- type: cos_sim_spearman
value: 76.9926870133034
- type: euclidean_pearson
value: 79.6386542120984
- type: euclidean_spearman
value: 77.05041986942253
- type: manhattan_pearson
value: 79.61799508502459
- type: manhattan_spearman
value: 77.07169617647067
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 83.93999019426069
- type: cos_sim_spearman
value: 85.21166521594695
- type: euclidean_pearson
value: 84.97207676326357
- type: euclidean_spearman
value: 85.40726578482739
- type: manhattan_pearson
value: 85.0386693192183
- type: manhattan_spearman
value: 85.49230945586409
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.8133974034008
- type: cos_sim_spearman
value: 82.82919022688844
- type: euclidean_pearson
value: 81.92587923760179
- type: euclidean_spearman
value: 82.86629450518863
- type: manhattan_pearson
value: 81.98232365999253
- type: manhattan_spearman
value: 82.94313939920296
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.12872422642363
- type: cos_sim_spearman
value: 87.77672179979807
- type: euclidean_pearson
value: 87.76172961705947
- type: euclidean_spearman
value: 87.9891393339215
- type: manhattan_pearson
value: 87.78863663568221
- type: manhattan_spearman
value: 88.08297053203866
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.82824030232733
- type: cos_sim_spearman
value: 64.17079382633538
- type: euclidean_pearson
value: 61.31505225602925
- type: euclidean_spearman
value: 64.05080034530694
- type: manhattan_pearson
value: 61.77095758943306
- type: manhattan_spearman
value: 64.14475973774933
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 81.39239803497064
- type: cos_sim_spearman
value: 81.76637354520439
- type: euclidean_pearson
value: 82.98008209033587
- type: euclidean_spearman
value: 82.18662536188657
- type: manhattan_pearson
value: 82.9630328314908
- type: manhattan_spearman
value: 82.13726553603003
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.45753132898741
- type: mrr
value: 93.84029822755313
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8019801980198
- type: cos_sim_ap
value: 94.58629018512772
- type: cos_sim_f1
value: 89.84771573604061
- type: cos_sim_precision
value: 91.23711340206185
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.74950495049505
- type: dot_ap
value: 92.5761214576951
- type: dot_f1
value: 87.09841917389087
- type: dot_precision
value: 88.86576482830385
- type: dot_recall
value: 85.39999999999999
- type: euclidean_accuracy
value: 99.80495049504951
- type: euclidean_ap
value: 94.56231673602272
- type: euclidean_f1
value: 90.02531645569621
- type: euclidean_precision
value: 91.17948717948718
- type: euclidean_recall
value: 88.9
- type: manhattan_accuracy
value: 99.8009900990099
- type: manhattan_ap
value: 94.5775591647447
- type: manhattan_f1
value: 89.86384266263238
- type: manhattan_precision
value: 90.64089521871821
- type: manhattan_recall
value: 89.1
- type: max_accuracy
value: 99.80495049504951
- type: max_ap
value: 94.58629018512772
- type: max_f1
value: 90.02531645569621
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.088941385715735
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.146129414825744
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.7511362739003
- type: mrr
value: 49.61682210763093
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 67.43820000000001
- type: ap
value: 12.899489312331003
- type: f1
value: 52.03468121072981
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.475947934352
- type: f1
value: 57.77676730676238
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.3463456299738
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.94230196101806
- type: cos_sim_ap
value: 67.00916556336148
- type: cos_sim_f1
value: 63.046014257939085
- type: cos_sim_precision
value: 61.961783439490446
- type: cos_sim_recall
value: 64.16886543535621
- type: dot_accuracy
value: 83.18531322644095
- type: dot_ap
value: 63.112896030267066
- type: dot_f1
value: 59.06565656565657
- type: dot_precision
value: 56.63438256658596
- type: dot_recall
value: 61.715039577836414
- type: euclidean_accuracy
value: 83.94230196101806
- type: euclidean_ap
value: 67.19856676674463
- type: euclidean_f1
value: 63.08428413691571
- type: euclidean_precision
value: 58.9543682641596
- type: euclidean_recall
value: 67.83641160949868
- type: manhattan_accuracy
value: 83.91845979614949
- type: manhattan_ap
value: 66.9845327263072
- type: manhattan_f1
value: 62.693323274236135
- type: manhattan_precision
value: 59.884698534710544
- type: manhattan_recall
value: 65.77836411609499
- type: max_accuracy
value: 83.94230196101806
- type: max_ap
value: 67.19856676674463
- type: max_f1
value: 63.08428413691571
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.0777738968448
- type: cos_sim_ap
value: 84.19747786536
- type: cos_sim_f1
value: 75.91830995817077
- type: cos_sim_precision
value: 69.84671107949033
- type: cos_sim_recall
value: 83.14598090545118
- type: dot_accuracy
value: 87.14246904955951
- type: dot_ap
value: 82.37528804640529
- type: dot_f1
value: 74.40963166732163
- type: dot_precision
value: 69.4127841098447
- type: dot_recall
value: 80.18170619032954
- type: euclidean_accuracy
value: 88.08359529630924
- type: euclidean_ap
value: 84.22633217661986
- type: euclidean_f1
value: 76.09190339866403
- type: euclidean_precision
value: 72.70304390517605
- type: euclidean_recall
value: 79.81213427779488
- type: manhattan_accuracy
value: 88.08359529630924
- type: manhattan_ap
value: 84.18362004611083
- type: manhattan_f1
value: 76.08789625360231
- type: manhattan_precision
value: 71.49336582724072
- type: manhattan_recall
value: 81.3135201724669
- type: max_accuracy
value: 88.08359529630924
- type: max_ap
value: 84.22633217661986
- type: max_f1
value: 76.09190339866403
license: mit
language:
- en
---
# bge-small-en-v1.5-sparse
This is the sparse ONNX variant of the [bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) embeddings model created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization (INT8) and unstructured pruning (50%).
Current list of sparse and quantized bge ONNX models:
| Links | Sparsification Method |
| --------------------------------------------------------------------------------------------------- | ---------------------- |
| [zeroshot/bge-large-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-large-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant) | Quantization (INT8) |
| [zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant) | Quantization (INT8) |
| [zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant) | Quantization (INT8) |
Install the DeepSparse Sentence Transformers integration:
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
Then generate embeddings with the DeepSparse backend:
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer
model = DeepSparseSentenceTransformer('zeroshot/bge-small-en-v1.5-sparse', export=False)
# Our sentences we like to encode
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding.shape)
print("")
```
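For retrieval-style scoring on top of these embeddings, a small NumPy sketch (reusing the `sentences` and `embeddings` from the snippet above) that ranks the remaining sentences against the first one by cosine similarity:
```python
import numpy as np

emb = np.asarray(embeddings)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize rows
scores = emb[0] @ emb[1:].T  # cosine similarity vs. the first sentence
for sent, score in zip(sentences[1:], scores):
    print(f"{score:.4f}  {sent}")
```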
For further details regarding DeepSparse & Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
 | 42,559 | [
[
-0.035247802734375,
-0.057525634765625,
0.035003662109375,
0.027069091796875,
-0.00226593017578125,
-0.0142669677734375,
-0.0200347900390625,
-0.006900787353515625,
0.0262603759765625,
0.0239105224609375,
-0.07275390625,
-0.05853271484375,
-0.04693603515625,
... |
mariogiordano/Bert-english-emotion-analysis | 2023-10-31T15:55:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | mariogiordano | null | null | mariogiordano/Bert-english-emotion-analysis | 0 | 2,262 | transformers | 2023-10-20T16:50:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Bert-english-emotion-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-english-emotion-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8375
- Accuracy: 0.7090
- F1: 0.7060
## Model description
More information needed
## Intended uses & limitations
More information needed
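As a starting point while the card is incomplete, a minimal inference sketch using the standard `transformers` text-classification pipeline (the emotion label set is not documented here, so inspect the labels the model returns):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mariogiordano/Bert-english-emotion-analysis",
)
print(classifier("I can't believe how wonderful this day turned out!"))
```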
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3759 | 1.0 | 658 | 0.8827 | 0.6815 | 0.6851 |
| 0.8038 | 2.0 | 1316 | 0.8375 | 0.7090 | 0.7060 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
| 1,466 | [
[
-0.03863525390625,
-0.047698974609375,
0.0166168212890625,
0.02838134765625,
-0.0294189453125,
-0.0350341796875,
-0.0230865478515625,
-0.0248260498046875,
0.01435089111328125,
0.005725860595703125,
-0.0556640625,
-0.05035400390625,
-0.0550537109375,
-0.01538... |
stabilityai/japanese-stablelm-instruct-alpha-7b | 2023-08-22T09:37:10.000Z | [
"transformers",
"pytorch",
"text-generation",
"japanese-stablelm",
"causal-lm",
"custom_code",
"ja",
"license:other",
"has_space",
"region:us"
] | text-generation | stabilityai | null | null | stabilityai/japanese-stablelm-instruct-alpha-7b | 84 | 2,261 | transformers | 2023-08-10T01:23:35 | ---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
license:
- other
extra_gated_heading: Access Japanese StableLM Instruct Alpha
extra_gated_description: This repository is publicly accessible, but you have to accept the conditions to access its files and content.
extra_gated_button_content: Access repository
extra_gated_fields:
Name: text
Email: text
Organization: text
I agree to accept the conditions and share above info with Stability AI: checkbox
extra_gated_prompt: |
### JAPANESE STABLELM RESEARCH LICENSE AGREEMENT
Dated: August 7, 2023
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Software Products set forth herein.
"Documentation" means any specifications, manuals, documentation, and other written information provided by Stability AI related to the Software.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person's or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
"Stability AI" or "we" means Stability AI Ltd.
"Software" means, collectively, Stability AI's proprietary Japanese StableLM made available under this Agreement.
"Software Products" means Software and Documentation.
By using or distributing any portion or element of the Software Products, you agree to be bound by this Agreement.
- License Rights and Redistribution.
- Subject to your compliance with this Agreement and the Documentation, Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Stability AI's intellectual property or other rights owned by Stability AI embodied in the Software Products to reproduce, distribute, and create derivative works of the Software Products for purposes other than commercial or production use.
- You will not, and will not permit, assist or cause any third party to use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for any commercial or production purposes.
  - If you distribute or make the Software Products, or any derivative works thereof, available to a third party, you shall (i) provide a copy of this Agreement to such third party, and (ii) retain the following attribution notice within a "Notice" text file distributed as a part of such copies: "Japanese StableLM is licensed under the Japanese StableLM Research License, Copyright (c) Stability AI Ltd. All Rights Reserved."
- The licenses granted to you under this Agreement are conditioned upon your compliance with the Documentation and this Agreement, including the Acceptable Use Policy below and as may be updated from time to time in the future on stability.ai, which is hereby incorporated by reference into this Agreement.
- Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE SOFTWARE PRODUCTS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE SOFTWARE PRODUCTS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE SOFTWARE PRODUCTS AND ANY OUTPUT AND RESULTS.
- Limitation of Liability. IN NO EVENT WILL STABILITY AI OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF STABILITY AI OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
- Intellectual Property.
- No trademark licenses are granted under this Agreement, and in connection with the Software Products, neither Stability AI nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Software Products.
  - Subject to Stability AI's ownership of the Software Products and derivatives made by or for Stability AI, with respect to any derivative works and modifications of the Software Products that are made by you, as between you and Stability AI, you are and will be the owner of such derivative works and modifications.
- If you institute litigation or other proceedings against Stability AI (including a cross-claim or counterclaim in a lawsuit) alleging that the Software Products or associated outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Stability AI from and against any claim by any third party arising out of or related to your use or distribution of the Software Products in violation of this Agreement.
- Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Software Products and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Stability AI may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Software Products. Sections 2-4 shall survive the termination of this Agreement.
  ----------
### Japanese StableLM Acceptable Use Policy
  If you access, use, or distribute any Stability AI models, software, or other materials ("Stability Technology") you agree to this Acceptable Use Policy ("Policy").
We want everyone to use Stability Technology safely and responsibly. You agree you will not use, or allow others to use, Stability Technology to:
  - To violate the law or others' rights (including intellectual property rights and the rights of data privacy and protection), nor will you promote, contribute to, encourage, facilitate, plan, incite, or further anyone else's violation of the law or others' rights;
- To commit, promote, contribute to, facilitate, encourage, plan, incite, or further any of the following:
- Violence or terrorism;
- Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content;
- Human trafficking, exploitation, and sexual violence;
- Harassment, abuse, threatening, stalking, or bullying of individuals or groups of individuals;
- Discrimination in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services on the basis of race, color, caste, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, or genetic information (including family medical history) except as may be required by applicable law (such as the provision of social security benefits solely to people who meet certain age requirements under the law);
- Creation of malicious code, malware, computer viruses or any activity that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system;
- For purposes of or for the performance of:
- Fully automated decision-making, including profiling, with respect to an individual or group of individuals which produces legal effects concerning such individual(s) or similarly significantly affects such individual(s);
- Systematic or automated scraping, mining, extraction, or harvesting of personally identifiable data, or similar activity, from the output of any Stability Technology except with respect to data that you have provided as input to the Stability Technology and which you are legally entitled to process, for so long as you retain such entitlement;
- Development, improvement, or manufacture of any weapons of mass destruction (such as nuclear, chemical, or biologic weapons), weapons of war (such as missiles or landmines), or any gain of function-related activities with respect to any pathogens;
- Mission critical applications or systems where best industry practices require fail-safe controls or performance, including operation of nuclear facilities, aircraft navigation, electrical grids, communication systems, water treatment facilities, air traffic control, life support, weapons systems, or emergency locator or other emergency services;
- To intentionally deceive or mislead others, including use of Japanese StableLM related to the following:
- Generating, promoting, or furthering fraud or the creation or promotion of disinformation;
- Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content;
- Generating, promoting, or further distributing spam;
- Impersonating another individual without consent, authorization, or legal right
- Representing or misleading people into believing that the use of Japanese StableLM or outputs are human-generated;
- Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement;
- Generating or facilitating large-scale political advertisements, propaganda, or influence campaigns;
- Fail to appropriately disclose to end users any known dangers of your AI system or misrepresent or mislead with respect to its abilities.
Nothing in this AUP is intended to prevent or impede any good faith research, testing, or evaluation of Japanese StableLM, or publication related to any of the foregoing. If you discover any flaws in Japanese StableLM that may be harmful to people in any way, we encourage you to notify us and give us a chance to remedy such flaws before others can exploit them. If you have questions about this AUP, contact us at legal@stability.ai.
---
# Japanese-StableLM-Instruct-Alpha-7B

> "A parrot able to speak Japanese, ukiyoe, edo period" โ [Stable Diffusion XL](https://clipdrop.co/stable-diffusion)
## Model Description
`japanese-stablelm-instruct-alpha-7b` is a 7B-parameter decoder-only language model built on top of the [`Japanese-StableLM-Base-Alpha-7B`](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b) model and further fine-tuned on various instruction-following datasets.
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install sentencepiece einops
```
Then start generating text with `japanese-stablelm-instruct-alpha-7b` by using the following code snippet:
```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁'])
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-instruct-alpha-7b",
    trust_remote_code=True,
)
model.half()
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

def build_prompt(user_query, inputs="", sep="\n\n### "):
    sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
    p = sys_msg
    roles = ["指示", "応答"]
    msgs = [": \n" + user_query, ": "]
    if inputs:
        roles.insert(1, "入力")
        msgs.insert(1, ": \n" + inputs)
    for role, msg in zip(roles, msgs):
        p += sep + role + msg
    return p

# this is for reproducibility.
# feel free to change to get different results
seed = 42
torch.manual_seed(seed)

# Infer with prompt without any additional input
user_inputs = {
    "user_query": "VR とはどのようなものですか？",
    "inputs": ""
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
"""バーチャルリアリティは、現実の世界のように見える仮想世界の 3D 仮想現実のシミュレーションです。それはヘッドセットを介して、ユーザーが見たり、聞いたり、体験できるものです。"""
```
```python
seed = 42
torch.manual_seed(seed)

# Infer with prompt with additional input
user_inputs = {
    "user_query": "VR について、以下の比較対象との違いを箇条書きで教えてください。",
    "inputs": "比較対象: AR"
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
"""
以下は、VR と AR の比較対象の比較です。
1. VR はユーザーが 3D の世界を体験することを可能にし、ユーザーが自分の目で世界を見ることを可能にします。
2. VR は、ユーザーが目の前の環境をより詳細に感じ、より多くのことができるようにすることを可能にします。
3. VR は、ユーザーの感覚を刺激し、拡張することを可能にします。
4. VR は、視覚的、触覚的、および聴覚的な感覚体験を提供するために使用されます。
5. AR は、現実の世界に重ね合わせて、情報を表示し、ユーザーに拡張現実体験を提供することを可能にします。
6. AR は、ユーザーが仮想オブジェクトを仮想環境に持ち込むことを可能にするため、物理的な世界をシミュレートするのに最適です。
7. VR は、3D 世界を実現し、仮想世界を作成することに最適です。
8. AR は、ユーザーが現実世界のオブジェクトをシミュレートし、現実世界の現実的な世界に重ね合わせて情報を表示することを可能にします。
9. VR は、ユーザーの感覚や感情に与える影響が最も大きいと考えられています。
"""
```
## Model Details
* **Model type**: `japanese-stablelm-instruct-alpha-7b` is an auto-regressive language model based on the NeoX transformer architecture.
* **Language(s)**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: This model is licensed under [JAPANESE STABLELM RESEARCH LICENSE AGREEMENT](./LICENSE).
## Training
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|------------|-------------|--------|-------|-----------------|
| 7B | 4096 | 32 | 32 | 1024 |
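As a quick sanity check, these hyperparameters are consistent with the stated 7B parameter count. A back-of-the-envelope sketch (the ~65k vocabulary and the untied output head are assumptions here, not values from this card):

```python
# Rough parameter-count estimate from the table above (sketch, assumed vocab size).
hidden, layers = 4096, 32
vocab = 65_536  # assumption: NovelAI tokenizer v1 vocabulary size

block_params = 12 * layers * hidden**2   # standard transformer attention + MLP sizing
embedding_params = 2 * vocab * hidden    # input embeddings + (assumed untied) output head
print(f"~{(block_params + embedding_params) / 1e9:.2f}B")  # ~6.98B, i.e. "7B"
```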
### Training Dataset
`japanese-stablelm-instruct-alpha-7b` is fine-tuned on a combination of the following datasets:
- [Japanese translation of the Stanford Alpaca dataset](https://huggingface.co/datasets/fujiki/japanese_alpaca_data)
- [Japanese translation of the Databricks Dolly-15k dataset](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Japanese translation of the subset of the Anthropic HH dataset](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [Wikinews](https://ja.wikinews.org/wi) [subset](https://huggingface.co/datasets/fujiki/llm-japanese-dataset_wikinews) of the [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)
## Use and Limitations
### Intended Use
This model is intended to be used by the open-source community in chat-like applications in adherence with the research license.
### Limitations and bias
Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
## Authors
- [Meng Lee](https://huggingface.co/leemeng)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Makoto Shing](https://huggingface.co/mkshing)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Takuya Akiba](https://huggingface.co/iwiwi)
- [Naoki Orii](https://huggingface.co/mrorii)
## Acknowledgements
We are utilizing the v1 version of the [novelai-tokenizer](https://github.com/NovelAI/novelai-tokenizer), introduced by [NovelAI](https://novelai.net/), because it processes both Japanese and English text effectively and efficiently. We extend our gratitude to NovelAI for allowing us to use their remarkable work. For more details about the tokenizer, please refer to their [blog post](https://blog.novelai.net/novelais-new-llm-tokenizer-5bc140e17642).
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
## How to cite
```bibtex
@misc{JapaneseStableLMInstructAlpha7B,
url={https://huggingface.co/stabilityai/japanese-stablelm-instruct-alpha-7b},
title={Japanese StableLM Instruct Alpha 7B},
author={Lee, Meng and Nakamura, Fujiki and Shing, Makoto and McCann, Paul and Akiba, Takuya and Orii, Naoki}
}
```
## Citations
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
```
| 18,752 | [
[
-0.0308074951171875,
-0.0665283203125,
0.018798828125,
0.02203369140625,
-0.031768798828125,
-0.01381683349609375,
-0.0196380615234375,
-0.034515380859375,
0.0281982421875,
0.0185394287109375,
-0.03857421875,
-0.05126953125,
-0.038909912109375,
0.01451110839... |
Helsinki-NLP/opus-mt-eu-en | 2023-08-16T11:34:04.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eu",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-eu-en | 1 | 2,260 | transformers | 2022-03-02T23:29:04 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-eu-en
* source languages: eu
* target languages: en
* OPUS readme: [eu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/eu-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eu-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eu-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eu.en | 46.1 | 0.638 |
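The card above gives benchmarks but no usage snippet; a minimal `transformers` sketch for this checkpoint (standard Marian usage, shown as an illustration rather than an official example):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-eu-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Basque to English ("Kaixo, zer moduz?" ~ "Hello, how are you?").
batch = tokenizer(["Kaixo, zer moduz?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```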
| 818 | [
[
-0.0192108154296875,
-0.0277557373046875,
0.0186004638671875,
0.029632568359375,
-0.03375244140625,
-0.0244598388671875,
-0.0302581787109375,
-0.006267547607421875,
0.0017223358154296875,
0.032073974609375,
-0.051910400390625,
-0.0439453125,
-0.041534423828125,
... |
kykim/electra-kor-base | 2021-01-22T00:28:50.000Z | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"ko",
"endpoints_compatible",
"region:us"
] | null | kykim | null | null | kykim/electra-kor-base | 1 | 2,259 | transformers | 2022-03-02T23:29:05 | ---
language: ko
---
# Electra base model for Korean
* A 70GB Korean text dataset and a vocabulary of 42,000 lower-cased subwords were used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import ElectraTokenizerFast, ElectraModel
tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base")
model = ElectraModel.from_pretrained("kykim/electra-kor-base")
``` | 461 | [
[
-0.01532745361328125,
-0.038116455078125,
0.01175689697265625,
0.0101165771484375,
-0.03668212890625,
0.010284423828125,
-0.0240478515625,
0.0108795166015625,
0.00604248046875,
0.04852294921875,
-0.0250091552734375,
-0.046417236328125,
-0.0269622802734375,
0... |
digiplay/insaneRealistic_v1 | 2023-07-14T04:54:03.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/insaneRealistic_v1 | 2 | 2,258 | diffusers | 2023-07-14T02:28:38 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/108585/insane-realistic-v10


| 523 | [
[
-0.0477294921875,
-0.03118896484375,
0.0269927978515625,
0.0154571533203125,
-0.0181121826171875,
0.01334381103515625,
0.0156097412109375,
-0.024993896484375,
0.0517578125,
0.0264129638671875,
-0.06329345703125,
-0.00801849365234375,
-0.006343841552734375,
-... |
TheBloke/vicuna-13B-v1.5-GPTQ | 2023-09-27T12:45:20.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2307.09288",
"arxiv:2306.05685",
"license:llama2",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/vicuna-13B-v1.5-GPTQ | 12 | 2,257 | transformers | 2023-08-03T10:03:59 | ---
license: llama2
model_name: Vicuna 13B v1.5
base_model: lmsys/vicuna-13b-v1.5
inference: false
model_creator: lmsys
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vicuna 13B v1.5 - GPTQ
- Model creator: [lmsys](https://huggingface.co/lmsys)
- Original model: [Vicuna 13B v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5)
<!-- description start -->
## Description
This repo contains GPTQ model files for [lmsys's Vicuna 13B v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/vicuna-13B-v1.5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF)
* [lmsys's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.5)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
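The sizes in the table follow roughly from the bit width alone; a minimal sketch of that estimate, assuming ~13B weights and ignoring per-group scales, zero points, and unquantised layers (which is why the real files come out somewhat different):

```python
# Approximate on-disk size of a quantised 13B model (sketch only).
def approx_size_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1024**3  # bits -> bytes -> GiB

for bits in (4, 8):
    print(f"{bits}-bit: ~{approx_size_gb(13e9, bits):.2f} GB")
# 4-bit: ~6.05 GB, 8-bit: ~12.11 GB -- the right ballpark for the files above
```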
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/vicuna-13B-v1.5-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/vicuna-13B-v1.5-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
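If you would rather fetch a branch programmatically than via Git, a minimal sketch using `huggingface_hub` (branch name taken from the table above):

```python
from huggingface_hub import snapshot_download

# Pass the branch as `revision` to download a specific quantisation variant.
local_dir = snapshot_download(
    repo_id="TheBloke/vicuna-13B-v1.5-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print(local_dir)  # local path containing the downloaded files
```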
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-13B-v1.5-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/vicuna-13B-v1.5-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `vicuna-13B-v1.5-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/vicuna-13B-v1.5-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: lmsys's Vicuna 13B v1.5
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
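Alternatively, a minimal `transformers` sketch using the Vicuna v1.5 prompt format (the question and generation length are illustrative choices, not project recommendations):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Vicuna v1.5 prompt format: system line, then "USER: ... ASSISTANT:".
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: What is instruction fine-tuning? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```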
## Training Details
Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
| 16,715 | [
[
-0.040435791015625,
-0.061187744140625,
0.014251708984375,
0.01313018798828125,
-0.023345947265625,
-0.01488494873046875,
0.007389068603515625,
-0.040191650390625,
0.0193939208984375,
0.028778076171875,
-0.050933837890625,
-0.037567138671875,
-0.0221099853515625... |
diffusers/controlnet-canny-sdxl-1.0-mid | 2023-08-16T12:59:53.000Z | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"license:openrail++",
"diffusers:ControlNetModel",
"region:us"
] | text-to-image | diffusers | null | null | diffusers/controlnet-canny-sdxl-1.0-mid | 10 | 2,256 | diffusers | 2023-08-16T11:20:41 | ---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: false
---
# Small SDXL-controlnet: Canny
These are small controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with canny conditioning. This checkpoint is 5x smaller than the original XL controlnet checkpoint.
You can find some example images below.
prompt: aerial view, a futuristic research complex in a bright foggy jungle, hard lighting

prompt: a woman, close up, detailed, beautiful, street photography, photorealistic, detailed, Kodak ektar 100, natural, candid shot

prompt: megatron in an apocalyptic world ground, ruined city in the background, photorealistic

prompt: a couple watching sunset, 4k photo

## Usage
Make sure to first install the libraries:
```bash
pip install accelerate transformers safetensors opencv-python diffusers
```
And then we're ready to go:
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
from PIL import Image
import torch
import numpy as np
import cv2
prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
controlnet_conditioning_scale = 0.5 # recommended for good generalization
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0-mid",
    torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)
images = pipe(
    prompt, negative_prompt=negative_prompt, image=image, controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0].save(f"hug_lab.png")
```

For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl).
🚨 Please note that this checkpoint is experimental and there's a lot of room for improvement. We encourage the community to build on top of it, improve it, and provide us with feedback. 🚨
### Training
Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).
You can refer to [this script](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py) for full disclosure.
* This checkpoint does not perform distillation. We just use a smaller ControlNet initialized from the SDXL UNet. We encourage the community to try and conduct distillation too. This resource might be of help in [this regard](https://huggingface.co/blog/sd_distillation).
* To learn more about how the ControlNet was initialized, refer to [this code block](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py#L981C1-L999C36).
* It does not have any attention blocks.
* The model works pretty well on most conditioning images, but for more complex conditionings, the bigger checkpoints may be better. We are still working on improving the quality of this checkpoint and looking for feedback from the community.
* We recommend playing around with the `controlnet_conditioning_scale` and `guidance_scale` arguments for potentially better image generation quality; see the sketch below.
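A small sketch of such a sweep, reusing `pipe`, `prompt`, `negative_prompt`, and the canny `image` from the usage snippet above (the grid values are illustrative, not tuned recommendations):

```python
# Sweep conditioning and guidance scales to compare outputs side by side.
for cond_scale in (0.3, 0.5, 0.8):
    for guidance in (5.0, 7.5):
        result = pipe(
            prompt,
            negative_prompt=negative_prompt,
            image=image,
            controlnet_conditioning_scale=cond_scale,
            guidance_scale=guidance,
        ).images[0]
        result.save(f"canny_cs{cond_scale}_gs{guidance}.png")
```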
#### Training data
The model was trained on 3M images from the LAION aesthetic 6 plus subset, with a batch size of 256, for 50k steps, with a constant learning rate of 3e-5.
#### Compute
One 8xA100 machine
#### Mixed precision
FP16 | 4,516 | [
[
-0.04010009765625,
-0.0282440185546875,
0.01788330078125,
0.0305633544921875,
-0.020355224609375,
-0.019256591796875,
-0.002681732177734375,
-0.0160675048828125,
0.0296173095703125,
0.0260772705078125,
-0.0430908203125,
-0.0218353271484375,
-0.051666259765625,
... |
CyberPeace-Institute/SecureBERT-NER | 2023-07-19T09:05:05.000Z | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"en",
"arxiv:2204.02685",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | CyberPeace-Institute | null | null | CyberPeace-Institute/SecureBERT-NER | 6 | 2,254 | transformers | 2023-06-23T11:12:52 | ---
language:
- en
library_name: transformers
pipeline_tag: token-classification
widget:
- text: "Microsoft Threat Intelligence analysts assess with high confidence that the malware, which we call KingsPawn, is developed by DEV-0196 and therefore strongly linked to QuaDream. We assess with medium confidence that the mobile malware we associate with DEV-0196 is part of the system publicly discussed as REIGN."
example_title: "example"
---
# Named Entity Recognition for Cybersecurity
This model has been fine-tuned from SecureBERT (https://arxiv.org/abs/2204.02685)
on the APTNER dataset (https://ieeexplore.ieee.org/document/9776031)
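A minimal inference sketch with the `transformers` token-classification pipeline (the aggregation strategy is an illustrative choice):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CyberPeace-Institute/SecureBERT-NER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
text = ("Microsoft Threat Intelligence analysts assess that the malware "
        "KingsPawn is developed by DEV-0196.")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```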
## NER Classes
 | 676 | [
[
-0.032623291015625,
-0.035003662109375,
0.022979736328125,
-0.01442718505859375,
-0.0148162841796875,
0.0147552490234375,
0.0264434814453125,
-0.048248291015625,
0.0180511474609375,
0.051177978515625,
-0.0007519721984863281,
-0.041351318359375,
-0.05194091796875... |
CompVis/stable-diffusion-v1-1 | 2023-07-05T16:18:08.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:2207.12598",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | CompVis | null | null | CompVis/stable-diffusion-v1-1 | 53 | 2,251 | diffusers | 2022-08-19T10:24:23 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-1 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with D🧨iffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-1** was trained on 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en), followed by
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). For more information, please refer to [Training](#training).
These weights are intended to be used with the D🧨iffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
```bash
pip install --upgrade diffusers transformers scipy
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-1"
device = "cuda"

pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:
```py
import torch

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]

image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

model_id = "CompVis/stable-diffusion-v1-1"

# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]

image.save("astronaut_rides_horse.png")
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to "A red cube on top of a blue sphere"
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
### Training Data
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
### Training Procedure
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (worked out in the small example after this list)
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
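For instance, with the stated factor f = 8, a 512x512 RGB image maps to a 64x64x4 latent:

```python
# Latent shape under the stated downsampling factor f = 8.
H, W, f = 512, 512, 8
print((H // f, W // f, 4))  # (64, 64, 4)
```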
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
### Training details
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact; a quick arithmetic check follows the list below.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
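The stated figure is consistent with a simple power-times-time estimate (the ~250 W draw per A100 PCIe 40GB and the ~0.3 kg CO2eq/kWh grid intensity are assumptions here, not values from this card):

```python
# Back-of-the-envelope check of the emissions figure (assumed inputs).
gpu_hours = 150_000
power_kw = 0.25              # assumed average draw per A100 PCIe 40GB
carbon_kg_per_kwh = 0.3      # assumed grid carbon intensity (US-east)
print(gpu_hours * power_kw * carbon_kg_per_kwh)  # 11250.0 kg CO2 eq.
```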
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* | 13,905 | [
[
-0.028167724609375,
-0.060150146484375,
0.0255889892578125,
0.0186920166015625,
-0.0214385986328125,
-0.029693603515625,
0.0007510185241699219,
-0.023223876953125,
-0.000873565673828125,
0.032806396484375,
-0.02496337890625,
-0.0394287109375,
-0.054656982421875,... |
daryl149/llama-2-70b-chat-hf | 2023-07-23T17:13:51.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | daryl149 | null | null | daryl149/llama-2-70b-chat-hf | 4 | 2,249 | transformers | 2023-07-20T07:05:42 | These are the converted model weights for Llama-2-70B-chat in Huggingface format.
Courtesy of [Mirage-Studio.io](https://mirage-studio.io), home of MirageGPT: the private ChatGPT alternative.
---
license: other
LLAMA 2 COMMUNITY LICENSE AGREEMENT
Llama 2 Version Release Date: July 18, 2023
"Agreement" means the terms and conditions for use, reproduction, distribution and
modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-
libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity (if
you are entering into this Agreement on such person or entity's behalf), of the age
required under applicable laws, rules or regulations to provide legal consent and that
has legal authority to bind your employer or such other person or entity if you are
entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and other
elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-
libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
Documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you
are an entity, your principal place of business is in the EEA or Switzerland) and Meta
Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or element of the
Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual property or
other rights owned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works of, and make modifications to the Llama
Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this Agreement to such
third party.
ii. If you receive Llama Materials, or any derivative works thereof, from
a Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you
distribute the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws
and regulations (including trade compliance laws and regulations) and adhere to the
Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into
this Agreement.
v. You will not use the Llama Materials or any output or results of the
Llama Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for Licensee,
or Licensee's affiliates, is greater than 700 million monthly active users in the
preceding calendar month, you must request a license from Meta, which Meta may
grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you
such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE
PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY
WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR
FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR
USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE
LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT,
NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS
AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,
CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF
ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in
connection with the Llama Materials, neither Meta nor Licensee may use any name
or mark owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing the
Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be the
owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the Llama
Materials or Llama 2 outputs or results, or any portion of any of the foregoing,
constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of
the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related
to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will continue in
full force and effect until terminated in accordance with the terms and conditions
herein. Meta may terminate this Agreement if you are in breach of any term or
condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the
termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice of law
principles, and the UN Convention on Contracts for the International Sale of Goods
does not apply to this Agreement. The courts of California shall have exclusive
jurisdiction of any dispute arising out of this Agreement.
---
| 7,238 | [
[
-0.026275634765625,
-0.039764404296875,
0.035858154296875,
0.046051025390625,
-0.041046142578125,
-0.007564544677734375,
-0.00013780593872070312,
-0.057403564453125,
0.032958984375,
0.0579833984375,
-0.04229736328125,
-0.0380859375,
-0.059906005859375,
0.017... |
heegyu/llama-2-ko-7b-chat | 2023-08-18T05:22:31.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"endpoints_compatible",
"text-gen... | text-generation | heegyu | null | null | heegyu/llama-2-ko-7b-chat | 10 | 2,245 | transformers | 2023-08-18T04:22:15 | ---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
---
# heegyu/llama-2-ko-7b-chat
- A model trained from the 40B-token checkpoint of [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) on several Korean instruction datasets
## Datasets used
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [part2_ko_uncleaned of dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a tiny portion) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |
- From the KULLM v2 dataset, only the GPT4ALL and Dolly subsets were extracted and used.
- For more Korean instruction datasets, see the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository.
## Generation prompt
- Generate until the EOS token (`</s>`) appears.
- Keeping the system prompt at the very top seems to produce good answers more often.
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
티라노사우르스보다 센 공룡이 있을까?
### 챗봇:
```
In English, the system prompt reads: "You are an AI chatbot. You should provide helpful and informative content to the user. Write long, detailed answers with kind explanations." The user then asks: "Is there a dinosaur stronger than Tyrannosaurus?"
Generated answer:
```
There may be dinosaurs stronger than Tyrannosaurus. For example, there may have been dinosaurs with a stronger jaw structure and teeth, more powerful legs, and bodies that let them hunt two or more kinds of prey faster than Tyrannosaurus could.
Also, it is not a fact that Tyrannosaurus was the dominant predator. Other dinosaurs had their own habits and defensive strategies; for example, some had larger horns and sharp teeth and could injure an attacker.
If you want more information, look into one or more of the many research papers, videos, or online materials about dinosaurs. Through these you can learn more about Tyrannosaurus, its history, and dinosaurs in general.
```
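Putting the format together, a minimal generation sketch (our illustration, not part of the original card; the exact `generate` settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("heegyu/llama-2-ko-7b-chat")
model = AutoModelForCausalLM.from_pretrained("heegyu/llama-2-ko-7b-chat")

# Build the prompt exactly as documented above: system prompt,
# then the "### 사용자:" (user) and "### 챗봇:" (chatbot) markers.
prompt = (
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.\n"
    "### 사용자:\n티라노사우르스보다 센 공룡이 있을까?\n### 챗봇:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
# Generation stops once the EOS token (</s>) is produced.
outputs = model.generate(
    **inputs, max_new_tokens=512, eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```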
Multi-turn conversation example
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
Which k-pop singer do you like?
### 챗봇:
I don't like all k-pop singers. I enjoy music of every genre and have no particular preference for any specific singer or song.
With one exception, though: my favorite is the Korean group BTS. BTS is loved by fans for its music across many genres and its distinctive style.
They emphasize their uniqueness through original, personal music and lyrics, and they put on passionate performances.
BTS is a group loved by countless fans; their music is widely loved, and listening to it is a great joy for me.
### 사용자:
Oh, so you like BTS too
### 챗봇:
Yes, that's right. BTS is a group that gives me a lot of joy. They match my musical taste and have helped me try music of many genres.
Their music also leaves a strong impression on audiences through its distinctive lyrics and storytelling alone, and the members' uniformly excellent performances and vocal skills are counted among their strengths as well.
For these reasons, I'm sure that BTS is one of my favorite k-pop artists, and the music they make gives me great joy and fun.
### 사용자:
Do you know that BTS is going into the military?
### 챗봇:
```
Generated answer (a hallucination, haha):
```
Yes, BTS has appeared several times in videos related to military performances. They are famous for serving in Korean military uniforms, and their military performances move many people.
Their passionate, energetic stage presence is said to be built through the training and drills carried out in the army. These performances, together with the music, also give soldiers an opportunity to show their own abilities.
The BTS military plays a key role in South Korea's national defense, and their music runs together with the soldiers' mission.
``` | 3,003 | [
[
-0.045623779296875,
-0.037078857421875,
0.0213623046875,
0.031463623046875,
-0.033905029296875,
0.01165008544921875,
0.0222930908203125,
-0.026641845703125,
0.04962158203125,
0.0268402099609375,
-0.037139892578125,
-0.0308685302734375,
-0.046600341796875,
0.... |
glrh11/vit-base-patch16-224 | 2023-05-07T06:58:46.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"vit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | glrh11 | null | null | glrh11/vit-base-patch16-224 | 0 | 2,239 | transformers | 2023-05-07T06:30:18 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Vision Transformer (base-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
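For example, a minimal feature-extraction sketch along those lines (our illustration, not from the original card; it loads the headless `ViTModel` rather than the classification model used below):
```python
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTModel.from_pretrained("google/vit-base-patch16-224")

with torch.no_grad():
    outputs = model(**processor(images=image, return_tensors="pt"))

# last_hidden_state has shape (batch, 197, 768):
# 196 patch tokens (224/16 = 14 patches per side) plus the [CLS] token.
cls_embedding = outputs.last_hidden_state[:, 0]  # whole-image representation
print(cls_embedding.shape)  # torch.Size([1, 768])
```
A linear classifier trained on `cls_embedding` is the standard downstream recipe described above.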
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
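For intuition, that preprocessing corresponds roughly to the following torchvision transform (a sketch of the stated recipe, not the repository's actual input pipeline):
```python
from torchvision import transforms

# Resize to 224x224, scale pixels to [0, 1], then normalize each RGB
# channel with mean 0.5 and std 0.5 (mapping values into [-1, 1]).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
```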
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
| 5,736 | [
[
-0.046722412109375,
-0.01190948486328125,
-0.0011224746704101562,
-0.00600433349609375,
-0.029052734375,
-0.0122528076171875,
-0.004547119140625,
-0.04632568359375,
0.0118255615234375,
0.03741455078125,
-0.0236663818359375,
-0.0192413330078125,
-0.05657958984375... |
ogstradamus/brandon | 2023-10-14T00:51:48.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | ogstradamus | null | null | ogstradamus/brandon | 0 | 2,239 | diffusers | 2023-10-14T00:46:09 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Brandon Dreambooth model trained by ogstradamus with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
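The checkpoint should also load with diffusers directly (a minimal sketch, not from the original card; the `brandon` instance token in the prompt is an assumption based on the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ogstradamus/brandon", torch_dtype=torch.float16
).to("cuda")

# "brandon" is assumed to be the DreamBooth instance token.
image = pipe("a portrait photo of brandon, 85mm, studio lighting").images[0]
image.save("brandon.png")
```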
| 500 | [
[
-0.026153564453125,
-0.061065673828125,
0.053070068359375,
0.0265045166015625,
-0.01947021484375,
0.02813720703125,
0.0177459716796875,
-0.0225677490234375,
0.04583740234375,
0.004886627197265625,
-0.0117645263671875,
-0.0215911865234375,
-0.03656005859375,
... |
bigcode/starencoder | 2023-05-10T14:52:27.000Z | [
"transformers",
"pytorch",
"bert",
"pretraining",
"code",
"arxiv:1810.04805",
"endpoints_compatible",
"region:us"
] | null | bigcode | null | null | bigcode/starencoder | 33 | 2,237 | transformers | 2023-03-13T14:59:15 | ---
language:
- code
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
# StarEncoder
## Table of Contents
1. [Model Summary](#model-summary)
2. [Training](#training)
3. [Use](#use)
4. [Limitations](#limitations)
5. [License](#license)
## Model Summary
This is an encoder-only model (i.e., bi-directionally self-attentive Transformers) trained on [The Stack](https://huggingface.co/datasets/bigcode/the-stack) dataset.
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** 80+ Programming languages
We leveraged the Masked Language Modelling (MLM) and Next Sentence Prediction (NSP) objectives from [BERT](https://arxiv.org/abs/1810.04805): the model predicts masked-out tokens in an input sequence and whether a pair of sentences occur as neighbors in a document (see the sketch below).
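As an illustration of the MLM objective at inference time, a hedged sketch (our addition; whether this checkpoint exposes a BERT-style mask token is an assumption, so check the tokenizer first):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bigcode/starencoder")
model = AutoModelForMaskedLM.from_pretrained("bigcode/starencoder")
assert tokenizer.mask_token is not None, "checkpoint lacks a mask token"

# Mask out one token in a code snippet and inspect the top predictions.
code = f"def add(a, b): return a {tokenizer.mask_token} b"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top5))  # candidate fills for the mask
```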
## Training
We train for 100,000 steps with a global batch size of 4,096 sequences of a maximum length of 1,024, so that approximately 400B tokens are observed. This takes roughly two days using 64 NVIDIA A100 GPUs.
Details about the model architecture are reported in the table below.
| Hyperparameter | Value |
|--------------------------|-----------|
| Hidden size | 768 |
| Intermediate size | 3072 |
| Max. position embeddings | 1024 |
| Num. of attention heads | 12 |
| Num. of hidden layers | 12 |
| Attention | Multi-head|
| Num. of parameters | โ125M |
## Use
This model is trained on 86 programming languages from GitHub code including GitHub issues and Git Commits, and can be efficiently fine-tuned for both code- and text-related tasks.
We fine-tuned it on a token-classification task to detect PII and released the [StarPII](https://huggingface.co/bigcode/starpii) model.
## Limitations
There are limitations to consider when using StarEncoder. It is an encoder-only model, which limits its flexibility in certain code generation or completion tasks,
and it was trained on data containing PII, which could pose privacy concerns. Performance may vary across the 80+ supported programming languages,
particularly for less common ones, and the model might struggle with understanding domains outside programming languages.
## License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement). | 2,863 | [
[
-0.0218048095703125,
-0.042266845703125,
0.0245819091796875,
0.028564453125,
-0.01036834716796875,
-0.0157012939453125,
-0.023345947265625,
-0.049224853515625,
0.0140380859375,
0.0489501953125,
-0.045379638671875,
-0.043609619140625,
-0.0577392578125,
0.0056... |
diffusers/controlnet-depth-sdxl-1.0-mid | 2023-08-16T14:11:04.000Z | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"license:openrail++",
"diffusers:ControlNetModel",
"region:us"
] | text-to-image | diffusers | null | null | diffusers/controlnet-depth-sdxl-1.0-mid | 10 | 2,237 | diffusers | 2023-08-16T11:30:18 |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: false
---
# SDXL-controlnet: Depth
These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with depth conditioning. This checkpoint is 5x smaller than the original XL controlnet checkpoint. You can find some example images below.
prompt: donald trump, serious look, cigar in the mouth, 70mm, film still, head shot

prompt: spiderman lecture, photorealistic

prompt: aerial view, a futuristic research complex in a bright foggy jungle, hard lighting

prompt: megatron in an apocalyptic world ground, ruined city in the background, photorealistic

## Usage
Make sure to first install the libraries:
```bash
pip install accelerate transformers safetensors diffusers
```
And then we're ready to go:
```python
import torch
import numpy as np
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForDepthEstimation
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda")
feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas")
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-depth-sdxl-1.0-mid",
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
).to("cuda")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda")
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
).to("cuda")
pipe.enable_model_cpu_offload()
def get_depth_map(image):
image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda")
with torch.no_grad(), torch.autocast("cuda"):
depth_map = depth_estimator(image).predicted_depth
depth_map = torch.nn.functional.interpolate(
depth_map.unsqueeze(1),
size=(1024, 1024),
mode="bicubic",
align_corners=False,
)
depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
depth_map = (depth_map - depth_min) / (depth_max - depth_min)
image = torch.cat([depth_map] * 3, dim=1)
image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
return image
prompt = "stormtrooper lecture, photorealistic"
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")
controlnet_conditioning_scale = 0.5 # recommended for good generalization
depth_image = get_depth_map(image)
images = pipe(
prompt, image=depth_image, num_inference_steps=30, controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0]
images[0].save("stormtrooper_grid.png")
```

For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl).
🚨 Please note that this checkpoint is experimental and there's a lot of room for improvement. We encourage the community to build on top of it, improve it, and provide us with feedback. 🚨
### Training
Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).
You can refer to [this script](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py) for full disclosure.
* This checkpoint does not perform distillation. We just use a smaller ControlNet initialized from the SDXL UNet. We
encourage the community to try and conduct distillation too. This resource might be of help in [this regard](https://huggingface.co/blog/sd_distillation).
* To learn more about how the ControlNet was initialized, refer to [this code block](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py#L981C1-L999C36).
* It does not have any attention blocks.
* The model works well on most conditioning images, but for more complex conditionings the bigger checkpoints might be better. We are still working on improving the quality of this checkpoint and looking for feedback from the community.
* We recommend playing around with the `controlnet_conditioning_scale` and `guidance_scale` arguments for potentially better image generation quality; a small sweep is sketched below.
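A sweep along the lines of that recommendation might look like this (illustrative only; it reuses `pipe`, `prompt`, and `depth_image` from the usage example above, and the scale values are arbitrary):
```python
# Sweep the conditioning strength; guidance_scale is worth varying too.
for scale in (0.3, 0.5, 0.7):
    image = pipe(
        prompt,
        image=depth_image,
        num_inference_steps=30,
        controlnet_conditioning_scale=scale,
        guidance_scale=7.5,  # assumed default-ish value
    ).images[0]
    image.save(f"stormtrooper_scale_{scale}.png")
```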
#### Training data
The model was trained on 3M images from the LAION aesthetic 6+ subset, with a batch size of 256, for 50k steps, at a constant learning rate of 3e-5.
#### Compute
One 8xA100 machine
#### Mixed precision
FP16 | 5,309 | [
[
-0.048583984375,
-0.04144287109375,
0.0232696533203125,
0.0264739990234375,
-0.021514892578125,
-0.0007863044738769531,
0.005889892578125,
-0.0162200927734375,
0.0219573974609375,
0.02178955078125,
-0.040557861328125,
-0.031280517578125,
-0.055572509765625,
... |
ainize/bart-base-cnn | 2021-06-21T09:52:44.000Z | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | summarization | ainize | null | null | ainize/bart-base-cnn | 9 | 2,235 | transformers | 2022-03-02T23:29:05 | ---
language: en
license: apache-2.0
datasets:
- cnn_dailymail
tags:
- summarization
- bart
---
# BART base model fine-tuned on CNN Dailymail
- This model is a [bart-base model](https://huggingface.co/facebook/bart-base) fine-tuned on the [CNN/Dailymail summarization dataset](https://huggingface.co/datasets/cnn_dailymail) using [Ainize Teachable-NLP](https://ainize.ai/teachable-nlp).
The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.
The authors' code can be found here:
https://github.com/pytorch/fairseq/tree/master/examples/bart
## Usage
### Python Code
```python
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration
# Load Model and Tokenize
tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/bart-base-cnn")
model = BartForConditionalGeneration.from_pretrained("ainize/bart-base-cnn")
# Encode Input Text
input_text = '(CNN) -- South Korea launched an investigation Tuesday into reports of toxic chemicals being dumped at a former U.S. military base, the Defense Ministry said. The tests follow allegations of American soldiers burying chemicals on Korean soil. The first tests are being carried out by a joint military, government and civilian task force at the site of what was Camp Mercer, west of Seoul. "Soil and underground water will be taken in the areas where toxic chemicals were allegedly buried," said the statement from the South Korean Defense Ministry. Once testing is finished, the government will decide on how to test more than 80 other sites -- all former bases. The alarm was raised this month when a U.S. veteran alleged barrels of the toxic herbicide Agent Orange were buried at an American base in South Korea in the late 1970s. Two of his fellow soldiers corroborated his story about Camp Carroll, about 185 miles (300 kilometers) southeast of the capital, Seoul. "We\'ve been working very closely with the Korean government since we had the initial claims," said Lt. Gen. John Johnson, who is heading the Camp Carroll Task Force. "If we get evidence that there is a risk to health, we are going to fix it." A joint U.S.- South Korean investigation is being conducted at Camp Carroll to test the validity of allegations. The U.S. military sprayed Agent Orange from planes onto jungles in Vietnam to kill vegetation in an effort to expose guerrilla fighters. Exposure to the chemical has been blamed for a wide variety of ailments, including certain forms of cancer and nerve disorders. It has also been linked to birth defects, according to the Department of Veterans Affairs. Journalist Yoonjung Seo contributed to this report.'
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate Summary Text Ids
summary_text_ids = model.generate(
input_ids=input_ids,
bos_token_id=model.config.bos_token_id,
eos_token_id=model.config.eos_token_id,
length_penalty=2.0,
max_length=142,
min_length=56,
num_beams=4,
)
# Decoding Text
print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True))
```
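For quick experiments, the same checkpoint should also work through the high-level `pipeline` API (a sketch, not from the original card; generation defaults may differ slightly from the explicit parameters above):
```python
from transformers import pipeline

# Reuses `input_text` from the snippet above.
summarizer = pipeline("summarization", model="ainize/bart-base-cnn")
print(summarizer(input_text, max_length=142, min_length=56)[0]["summary_text"])
```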
### API
You can experience this model through [ainize](https://ainize.ai/gkswjdzz/summarize-torchserve?branch=main). | 3,958 | [
[
-0.0251007080078125,
-0.04144287109375,
0.03900146484375,
0.0159759521484375,
-0.00909423828125,
-0.007640838623046875,
-0.0011310577392578125,
-0.033172607421875,
-0.0023555755615234375,
0.037353515625,
-0.016937255859375,
-0.050506591796875,
-0.04656982421875,... |
comodoro/wav2vec2-xls-r-300m-cs-250 | 2023-10-31T10:01:10.000Z | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"cs",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:ovm",
"... | automatic-speech-recognition | comodoro | null | null | comodoro/wav2vec2-xls-r-300m-cs-250 | 1 | 2,233 | transformers | 2022-03-02T23:29:05 | ---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- xlsr-fine-tuning-week
datasets:
- mozilla-foundation/common_voice_8_0
- ovm
- pscr
- vystadial2016
base_model: facebook/wav2vec2-xls-r-300m
model-index:
- name: Czech comodoro Wav2Vec2 XLSR 300M 250h data
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cs
metrics:
- type: wer
value: 7.3
name: Test WER
- type: cer
value: 2.1
name: Test CER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- type: wer
value: 43.44
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- type: wer
value: 38.5
name: Test WER
---
# Czech wav2vec2-xls-r-300m-cs-250
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset as well as other datasets listed below.
It achieves the following results on the evaluation set:
- Loss: 0.1271
- Wer: 0.1475
- Cer: 0.0329
The `eval.py` script results using an LM are:
- WER: 0.07274312090176113
- CER: 0.021207369275558875
## Model description
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```bash
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-250 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs
```
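If you would rather score predictions in-process than via the script, a sketch with `jiwer` (our illustration, not part of the repository; the example strings are placeholders):
```python
import jiwer

# In practice these would come from the inference snippet above:
# processor.batch_decode output vs. the dataset's reference sentences.
predictions = ["toto je priklad"]   # hypothetical decoded outputs
references = ["toto je příklad"]    # hypothetical ground truth
print("WER:", jiwer.wer(references, predictions))
print("CER:", jiwer.cer(references, predictions))
```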
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training, as well as the following datasets:
- Šmídl, Luboš and Pražák, Aleš, 2013, OVM – Otázky Václava Moravce, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-000D-EC98-3.
- Pražák, Aleš and Šmídl, Luboš, 2012, Czech Parliament Meetings, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-0005-CF9C-4.
- Plátek, Ondřej; Dušek, Ondřej and Jurčíček, Filip, 2016, Vystadial 2016 – Czech data, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11234/1-1740.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.4203 | 0.16 | 800 | 3.3148 | 1.0 | 1.0 |
| 2.8151 | 0.32 | 1600 | 0.8508 | 0.8938 | 0.2345 |
| 0.9411 | 0.48 | 2400 | 0.3335 | 0.3723 | 0.0847 |
| 0.7408 | 0.64 | 3200 | 0.2573 | 0.2840 | 0.0642 |
| 0.6516 | 0.8 | 4000 | 0.2365 | 0.2581 | 0.0595 |
| 0.6242 | 0.96 | 4800 | 0.2039 | 0.2433 | 0.0541 |
| 0.5754 | 1.12 | 5600 | 0.1832 | 0.2156 | 0.0482 |
| 0.5626 | 1.28 | 6400 | 0.1827 | 0.2091 | 0.0463 |
| 0.5342 | 1.44 | 7200 | 0.1744 | 0.2033 | 0.0468 |
| 0.4965 | 1.6 | 8000 | 0.1705 | 0.1963 | 0.0444 |
| 0.5047 | 1.76 | 8800 | 0.1604 | 0.1889 | 0.0422 |
| 0.4814 | 1.92 | 9600 | 0.1604 | 0.1827 | 0.0411 |
| 0.4471 | 2.09 | 10400 | 0.1566 | 0.1822 | 0.0406 |
| 0.4509 | 2.25 | 11200 | 0.1619 | 0.1853 | 0.0432 |
| 0.4415 | 2.41 | 12000 | 0.1513 | 0.1764 | 0.0397 |
| 0.4313 | 2.57 | 12800 | 0.1515 | 0.1739 | 0.0392 |
| 0.4163 | 2.73 | 13600 | 0.1445 | 0.1695 | 0.0377 |
| 0.4142 | 2.89 | 14400 | 0.1478 | 0.1699 | 0.0385 |
| 0.4184 | 3.05 | 15200 | 0.1430 | 0.1669 | 0.0376 |
| 0.3886 | 3.21 | 16000 | 0.1433 | 0.1644 | 0.0374 |
| 0.3795 | 3.37 | 16800 | 0.1426 | 0.1648 | 0.0373 |
| 0.3859 | 3.53 | 17600 | 0.1357 | 0.1604 | 0.0361 |
| 0.3762 | 3.69 | 18400 | 0.1344 | 0.1558 | 0.0349 |
| 0.384 | 3.85 | 19200 | 0.1379 | 0.1576 | 0.0359 |
| 0.3762 | 4.01 | 20000 | 0.1344 | 0.1539 | 0.0346 |
| 0.3559 | 4.17 | 20800 | 0.1339 | 0.1525 | 0.0351 |
| 0.3683 | 4.33 | 21600 | 0.1315 | 0.1518 | 0.0342 |
| 0.3572 | 4.49 | 22400 | 0.1307 | 0.1507 | 0.0342 |
| 0.3494 | 4.65 | 23200 | 0.1294 | 0.1491 | 0.0335 |
| 0.3476 | 4.81 | 24000 | 0.1287 | 0.1491 | 0.0336 |
| 0.3475 | 4.97 | 24800 | 0.1271 | 0.1475 | 0.0329 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| 7,184 | [
[
-0.042327880859375,
-0.045257568359375,
0.015869140625,
0.0131072998046875,
-0.007732391357421875,
-0.01384735107421875,
-0.023773193359375,
-0.017333984375,
0.02734375,
0.0262451171875,
-0.045135498046875,
-0.051239013671875,
-0.039459228515625,
-0.00596618... |
ml6team/bert-base-uncased-city-country-ner | 2022-07-01T07:27:25.000Z | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"address-NER",
"NER",
"bert-base-uncased",
"en",
"dataset:Ultra Fine Entity Typing",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | ml6team | null | null | ml6team/bert-base-uncased-city-country-ner | 8 | 2,232 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
tags:
- token-classification
- address-NER
- NER
- bert-base-uncased
datasets:
- Ultra Fine Entity Typing
metrics:
- Precision
- Recall
- F1 Score
widget:
- text: "Hi, I am Kermit and I live in Berlin"
- text: "It is very difficult to find a house in Berlin, Germany."
- text: "ML6 is a very cool company from Belgium"
- text: "Samuel ppops in a happy plce called Berlin which happens to be Kazakhstan"
- text: "My family and I visited Montreal, Canada last week and the flight from Amsterdam took 9 hours"
---
## City-Country-NER
A `bert-base-uncased` model finetuned on a custom dataset to detect `Country` and `City` names from a given sentence.
### Custom Dataset
We weakly supervised the [Ultra-Fine Entity Typing](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) dataset to include the `City` and `Country` information. We also did some extra preprocessing to remove false labels.
The model predicts 3 different tags: `OTHER`, `CITY` and `COUNTRY`
### How to use the finetuned model?
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("ml6team/bert-base-uncased-city-country-ner")
model = AutoModelForTokenClassification.from_pretrained("ml6team/bert-base-uncased-city-country-ner")
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("My name is Kermit and I live in London.")
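# `nlp` returns a list of dicts such as
# {"entity_group": "CITY", "word": "london", "score": 0.99, ...}.
# A small post-processing sketch (our addition, not from the original card;
# the exact label strings and the threshold are assumptions):
entities = nlp("My family and I visited Montreal, Canada last week")
places = [e["word"] for e in entities
          if e["entity_group"] in ("CITY", "COUNTRY") and e["score"] > 0.9]
print(places)  # expected along the lines of ['montreal', 'canada']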
``` | 1,481 | [
[
-0.042694091796875,
-0.04925537109375,
0.029388427734375,
-0.00025725364685058594,
-0.0240478515625,
-0.0139312744140625,
-0.0275115966796875,
-0.0256805419921875,
0.01544952392578125,
0.041839599609375,
-0.026611328125,
-0.060699462890625,
-0.046234130859375,
... |
NousResearch/Llama-2-70b-hf | 2023-08-26T20:17:24.000Z | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"text-generation-inference",
"region:us"
] | text-generation | NousResearch | null | null | NousResearch/Llama-2-70b-hf | 14 | 2,232 | transformers | 2023-07-18T20:56:59 | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
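As a sketch of that tag structure for a single-turn prompt (our illustration; treat the reference `chat_completion` code as authoritative, since BOS/EOS handling for multi-turn dialogue is more involved):
```python
def build_prompt(system: str, user: str) -> str:
    # The tokenizer normally prepends BOS (<s>) on its own.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

prompt = build_prompt(
    "You are a helpful, respectful and honest assistant.",
    "Explain grouped-query attention in one paragraph.",
)
print(prompt)
```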
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
| 10,100 | [
[
-0.0164337158203125,
-0.0526123046875,
0.0277252197265625,
0.01485443115234375,
-0.0281829833984375,
0.0163726806640625,
-0.0033435821533203125,
-0.056427001953125,
0.004619598388671875,
0.0237579345703125,
-0.052520751953125,
-0.0426025390625,
-0.05047607421875... |
Supabase/gte-small | 2023-09-20T20:02:20.000Z | [
"transformers.js",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"en",
"license:mit",
"region:us"
] | feature-extraction | Supabase | null | null | Supabase/gte-small | 16 | 2,230 | transformers.js | 2023-08-01T17:50:33 | ---
pipeline_tag: feature-extraction
library_name: "transformers.js"
language:
- en
license: mit
---
_Fork of https://huggingface.co/thenlper/gte-small with ONNX weights to be compatible with Transformers.js. See [JavaScript usage](#javascript)._
---
# gte-small
General Text Embeddings (GTE) model.
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
## Metrics
The performance of the GTE models was compared with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |
## Usage
This model can be used with both [Python](#python) and [JavaScript](#javascript).
### Python
Use with [Transformers](https://huggingface.co/docs/transformers/index) and [PyTorch](https://pytorch.org/docs/stable/index.html):
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
tokenizer = AutoTokenizer.from_pretrained("Supabase/gte-small")
model = AutoModel.from_pretrained("Supabase/gte-small")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
Use with [sentence-transformers](https://www.sbert.net/):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('Supabase/gte-small')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
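Since retrieval is one of the advertised downstream tasks, here is a small ranking sketch with sentence-transformers (our illustration; the query and documents are made up):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer('Supabase/gte-small')

query = "how do I sort a list in python?"
docs = [
    "Use the built-in sorted() function or list.sort().",
    "The capital of France is Paris.",
    "Quicksort is a divide-and-conquer sorting algorithm.",
]
# Rank documents by cosine similarity to the query embedding.
scores = cos_sim(model.encode(query), model.encode(docs))[0]
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {doc}")
```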
### JavaScript
This model can be used with JavaScript via [Transformers.js](https://huggingface.co/docs/transformers.js/index).
Use with [Deno](https://deno.land/manual/introduction) or [Supabase Edge Functions](https://supabase.com/docs/guides/functions):
```ts
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'
import { env, pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.0'
// Configuration for Deno runtime
env.useBrowserCache = false;
env.allowLocalModels = false;
const pipe = await pipeline(
'feature-extraction',
'Supabase/gte-small',
);
serve(async (req) => {
// Extract input string from JSON body
const { input } = await req.json();
// Generate the embedding from the user input
const output = await pipe(input, {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
// Return the embedding
return new Response(
JSON.stringify({ embedding }),
{ headers: { 'Content-Type': 'application/json' } }
);
});
```
Use within the browser ([JavaScript Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules)):
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.0';
const pipe = await pipeline(
'feature-extraction',
'Supabase/gte-small',
);
// Generate the embedding from text
const output = await pipe('Hello world', {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
console.log(embedding);
</script>
```
Use within [Node.js](https://nodejs.org/en/docs) or a web bundler ([Webpack](https://webpack.js.org/concepts/), etc):
```js
import { pipeline } from '@xenova/transformers';
const pipe = await pipeline(
'feature-extraction',
'Supabase/gte-small',
);
// Generate the embedding from text
const output = await pipe('Hello world', {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
console.log(embedding);
```
### Limitation
This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
| 7,618 | [
[
-0.0338134765625,
-0.043914794921875,
0.01934814453125,
0.005588531494140625,
-0.0172882080078125,
0.002300262451171875,
-0.0220947265625,
-0.018218994140625,
0.047607421875,
0.01172637939453125,
-0.04351806640625,
-0.05743408203125,
-0.058441162109375,
-0.0... |
allenai/multicite-multilabel-scibert | 2022-05-10T17:45:24.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"scibert",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | allenai | null | null | allenai/multicite-multilabel-scibert | 1 | 2,228 | transformers | 2022-05-06T12:02:26 | ---
language: en
tags:
- scibert
license: mit
---
# MultiCite: Multi-label Citation Intent Classification with SciBERT (NAACL 2022)
This model has been trained on the data available here: https://github.com/allenai/multicite | 227 | [
[
-0.01462554931640625,
0.0040130615234375,
0.050994873046875,
0.050445556640625,
0.0123748779296875,
-0.00452423095703125,
0.014892578125,
-0.0176849365234375,
0.01763916015625,
0.0274505615234375,
-0.039886474609375,
-0.04168701171875,
-0.038543701171875,
0.... |
Salesforce/codegen2-3_7B | 2023-07-06T10:48:32.000Z | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"custom_code",
"arxiv:2305.02309",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-generation | Salesforce | null | null | Salesforce/codegen2-3_7B | 13 | 2,228 | transformers | 2023-04-25T02:08:59 | ---
license: apache-2.0
---
# CodeGen2 (CodeGen2-3.7B)
## Model description
[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:
[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.
Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality.
### Causal sampling
For regular causal sampling, simply generate completions given the context:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-3_7B", trust_remote_code=True, revision="main")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
### Infill sampling
For **infill** sampling, we introduce three new special token types:
* `<mask_N>`: N-th span to be masked. In practice, insert `<mask_1>` where you want to sample an infill.
* `<sep>`: Separator token between the suffix and the infilled sample. See below.
* `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.
For example, if we want to generate infill for the following cursor position of a function:
```python
def hello_world():
|
return name
```
we construct an input to the model by:
1. Inserting a `<mask_1>` token in place of the cursor position,
2. Appending a `<sep>` token to indicate the boundary, and
3. Inserting another `<mask_1>` to indicate which mask we want to infill.
The final snippet looks as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-3_7B", trust_remote_code=True, revision="main")
def format(prefix, suffix):
return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"
prefix = "def hello_world():\n "
suffix = " return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])
```
You might want to truncate the model output with `<eom>`.
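For example, a minimal truncation sketch (reusing `generated_ids`, `tokenizer`, and `text` from the snippet above; the `<eom>` handling is an assumption based on the token's description):

```python
# Decode the full completion and drop the prompt portion.
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
# Keep only the infilled span: everything before the first <eom>, if one was generated.
infill = completion.split("<eom>")[0]
print(infill)
```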
## Training data
This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:
`c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
## Training procedure
CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption.
Please refer to the paper for more details.
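Concretely, the causal-language-modeling objective corresponds to the standard autoregressive negative log-likelihood over a token sequence $x_1, \dots, x_T$ (a textbook formulation, not reproduced from the paper; the span-corruption variant applies the same loss to the rearranged target sequence):

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
$$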
## Evaluation results
We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.
## Intended use and limitations
As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and of calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2023codegen2,
title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
journal={arXiv preprint},
year={2023}
}
```
| 4,431 | [
[
-0.016998291015625,
-0.049560546875,
0.0019989013671875,
0.0250091552734375,
-0.0070343017578125,
0.00998687744140625,
-0.01190948486328125,
-0.04095458984375,
-0.0155029296875,
0.032684326171875,
-0.041656494140625,
-0.0262451171875,
-0.03546142578125,
0.01... |
Yntec/Cetus | 2023-09-28T06:17:00.000Z | [
"diffusers",
"Anime",
"2D",
"2.5D",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Eagelaxis",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/Cetus | 2 | 2,228 | diffusers | 2023-08-29T04:42:12 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Anime
- 2D
- 2.5D
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Eagelaxis
inference: true
---
# Cetus
When you think about a Cetus generation, you think about the 3.5 version. This is its fp16-no-ema checkpoint.
Samples and prompts:


Pretty cute girl. Like lesser birds on the four winds. Like silver scrapes in May. Now the sands become a crust. And most of you have gone away.
Original page:
https://civitai.com/models/6755?modelVersionId=29851
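A minimal usage sketch (not from the original page; assumes the standard `diffusers` `StableDiffusionPipeline` API, a CUDA device, and an illustrative prompt):

```python
import torch
from diffusers import StableDiffusionPipeline

# fp16 matches the fp16-no-ema weights mentioned above; drop torch_dtype on CPU.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/Cetus", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("pretty cute girl, detailed eyes, anime style").images[0]
image.save("cetus_sample.png")
```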
| 798 | [
[
-0.049072265625,
-0.05615234375,
0.06451416015625,
0.0133819580078125,
-0.040130615234375,
-0.0045013427734375,
0.0010480880737304688,
-0.040130615234375,
0.041595458984375,
0.02984619140625,
-0.042755126953125,
-0.0419921875,
-0.0178375244140625,
0.01013183... |
timm/eva_giant_patch14_224.clip_ft_in1k | 2023-03-31T05:57:21.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2303.15389",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/eva_giant_patch14_224.clip_ft_in1k | 2 | 2,221 | timm | 2022-12-23T02:37:40 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
---
# Model card for eva_giant_patch14_224.clip_ft_in1k
An EVA-CLIP image classification model. Pretrained on LAION-400M with CLIP and fine-tuned on ImageNet-1k by paper authors. EVA-CLIP uses MIM pretrained image towers and pretrained text towers, FLIP patch dropout, and different optimizers and hparams to accelerate training.
NOTE: `timm` checkpoints are float32 for consistency with other models. The original checkpoints are float16 or bfloat16 in some cases; see the originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 1012.6
- GMACs: 267.2
- Activations (M): 192.6
- Image size: 224 x 224
- **Papers:**
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/QuanSun/EVA-CLIP
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # used for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva_giant_patch14_224.clip_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva_giant_patch14_224.clip_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 1408) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
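As a follow-up sketch (not part of the original card), the pooled features can be compared directly, e.g. with cosine similarity; here both inputs are the same image, so the score should be 1.0:

```python
import torch
import torch.nn.functional as F

# Reuses `model` (created with num_classes=0), `transforms`, and `img` from above.
batch = torch.stack([transforms(img), transforms(img)])  # (2, 3, 224, 224)
features = model(batch)                                   # (2, num_features)
# Cosine similarity between the two pooled embeddings; identical inputs give 1.0.
print(F.cosine_similarity(features[0:1], features[1:2]).item())
```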
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA-CLIP,
title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,897 | [
[
-0.04644775390625,
-0.033447265625,
0.013336181640625,
0.00963592529296875,
-0.0182037353515625,
0.0026187896728515625,
-0.0095977783203125,
-0.029327392578125,
0.04852294921875,
0.026092529296875,
-0.03546142578125,
-0.053497314453125,
-0.04638671875,
0.008... |