modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
timm/xception41.tf_in1k | 2023-04-21T23:43:10.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1802.02611",
"arxiv:1610.02357",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/xception41.tf_in1k | 1 | 1,792 | timm | 2023-04-21T23:42:47 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for xception41.tf_in1k
An Aligned Xception image classification model. Trained on ImageNet-1k in TensorFlow and ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 27.0
- GMACs: 9.3
- Activations (M): 39.9
- Image size: 299 x 299
- **Papers:**
- Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation: https://arxiv.org/abs/1802.02611
- Xception: Deep Learning with Depthwise Separable Convolutions: https://arxiv.org/abs/1610.02357
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/models/blob/master/research/deeplab/
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xception41.tf_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
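The final softmax + `topk` line can be illustrated on synthetic logits (dummy values, not real model output, which would have shape `(1, 1000)` for ImageNet-1k):

```python
import torch

# Synthetic logits standing in for model output: batch of 1, 5 classes.
logits = torch.tensor([[2.0, 0.5, 1.0, -1.0, 0.0]])
probs = logits.softmax(dim=1) * 100  # percentages; each row sums to 100
top3_prob, top3_idx = torch.topk(probs, k=3)
print(top3_idx.tolist()[0])  # class indices sorted by descending probability
```

The returned indices can then be mapped to human-readable labels via an ImageNet class-index file.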
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xception41.tf_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 150, 150])
# torch.Size([1, 256, 75, 75])
# torch.Size([1, 728, 38, 38])
# torch.Size([1, 1024, 19, 19])
# torch.Size([1, 2048, 10, 10])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xception41.tf_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 10, 10) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
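The `pre_logits` step above reduces the unpooled `(1, 2048, 10, 10)` tensor to a `(1, 2048)` embedding. Conceptually this is global average pooling over the spatial dimensions, sketched here on dummy data (timm heads may use a different pooling configuration; this only illustrates the shape change):

```python
import torch

# Dummy unpooled features shaped like the card's (1, 2048, 10, 10) example.
feats = torch.randn(1, 2048, 10, 10)
# Global average pooling: mean over the two spatial dims.
pooled = feats.mean(dim=(2, 3))
print(pooled.shape)  # torch.Size([1, 2048])
```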
## Citation
```bibtex
@inproceedings{deeplabv3plus2018,
title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
booktitle={ECCV},
year={2018}
}
```
```bibtex
@misc{chollet2017xception,
title={Xception: Deep Learning with Depthwise Separable Convolutions},
author={François Chollet},
year={2017},
eprint={1610.02357},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| 3,917 | [
[
-0.035797119140625,
-0.032379150390625,
0.006916046142578125,
0.007228851318359375,
-0.0322265625,
-0.014495849609375,
-0.0180511474609375,
-0.036651611328125,
0.0098114013671875,
0.03314208984375,
-0.03680419921875,
-0.062103271484375,
-0.054412841796875,
-... |
voidism/diffcse-roberta-base-sts | 2022-05-01T19:30:19.000Z | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2204.10298",
"arxiv:2104.08821",
"arxiv:2111.00899",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | voidism | null | null | voidism/diffcse-roberta-base-sts | 1 | 1,790 | transformers | 2022-04-14T15:19:51 | ---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
[GitHub](https://github.com/voidism/DiffCSE/)
[Open In Colab](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)
Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)
Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.
## Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
## Setups
[Python 3.9.5](https://www.python.org/downloads/release/python-395/)
### Requirements
* Python 3.9.5
### Install our customized Transformers package
```
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package.
### Install other packages
```
pip install -r requirements.txt
```
### Download the pretraining dataset
```
cd data
bash download_wiki.sh
```
### Download the downstream dataset
```
cd SentEval/data/downstream/
bash download_dataset.sh
```
## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--generator_name distilbert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir <your_output_model_dir> \
--num_train_epochs 2 \
--per_device_train_batch_size 64 \
--learning_rate 7e-6 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--overwrite_output_dir \
--logging_first_step \
--logging_dir <your_logging_dir> \
--temp 0.05 \
--do_train \
--do_eval \
--batchnorm \
--lambda_weight 0.005 \
--fp16 --masking_ratio 0.30
```
Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper.
* `--masking_ratio`: the masking ratio for the MLM generator to randomly replace tokens.
* `--generator_name`: the model name of the generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with, such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`roberta-base`, `roberta-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models.
For the results in our paper, we use an NVIDIA 2080 Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.
## Evaluation
[Open In Colab](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
We provide a simple Colab notebook to reproduce our results easily. You can also run the commands below for evaluation:
```bash
python evaluation.py \
--model_name_or_path <your_output_model_dir> \
--pooler cls_before_pooler \
--task_set <sts|transfer|full> \
--mode test
```
To evaluate our pretrained DiffCSE checkpoints, you can use the following scripts:
### BERT
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
### RoBERTa
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).
## Pretrained models
[Hugging Face Models](https://huggingface.co/voidism)
* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans
We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.
```python
from diffcse import DiffCSE
model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```
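Once sentences are embedded, comparing them reduces to cosine similarity between the pooled vectors. A minimal sketch on dummy vectors (hypothetical 4-d values for illustration; real DiffCSE embeddings are 768-d):

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product normalized by vector magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dummy vectors standing in for pooled sentence embeddings.
emb_a = np.array([0.2, 0.1, -0.3, 0.4])
emb_b = np.array([0.2, 0.1, -0.3, 0.4])
emb_c = np.array([-0.4, 0.3, 0.1, -0.2])
print(cosine_similarity(emb_a, emb_b))  # ≈ 1.0 for identical vectors
```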
## Citations
[DOI](https://doi.org/10.48550/arXiv.2204.10298)
Please cite our paper and the SimCSE paper if they are helpful to your work!
```bibtex
@inproceedings{chuang2022diffcse,
title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
year={2022}
}
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
| 8,670 | [
[
-0.037933349609375,
-0.0380859375,
0.0316162109375,
0.032867431640625,
-0.01995849609375,
-0.010589599609375,
-0.00630950927734375,
-0.0172882080078125,
0.0044403076171875,
0.002620697021484375,
-0.0516357421875,
-0.0283203125,
-0.061614990234375,
0.01139831... |
Lykon/AnimePastelDream | 2023-03-25T01:21:26.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"en",
"license:other",
"region:us"
] | text-to-image | Lykon | null | null | Lykon/AnimePastelDream | 7 | 1,790 | diffusers | 2023-03-22T22:58:29 | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
inference: false
---
For info: https://civitai.com/models/23521/anime-pastel-dream | 216 | [
[
-0.01629638671875,
-0.02392578125,
0.0271759033203125,
0.0484619140625,
-0.0145721435546875,
0.0054779052734375,
0.00786590576171875,
-0.0169525146484375,
0.056976318359375,
0.0408935546875,
-0.06707763671875,
-0.037353515625,
-0.004425048828125,
-0.01736450... |
kanu03/my-cat | 2023-07-16T17:44:02.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | kanu03 | null | null | kanu03/my-cat | 0 | 1,789 | diffusers | 2023-07-16T17:39:19 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-cat Dreambooth model trained by kanu03 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: OPJU101
Sample pictures of this concept:

| 369 | [
[
-0.052001953125,
-0.02117919921875,
0.032012939453125,
0.01227569580078125,
-0.0174407958984375,
0.04913330078125,
0.04052734375,
-0.032073974609375,
0.069091796875,
0.041717529296875,
-0.03985595703125,
-0.009674072265625,
-0.0097198486328125,
0.01863098144... |
bond005/wav2vec2-large-ru-golos-with-lm | 2023-02-27T06:08:09.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"common_voice",
"SberDevices/Golos",
"bond005/rulibrispeech",
"bond005/sova_rudevices",
"dangrebenkin/voxforge-ru-dataset",
"ru",
"dataset:SberDevices/Golos",
"dataset:common_voice",
"dataset:bond00... | automatic-speech-recognition | bond005 | null | null | bond005/wav2vec2-large-ru-golos-with-lm | 8 | 1,788 | transformers | 2022-09-26T14:44:38 | ---
language: ru
datasets:
- SberDevices/Golos
- common_voice
- bond005/rulibrispeech
- bond005/sova_rudevices
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- common_voice
- SberDevices/Golos
- bond005/rulibrispeech
- bond005/sova_rudevices
- dangrebenkin/voxforge-ru-dataset
license: apache-2.0
widget:
- example_title: test Russian speech "нейросети это хорошо" (in English, "neural networks are good")
src: https://huggingface.co/bond005/wav2vec2-large-ru-golos-with-lm/resolve/main/test_sound_ru.flac
model-index:
- name: XLSR Wav2Vec2 Russian with Language Model by Ivan Bondarenko
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (crowd)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 6.883
- name: Test CER
type: cer
value: 1.637
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (farfield)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 15.044
- name: Test CER
type: cer
value: 5.128
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ru
type: common_voice
args: ru
metrics:
- name: Test WER
type: wer
value: 12.115
- name: Test CER
type: cer
value: 2.980
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Russian Librispeech
type: bond005/rulibrispeech
args: ru
metrics:
- name: Test WER
type: wer
value: 15.736
- name: Test CER
type: cer
value: 3.573
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sova RuDevices
type: bond005/sova_rudevices
args: ru
metrics:
- name: Test WER
type: wer
value: 20.652
- name: Test CER
type: cer
value: 7.287
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Voxforge Ru
type: dangrebenkin/voxforge-ru-dataset
args: ru
metrics:
- name: Test WER
type: wer
value: 19.079
- name: Test CER
type: cer
value: 5.864
---
# Wav2Vec2-Large-Ru-Golos-With-LM
The Wav2Vec2 model is based on [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53), fine-tuned in Russian using [Sberdevices Golos](https://huggingface.co/datasets/SberDevices/Golos) with audio augmentations such as pitch shifting, acceleration/deceleration of sound, reverberation, etc.
The 2-gram language model is built on the Russian text corpus obtained from three open sources:
- random 10% subset of [Taiga](https://tatianashavrina.github.io/taiga_site)
- [Russian Wikipedia](https://ru.wikipedia.org)
- [Russian Wikinews](https://ru.wikinews.org).
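The 2-gram idea can be sketched with a toy bigram estimator. This is a hypothetical illustration on a tiny corpus, not the actual model, which is typically built with a dedicated tool such as KenLM over the full text corpus and applied during CTC beam-search decoding:

```python
from collections import Counter

# Toy corpus; the real LM is estimated from Taiga, Russian Wikipedia,
# and Russian Wikinews.
corpus = "нейросети это хорошо нейросети это быстро".split()
bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus[:-1])              # counts of context words

def bigram_prob(w1, w2):
    # Maximum-likelihood estimate of P(w2 | w1); real LMs add smoothing.
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("нейросети", "это"))  # → 1.0
```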
## Usage
When using this model, make sure that your speech input is sampled at 16kHz.
You can use this model by writing your own inference script:
```python
import os
import warnings
import librosa
import nltk
import numpy as np
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM
MODEL_ID = "bond005/wav2vec2-large-ru-golos-with-lm"
DATASET_ID = "bond005/sberdevices_golos_10h_crowd"
SAMPLES = 30
nltk.download('punkt')
num_processes = max(1, os.cpu_count())
test_dataset = load_dataset(DATASET_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2ProcessorWithLM.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array = batch["audio"]["array"]
batch["speech"] = np.asarray(speech_array, dtype=np.float32)
return batch
removed_columns = set(test_dataset.column_names)
removed_columns -= {'transcription', 'speech'}
removed_columns = sorted(list(removed_columns))
with warnings.catch_warnings():
warnings.simplefilter("ignore")
test_dataset = test_dataset.map(
speech_file_to_array_fn,
num_proc=num_processes,
remove_columns=removed_columns
)
inputs = processor(test_dataset["speech"], sampling_rate=16_000,
return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values,
attention_mask=inputs.attention_mask).logits
predicted_sentences = processor.batch_decode(
logits=logits.numpy(),
num_processes=num_processes
).text
with warnings.catch_warnings():
warnings.simplefilter("ignore")
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["transcription"])
print("Prediction:", predicted_sentence)
```
```text
----------------------------------------------------------------------------------------------------
Reference: шестьдесят тысяч тенге сколько будет стоить
Prediction: шестьдесят тысяч тенге сколько будет стоить
----------------------------------------------------------------------------------------------------
Reference: покажи мне на смотрешке телеканал синергия тв
Prediction: покажи мне на смотрешке телеканал синергия тв
----------------------------------------------------------------------------------------------------
Reference: заказать яблоки зеленые
Prediction: заказать яблоки зеленые
----------------------------------------------------------------------------------------------------
Reference: алиса закажи килограммовый торт графские развалины
Prediction: алиса закажи килограммовый торт графские развалины
----------------------------------------------------------------------------------------------------
Reference: ищи телеканал про бизнес на тиви
Prediction: ищи телеканал про бизнес на тиви
----------------------------------------------------------------------------------------------------
Reference: михаила мурадяна
Prediction: михаила мурадяна
----------------------------------------------------------------------------------------------------
Reference: любовницы две тысячи тринадцать пятнадцатый сезон
Prediction: любовница две тысячи тринадцать пятнадцатый сезон
----------------------------------------------------------------------------------------------------
Reference: найди боевики
Prediction: найди боевики
----------------------------------------------------------------------------------------------------
Reference: гетто сезон три
Prediction: гета сезон три
----------------------------------------------------------------------------------------------------
Reference: хочу посмотреть ростов папа на телевизоре
Prediction: хочу посмотреть ростоу папа на телевизоре
----------------------------------------------------------------------------------------------------
Reference: сбер какое твое самое ненавистное занятие
Prediction: сбер какое твое самое ненавистное занятие
----------------------------------------------------------------------------------------------------
Reference: афина чем платят у китайцев
Prediction: афина чем платят у китайцев
----------------------------------------------------------------------------------------------------
Reference: джой как работает досрочное погашение кредита
Prediction: джой как работает досрочное погашение кредита
----------------------------------------------------------------------------------------------------
Reference: у тебя найдется люк кейдж
Prediction: у тебя найдется люк кейдж
----------------------------------------------------------------------------------------------------
Reference: у тебя будет лучшая часть пинк
Prediction: у тебя будет лучшая часть пинк
----------------------------------------------------------------------------------------------------
Reference: пожалуйста пополните мне счет
Prediction: пожалуйста пополните мне счет
----------------------------------------------------------------------------------------------------
Reference: анне павловне шабуровой
Prediction: анне павловне шабуровой
----------------------------------------------------------------------------------------------------
Reference: врубай на смотрешке муз тв
Prediction: врубай на смотрешке муз тиви
----------------------------------------------------------------------------------------------------
Reference: найди на смотрешке лдпр тв
Prediction: найди на смотрешке лдпр тв
----------------------------------------------------------------------------------------------------
Reference: сбер мне нужен педикюр забей мне место
Prediction: сбер мне нужен педикюр за обеление место
----------------------------------------------------------------------------------------------------
Reference: галины афанасьевны
Prediction: галины афанасьевны
----------------------------------------------------------------------------------------------------
Reference: сколько стоимость обмена китайского юаня на российский рубль
Prediction: сколько стоимость обмена китайского юаня на российский рубль
----------------------------------------------------------------------------------------------------
Reference: обмани меня сезон восемь часть тринадцать
Prediction: обмани меня сезон восемь часть тринадцать
----------------------------------------------------------------------------------------------------
Reference: включи канал футбол эйч ди
Prediction: включи канал футбол эйч ди
----------------------------------------------------------------------------------------------------
Reference: поп звезда не переставай не останавливайся найти
Prediction: поп звезда переставая не останавливайся найти
----------------------------------------------------------------------------------------------------
Reference: салют самый популярный фильм люка бессона
Prediction: салют самый популярный фильм люка бессона
----------------------------------------------------------------------------------------------------
Reference: татьяна зиганшина
Prediction: татьяна зигантшина
----------------------------------------------------------------------------------------------------
Reference: джой когда перестало существовать хеттское царство
Prediction: джой когда перестало существовать хеттское царство
----------------------------------------------------------------------------------------------------
Reference: олег яковлев
Prediction: олег яковлев
----------------------------------------------------------------------------------------------------
Reference: посоветуй мне шестая часть как избежать наказания за убийство
Prediction: посоветуй мне шестая часть как избежать наказания за убийство
```
The Google Colab version of [this script](https://colab.research.google.com/drive/1SnQmrt6HmMNV-zK-UCPajuwl1JvoCqbX?usp=sharing) is available too.
## Evaluation
This model was evaluated on the test subsets of [SberDevices Golos](https://huggingface.co/datasets/SberDevices/Golos), [Common Voice 6.0](https://huggingface.co/datasets/common_voice) (Russian part), and [Russian Librispeech](https://huggingface.co/datasets/bond005/rulibrispeech), but it was trained on the training subset of SberDevices Golos only. The evaluation script for other datasets, including Russian Librispeech and SOVA RuDevices, is available on my Kaggle page: https://www.kaggle.com/code/bond005/wav2vec2-ru-lm-eval
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{bondarenko2022wav2vec2-large-ru-golos,
title={XLSR Wav2Vec2 Russian with 2-gram Language Model by Ivan Bondarenko},
author={Bondarenko, Ivan},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/bond005/wav2vec2-large-ru-golos-with-lm}},
year={2022}
}
```
| 12,069 | [
[
-0.066650390625,
-0.052581787109375,
0.01001739501953125,
0.005458831787109375,
-0.018890380859375,
0.0163116455078125,
-0.01560211181640625,
-0.0276641845703125,
0.03533935546875,
0.0198974609375,
-0.057037353515625,
-0.009185791015625,
-0.03369140625,
-0.0... |
theexcitedgirl/my-pet-dog | 2023-10-08T03:49:19.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | theexcitedgirl | null | null | theexcitedgirl/my-pet-dog | 0 | 1,788 | diffusers | 2023-10-08T03:44:26 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by theexcitedgirl following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
| 816 | [
[
-0.049835205078125,
-0.027069091796875,
0.02484130859375,
0.0181884765625,
-0.0249176025390625,
0.02435302734375,
0.0283203125,
-0.028778076171875,
0.035797119140625,
0.017425537109375,
-0.037017822265625,
-0.034759521484375,
-0.0220489501953125,
0.005805969... |
abedsaad/lora-trained-xl-colab | 2023-09-30T22:37:09.000Z | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"license:openrail++",
"has_space",
"region:us"
] | text-to-image | abedsaad | null | null | abedsaad/lora-trained-xl-colab | 2 | 1,786 | diffusers | 2023-09-30T16:25:44 |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks man
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - abedsaad/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks man using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| 626 | [
[
-0.01332855224609375,
-0.01708984375,
0.0236053466796875,
0.010223388671875,
-0.041107177734375,
0.0305938720703125,
0.023468017578125,
-0.007389068603515625,
0.06964111328125,
0.038177490234375,
-0.042266845703125,
-0.0223541259765625,
-0.0428466796875,
-0.... |
agonh/SDXL-LORA | 2023-10-02T01:27:05.000Z | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion",
"text-to-image",
"lora",
"loraxl",
"en",
"license:openrail++",
"has_space",
"region:us"
] | text-to-image | agonh | null | null | agonh/SDXL-LORA | 2 | 1,786 | diffusers | 2023-10-02T01:27:05 | ---
license: openrail++
base_model: FFusion/FFXL400
instance_prompt: Morphxl_V10
widget:
- text: >-
your prompt
example_title: your creation
- text: >-
A cyberpunk city, cyberpunk style, a girl in the city , walking, ultra high quality, neon ambiance, abstract black oil, gear mecha, detailed acrylic, grunge, intricate complexity, rendered in unreal engine, photorealistic
example_title: Neon city
Negative prompt: photograph, deformed, glitch, noisy, realistic, stock photo, watermark,signature, blurry
tags:
- stable-diffusion-xl
- diffusers
- stable-diffusion
- text-to-image
- lora
- loraxl
language:
- en
library_name: diffusers
---
# SDXL-LORA
- Model creator: [FFusion](https://huggingface.co/FFusion)
- Original model: [400GB-LoraXL](https://huggingface.co/FFusion/400GB-LoraXL)
## Description
This repo contains files for [FFusion's 400GB-LoraXL](https://huggingface.co/FFusion/400GB-LoraXL).
| 939 | [
[
-0.0416259765625,
-0.024658203125,
0.0423583984375,
0.023834228515625,
-0.01708984375,
0.013824462890625,
0.0416259765625,
-0.011566162109375,
0.040435791015625,
0.08685302734375,
-0.08258056640625,
-0.0276947021484375,
-0.021484375,
0.0036525726318359375,
... |
huggingface/time-series-transformer-tourism-monthly | 2023-02-23T13:44:19.000Z | [
"transformers",
"pytorch",
"time_series_transformer",
"dataset:monash_tsf",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | huggingface | null | null | huggingface/time-series-transformer-tourism-monthly | 12 | 1,784 | transformers | 2022-09-26T14:37:22 | ---
license: mit
datasets:
- monash_tsf
---
# Time Series Transformer (trained on monash_tsf/tourism-monthly)
Time Series Transformer model trained on the tourism-monthly dataset for 30 epochs.
## Model description
The Time Series Transformer is a vanilla encoder-decoder Transformer for time-series forecasting. The model is trained in the same way as one trains a Transformer for machine translation. At inference time, the model autoregressively generates samples, one time step at a time.
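The autoregressive inference loop described above can be sketched abstractly: a predictor maps a context window to the next value, and each prediction is appended to the context before the next step. This toy uses a naive mean as a stand-in for the Transformer decoder; the real model samples from a learned distribution at each step:

```python
def predict_next(context):
    # Stand-in for the Transformer decoder: mean of the recent window.
    return sum(context) / len(context)

def autoregressive_forecast(history, horizon, window=3):
    series = list(history)
    for _ in range(horizon):
        # Feed the model's own output back in, one time step at a time.
        series.append(predict_next(series[-window:]))
    return series[len(history):]

print(autoregressive_forecast([1.0, 2.0, 3.0], horizon=2))
```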
## Usage
We refer to the [documentation](https://huggingface.co/transformers/main/model_doc/time_series_transformer.html) regarding usage. | 639 | [
[
-0.0234222412109375,
-0.0199127197265625,
0.00788116455078125,
0.0021114349365234375,
-0.034637451171875,
-0.005523681640625,
0.036834716796875,
0.018402099609375,
0.0126190185546875,
0.048309326171875,
-0.0787353515625,
-0.0126800537109375,
-0.045196533203125,
... |
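The autoregressive decoding described in the card above — generating samples one time step at a time, feeding each prediction back in as context — can be sketched with a toy example. This is a schematic illustration of the sampling loop only, using a made-up one-step model; it is not the actual `transformers` API, for which see the linked documentation:

```python
import random

def autoregressive_forecast(history, one_step_model, horizon, num_samples=3):
    """Generate forecast sample paths one time step at a time,
    feeding each prediction back in as context for the next step."""
    samples = []
    for _ in range(num_samples):
        context = list(history)
        path = []
        for _ in range(horizon):
            next_value = one_step_model(context)  # sample one step ahead
            path.append(next_value)
            context.append(next_value)            # autoregressive feedback
        samples.append(path)
    return samples

# Toy stand-in for the trained model: a noisy random walk.
def toy_model(context):
    return context[-1] + random.gauss(0.0, 0.1)

forecasts = autoregressive_forecast([1.0, 1.1, 1.2], toy_model, horizon=5)
```

Each returned path is an independent sample of the next five steps; averaging across samples gives a point forecast, while their spread reflects predictive uncertainty.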
AadithKumar/my-pet-dog-eaak | 2023-10-25T20:26:28.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | AadithKumar | null | null | AadithKumar/my-pet-dog-eaak | 1 | 1,784 | diffusers | 2023-10-25T20:21:38 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-EAAK Dreambooth model trained by AadithKumar following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: ASET-163
Sample pictures of this concept:
| 298 | [
[
-0.051727294921875,
-0.01342010498046875,
0.0203094482421875,
0.0118560791015625,
-0.01241302490234375,
0.0267181396484375,
0.032135009765625,
-0.030975341796875,
0.039825439453125,
0.029541015625,
-0.0379638671875,
-0.00787353515625,
-0.015838623046875,
0.0... |
zeroshot/bge-base-en-v1.5-quant | 2023-11-01T17:50:36.000Z | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"mteb",
"sparse sparsity quantized onnx embeddings int8",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | zeroshot | null | null | zeroshot/bge-base-en-v1.5-quant | 0 | 1,783 | transformers | 2023-10-03T12:45:42 | ---
license: mit
language:
- en
tags:
- mteb
- sparse sparsity quantized onnx embeddings int8
model-index:
- name: bge-base-en-v1.5-quant
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.16417910447761
- type: ap
value: 39.62965026785565
- type: f1
value: 70.30041589476463
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.95087500000001
- type: ap
value: 89.92451248271642
- type: f1
value: 92.94162732408543
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.214
- type: f1
value: 47.57084372829096
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.499816497755646
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.006939120636034
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.390343953329875
- type: mrr
value: 75.69922613551422
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.03408553833623
- type: cos_sim_spearman
value: 86.71221676053791
- type: euclidean_pearson
value: 87.81477796215844
- type: euclidean_spearman
value: 87.28994076774481
- type: manhattan_pearson
value: 87.76204756059836
- type: manhattan_spearman
value: 87.1971675695072
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.35064935064935
- type: f1
value: 86.32782396028989
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.299558776859485
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.64603198816062
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.269999999999996
- type: f1
value: 45.9714399031315
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 89.7204
- type: ap
value: 85.70238397381907
- type: f1
value: 89.70961232185473
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.95120839033288
- type: f1
value: 93.70348712248138
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.25763793889648
- type: f1
value: 57.59583082574482
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.16476126429052
- type: f1
value: 73.29287381030854
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.9340954942838
- type: f1
value: 79.04036413238218
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.80025982143821
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.956464446009623
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.886626060290734
- type: mrr
value: 32.99813843700759
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.693914682185365
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.32723620518647
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.70275347034692
- type: cos_sim_spearman
value: 80.06126639668393
- type: euclidean_pearson
value: 82.18370726102707
- type: euclidean_spearman
value: 80.05483013524909
- type: manhattan_pearson
value: 82.11962032129463
- type: manhattan_spearman
value: 79.97174232961949
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.08210281025868
- type: cos_sim_spearman
value: 77.75002826042643
- type: euclidean_pearson
value: 83.06487161944293
- type: euclidean_spearman
value: 78.0677956304104
- type: manhattan_pearson
value: 83.04321232787379
- type: manhattan_spearman
value: 78.09582483148635
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.64353592106988
- type: cos_sim_spearman
value: 86.07934653140616
- type: euclidean_pearson
value: 85.21820182954883
- type: euclidean_spearman
value: 86.18828773665395
- type: manhattan_pearson
value: 85.12075207905364
- type: manhattan_spearman
value: 86.12061116344299
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.33571296969136
- type: cos_sim_spearman
value: 82.8868213429789
- type: euclidean_pearson
value: 83.65476643152161
- type: euclidean_spearman
value: 82.76439753890263
- type: manhattan_pearson
value: 83.63348951033883
- type: manhattan_spearman
value: 82.76176495070241
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6337321089215
- type: cos_sim_spearman
value: 88.54453531860615
- type: euclidean_pearson
value: 87.68754116644199
- type: euclidean_spearman
value: 88.22610830299979
- type: manhattan_pearson
value: 87.62214887890859
- type: manhattan_spearman
value: 88.14766677391091
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.89742747806514
- type: cos_sim_spearman
value: 85.76282302560992
- type: euclidean_pearson
value: 84.83917251074928
- type: euclidean_spearman
value: 85.74354740775905
- type: manhattan_pearson
value: 84.91190952448616
- type: manhattan_spearman
value: 85.82001542154245
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.70974342036347
- type: cos_sim_spearman
value: 87.82200371351459
- type: euclidean_pearson
value: 88.04095125600278
- type: euclidean_spearman
value: 87.5069523002544
- type: manhattan_pearson
value: 88.03247709799281
- type: manhattan_spearman
value: 87.43433979175654
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.0349727703108
- type: cos_sim_spearman
value: 65.46090125254047
- type: euclidean_pearson
value: 66.75349075443432
- type: euclidean_spearman
value: 65.57576680702924
- type: manhattan_pearson
value: 66.72598998285412
- type: manhattan_spearman
value: 65.63446184311414
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.18026134463653
- type: cos_sim_spearman
value: 86.79430055943524
- type: euclidean_pearson
value: 86.2668626122386
- type: euclidean_spearman
value: 86.72288498504841
- type: manhattan_pearson
value: 86.28615540445857
- type: manhattan_spearman
value: 86.7110630606802
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.05335415919195
- type: mrr
value: 96.27455968142243
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.84653465346534
- type: cos_sim_ap
value: 96.38115549823692
- type: cos_sim_f1
value: 92.15983813859383
- type: cos_sim_precision
value: 93.24462640736951
- type: cos_sim_recall
value: 91.10000000000001
- type: dot_accuracy
value: 99.81782178217821
- type: dot_ap
value: 95.65732630933346
- type: dot_f1
value: 90.68825910931176
- type: dot_precision
value: 91.80327868852459
- type: dot_recall
value: 89.60000000000001
- type: euclidean_accuracy
value: 99.84653465346534
- type: euclidean_ap
value: 96.34134720479366
- type: euclidean_f1
value: 92.1756688541141
- type: euclidean_precision
value: 93.06829765545362
- type: euclidean_recall
value: 91.3
- type: manhattan_accuracy
value: 99.84356435643565
- type: manhattan_ap
value: 96.38165573090185
- type: manhattan_f1
value: 92.07622868605819
- type: manhattan_precision
value: 92.35412474849095
- type: manhattan_recall
value: 91.8
- type: max_accuracy
value: 99.84653465346534
- type: max_ap
value: 96.38165573090185
- type: max_f1
value: 92.1756688541141
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.81205738681385
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.083934029129445
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.447346270481376
- type: mrr
value: 55.382382119514475
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.123
- type: ap
value: 14.396060207954983
- type: f1
value: 55.24344377812756
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.67176004527447
- type: f1
value: 59.97320225890037
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.50190094208029
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.70799308577219
- type: cos_sim_ap
value: 76.40980707197174
- type: cos_sim_f1
value: 70.64264849074976
- type: cos_sim_precision
value: 65.56710347943967
- type: cos_sim_recall
value: 76.56992084432717
- type: dot_accuracy
value: 85.75430649102938
- type: dot_ap
value: 72.68783978286282
- type: dot_f1
value: 67.56951102588687
- type: dot_precision
value: 61.90162494510321
- type: dot_recall
value: 74.37994722955145
- type: euclidean_accuracy
value: 86.70799308577219
- type: euclidean_ap
value: 76.43046769325314
- type: euclidean_f1
value: 70.84852905421832
- type: euclidean_precision
value: 65.68981064021641
- type: euclidean_recall
value: 76.88654353562005
- type: manhattan_accuracy
value: 86.70203254455504
- type: manhattan_ap
value: 76.39254562413156
- type: manhattan_f1
value: 70.86557059961316
- type: manhattan_precision
value: 65.39491298527443
- type: manhattan_recall
value: 77.33509234828496
- type: max_accuracy
value: 86.70799308577219
- type: max_ap
value: 76.43046769325314
- type: max_f1
value: 70.86557059961316
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.92381728567548
- type: cos_sim_ap
value: 85.92532857788025
- type: cos_sim_f1
value: 78.11970128792525
- type: cos_sim_precision
value: 73.49806530445998
- type: cos_sim_recall
value: 83.3615645210964
- type: dot_accuracy
value: 88.28540381107618
- type: dot_ap
value: 84.42890126108796
- type: dot_f1
value: 76.98401162790698
- type: dot_precision
value: 72.89430222956234
- type: dot_recall
value: 81.55990144748999
- type: euclidean_accuracy
value: 88.95874568246207
- type: euclidean_ap
value: 85.88338025133037
- type: euclidean_f1
value: 78.14740888593184
- type: euclidean_precision
value: 75.15285084601166
- type: euclidean_recall
value: 81.3905143209116
- type: manhattan_accuracy
value: 88.92769821865176
- type: manhattan_ap
value: 85.84824183217555
- type: manhattan_f1
value: 77.9830582736965
- type: manhattan_precision
value: 74.15972222222223
- type: manhattan_recall
value: 82.22205112411457
- type: max_accuracy
value: 88.95874568246207
- type: max_ap
value: 85.92532857788025
- type: max_f1
value: 78.14740888593184
---
# bge-base-en-v1.5-quant
This is the quantized (INT8) ONNX variant of the [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) embeddings model, created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for the ONNX export/inference pipeline and with Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization.
Current list of sparse and quantized bge ONNX models:
| Links | Sparsification Method |
| --------------------------------------------------------------------------------------------------- | ---------------------- |
| [zeroshot/bge-large-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-large-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant) | Quantization (INT8) |
| [zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant) | Quantization (INT8) |
| [zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant) | Quantization (INT8) |
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer
model = DeepSparseSentenceTransformer('zeroshot/bge-base-en-v1.5-quant', export=False)
# The sentences we want to encode
sentences = ['This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of strings.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding.shape)
print("")
```
For further details regarding DeepSparse & Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
 | 21,981 | [
[
-0.031829833984375,
-0.054962158203125,
0.03497314453125,
0.025482177734375,
-0.00470733642578125,
-0.01242828369140625,
-0.015899658203125,
-0.0014810562133789062,
0.0170745849609375,
0.0307464599609375,
-0.06658935546875,
-0.060943603515625,
-0.0489501953125,
... |
ai-forever/ruElectra-large | 2023-11-03T12:48:35.000Z | [
"transformers",
"pytorch",
"electra",
"pretraining",
"PyTorch",
"Transformers",
"ru",
"arxiv:2309.10931",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | ai-forever | null | null | ai-forever/ruElectra-large | 1 | 1,782 | transformers | 2023-07-28T09:49:54 | ---
license: mit
language:
- ru
tags:
- PyTorch
- Transformers
---
# ruELECTRA large model (cased) for Embeddings in the Russian language.
The model architecture design, pretraining, and evaluation are documented in our preprint: [**A Family of Pretrained Transformer Language Models for Russian**](https://arxiv.org/abs/2309.10931).
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
For better quality, use mean token embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['Привет! Как твои дела?',
'А правда, что 42 твое любимое число?']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruElectra-large")
model = AutoModel.from_pretrained("ai-forever/ruElectra-large")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=24, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
# Authors
+ [SaluteDevices](https://sberdevices.ru/) RnD Team.
+ Aleksandr Abramov: [HF profile](https://huggingface.co/Andrilko), [Github](https://github.com/Ab1992ao), [Kaggle Competitions Master](https://www.kaggle.com/andrilko);
+ Mark Baushenko: [HF profile](https://huggingface.co/e0xexrazy);
+ Artem Snegirev: [HF profile](https://huggingface.co/artemsnegirev)
# Cite us
```
@misc{zmitrovich2023family,
title={A Family of Pretrained Transformer Language Models for Russian},
author={Dmitry Zmitrovich and Alexander Abramov and Andrey Kalmykov and Maria Tikhonova and Ekaterina Taktasheva and Danil Astafurov and Mark Baushenko and Artem Snegirev and Tatiana Shavrina and Sergey Markov and Vladislav Mikhailov and Alena Fenogenova},
year={2023},
eprint={2309.10931},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 2,616 | [
[
-0.00524139404296875,
-0.03668212890625,
0.0260162353515625,
0.0281982421875,
-0.01345062255859375,
-0.00536346435546875,
-0.02166748046875,
-0.0004901885986328125,
0.0196380615234375,
0.0170745849609375,
-0.046844482421875,
-0.03521728515625,
-0.051727294921875... |
Erlalex/dominikof-v1-5-1 | 2023-07-16T19:02:27.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Erlalex | null | null | Erlalex/dominikof-v1-5-1 | 0 | 1,781 | diffusers | 2023-07-16T18:57:28 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### DominikOF_v1.5.1 Dreambooth model trained by Erlalex with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 505 | [
[
-0.02728271484375,
-0.06463623046875,
0.047210693359375,
0.03887939453125,
-0.0311737060546875,
0.016571044921875,
0.0135498046875,
-0.025970458984375,
0.052490234375,
0.004497528076171875,
-0.02764892578125,
-0.037506103515625,
-0.033447265625,
-0.007827758... |
timm/vit_huge_patch14_224.mae | 2023-05-09T20:29:10.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2111.06377",
"arxiv:2010.11929",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_huge_patch14_224.mae | 0 | 1,780 | timm | 2023-05-09T20:20:59 | ---
tags:
- image-classification
- timm
library_name: timm
license: cc-by-nc-4.0
---
# Model card for vit_huge_patch14_224.mae
A Vision Transformer (ViT) image feature model. Pretrained on ImageNet-1k with Self-Supervised Masked Autoencoder (MAE) method.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 630.8
- GMACs: 167.4
- Activations (M): 139.4
- Image size: 224 x 224
- **Papers:**
- Masked Autoencoders Are Scalable Vision Learners: https://arxiv.org/abs/2111.06377
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Pretrain Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/mae
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_huge_patch14_224.mae', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_huge_patch14_224.mae',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 1280) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@Article{MaskedAutoencoders2021,
author = {Kaiming He and Xinlei Chen and Saining Xie and Yanghao Li and Piotr Doll{\'a}r and Ross Girshick},
journal = {arXiv:2111.06377},
title = {Masked Autoencoders Are Scalable Vision Learners},
year = {2021},
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,596 | [
[
-0.039031982421875,
-0.0284881591796875,
0.0043487548828125,
0.01763916015625,
-0.02069091796875,
-0.021209716796875,
-0.01421356201171875,
-0.033416748046875,
0.02801513671875,
0.03021240234375,
-0.037872314453125,
-0.040863037109375,
-0.060150146484375,
-0... |
timm/fastvit_t8.apple_in1k | 2023-08-23T20:56:15.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2303.14189",
"license:other",
"region:us"
] | image-classification | timm | null | null | timm/fastvit_t8.apple_in1k | 1 | 1,780 | timm | 2023-08-23T20:56:11 | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for fastvit_t8.apple_in1k
A FastViT image classification model. Trained on ImageNet-1k by paper authors.
Please observe [original license](https://github.com/apple/ml-fastvit/blob/8af5928238cab99c45f64fc3e4e7b1516b8224ba/LICENSE).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 4.0
- GMACs: 0.7
- Activations (M): 8.6
- Image size: 256 x 256
- **Papers:**
- FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization: https://arxiv.org/abs/2303.14189
- **Original:** https://github.com/apple/ml-fastvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('fastvit_t8.apple_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_t8.apple_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 48, 64, 64])
# torch.Size([1, 96, 32, 32])
# torch.Size([1, 192, 16, 16])
# torch.Size([1, 384, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_t8.apple_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{vasufastvit2023,
author = {Pavan Kumar Anasosalu Vasu and James Gabriel and Jeff Zhu and Oncel Tuzel and Anurag Ranjan},
title = {FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year = {2023}
}
```
| 3,659 |
DeepPavlov/bert-base-cased-conversational | 2021-11-08T13:07:31.000Z | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"en",
"endpoints_compatible",
"region:us"
] | feature-extraction | DeepPavlov | null | null | DeepPavlov/bert-base-cased-conversational | 7 | 1,779 | transformers | 2022-03-02T23:29:04 | ---
language: en
---
# bert-base-cased-conversational
Conversational BERT (English, cased, 12-layer, 768-hidden, 12-heads, 110M parameters) was trained on the English portions of Twitter, Reddit, DailyDialogues [1], OpenSubtitles [2], Debates [3], Blogs [4], and Facebook News Comments. We used this training data to build a vocabulary of English subtokens and took the English cased version of BERT-base as the initialization for English Conversational BERT.
08.11.2021: uploaded the model with MLM and NSP heads
[1]: Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. IJCNLP 2017.
[2]: P. Lison and J. Tiedemann. OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016).
[3]: Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. Proceedings of NAACL, 2016.
[4]: J. Schler, M. Koppel, S. Argamon, and J. Pennebaker (2006). Effects of Age and Gender on Blogging. In Proceedings of the 2006 AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs.
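A minimal usage sketch (assuming the standard Hugging Face `transformers` Auto classes; the card itself does not prescribe a particular API):

```python
from transformers import AutoTokenizer, AutoModel

# Load the published checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational")
model = AutoModel.from_pretrained("DeepPavlov/bert-base-cased-conversational")

inputs = tokenizer("How are you doing today?", return_tensors="pt")
outputs = model(**inputs)

# BERT-base hidden states: (batch, sequence_length, 768)
print(outputs.last_hidden_state.shape)
```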
| 1,199 |
Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit | 2023-03-27T22:24:48.000Z | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | Muennighoff | null | null | Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit | 3 | 1,779 | sentence-transformers | 2022-03-02T23:29:04 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-2.7B-weightedmean-msmarco-specb-bitfit
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 67.56716417910448
- type: ap
value: 30.75574629595259
- type: f1
value: 61.805121301858655
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 71.439575
- type: ap
value: 65.91341330532453
- type: f1
value: 70.90561852619555
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 35.748000000000005
- type: f1
value: 35.48576287186347
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 25.96
- type: map_at_10
value: 41.619
- type: map_at_100
value: 42.673
- type: map_at_1000
value: 42.684
- type: map_at_3
value: 36.569
- type: map_at_5
value: 39.397
- type: mrr_at_1
value: 26.316
- type: mrr_at_10
value: 41.772
- type: mrr_at_100
value: 42.82
- type: mrr_at_1000
value: 42.83
- type: mrr_at_3
value: 36.724000000000004
- type: mrr_at_5
value: 39.528999999999996
- type: ndcg_at_1
value: 25.96
- type: ndcg_at_10
value: 50.491
- type: ndcg_at_100
value: 54.864999999999995
- type: ndcg_at_1000
value: 55.10699999999999
- type: ndcg_at_3
value: 40.053
- type: ndcg_at_5
value: 45.134
- type: precision_at_1
value: 25.96
- type: precision_at_10
value: 7.8950000000000005
- type: precision_at_100
value: 0.9780000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.714000000000002
- type: precision_at_5
value: 12.489
- type: recall_at_1
value: 25.96
- type: recall_at_10
value: 78.947
- type: recall_at_100
value: 97.795
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 50.141999999999996
- type: recall_at_5
value: 62.446999999999996
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 44.72125714642202
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 35.081451519142064
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 59.634661990392054
- type: mrr
value: 73.6813525040672
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 87.42754550496836
- type: cos_sim_spearman
value: 84.84289705838664
- type: euclidean_pearson
value: 85.59331970450859
- type: euclidean_spearman
value: 85.8525586184271
- type: manhattan_pearson
value: 85.41233134466698
- type: manhattan_spearman
value: 85.52303303767404
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 83.21753246753246
- type: f1
value: 83.15394543120915
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 34.41414219680629
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 30.533275862270028
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 30.808999999999997
- type: map_at_10
value: 40.617
- type: map_at_100
value: 41.894999999999996
- type: map_at_1000
value: 42.025
- type: map_at_3
value: 37.0
- type: map_at_5
value: 38.993
- type: mrr_at_1
value: 37.482
- type: mrr_at_10
value: 46.497
- type: mrr_at_100
value: 47.144000000000005
- type: mrr_at_1000
value: 47.189
- type: mrr_at_3
value: 43.705
- type: mrr_at_5
value: 45.193
- type: ndcg_at_1
value: 37.482
- type: ndcg_at_10
value: 46.688
- type: ndcg_at_100
value: 51.726000000000006
- type: ndcg_at_1000
value: 53.825
- type: ndcg_at_3
value: 41.242000000000004
- type: ndcg_at_5
value: 43.657000000000004
- type: precision_at_1
value: 37.482
- type: precision_at_10
value: 8.827
- type: precision_at_100
value: 1.393
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 19.361
- type: precision_at_5
value: 14.106
- type: recall_at_1
value: 30.808999999999997
- type: recall_at_10
value: 58.47
- type: recall_at_100
value: 80.51899999999999
- type: recall_at_1000
value: 93.809
- type: recall_at_3
value: 42.462
- type: recall_at_5
value: 49.385
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 26.962000000000003
- type: map_at_10
value: 36.93
- type: map_at_100
value: 38.102000000000004
- type: map_at_1000
value: 38.22
- type: map_at_3
value: 34.065
- type: map_at_5
value: 35.72
- type: mrr_at_1
value: 33.567
- type: mrr_at_10
value: 42.269
- type: mrr_at_100
value: 42.99
- type: mrr_at_1000
value: 43.033
- type: mrr_at_3
value: 40.064
- type: mrr_at_5
value: 41.258
- type: ndcg_at_1
value: 33.567
- type: ndcg_at_10
value: 42.405
- type: ndcg_at_100
value: 46.847
- type: ndcg_at_1000
value: 48.951
- type: ndcg_at_3
value: 38.312000000000005
- type: ndcg_at_5
value: 40.242
- type: precision_at_1
value: 33.567
- type: precision_at_10
value: 8.032
- type: precision_at_100
value: 1.295
- type: precision_at_1000
value: 0.17600000000000002
- type: precision_at_3
value: 18.662
- type: precision_at_5
value: 13.299
- type: recall_at_1
value: 26.962000000000003
- type: recall_at_10
value: 52.489
- type: recall_at_100
value: 71.635
- type: recall_at_1000
value: 85.141
- type: recall_at_3
value: 40.28
- type: recall_at_5
value: 45.757
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 36.318
- type: map_at_10
value: 47.97
- type: map_at_100
value: 49.003
- type: map_at_1000
value: 49.065999999999995
- type: map_at_3
value: 45.031
- type: map_at_5
value: 46.633
- type: mrr_at_1
value: 41.504999999999995
- type: mrr_at_10
value: 51.431000000000004
- type: mrr_at_100
value: 52.129000000000005
- type: mrr_at_1000
value: 52.161
- type: mrr_at_3
value: 48.934
- type: mrr_at_5
value: 50.42
- type: ndcg_at_1
value: 41.504999999999995
- type: ndcg_at_10
value: 53.676
- type: ndcg_at_100
value: 57.867000000000004
- type: ndcg_at_1000
value: 59.166
- type: ndcg_at_3
value: 48.516
- type: ndcg_at_5
value: 50.983999999999995
- type: precision_at_1
value: 41.504999999999995
- type: precision_at_10
value: 8.608
- type: precision_at_100
value: 1.1560000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 21.462999999999997
- type: precision_at_5
value: 14.721
- type: recall_at_1
value: 36.318
- type: recall_at_10
value: 67.066
- type: recall_at_100
value: 85.34
- type: recall_at_1000
value: 94.491
- type: recall_at_3
value: 53.215999999999994
- type: recall_at_5
value: 59.214
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 22.167
- type: map_at_10
value: 29.543999999999997
- type: map_at_100
value: 30.579
- type: map_at_1000
value: 30.669999999999998
- type: map_at_3
value: 26.982
- type: map_at_5
value: 28.474
- type: mrr_at_1
value: 24.068
- type: mrr_at_10
value: 31.237
- type: mrr_at_100
value: 32.222
- type: mrr_at_1000
value: 32.292
- type: mrr_at_3
value: 28.776000000000003
- type: mrr_at_5
value: 30.233999999999998
- type: ndcg_at_1
value: 24.068
- type: ndcg_at_10
value: 33.973
- type: ndcg_at_100
value: 39.135
- type: ndcg_at_1000
value: 41.443999999999996
- type: ndcg_at_3
value: 29.018
- type: ndcg_at_5
value: 31.558999999999997
- type: precision_at_1
value: 24.068
- type: precision_at_10
value: 5.299
- type: precision_at_100
value: 0.823
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.166
- type: precision_at_5
value: 8.767999999999999
- type: recall_at_1
value: 22.167
- type: recall_at_10
value: 46.115
- type: recall_at_100
value: 69.867
- type: recall_at_1000
value: 87.234
- type: recall_at_3
value: 32.798
- type: recall_at_5
value: 38.951
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 12.033000000000001
- type: map_at_10
value: 19.314
- type: map_at_100
value: 20.562
- type: map_at_1000
value: 20.695
- type: map_at_3
value: 16.946
- type: map_at_5
value: 18.076999999999998
- type: mrr_at_1
value: 14.801
- type: mrr_at_10
value: 22.74
- type: mrr_at_100
value: 23.876
- type: mrr_at_1000
value: 23.949
- type: mrr_at_3
value: 20.211000000000002
- type: mrr_at_5
value: 21.573
- type: ndcg_at_1
value: 14.801
- type: ndcg_at_10
value: 24.038
- type: ndcg_at_100
value: 30.186
- type: ndcg_at_1000
value: 33.321
- type: ndcg_at_3
value: 19.431
- type: ndcg_at_5
value: 21.34
- type: precision_at_1
value: 14.801
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.897
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 9.66
- type: precision_at_5
value: 7.239
- type: recall_at_1
value: 12.033000000000001
- type: recall_at_10
value: 35.098
- type: recall_at_100
value: 62.175000000000004
- type: recall_at_1000
value: 84.17099999999999
- type: recall_at_3
value: 22.61
- type: recall_at_5
value: 27.278999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 26.651000000000003
- type: map_at_10
value: 36.901
- type: map_at_100
value: 38.249
- type: map_at_1000
value: 38.361000000000004
- type: map_at_3
value: 33.891
- type: map_at_5
value: 35.439
- type: mrr_at_1
value: 32.724
- type: mrr_at_10
value: 42.504
- type: mrr_at_100
value: 43.391999999999996
- type: mrr_at_1000
value: 43.436
- type: mrr_at_3
value: 39.989999999999995
- type: mrr_at_5
value: 41.347
- type: ndcg_at_1
value: 32.724
- type: ndcg_at_10
value: 43.007
- type: ndcg_at_100
value: 48.601
- type: ndcg_at_1000
value: 50.697
- type: ndcg_at_3
value: 37.99
- type: ndcg_at_5
value: 40.083999999999996
- type: precision_at_1
value: 32.724
- type: precision_at_10
value: 7.872999999999999
- type: precision_at_100
value: 1.247
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 18.062
- type: precision_at_5
value: 12.666
- type: recall_at_1
value: 26.651000000000003
- type: recall_at_10
value: 55.674
- type: recall_at_100
value: 78.904
- type: recall_at_1000
value: 92.55799999999999
- type: recall_at_3
value: 41.36
- type: recall_at_5
value: 46.983999999999995
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 22.589000000000002
- type: map_at_10
value: 32.244
- type: map_at_100
value: 33.46
- type: map_at_1000
value: 33.593
- type: map_at_3
value: 29.21
- type: map_at_5
value: 31.019999999999996
- type: mrr_at_1
value: 28.425
- type: mrr_at_10
value: 37.282
- type: mrr_at_100
value: 38.187
- type: mrr_at_1000
value: 38.248
- type: mrr_at_3
value: 34.684
- type: mrr_at_5
value: 36.123
- type: ndcg_at_1
value: 28.425
- type: ndcg_at_10
value: 37.942
- type: ndcg_at_100
value: 43.443
- type: ndcg_at_1000
value: 45.995999999999995
- type: ndcg_at_3
value: 32.873999999999995
- type: ndcg_at_5
value: 35.325
- type: precision_at_1
value: 28.425
- type: precision_at_10
value: 7.1
- type: precision_at_100
value: 1.166
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.02
- type: precision_at_5
value: 11.644
- type: recall_at_1
value: 22.589000000000002
- type: recall_at_10
value: 50.03999999999999
- type: recall_at_100
value: 73.973
- type: recall_at_1000
value: 91.128
- type: recall_at_3
value: 35.882999999999996
- type: recall_at_5
value: 42.187999999999995
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 23.190833333333334
- type: map_at_10
value: 31.504916666666666
- type: map_at_100
value: 32.64908333333334
- type: map_at_1000
value: 32.77075
- type: map_at_3
value: 28.82575
- type: map_at_5
value: 30.2755
- type: mrr_at_1
value: 27.427499999999995
- type: mrr_at_10
value: 35.36483333333334
- type: mrr_at_100
value: 36.23441666666666
- type: mrr_at_1000
value: 36.297583333333336
- type: mrr_at_3
value: 32.97966666666667
- type: mrr_at_5
value: 34.294583333333335
- type: ndcg_at_1
value: 27.427499999999995
- type: ndcg_at_10
value: 36.53358333333333
- type: ndcg_at_100
value: 41.64508333333333
- type: ndcg_at_1000
value: 44.14499999999999
- type: ndcg_at_3
value: 31.88908333333333
- type: ndcg_at_5
value: 33.98433333333333
- type: precision_at_1
value: 27.427499999999995
- type: precision_at_10
value: 6.481083333333333
- type: precision_at_100
value: 1.0610833333333334
- type: precision_at_1000
value: 0.14691666666666667
- type: precision_at_3
value: 14.656749999999999
- type: precision_at_5
value: 10.493583333333332
- type: recall_at_1
value: 23.190833333333334
- type: recall_at_10
value: 47.65175
- type: recall_at_100
value: 70.41016666666667
- type: recall_at_1000
value: 87.82708333333332
- type: recall_at_3
value: 34.637583333333325
- type: recall_at_5
value: 40.05008333333333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 20.409
- type: map_at_10
value: 26.794
- type: map_at_100
value: 27.682000000000002
- type: map_at_1000
value: 27.783
- type: map_at_3
value: 24.461
- type: map_at_5
value: 25.668000000000003
- type: mrr_at_1
value: 22.853
- type: mrr_at_10
value: 29.296
- type: mrr_at_100
value: 30.103
- type: mrr_at_1000
value: 30.179000000000002
- type: mrr_at_3
value: 27.173000000000002
- type: mrr_at_5
value: 28.223
- type: ndcg_at_1
value: 22.853
- type: ndcg_at_10
value: 31.007
- type: ndcg_at_100
value: 35.581
- type: ndcg_at_1000
value: 38.147
- type: ndcg_at_3
value: 26.590999999999998
- type: ndcg_at_5
value: 28.43
- type: precision_at_1
value: 22.853
- type: precision_at_10
value: 5.031
- type: precision_at_100
value: 0.7939999999999999
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 11.401
- type: precision_at_5
value: 8.16
- type: recall_at_1
value: 20.409
- type: recall_at_10
value: 41.766
- type: recall_at_100
value: 62.964
- type: recall_at_1000
value: 81.682
- type: recall_at_3
value: 29.281000000000002
- type: recall_at_5
value: 33.83
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 14.549000000000001
- type: map_at_10
value: 20.315
- type: map_at_100
value: 21.301000000000002
- type: map_at_1000
value: 21.425
- type: map_at_3
value: 18.132
- type: map_at_5
value: 19.429
- type: mrr_at_1
value: 17.86
- type: mrr_at_10
value: 23.860999999999997
- type: mrr_at_100
value: 24.737000000000002
- type: mrr_at_1000
value: 24.82
- type: mrr_at_3
value: 21.685
- type: mrr_at_5
value: 23.008
- type: ndcg_at_1
value: 17.86
- type: ndcg_at_10
value: 24.396
- type: ndcg_at_100
value: 29.328
- type: ndcg_at_1000
value: 32.486
- type: ndcg_at_3
value: 20.375
- type: ndcg_at_5
value: 22.411
- type: precision_at_1
value: 17.86
- type: precision_at_10
value: 4.47
- type: precision_at_100
value: 0.8099999999999999
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 9.475
- type: precision_at_5
value: 7.170999999999999
- type: recall_at_1
value: 14.549000000000001
- type: recall_at_10
value: 33.365
- type: recall_at_100
value: 55.797
- type: recall_at_1000
value: 78.632
- type: recall_at_3
value: 22.229
- type: recall_at_5
value: 27.339000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 23.286
- type: map_at_10
value: 30.728
- type: map_at_100
value: 31.840000000000003
- type: map_at_1000
value: 31.953
- type: map_at_3
value: 28.302
- type: map_at_5
value: 29.615000000000002
- type: mrr_at_1
value: 27.239
- type: mrr_at_10
value: 34.408
- type: mrr_at_100
value: 35.335
- type: mrr_at_1000
value: 35.405
- type: mrr_at_3
value: 32.151999999999994
- type: mrr_at_5
value: 33.355000000000004
- type: ndcg_at_1
value: 27.239
- type: ndcg_at_10
value: 35.324
- type: ndcg_at_100
value: 40.866
- type: ndcg_at_1000
value: 43.584
- type: ndcg_at_3
value: 30.898999999999997
- type: ndcg_at_5
value: 32.812999999999995
- type: precision_at_1
value: 27.239
- type: precision_at_10
value: 5.896
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 13.713000000000001
- type: precision_at_5
value: 9.683
- type: recall_at_1
value: 23.286
- type: recall_at_10
value: 45.711
- type: recall_at_100
value: 70.611
- type: recall_at_1000
value: 90.029
- type: recall_at_3
value: 33.615
- type: recall_at_5
value: 38.41
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 23.962
- type: map_at_10
value: 31.942999999999998
- type: map_at_100
value: 33.384
- type: map_at_1000
value: 33.611000000000004
- type: map_at_3
value: 29.243000000000002
- type: map_at_5
value: 30.446
- type: mrr_at_1
value: 28.458
- type: mrr_at_10
value: 36.157000000000004
- type: mrr_at_100
value: 37.092999999999996
- type: mrr_at_1000
value: 37.163000000000004
- type: mrr_at_3
value: 33.86
- type: mrr_at_5
value: 35.086
- type: ndcg_at_1
value: 28.458
- type: ndcg_at_10
value: 37.201
- type: ndcg_at_100
value: 42.591
- type: ndcg_at_1000
value: 45.539
- type: ndcg_at_3
value: 32.889
- type: ndcg_at_5
value: 34.483000000000004
- type: precision_at_1
value: 28.458
- type: precision_at_10
value: 7.332
- type: precision_at_100
value: 1.437
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 11.146
- type: recall_at_1
value: 23.962
- type: recall_at_10
value: 46.751
- type: recall_at_100
value: 71.626
- type: recall_at_1000
value: 90.93900000000001
- type: recall_at_3
value: 34.138000000000005
- type: recall_at_5
value: 38.673
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 18.555
- type: map_at_10
value: 24.759
- type: map_at_100
value: 25.732
- type: map_at_1000
value: 25.846999999999998
- type: map_at_3
value: 22.646
- type: map_at_5
value: 23.791999999999998
- type: mrr_at_1
value: 20.148
- type: mrr_at_10
value: 26.695999999999998
- type: mrr_at_100
value: 27.605
- type: mrr_at_1000
value: 27.695999999999998
- type: mrr_at_3
value: 24.522
- type: mrr_at_5
value: 25.715
- type: ndcg_at_1
value: 20.148
- type: ndcg_at_10
value: 28.746
- type: ndcg_at_100
value: 33.57
- type: ndcg_at_1000
value: 36.584
- type: ndcg_at_3
value: 24.532
- type: ndcg_at_5
value: 26.484
- type: precision_at_1
value: 20.148
- type: precision_at_10
value: 4.529
- type: precision_at_100
value: 0.736
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.32
- type: recall_at_1
value: 18.555
- type: recall_at_10
value: 39.275999999999996
- type: recall_at_100
value: 61.511
- type: recall_at_1000
value: 84.111
- type: recall_at_3
value: 27.778999999999996
- type: recall_at_5
value: 32.591
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 10.366999999999999
- type: map_at_10
value: 18.953999999999997
- type: map_at_100
value: 20.674999999999997
- type: map_at_1000
value: 20.868000000000002
- type: map_at_3
value: 15.486
- type: map_at_5
value: 17.347
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 35.419
- type: mrr_at_100
value: 36.361
- type: mrr_at_1000
value: 36.403
- type: mrr_at_3
value: 31.747999999999998
- type: mrr_at_5
value: 34.077
- type: ndcg_at_1
value: 23.257
- type: ndcg_at_10
value: 27.11
- type: ndcg_at_100
value: 33.981
- type: ndcg_at_1000
value: 37.444
- type: ndcg_at_3
value: 21.471999999999998
- type: ndcg_at_5
value: 23.769000000000002
- type: precision_at_1
value: 23.257
- type: precision_at_10
value: 8.704
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.22499999999999998
- type: precision_at_3
value: 16.287
- type: precision_at_5
value: 13.068
- type: recall_at_1
value: 10.366999999999999
- type: recall_at_10
value: 33.706
- type: recall_at_100
value: 57.375
- type: recall_at_1000
value: 76.79
- type: recall_at_3
value: 20.18
- type: recall_at_5
value: 26.215
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 8.246
- type: map_at_10
value: 15.979
- type: map_at_100
value: 21.025
- type: map_at_1000
value: 22.189999999999998
- type: map_at_3
value: 11.997
- type: map_at_5
value: 13.697000000000001
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.70100000000001
- type: mrr_at_100
value: 69.1
- type: mrr_at_1000
value: 69.111
- type: mrr_at_3
value: 66.583
- type: mrr_at_5
value: 67.87100000000001
- type: ndcg_at_1
value: 49.75
- type: ndcg_at_10
value: 34.702
- type: ndcg_at_100
value: 37.607
- type: ndcg_at_1000
value: 44.322
- type: ndcg_at_3
value: 39.555
- type: ndcg_at_5
value: 36.684
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 26.625
- type: precision_at_100
value: 7.969999999999999
- type: precision_at_1000
value: 1.678
- type: precision_at_3
value: 41.833
- type: precision_at_5
value: 34.5
- type: recall_at_1
value: 8.246
- type: recall_at_10
value: 20.968
- type: recall_at_100
value: 42.065000000000005
- type: recall_at_1000
value: 63.671
- type: recall_at_3
value: 13.039000000000001
- type: recall_at_5
value: 16.042
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 49.214999999999996
- type: f1
value: 44.85952451163755
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 56.769000000000005
- type: map_at_10
value: 67.30199999999999
- type: map_at_100
value: 67.692
- type: map_at_1000
value: 67.712
- type: map_at_3
value: 65.346
- type: map_at_5
value: 66.574
- type: mrr_at_1
value: 61.370999999999995
- type: mrr_at_10
value: 71.875
- type: mrr_at_100
value: 72.195
- type: mrr_at_1000
value: 72.206
- type: mrr_at_3
value: 70.04
- type: mrr_at_5
value: 71.224
- type: ndcg_at_1
value: 61.370999999999995
- type: ndcg_at_10
value: 72.731
- type: ndcg_at_100
value: 74.468
- type: ndcg_at_1000
value: 74.91600000000001
- type: ndcg_at_3
value: 69.077
- type: ndcg_at_5
value: 71.111
- type: precision_at_1
value: 61.370999999999995
- type: precision_at_10
value: 9.325999999999999
- type: precision_at_100
value: 1.03
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 27.303
- type: precision_at_5
value: 17.525
- type: recall_at_1
value: 56.769000000000005
- type: recall_at_10
value: 85.06
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 95.933
- type: recall_at_3
value: 75.131
- type: recall_at_5
value: 80.17
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 15.753
- type: map_at_10
value: 25.875999999999998
- type: map_at_100
value: 27.415
- type: map_at_1000
value: 27.590999999999998
- type: map_at_3
value: 22.17
- type: map_at_5
value: 24.236
- type: mrr_at_1
value: 31.019000000000002
- type: mrr_at_10
value: 39.977000000000004
- type: mrr_at_100
value: 40.788999999999994
- type: mrr_at_1000
value: 40.832
- type: mrr_at_3
value: 37.088
- type: mrr_at_5
value: 38.655
- type: ndcg_at_1
value: 31.019000000000002
- type: ndcg_at_10
value: 33.286
- type: ndcg_at_100
value: 39.528999999999996
- type: ndcg_at_1000
value: 42.934
- type: ndcg_at_3
value: 29.29
- type: ndcg_at_5
value: 30.615
- type: precision_at_1
value: 31.019000000000002
- type: precision_at_10
value: 9.383
- type: precision_at_100
value: 1.6019999999999999
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 19.753
- type: precision_at_5
value: 14.815000000000001
- type: recall_at_1
value: 15.753
- type: recall_at_10
value: 40.896
- type: recall_at_100
value: 64.443
- type: recall_at_1000
value: 85.218
- type: recall_at_3
value: 26.526
- type: recall_at_5
value: 32.452999999999996
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 32.153999999999996
- type: map_at_10
value: 43.651
- type: map_at_100
value: 44.41
- type: map_at_1000
value: 44.487
- type: map_at_3
value: 41.239
- type: map_at_5
value: 42.659000000000006
- type: mrr_at_1
value: 64.30799999999999
- type: mrr_at_10
value: 71.22500000000001
- type: mrr_at_100
value: 71.57
- type: mrr_at_1000
value: 71.59100000000001
- type: mrr_at_3
value: 69.95
- type: mrr_at_5
value: 70.738
- type: ndcg_at_1
value: 64.30799999999999
- type: ndcg_at_10
value: 52.835
- type: ndcg_at_100
value: 55.840999999999994
- type: ndcg_at_1000
value: 57.484
- type: ndcg_at_3
value: 49.014
- type: ndcg_at_5
value: 51.01599999999999
- type: precision_at_1
value: 64.30799999999999
- type: precision_at_10
value: 10.77
- type: precision_at_100
value: 1.315
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 30.223
- type: precision_at_5
value: 19.716
- type: recall_at_1
value: 32.153999999999996
- type: recall_at_10
value: 53.849000000000004
- type: recall_at_100
value: 65.75999999999999
- type: recall_at_1000
value: 76.705
- type: recall_at_3
value: 45.334
- type: recall_at_5
value: 49.291000000000004
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 63.5316
- type: ap
value: 58.90084300359825
- type: f1
value: 63.35727889030892
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 20.566000000000003
- type: map_at_10
value: 32.229
- type: map_at_100
value: 33.445
- type: map_at_1000
value: 33.501
- type: map_at_3
value: 28.504
- type: map_at_5
value: 30.681000000000004
- type: mrr_at_1
value: 21.218
- type: mrr_at_10
value: 32.816
- type: mrr_at_100
value: 33.986
- type: mrr_at_1000
value: 34.035
- type: mrr_at_3
value: 29.15
- type: mrr_at_5
value: 31.290000000000003
- type: ndcg_at_1
value: 21.218
- type: ndcg_at_10
value: 38.832
- type: ndcg_at_100
value: 44.743
- type: ndcg_at_1000
value: 46.138
- type: ndcg_at_3
value: 31.232
- type: ndcg_at_5
value: 35.099999999999994
- type: precision_at_1
value: 21.218
- type: precision_at_10
value: 6.186
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 13.314
- type: precision_at_5
value: 9.943
- type: recall_at_1
value: 20.566000000000003
- type: recall_at_10
value: 59.192
- type: recall_at_100
value: 86.626
- type: recall_at_1000
value: 97.283
- type: recall_at_3
value: 38.492
- type: recall_at_5
value: 47.760000000000005
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 92.56269949840402
- type: f1
value: 92.1020975473988
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 71.8467852257182
- type: f1
value: 53.652719348592015
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 69.00806993947546
- type: f1
value: 67.41429618885515
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.90114324142569
- type: f1
value: 76.25183590651454
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 31.350109978273395
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 28.768923695767327
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.716396735210754
- type: mrr
value: 32.88970538547634
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 5.604
- type: map_at_10
value: 12.379999999999999
- type: map_at_100
value: 15.791
- type: map_at_1000
value: 17.327
- type: map_at_3
value: 9.15
- type: map_at_5
value: 10.599
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 53.374
- type: mrr_at_100
value: 54.089
- type: mrr_at_1000
value: 54.123
- type: mrr_at_3
value: 51.44499999999999
- type: mrr_at_5
value: 52.59
- type: ndcg_at_1
value: 42.879
- type: ndcg_at_10
value: 33.891
- type: ndcg_at_100
value: 31.391999999999996
- type: ndcg_at_1000
value: 40.36
- type: ndcg_at_3
value: 39.076
- type: ndcg_at_5
value: 37.047000000000004
- type: precision_at_1
value: 44.582
- type: precision_at_10
value: 25.294
- type: precision_at_100
value: 8.285
- type: precision_at_1000
value: 2.1479999999999997
- type: precision_at_3
value: 36.120000000000005
- type: precision_at_5
value: 31.95
- type: recall_at_1
value: 5.604
- type: recall_at_10
value: 16.239
- type: recall_at_100
value: 32.16
- type: recall_at_1000
value: 64.513
- type: recall_at_3
value: 10.406
- type: recall_at_5
value: 12.684999999999999
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 25.881
- type: map_at_10
value: 39.501
- type: map_at_100
value: 40.615
- type: map_at_1000
value: 40.661
- type: map_at_3
value: 35.559000000000005
- type: map_at_5
value: 37.773
- type: mrr_at_1
value: 29.229
- type: mrr_at_10
value: 41.955999999999996
- type: mrr_at_100
value: 42.86
- type: mrr_at_1000
value: 42.893
- type: mrr_at_3
value: 38.562000000000005
- type: mrr_at_5
value: 40.542
- type: ndcg_at_1
value: 29.2
- type: ndcg_at_10
value: 46.703
- type: ndcg_at_100
value: 51.644
- type: ndcg_at_1000
value: 52.771
- type: ndcg_at_3
value: 39.141999999999996
- type: ndcg_at_5
value: 42.892
- type: precision_at_1
value: 29.2
- type: precision_at_10
value: 7.920000000000001
- type: precision_at_100
value: 1.0659999999999998
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 18.105
- type: precision_at_5
value: 13.036
- type: recall_at_1
value: 25.881
- type: recall_at_10
value: 66.266
- type: recall_at_100
value: 88.116
- type: recall_at_1000
value: 96.58200000000001
- type: recall_at_3
value: 46.526
- type: recall_at_5
value: 55.154
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 67.553
- type: map_at_10
value: 81.34
- type: map_at_100
value: 82.002
- type: map_at_1000
value: 82.027
- type: map_at_3
value: 78.281
- type: map_at_5
value: 80.149
- type: mrr_at_1
value: 77.72
- type: mrr_at_10
value: 84.733
- type: mrr_at_100
value: 84.878
- type: mrr_at_1000
value: 84.879
- type: mrr_at_3
value: 83.587
- type: mrr_at_5
value: 84.32600000000001
- type: ndcg_at_1
value: 77.75
- type: ndcg_at_10
value: 85.603
- type: ndcg_at_100
value: 87.069
- type: ndcg_at_1000
value: 87.25
- type: ndcg_at_3
value: 82.303
- type: ndcg_at_5
value: 84.03699999999999
- type: precision_at_1
value: 77.75
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 1.5070000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.903
- type: precision_at_5
value: 23.738
- type: recall_at_1
value: 67.553
- type: recall_at_10
value: 93.903
- type: recall_at_100
value: 99.062
- type: recall_at_1000
value: 99.935
- type: recall_at_3
value: 84.58099999999999
- type: recall_at_5
value: 89.316
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 46.46887711230235
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 54.166876298246926
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 4.053
- type: map_at_10
value: 9.693999999999999
- type: map_at_100
value: 11.387
- type: map_at_1000
value: 11.654
- type: map_at_3
value: 7.053
- type: map_at_5
value: 8.439
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 29.359
- type: mrr_at_100
value: 30.484
- type: mrr_at_1000
value: 30.553
- type: mrr_at_3
value: 26.200000000000003
- type: mrr_at_5
value: 28.115000000000002
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 16.575
- type: ndcg_at_100
value: 23.655
- type: ndcg_at_1000
value: 28.853
- type: ndcg_at_3
value: 15.848
- type: ndcg_at_5
value: 14.026
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 8.450000000000001
- type: precision_at_100
value: 1.872
- type: precision_at_1000
value: 0.313
- type: precision_at_3
value: 14.667
- type: precision_at_5
value: 12.32
- type: recall_at_1
value: 4.053
- type: recall_at_10
value: 17.169999999999998
- type: recall_at_100
value: 38.025
- type: recall_at_1000
value: 63.571999999999996
- type: recall_at_3
value: 8.903
- type: recall_at_5
value: 12.477
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 77.7548748519677
- type: cos_sim_spearman
value: 68.19926431966059
- type: euclidean_pearson
value: 71.69016204991725
- type: euclidean_spearman
value: 66.98099673026834
- type: manhattan_pearson
value: 71.62994072488664
- type: manhattan_spearman
value: 67.03435950744577
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 75.91051402657887
- type: cos_sim_spearman
value: 66.99390786191645
- type: euclidean_pearson
value: 71.54128036454578
- type: euclidean_spearman
value: 69.25605675649068
- type: manhattan_pearson
value: 71.60981030780171
- type: manhattan_spearman
value: 69.27513670128046
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 77.23835466417793
- type: cos_sim_spearman
value: 77.57623085766706
- type: euclidean_pearson
value: 77.5090992200725
- type: euclidean_spearman
value: 77.88601688144924
- type: manhattan_pearson
value: 77.39045060647423
- type: manhattan_spearman
value: 77.77552718279098
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 77.91692485139602
- type: cos_sim_spearman
value: 72.78258293483495
- type: euclidean_pearson
value: 74.64773017077789
- type: euclidean_spearman
value: 71.81662299104619
- type: manhattan_pearson
value: 74.71043337995533
- type: manhattan_spearman
value: 71.83960860845646
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 82.13422113617578
- type: cos_sim_spearman
value: 82.61707296911949
- type: euclidean_pearson
value: 81.42487480400861
- type: euclidean_spearman
value: 82.17970991273835
- type: manhattan_pearson
value: 81.41985055477845
- type: manhattan_spearman
value: 82.15823204362937
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 79.07989542843826
- type: cos_sim_spearman
value: 80.09839524406284
- type: euclidean_pearson
value: 76.43186028364195
- type: euclidean_spearman
value: 76.76720323266471
- type: manhattan_pearson
value: 76.4674747409161
- type: manhattan_spearman
value: 76.81797407068667
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 87.0420983224933
- type: cos_sim_spearman
value: 87.25017540413702
- type: euclidean_pearson
value: 84.56384596473421
- type: euclidean_spearman
value: 84.72557417564886
- type: manhattan_pearson
value: 84.7329954474549
- type: manhattan_spearman
value: 84.75071371008909
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 68.47031320016424
- type: cos_sim_spearman
value: 68.7486910762485
- type: euclidean_pearson
value: 71.30330985913915
- type: euclidean_spearman
value: 71.59666258520735
- type: manhattan_pearson
value: 71.4423884279027
- type: manhattan_spearman
value: 71.67460706861044
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 80.79514366062675
- type: cos_sim_spearman
value: 79.20585637461048
- type: euclidean_pearson
value: 78.6591557395699
- type: euclidean_spearman
value: 77.86455794285718
- type: manhattan_pearson
value: 78.67754806486865
- type: manhattan_spearman
value: 77.88178687200732
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 77.71580844366375
- type: mrr
value: 93.04215845882513
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 56.39999999999999
- type: map_at_10
value: 65.701
- type: map_at_100
value: 66.32000000000001
- type: map_at_1000
value: 66.34100000000001
- type: map_at_3
value: 62.641999999999996
- type: map_at_5
value: 64.342
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 66.45299999999999
- type: mrr_at_100
value: 66.967
- type: mrr_at_1000
value: 66.988
- type: mrr_at_3
value: 64.11099999999999
- type: mrr_at_5
value: 65.411
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 70.165
- type: ndcg_at_100
value: 72.938
- type: ndcg_at_1000
value: 73.456
- type: ndcg_at_3
value: 64.79
- type: ndcg_at_5
value: 67.28
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.4
- type: precision_at_100
value: 1.087
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 24.889
- type: precision_at_5
value: 16.667
- type: recall_at_1
value: 56.39999999999999
- type: recall_at_10
value: 83.122
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 68.378
- type: recall_at_5
value: 74.68299999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.76831683168317
- type: cos_sim_ap
value: 93.47124923047998
- type: cos_sim_f1
value: 88.06122448979592
- type: cos_sim_precision
value: 89.89583333333333
- type: cos_sim_recall
value: 86.3
- type: dot_accuracy
value: 99.57326732673268
- type: dot_ap
value: 84.06577868167207
- type: dot_f1
value: 77.82629791363416
- type: dot_precision
value: 75.58906691800189
- type: dot_recall
value: 80.2
- type: euclidean_accuracy
value: 99.74257425742574
- type: euclidean_ap
value: 92.1904681653555
- type: euclidean_f1
value: 86.74821610601427
- type: euclidean_precision
value: 88.46153846153845
- type: euclidean_recall
value: 85.1
- type: manhattan_accuracy
value: 99.74554455445545
- type: manhattan_ap
value: 92.4337790809948
- type: manhattan_f1
value: 86.86765457332653
- type: manhattan_precision
value: 88.81922675026124
- type: manhattan_recall
value: 85.0
- type: max_accuracy
value: 99.76831683168317
- type: max_ap
value: 93.47124923047998
- type: max_f1
value: 88.06122448979592
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 59.194098673976484
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 32.5744032578115
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 49.61186384154483
- type: mrr
value: 50.55424253034547
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 30.027210161713946
- type: cos_sim_spearman
value: 31.030178065751735
- type: dot_pearson
value: 30.09179785685587
- type: dot_spearman
value: 30.408303252207813
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.22300000000000003
- type: map_at_10
value: 1.762
- type: map_at_100
value: 9.984
- type: map_at_1000
value: 24.265
- type: map_at_3
value: 0.631
- type: map_at_5
value: 0.9950000000000001
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 92.833
- type: mrr_at_100
value: 92.833
- type: mrr_at_1000
value: 92.833
- type: mrr_at_3
value: 92.333
- type: mrr_at_5
value: 92.833
- type: ndcg_at_1
value: 83.0
- type: ndcg_at_10
value: 75.17
- type: ndcg_at_100
value: 55.432
- type: ndcg_at_1000
value: 49.482
- type: ndcg_at_3
value: 82.184
- type: ndcg_at_5
value: 79.712
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 78.60000000000001
- type: precision_at_100
value: 56.56
- type: precision_at_1000
value: 22.334
- type: precision_at_3
value: 86.667
- type: precision_at_5
value: 83.6
- type: recall_at_1
value: 0.22300000000000003
- type: recall_at_10
value: 1.9879999999999998
- type: recall_at_100
value: 13.300999999999998
- type: recall_at_1000
value: 46.587
- type: recall_at_3
value: 0.6629999999999999
- type: recall_at_5
value: 1.079
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 3.047
- type: map_at_10
value: 8.792
- type: map_at_100
value: 14.631
- type: map_at_1000
value: 16.127
- type: map_at_3
value: 4.673
- type: map_at_5
value: 5.897
- type: mrr_at_1
value: 38.775999999999996
- type: mrr_at_10
value: 49.271
- type: mrr_at_100
value: 50.181
- type: mrr_at_1000
value: 50.2
- type: mrr_at_3
value: 44.558
- type: mrr_at_5
value: 47.925000000000004
- type: ndcg_at_1
value: 35.714
- type: ndcg_at_10
value: 23.44
- type: ndcg_at_100
value: 35.345
- type: ndcg_at_1000
value: 46.495
- type: ndcg_at_3
value: 26.146
- type: ndcg_at_5
value: 24.878
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 20.816000000000003
- type: precision_at_100
value: 7.428999999999999
- type: precision_at_1000
value: 1.494
- type: precision_at_3
value: 25.85
- type: precision_at_5
value: 24.082
- type: recall_at_1
value: 3.047
- type: recall_at_10
value: 14.975
- type: recall_at_100
value: 45.943
- type: recall_at_1000
value: 80.31099999999999
- type: recall_at_3
value: 5.478000000000001
- type: recall_at_5
value: 8.294
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 68.84080000000002
- type: ap
value: 13.135219251019848
- type: f1
value: 52.849999421995506
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 56.68647425014149
- type: f1
value: 56.97981427365949
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 40.8911707239219
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.04226023722954
- type: cos_sim_ap
value: 63.681339908301325
- type: cos_sim_f1
value: 60.349184470480125
- type: cos_sim_precision
value: 53.437754271765655
- type: cos_sim_recall
value: 69.31398416886545
- type: dot_accuracy
value: 81.46271681468677
- type: dot_ap
value: 57.78072296265885
- type: dot_f1
value: 56.28769265132901
- type: dot_precision
value: 48.7993803253292
- type: dot_recall
value: 66.49076517150397
- type: euclidean_accuracy
value: 82.16606067830959
- type: euclidean_ap
value: 59.974530371203514
- type: euclidean_f1
value: 56.856023506366306
- type: euclidean_precision
value: 53.037916857012334
- type: euclidean_recall
value: 61.2664907651715
- type: manhattan_accuracy
value: 82.16606067830959
- type: manhattan_ap
value: 59.98962379571767
- type: manhattan_f1
value: 56.98153158451947
- type: manhattan_precision
value: 51.41158989598811
- type: manhattan_recall
value: 63.90501319261214
- type: max_accuracy
value: 83.04226023722954
- type: max_ap
value: 63.681339908301325
- type: max_f1
value: 60.349184470480125
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.56871191834517
- type: cos_sim_ap
value: 84.80240716354544
- type: cos_sim_f1
value: 77.07765285922385
- type: cos_sim_precision
value: 74.84947406601378
- type: cos_sim_recall
value: 79.44256236526024
- type: dot_accuracy
value: 86.00923662048356
- type: dot_ap
value: 78.6556459012073
- type: dot_f1
value: 72.7583749109052
- type: dot_precision
value: 67.72823779193206
- type: dot_recall
value: 78.59562673236834
- type: euclidean_accuracy
value: 87.84103698529127
- type: euclidean_ap
value: 83.50424424952834
- type: euclidean_f1
value: 75.74496544549307
- type: euclidean_precision
value: 73.19402556369381
- type: euclidean_recall
value: 78.48013550970127
- type: manhattan_accuracy
value: 87.9225365777933
- type: manhattan_ap
value: 83.49479248597825
- type: manhattan_f1
value: 75.67748162447101
- type: manhattan_precision
value: 73.06810035842294
- type: manhattan_recall
value: 78.48013550970127
- type: max_accuracy
value: 88.56871191834517
- type: max_ap
value: 84.80240716354544
- type: max_f1
value: 77.07765285922385
---
# SGPT-2.7B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 124796 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
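For reference, the objective above (scaled cosine similarities with in-batch negatives, cross-entropy on the diagonal) can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the sentence-transformers source:

```python
import numpy as np

def multiple_negatives_ranking_loss(queries, positives, scale=20.0):
    """In-batch negatives ranking loss: for each query i, positives[i] is the
    correct match and every other positives[j] in the batch acts as a negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (q @ p.T)  # scaled cosine similarity matrix
    # cross-entropy with the diagonal as the target class (numerically stable)
    scores = scores - scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When each query matches its own positive, the diagonal dominates and the loss approaches zero; shuffling the positives drives it up.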
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 7.5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2560, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
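The `pooling_mode_weightedmean_tokens` setting corresponds to SGPT's position-weighted mean pooling, where token *t* receives weight *t + 1* before averaging, so later tokens contribute more. A minimal NumPy sketch (illustrative only; the library implementation may differ in details such as dtype handling):

```python
import numpy as np

def weighted_mean_pooling(token_embeddings, attention_mask):
    """Position-weighted mean pooling.

    token_embeddings: (batch, seq_len, dim); attention_mask: (batch, seq_len).
    Token at position t gets weight t + 1; padding tokens get weight 0.
    """
    seq_len = token_embeddings.shape[1]
    weights = np.arange(1, seq_len + 1, dtype=np.float64)[None, :, None]
    mask = attention_mask[:, :, None].astype(np.float64)
    summed = (token_embeddings * mask * weights).sum(axis=1)
    denom = np.clip((mask * weights).sum(axis=1), 1e-9, None)
    return summed / denom
```

The result is one embedding vector per input sequence, later fed to the similarity function during training.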
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
OpenAssistant/falcon-7b-sft-mix-2000 | 2023-06-06T10:32:55.000Z | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"sft",
"custom_code",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | OpenAssistant | null | null | OpenAssistant/falcon-7b-sft-mix-2000 | 40 | 1,778 | transformers | 2023-06-05T04:48:05 | ---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: >-
<|prompter|>Write a story about future of AI
development<|endoftext|><|assistant|>
datasets:
- OpenAssistant/oasst1
---
# Open-Assistant Falcon 7B SFT MIX Model
This model is a fine-tuning of TII's [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) LLM.
It was trained on a mixture of OASST top-2 threads (exported on June 2, 2023), Dolly-15k and synthetic instruction datasets (see dataset configuration below).
## Model Details
- **Finetuned from:** [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
- **Weights & Biases:** [Training log](https://wandb.ai/open-assistant/public-sft/runs/tlevhltw) (Checkpoint: 2000 steps, ~2.9 epochs)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-7b-sft-mix-2000_sampling_noprefix2.json)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
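For multi-turn conversations the same pattern simply repeats. A tiny helper that assembles such a prompt from a list of turns (an illustrative sketch, not part of the official Open-Assistant codebase):

```python
def build_prompt(turns):
    """Assemble a prompt from (role, text) turns, where role is
    'prompter' or 'assistant'. The result ends with <|assistant|>
    so the model generates the next assistant reply."""
    parts = []
    for role, text in turns:
        parts.append(f"<|{role}|>{text}<|endoftext|>")
    return "".join(parts) + "<|assistant|>"
```

For example, `build_prompt([("prompter", "Hi")])` yields `<|prompter|>Hi<|endoftext|><|assistant|>`.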
## Sample Code
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "OpenAssistant/falcon-7b-sft-mix-2000"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
input_text="<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
sequences = pipeline(
input_text,
max_length=500,
do_sample=True,
return_full_text=False,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Configuration Details
Model:
```
falcon-7b:
dtype: bf16
log_dir: "falcon_log_7b"
learning_rate: 1e-5
model_name: "tiiuae/falcon-7b"
deepspeed_config: configs/zero_config.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 4
per_device_train_batch_size: 4
per_device_eval_batch_size: 8
eval_steps: 100
save_steps: 500
save_strategy: steps
num_train_epochs: 8
save_total_limit: 4
residual_dropout: 0.2
residual_dropout_lima: true
```
Dataset:
```
sft9-stage2:
# oasst_export: 100.00% (29899)
# vicuna: 50.00% (16963)
# code_alpaca: 50.00% (9510)
# oa_wiki_qa_bart_10000row: 100.00% (9434)
# grade_school_math_instructions: 100.00% (8351)
# dolly15k: 100.00% (14250)
use_custom_sampler: true
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz
val_split: 0.05
top_k: 2
- vicuna:
fraction: 0.5
val_split: 0.025
max_val_set: 250
- code_alpaca:
fraction: 0.5
val_split: 0.05
max_val_set: 250
- oa_wiki_qa_bart_10000row:
val_split: 0.05
max_val_set: 250
- grade_school_math_instructions:
val_split: 0.05
- dolly15k:
val_split: 0.05
max_val_set: 300
```
AnjanaSivan/my-pet-cat-zxc | 2023-10-09T10:40:41.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | AnjanaSivan | null | null | AnjanaSivan/my-pet-cat-zxc | 0 | 1,778 | diffusers | 2023-10-09T10:36:41 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-zxc Dreambooth model trained by AnjanaSivan following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: ISSAT-153
Sample pictures of this concept:

radames/stable-diffusion-2-1-unclip-img2img | 2023-05-17T00:55:03.000Z | [
"diffusers",
"stable-diffusion",
"image-to-image",
"arxiv:2112.10752",
"arxiv:1910.09700",
"license:openrail++",
"has_space",
"diffusers:StableUnCLIPImg2ImgPipeline",
"region:us"
] | image-to-image | radames | null | null | radames/stable-diffusion-2-1-unclip-img2img | 3 | 1,777 | diffusers | 2023-05-17T00:54:33 | ---
license: openrail++
tags:
- stable-diffusion
- image-to-image
pinned: true
duplicated_from: stabilityai/stable-diffusion-2-1-unclip
pipeline_tag: image-to-image
---
# Stable Diffusion v2-1-unclip Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-1-unclip` is a finetuned version of Stable Diffusion 2.1, modified to accept (noisy) CLIP image embeddings in addition to the text prompt. It can be used to create image variations (see the [Examples](#examples) section) or be chained with text-to-image CLIP priors. The amount of noise added to the image embedding can be specified via the `noise_level` parameter (0 means no noise, 1000 means full noise).
- Use it with 🧨 [`diffusers`](#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**

```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Examples
Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion UnCLIP 2-1-small in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (the example below uses the pipeline's default scheduler; for faster sampling you can swap in a different one, e.g. `DPMSolverMultistepScheduler`):
```python
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16)
pipe.to("cuda")
# get image
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
image = load_image(url)
# run image variation
image = pipe(image).images[0]
```

# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”.
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy.
- The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see the Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
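Schematically, filtering at a fixed "p_unsafe" threshold amounts to keeping only samples whose predicted unsafe probability falls below the cutoff. The sketch below is illustrative only: the field name and the sample records are made up here, and LAION's actual pipeline is more involved.

```python
# Illustrative only: keep samples whose NSFW-detector score is below the cutoff.
P_UNSAFE_THRESHOLD = 0.1  # the "conservative" threshold mentioned above

samples = [  # hypothetical records; real LAION entries carry many more fields
    {"id": 1, "p_unsafe": 0.02},
    {"id": 2, "p_unsafe": 0.35},
    {"id": 3, "p_unsafe": 0.09},
]

kept = [s for s in samples if s["p_unsafe"] < P_UNSAFE_THRESHOLD]
print([s["id"] for s in kept])  # → [1, 3]
```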
## Environmental Impact
**Stable Diffusion v1 Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
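As a rough cross-check of the figure above, the calculator's estimate can be reproduced with simple arithmetic. The power draw and grid carbon intensity below are assumptions on our part (≈250 W for an A100 PCIe 40GB board, ≈0.3 kg CO2eq/kWh for the region), not official numbers:

```python
# Back-of-envelope reproduction of the reported estimate (assumed constants).
gpu_power_kw = 0.250     # assumed A100 PCIe 40GB board power
hours = 200_000          # "Hours used" from the card
carbon_intensity = 0.3   # assumed kg CO2eq per kWh for the compute region

energy_kwh = gpu_power_kw * hours
emissions_kg = energy_kwh * carbon_intensity
print(round(emissions_kg))  # → 15000, matching the card
```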
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
---
language:
- multilingual
- en
- es
- fr
- de
- zh
- ru
- pt
- it
- ar
- ja
- id
- tr
- nl
- pl
- fa
- vi
- sv
- ko
- he
- ro
- no
- hi
- uk
- cs
- fi
- hu
- th
- da
- ca
- el
- bg
- sr
- ms
- bn
- hr
- sl
- az
- sk
- eo
- ta
- sh
- lt
- et
- ml
- la
- bs
- sq
- arz
- af
- ka
- mr
- eu
- tl
- ang
- gl
- nn
- ur
- kk
- be
- hy
- te
- lv
- mk
- als
- is
- wuu
- my
- sco
- mn
- ceb
- ast
- cy
- kn
- br
- an
- gu
- bar
- uz
- lb
- ne
- si
- war
- jv
- ga
- oc
- ku
- sw
- nds
- ckb
- ia
- yi
- fy
- scn
- gan
- tt
- am
license: cc-by-nc-4.0
---
# xlm-mlm-100-1280
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
xlm-mlm-100-1280 is the XLM model, which was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, trained on Wikipedia text in 100 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
## Model Description
- **Developed by:** See [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** 100 languages, see [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for full list.
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-mlm-17-1280](https://huggingface.co/xlm-mlm-17-1280)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model that can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
This model is the XLM model trained on Wikipedia text in 100 languages. The preprocessing included tokenization with byte-pair encoding. See the [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) and [Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) for further details on the training data and training procedure.
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
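To get a feel for where those 570M parameters sit, note that the token-embedding matrix alone (200k vocabulary × 1280 hidden size) accounts for roughly 256M of them, close to half the model. A quick check:

```python
# Size of the token-embedding matrix implied by the reported architecture.
vocab_size = 200_000
hidden_size = 1280

embedding_params = vocab_size * hidden_size
print(f"{embedding_params:,}")  # → 256,000,000
print(f"{embedding_params / 570e6:.0%} of the reported 570M total")
```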
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the XNLI cross-lingual classification task (see the [XNLI data card](https://huggingface.co/datasets/xnli) for more details on XNLI) using the metric of test accuracy. See [Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-100-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), Chinese (zh) and Urdu (ur) are:
|Language| en | es | de | ar | zh | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|
|Accuracy|83.7|76.6|73.6|67.4|71.7|62.9|
See the [GitHub repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
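As a single summary number, the mean of the six reported accuracies is about 72.7. The values can be aggregated directly from the table:

```python
# XNLI test accuracy per language, copied from the table above.
xnli_acc = {"en": 83.7, "es": 76.6, "de": 73.6, "ar": 67.4, "zh": 71.7, "ur": 62.9}

mean_acc = sum(xnli_acc.values()) / len(xnli_acc)
print(f"mean accuracy: {mean_acc:.2f}")  # → 72.65
```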
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. See the [ipython notebook](https://github.com/facebookresearch/XLM/blob/main/generate-embeddings.ipynb) in the associated [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for examples.
---
datasets:
- bigscience/xP3
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
inference: false
widget:
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?"
example_title: "zh-en sentiment"
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?"
example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"."
example_title: "vi-en query"
- text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»."
example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
example_title: "te-en qa"
- text: "Why is the sky blue?"
example_title: "en-en qa"
- text: "Explain to me in Traditional Chinese what is the difference between Bitcoin and Ethereum."
example_title: "zh-en qa"
- text: "Write a code snippet with O(log(n)) computational complexity."
example_title: "code-en"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
example_title: "hi-en fable"
- text: "How many sides does a rectangle and heptagon have, when
combined? Answer this question with some math.
Ein Rechteck hat 4 Seiten. Ein Siebeneck hat 7 Seiten.
In Kombination haben sie 4 + 7 = 11 Seiten.
كم عدد الأضلاع التي يجمعها المربع والمثلث؟
Répondez à cette question en chinois."
example_title: "en-de-ar-fr-zh math"
model-index:
- name: bloomz
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 59.27
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 69.08
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 68.67
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.65
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 64.26
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 60.95
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 70.24
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 48.6
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 44.1
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 45.5
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 82.14
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 85.56
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 60.68
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.43
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.38
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.43
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 67.47
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 61.24
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 61.37
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 60.2
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.02
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 52.09
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 43.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 45.7
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.8
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 61.0
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.91
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 12.06
- type: Pass@10
value: 26.53
- type: Pass@100
value: 48.44
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: "2016"
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 96.26
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 91.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 51.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 86.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 74.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 64.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 69.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 57.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 87.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 90.0
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 92.79
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 94.37
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 86.9
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 88.42
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 92.12
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 52.35
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 81.73
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 79.81
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 81.2
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 93.12
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
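The guidance above boils down to appending an explicit end-of-input marker. A small sketch of the three prompt forms (the strings are our own; the model itself is not invoked here):

```python
french = "Je t'aime"

ambiguous = f"Translate to English: {french}"                # model may continue the French
clearer   = f"Translate to English: {french}."               # full stop marks the end of input
clearest  = f"Translate to English: {french}. Translation:"  # explicit slot for the answer

for prompt in (ambiguous, clearer, clearest):
    print(repr(prompt))
```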
# Training
## Model
- **Architecture:** Same as [bloom](https://huggingface.co/bigscience/bloom), also refer to the `config.json` file
- **Finetuning steps:** 498
- **Finetuning tokens:** 2.09 billion
- **Finetuning layout:** 72x pipeline parallel, 1x tensor parallel, 4x data parallel
- **Precision:** bfloat16
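As a quick sanity check, the parallelism layout above multiplies out to the GPU count given in the Hardware section:

```python
# finetuning layout: 72-way pipeline x 1-way tensor x 4-way data parallelism
pipeline_parallel, tensor_parallel, data_parallel = 72, 1, 4
total_gpus = pipeline_parallel * tensor_parallel * data_parallel
print(total_gpus)  # 288, matching the 288 A100s (36 nodes x 8 GPUs) below
```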
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 288 A100 80GB GPUs with 8 GPUs per node (36 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
Social-Media-Fairness/Classifier-Bias-SG | 2023-09-19T21:46:49.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-classification | Social-Media-Fairness | null | null | Social-Media-Fairness/Classifier-Bias-SG | 2 | 1,773 | transformers | 2023-09-15T00:38:53 |

---
license: openrail
---
# Classifier-Bias-SG Model Card
## Model Details
Classifier-Bias-SG is a proof-of-concept model designed to classify texts by their bias level. The model categorizes texts into two classes: "Biased" and "Non-Biased".
## Model Architecture
The model is built upon the distilbert-base-uncased architecture and has been fine-tuned on a custom dataset for the specific task of bias detection.
## Dataset
The model was trained on the BABE dataset, which contains news articles from various sources annotated with one of the two bias labels. The dataset contains:
- **Biased**: 1810 articles
- **Non-Biased**: 1810 articles
## Training Procedure
The model was trained using the Adam optimizer for 15 epochs.
## Performance
On our validation set, the model achieved:
- **Accuracy**: 78%
- **F1 Score (Biased)**: 79%
- **F1 Score (Non-Biased)**: 77%
## How to Use
To use this model for text classification, use the following code:
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Social-Media-Fairness/Classifier-Bias-SG")
model = AutoModelForSequenceClassification.from_pretrained("Social-Media-Fairness/Classifier-Bias-SG")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = classifier("Women are bad drivers.")
print(result)
```
Developed by Shardul Ghuge
superb/wav2vec2-large-superb-ic | 2021-09-04T19:52:29.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"speech",
"audio",
"en",
"dataset:superb",
"arxiv:2105.01051",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | superb | null | null | superb/wav2vec2-large-superb-ic | 0 | 1,771 | transformers | 2022-03-02T23:29:05 |

---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
license: apache-2.0
---
# Wav2Vec2-Large for Intent Classification
## Model description
This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands).
The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of
speakers. SUPERB uses the
[Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/)
dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands).
## Usage examples
You can use the model directly like so:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ic", split="test")
dataset = dataset.map(map_to_array)
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-ic")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-ic")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
action_ids = torch.argmax(logits[:, :6], dim=-1).tolist()
action_labels = [model.config.id2label[_id] for _id in action_ids]
object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist()
object_labels = [model.config.id2label[_id + 6] for _id in object_ids]
location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist()
location_labels = [model.config.id2label[_id + 20] for _id in location_ids]
```
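As a plain-Python illustration (no model or torch needed) of how the three intent heads are unpacked from the single logit vector above — the class counts (6 actions, 14 objects, 4 locations) come from the slice indices in the snippet:

```python
# Toy illustration of the multi-head label layout:
# one 24-dim logit vector packs 6 action, 14 object and 4 location
# classes, and each intent field is decoded by an argmax over its slice.
def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

def decode_intent(logits):
    # slice boundaries mirror logits[:, :6], logits[:, 6:20], logits[:, 20:24]
    action_id = argmax(logits[:6])
    object_id = 6 + argmax(logits[6:20])    # offset back into id2label space
    location_id = 20 + argmax(logits[20:24])
    return action_id, object_id, location_id

dummy = [0.0] * 24
dummy[2] = 1.0    # action class 2
dummy[11] = 1.0   # object class 11 (index 5 within its slice)
dummy[21] = 1.0   # location class 21 (index 1 within its slice)
print(decode_intent(dummy))  # (2, 11, 21)
```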
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9528` | `N/A` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
```
timm/vit_base_patch32_clip_384.openai_ft_in12k_in1k | 2023-05-06T00:04:30.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:wit-400m",
"dataset:imagenet-12k",
"arxiv:2212.07143",
"arxiv:2103.00020",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_base_patch32_clip_384.openai_ft_in12k_in1k | 0 | 1,771 | timm | 2022-11-11T08:13:25 |

---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- wit-400m
- imagenet-12k
---
# Model card for vit_base_patch32_clip_384.openai_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.3
- GMACs: 12.7
- Activations (M): 12.1
- Image size: 384 x 384
- **Papers:**
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- WIT-400M
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed below for torch.topk
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch32_clip_384.openai_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch32_clip_384.openai_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
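The `(1, 145, 768)` shape above follows directly from the patch geometry: a 384×384 input with 32×32 patches yields (384/32)² = 144 patch tokens, plus one prepended class token. A quick check:

```python
image_size, patch_size, embed_dim = 384, 32, 768
num_patches = (image_size // patch_size) ** 2  # 12 x 12 grid of patches
num_tokens = num_patches + 1                   # plus the class token
print((1, num_tokens, embed_dim))  # (1, 145, 768)
```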
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
TencentARC/t2i-adapter-depth-zoe-sdxl-1.0 | 2023-09-08T02:04:56.000Z | [
"diffusers",
"art",
"t2i-adapter",
"image-to-image",
"stable-diffusion-xl-diffusers",
"stable-diffusion-xl",
"arxiv:2302.08453",
"license:apache-2.0",
"has_space",
"diffusers:T2IAdapter",
"region:us"
] | image-to-image | TencentARC | null | null | TencentARC/t2i-adapter-depth-zoe-sdxl-1.0 | 6 | 1,769 | diffusers | 2023-09-03T14:40:32 |

---
license: apache-2.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- art
- t2i-adapter
- image-to-image
- stable-diffusion-xl-diffusers
- stable-diffusion-xl
---
# T2I-Adapter-SDXL - Depth-Zoe
T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.
This checkpoint provides conditioning on depth for the StableDiffusionXL checkpoint. This was a collaboration between **Tencent ARC** and [**Hugging Face**](https://huggingface.co/).
## Model Details
- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Model complexity:**
| | SD-V1.4/1.5 | SD-XL | T2I-Adapter | T2I-Adapter-SDXL |
| --- | --- |--- |--- |--- |
| Parameters | 860M | 2.6B | 77M | 77/79M |
- **Cite as:**
    @misc{mou2023t2iadapter,
      title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
      author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
      year={2023},
      eprint={2302.08453},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }
### Checkpoints
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
|[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
|[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
|[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
|[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
|[TencentARC/t2i-adapter-openpose-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-openpose-sdxl-1.0)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|
## Example
To get started, first install the required dependencies:
```bash
pip install -U git+https://github.com/huggingface/diffusers.git
pip install -U controlnet_aux==0.0.7 timm==0.6.12 # for conditioning models and detectors
pip install transformers accelerate safetensors
```
1. Images are first downloaded into the appropriate *control image* format.
2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).
Let's have a look at a simple example using the [Depth-zoe Adapter](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0).
- Dependency
```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from controlnet_aux import ZoeDetector
import torch
# load adapter
adapter = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-depth-zoe-sdxl-1.0", torch_dtype=torch.float16, varient="fp16"
).to("cuda")
# load euler_a scheduler
model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
zoe_depth = ZoeDetector.from_pretrained(
"valhalla/t2iadapter-aux-models", filename="zoed_nk.pth", model_type="zoedepth_nk"
).to("cuda")
```
- Condition Image
```py
url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_zeo.jpg"
image = load_image(url)
image = zoe_depth(image, gamma_corrected=True, detect_resolution=512, image_resolution=1024)
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>
- Generation
```py
prompt = "A photo of a orchid, 4k photo, highly detailed"
negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"
gen_images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
image=image,
num_inference_steps=30,
adapter_conditioning_scale=1,
guidance_scale=7.5,
).images[0]
gen_images.save('out_zoe.png')
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>
### Training
Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/README_sdxl.md).
The model is trained on 3M high-resolution image-text pairs from LAION-Aesthetics V2 with
- Training steps: 25000
- Batch size: Data parallel with a single gpu batch size of `16` for a total batch size of `256`.
- Learning rate: Constant learning rate of `1e-5`.
- Mixed precision: fp16
IsraelSalgado/cosmo-mom | 2023-10-26T12:03:20.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | IsraelSalgado | null | null | IsraelSalgado/cosmo-mom | 1 | 1,769 | diffusers | 2023-10-16T11:18:31 |

---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Cosmo_mom Dreambooth model trained by IsraelSalgado with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
### My tiny child on Stable Diffusion via Dreambooth
#### model by IsraelSalgado
This is the Stable Diffusion model fine-tuned on the bip_logo concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<bebe> bebe**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:
.jpeg)
.jpeg)
.jpeg)
Output:
example prompt: bebe with a big tiny cat, Raw Photo, 8k uhd
.png)
.png)
.png)
.png)
.png)
Salesforce/codet5p-220m-py | 2023-05-16T00:35:12.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2305.07922",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | Salesforce | null | null | Salesforce/codet5p-220m-py | 8 | 1,768 | transformers | 2023-05-15T09:57:52 |

---
license: bsd-3-clause
---
# CodeT5+ 220M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (i.e. InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-220m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
## Training procedure
In first-stage pretraining, this checkpoint is trained on the multilingual unimodal code data with a diverse set of objectives, including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 12.0% pass@1 on HumanEval in the zero-shot setting, which outperforms much larger LLMs such as Incoder 1.3B’s 8.9%, GPT-Neo 2.7B's 6.4%, and GPT-J 6B's 11.6%.
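The pass@k figures above are conventionally computed with the unbiased estimator introduced alongside HumanEval; a minimal sketch (the exact sample counts used for this card are not stated, so the numbers below are illustrative):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, passes the unit tests."""
    if n - c < k:
        return 1.0  # too few failures left to fill k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# illustrative: 200 samples per problem with 24 correct gives 12% pass@1
print(pass_at_k(200, 24, 1))  # 0.12
```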
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
```
KRAFTON/KORani-v2-13B | 2023-05-08T07:23:25.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"vicuna",
"KoVicuna",
"KORani",
"ko",
"en",
"arxiv:2302.13971",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | KRAFTON | null | null | KRAFTON/KORani-v2-13B | 2 | 1,767 | transformers | 2023-04-26T06:52:01 |

---
license: apache-2.0
language:
- ko
- en
pipeline_tag: text-generation
tags:
- vicuna
- llama
- KoVicuna
- KORani
---
# KORani-v2-13B
**Note: the version numbers `v1`, `v2`, `v3` do not indicate which model is best or most recent.**
- KORani: Large Language Models for 🇰🇷 Korean and 🇺🇸 English using LLaMA 13B and Polyglot 12.8B.
- Tested which LLM is effective for 🇰🇷 Korean tasks after finetuning.
- More information at https://github.com/krafton-ai/KORani
- This repository contains fine-tuned language model weights based on LLaMA 13B
## Release
This repository contains inference code for KORani models that are based on [LLaMA 13B](https://arxiv.org/abs/2302.13971v1) and [Polyglot 12.8B](https://huggingface.co/EleutherAI/polyglot-ko-12.8b).
KORani models are finetuned using [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main) & [KoVicuna](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) dataset. This work is hugely influenced by [Vicuna](https://github.com/lm-sys/FastChat) project.
### Models
| Model | Base | Train dataset | Huggingface Link |
| --- | ---: | ---: | ---: |
| 1️⃣ KORani-v1-13B | Polyglot 12.8B | KoVicuna dataset | [Link 1](https://huggingface.co/KRAFTON/KORani-v1-13B) |
| 2️⃣ KORani-v2-13B | LLaMA 13B | KoVicuna dataset | [Link 2](https://huggingface.co/KRAFTON/KORani-v2-13B) |
| 3️⃣ KORani-v3-13B | LLaMA 13B | ShareGPT & KoVicuna dataset | [Link 3](https://huggingface.co/KRAFTON/KORani-v3-13B) |
## Performances
We used AutoEvalGPT inspired by auto evaluation by GPT-4 from [Vicuna](https://github.com/lm-sys/FastChat).
For how to evaluate, visit this GitHub -> https://github.com/krafton-ai/AutoEvalGPT
### Translation (ENG -> KOR)
```
input = ""Hey! I have some Kissflow Legal Review requests that are blocked by Tax and Accounting, can this be looked at?""
```
| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 헤이! 제가 Tax and Accounting에 의해 차단된 몇 가지 Kissflow 법률 검토 요청이 있는데, 이것을 확인해 주실 수 있나요? |
| DeepL | 9.4 | 안녕하세요! 세무 및 회계에서 차단된 Kissflow 법률 검토 요청이 몇 개 있는데요, 이 요청을 살펴볼 수 있나요? |
| GPT-3.5-turbo | 8.6 | 안녕하세요! 세무 및 회계 부서에서 차단된 몇 가지 Kissflow Legal Review 요청이 있습니다. 확인해 주실 수 있나요? |
| Vicuna-13B | 3.8 | 안녕하세요! 세금계산과 회계부서가 차단해 있는 Kissflow Legal Review 요청이 몇 개가 있습니까? 이것을 살펴보시겠습니까? |
| KoAlpaca-13B | 5.6 | 야! 세금과 회계에 막힌 키싱플로우 법적 검토 요청이 있는데, 이거 검토해줄 수 있어? |
| KORani-v1 | 7.5 | 안녕하세요! 세금과 회계로 인해 막혀 있는 키스플로우 법률 검토 요청이 몇 개 있는데, 검토해 주실 수 있나요? |
| KORani-v2 | 5.4 | 안녕하세요! 제가 Kissflow Legal Review 요청을 목격했는데, 세무 및 회계 부서에서 차단하고 있는데 이 문제를 조사해 주시겠어요? |
| KORani-v3 | 7.1 | 안녕하세요! 저는 Kissflow Legal Review 요청이 세금과 회계에 의해 차단되고 있는데, 이 문제가 살펴볼 수 있을까요? |
### QA(Korean)
```
prompt = "우리는 아래와 같은 정보를 갖고 있습니다.
---------------------
헨리 구스타프 몰래슨(영어: Henry Gustav Molaison, 1926년 2월 26일 ~ 2008년 12월 2일)은 뇌전증을 치료하기 위해서 수술적으로 해마를 포함한 내측측두엽이 제거된 미국의 기억장애 환자 H.M으로 전부터 이미 알려져 있었다.
그는 1957년 말부터 그가 죽을 때 까지 그는 광범위하게 연구가 되었다. 그의 경우는 뇌 기능과 기억 사이의 링크를 설명하는 이론 발달과 뇌인지심리학의 발달, 뇌의 구조와 기능이 특정 심리적 과정과 관한 이해를 목표로 하는 심리학의 한 지점에 매우 중요한 역할을 하였다.
그가 죽기 전에, 그는 코네티컷 주 윈저 락스에 위치한 관리기관에 살면서 지속적으로 조사가 되었다.
그의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다.
---------------------
### 주어진 정보에 따라, 질문에 답해주세요.: '헨리 구스타프의 뇌는 현재 어디에 위치해 있는가?'
### Assistant:"
```
| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 헨리 구스타프의 뇌는 현재 미국 샌디에이고에 위치해 있다. |
| GPT-3.5-turbo | 7.1 | 헨리 구스타프의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다. |
| Vicuna-13B | 7.2 | 헨리 구스타프의 뇌는 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있습니다. |
| KoAlpaca-13B | 2.6 | Henry Gustav Molaison의 뇌는 2008년에 사망하였습니다. 그의 뇌는 코네티컷 주 윈저 락스에 위치한 기관에 보관되어 있으며, 현재도 계속 조사중입니다. |
| KORani-v1 | 6.6 | 헨리 구스타프 몰레이슨의 뇌는 현재 미국 샌디에이고에 있습니다. 그의 뇌는 1957년 수술로 제거된 그의 내측 측두엽의 자리를 확인하기 위해 조직학적으로 절단되었습니다. 이 연구는 뇌의 구조와 기능이 특정 심리적 과정과 관련된 이해를 목표로 하는 심리학의 한 분야인 인지 신경과학에 대한 그의 공헌을 기리기 위한 것입니다." |
| KORani-v2 | 8.0 | 헨리 구스타프의 뇌는 현재 캘리포니아 샌디에이고에 있는 샌디에이고 대학교 웨스트 캠퍼스의 신경과학 실험실에 보관되어 있습니다. |
| KORani-v3 | 6.4 | 헨리 구스타프의 뇌는 현재 코네티컷 주 윈저 락스에 있는 관리 기관에 있습니다. |
### Summarization (Korean)
```
prompt = "# Document
전년도 대비 79명 늘어 1019명, 행정수요 대처 광양시의 공무원 정원이 크게 늘어나 행정서비스 향상이 기대된다.
시는 행정안전부에서 발표한 2018년도 자치단체 기준인건비에서 광양시의 일반직 정원이 지난해보다 79명이 늘어난 1019명으로 산정됐다고 밝혔다.
지난 1995년 도농통합 당시 991명이었던 광양시 공무원 정원은 IMF시절에 혹독한 구조조정을 거치면서 2002년에는 788명으로 200여명이 가까이 줄어들었으나 이번 정원 확보로 공무원 정원 1000명 시대를 맞게 됐다.
그동안 광양시는 포스코를 중심으로 한 산업단지와 컨테이너부두, 경제자유구역, 택지 개발, 다양한 복지 정책 등 시민의 삶의 질 향상을 위한 행정수요가 날로 증폭하는 데에 비해 한정된 공무원 정원으로 많은 어려움을 겪어 왔었다.
시의 이번 정원 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역과 나라 발전을 뒷받침 하는 경제도시로서의 위상을 드높이는 시너지 효과로 이어질 것으로 기대된다.
한편, 광양시는 한시기구인‘기업유치추진단’이 2017년 연말로 폐지됨에 따라 전라남도로부터 4급 상설기구인‘산단녹지관리센터’를 승인 받아 올해 1월부터 운영하게 됨으로써 도시 개발에 한층 탄력을 받을 것으로 전망하고 있다.
정홍기 총무과장은“참으로 어렵게 가져온 공무원 정원과 4급 상설기구는 격무에 시달려온 직원들의 숨통을 트이게 해 15만 시민을 위한 행정서비스 질을 향상시킬 것으로 기대된다”며“이달 안에 정원 조례를 개정하고, 직무분석을 통해 부서별 정원을 조정할 계획이다”고 말했다.
한국어 3문장으로 위의 Document를 요약해줘.
### Assistant: 1)"
```
| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 1) 광양시의 공무원 정원이 전년도 대비 79명 늘어난 1019명으로 증가했다. 2) 이로 인해 행정서비스 향상이 기대되며, 지역과 나라 발전을 뒷받침하는 경제도시로서의 위상이 상승할 것으로 전망된다. 3) 한편, 광양시는 전라남도로부터 4급 상설기구인 '산단녹지관리센터'를 승인받아 도시 개발에 탄력을 받을 것으로 예상된다. |
| GPT-3.5-turbo | 8.5 | 1) 광양시 공무원 정원이 79명 늘어 1019명으로 늘어나 행정서비스 향상이 기대된다고 밝혔다.2) 광양시는 이번 정원 충원으로 지역과 나라 발전을 뒷받침 하는 경제도시 위상을 드높일 것으로 기대된다. 3) 정원 조례를 개정하고, 직무분석을 통해 부서별 정원을 조정할 계획이다. |
| Vicuna-13B | 7.4 | 1) 2018년도 광양시의 공무원 정원 1019명, 지난해 79명 증가. 2) 이번 정원 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역과 나라 발전을 뒷받침 하는 경제도시로서의 위상을 드높이는 시너지 효과로 이어질 것으로 기대된다. 3) 광양시는 전라남도로부터 4급 상설기구인‘산단녹지관리센터’를 승인 받아 운영하게 됨으로써 도시 개발에 한층 탄력을 받을 것으로 전망하고 있다. |
| KoAlpaca-13B | 5.7 | 1) 광양시는 작년보다 79명 늘어난 1019명의 공무원을 둘 수 있게 되었습니다. 2) 이는 행정안전부의 지침에 따른 것입니다. 3) 이로써 광양시는 한정된 예산으로 인해 발생했던 공무원들의 초과 근무 수당 지급 등의 문제를 해결 할 수 있게 되었습니다. |
| KORani-v1 | 7.0 | 1) 광양시는 작년보다 79명이 늘어난 1019명의 일반직 공무원 정원을 행안부로부터 승인받았습니다. 2) 이는 광양시의 행정 수요 증가를 수용하기 위한 것입니다. 3) 이 정원 증가는 광양시 공무원들에게 더 많은 기회를 제공할 것입니다. |
| KORani-v2 | 6.2 | 1)광양시의 공무원 정원은 이번 정원 확보로 1000명 주당 공무원 정원을 달성했습니다. 2) 이러한 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역사회와 국가의 발전을 지원하는 시너지 효과로 이어질 것으로 기대됩니다. 3) 광양시는 '기업유치추진단'을 폐지하고 '산단녹지관리센터'를 승인했으며, 이는 도시 개발에 한층 탄력을 받을 것으로 기대됩니다. |
| KORani-v3 | 6.9 | 1)광양시의 공무원 정원은 지난해보다 79명이 늘어난 1019명으로 산정되었습니다.2) 이러한 충원은 지역사회와 국가 발전을 지원하는 행정서비스의 향상으로 이어질 것으로 기대됩니다.3) 시는 '이달 내에 정원 조례를 개정하고 직무분석을 통해 부서별로 정원을 조정할 계획'이며 행정서비스 향상을 위해 노력할 것이라고 밝혔습니다. |
## License
Our github repo and models are intended for research purpose, non-commercial use only, subject to the model License of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us If you find any potential violation.
The code is released under the Apache License 2.0. | 6,904 | [
[
-0.046173095703125,
-0.0433349609375,
0.02508544921875,
0.028564453125,
-0.038848876953125,
-0.00632476806640625,
0.017486572265625,
-0.03204345703125,
0.0484619140625,
0.0149993896484375,
-0.029022216796875,
-0.034515380859375,
-0.047332763671875,
0.0082244... |
Ankita11111yadav/my-pet-rabbit | 2023-10-17T03:07:31.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | Ankita11111yadav | null | null | Ankita11111yadav/my-pet-rabbit | 0 | 1,767 | diffusers | 2023-10-17T03:03:02 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Rabbit Dreambooth model trained by Ankita11111yadav following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SIETM96
Sample pictures of this concept:
.jpg)
| 407 | [
[
-0.057891845703125,
-0.03564453125,
0.0234222412109375,
0.017547607421875,
-0.00441741943359375,
0.037933349609375,
0.02703857421875,
-0.02862548828125,
0.05914306640625,
0.050933837890625,
-0.051727294921875,
-0.00745391845703125,
-0.02691650390625,
0.01524... |
keerthana132/my-pet-cat | 2023-10-18T07:44:50.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | keerthana132 | null | null | keerthana132/my-pet-cat | 0 | 1,767 | diffusers | 2023-10-18T07:39:22 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### my-pet-cat Dreambooth model trained by keerthana132 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
.jpg)
.jpg)
| 499 | [
[
-0.05859375,
-0.0215301513671875,
0.023468017578125,
0.0230560302734375,
-0.022430419921875,
0.03631591796875,
0.0267486572265625,
-0.030364990234375,
0.06500244140625,
0.038238525390625,
-0.038848876953125,
-0.01494598388671875,
-0.0125274658203125,
0.00731... |
timm/convnextv2_atto.fcmae_ft_in1k | 2023-03-31T23:03:20.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2301.00808",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | timm | null | null | timm/convnextv2_atto.fcmae_ft_in1k | 0 | 1,766 | timm | 2023-01-05T01:37:28 | ---
tags:
- image-classification
- timm
library_tag: timm
license: cc-by-nc-4.0
datasets:
- imagenet-1k
- imagenet-1k
---
# Model card for convnextv2_atto.fcmae_ft_in1k
A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.7
- GMACs: 0.6
- Activations (M): 3.8
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808
- **Original:** https://github.com/facebookresearch/ConvNeXt-V2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnextv2_atto.fcmae_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnextv2_atto.fcmae_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 40, 56, 56])
# torch.Size([1, 80, 28, 28])
# torch.Size([1, 160, 14, 14])
# torch.Size([1, 320, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnextv2_atto.fcmae_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 320, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{Woo2023ConvNeXtV2,
title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders},
author={Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie},
year={2023},
journal={arXiv preprint arXiv:2301.00808},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,784 | [
[
-0.06927490234375,
-0.0311126708984375,
-0.00576019287109375,
0.038055419921875,
-0.032501220703125,
-0.0157623291015625,
-0.01236724853515625,
-0.035369873046875,
0.0648193359375,
0.017669677734375,
-0.045135498046875,
-0.039306640625,
-0.05291748046875,
-0... |
llm-jp/llm-jp-13b-instruct-full-jaster-v1.0 | 2023-10-20T08:16:34.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"ja",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | llm-jp | null | null | llm-jp/llm-jp-13b-instruct-full-jaster-v1.0 | 13 | 1,764 | transformers | 2023-10-18T13:59:09 | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
---
# llm-jp-13b-instruct-full-jaster-v1.0
This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan.
| Model Variant |
| :--- |
|**Instruction models**|
| [llm-jp-13b-instruct-full-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0) |
| [llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-full-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-lora-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-v1.0) |
| [llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-lora-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-dolly-oasst-v1.0) |
| |
| :--- |
|**Pre-trained models**|
| [llm-jp-13b-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-v1.0) |
| [llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) |
Checkpoints format: Hugging Face Transformers (Megatron-DeepSpeed format models are available [here](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt))
## Required Libraries and Their Versions
- torch>=2.0.0
- transformers>=4.34.0
- tokenizers>=0.14.0
- accelerate==0.23.0
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-jaster-v1.0")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-jaster-v1.0", device_map="auto", torch_dtype=torch.float16)
text = "自然言語処理とは何か"
text = text + "### 回答:"
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 300B
|Model|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|:---:|
|13b model|13b|40|5120|40|2048|
|1.3b model|1.3b|24|2048|16|2048|
## Training
- **Pre-training:**
- **Hardware:** 96 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** Megatron-DeepSpeed
- **Instruction tuning:**
- **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** [TRL](https://github.com/huggingface/trl), [PEFT](https://github.com/huggingface/peft), and [DeepSpeed](https://github.com/microsoft/DeepSpeed)
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-ja-tokenizer` for details on the vocabulary construction procedure.
- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
- **Training algorithm:** SentencePiece Unigram byte-fallback
- **Training data:** A subset of the datasets for model pre-training
- **Vocabulary size:** 50,570 (mixed vocabulary of Japanese, English, and source code)
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---:|:---:|:---:|
|Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.5B
||[mC4](https://huggingface.co/datasets/mc4)|136B
|English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|5B
||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B
|Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
The pre-training was continuously conducted using a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.
We finalized the pre-training with additional (potentially) high-quality 27B tokens data obtained from the identical source datasets listed above used for the 10-fold data.
### Instruction tuning
The models have been fine-tuned on the following datasets.
| Language | Dataset | description |
|:---|:---:|:---:|
|Japanese|[jaster](https://github.com/llm-jp/llm-jp-eval)| An automatically transformed data from the existing Japanese NLP datasets |
||[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)| A translated one by DeepL in LLM-jp |
||[OpenAssistant Conversations Dataset](https://huggingface.co/datasets/OpenAssistant/oasst1)| A translated one by DeepL in LLM-jp |
## Evaluation
You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) for the evaluation.
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.
| 5,923 | [
[
-0.035919189453125,
-0.052459716796875,
0.0194549560546875,
0.0210418701171875,
-0.0227813720703125,
-0.0009822845458984375,
-0.0181884765625,
-0.03607177734375,
0.0219879150390625,
0.033294677734375,
-0.053741455078125,
-0.04888916015625,
-0.046966552734375,
... |
openchat/openchat_3.5 | 2023-11-05T12:43:15.000Z | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"arxiv:2309.11235",
"arxiv:2303.08774",
"arxiv:2212.10560",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | openchat | null | null | openchat/openchat_3.5 | 315 | 1,764 | transformers | 2023-10-30T05:59:34 | ---
license: apache-2.0
---
# OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
</div>
<p align="center">
<a href="https://github.com/imoneoi/openchat">GitHub Repo</a> •
<a href="https://openchat.team">Online Demo</a> •
<a href="https://discord.gg/pQjnXvNKHY">Discord</a> •
<a href="https://twitter.com/imonenext">Twitter</a> •
<a href="https://huggingface.co/openchat">Huggingface</a> •
<a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a>
</p>
**🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥**
**🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖**
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;">
</div>
OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
[](https://zenodo.org/badge/latestdoi/645397533)
## Usage
To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.
<details>
<summary>Example request (click to expand)</summary>
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
}'
```
Coding Mode
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"condition": "Code",
"messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}]
}'
```
</details>
| Model | Size | Context | Weights | Serving |
|--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
| OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` |
For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below.
<details>
<summary>Conversation templates (click to expand)</summary>
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
</details>
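The templates above are plain string concatenations, so they can also be generated programmatically. The helper below is an illustrative sketch (not an official API) that reproduces the literal prompt strings shown in the assertions:

```python
def build_prompt(messages, mode="GPT4 Correct"):
    """Render {role, content} messages into the OpenChat template string.

    Use mode="Code" for coding mode. The prompt ends with the assistant
    header so the model can continue generating from there.
    """
    role_names = {"user": f"{mode} User", "assistant": f"{mode} Assistant"}
    parts = [f"{role_names[m['role']]}: {m['content']}<|end_of_turn|>" for m in messages]
    parts.append(f"{role_names['assistant']}:")
    return "".join(parts)
```

Tokenizing the returned string (with the BOS token prepended by the tokenizer) yields the same token IDs as in the examples above.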
## Comparison with [X.AI Grok models](https://x.ai/)
Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok?
Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡
(Written by OpenChat 3.5, with a touch of humor and wit.)
| | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
|--------------|-------------|---------|----------|------|-----------|----------|----------|
| OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** |
| Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 |
| Grok-1 | Proprietary | ? | 55.8 | 73 | 63.2 | 23.9 | 62.9 |
## <a id="benchmarks"></a> Benchmarks
| Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K |
|--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------|
| OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** |
| ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 |
| | | | | | | | | | | |
| OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 |
| OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 |
| Zephyr-β^ | 7B | 34.6 | 7.34 | 39.0 | 40.6 | 40.8 | 39.8 | 22.0 | 16.0 | 5.1 |
| Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 |
| Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 |
| | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B |
*: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.
^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.
**: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.
All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).
## Limitations
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
**Safety**
OpenChat may sometimes generate harmful content, hate speech, or biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.
## License
Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.
## Citation
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
```
## Acknowledgements
We extend our heartfelt gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their substantial contributions to data collection and model training.
Special thanks go to Changling Liu from GPT Desk Pte. Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions.
Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
| 12,018 | [embedding vector truncated] |
andreaskoepf/pythia-1.4b-gpt4all-pretrain | 2023-04-04T15:21:13.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | andreaskoepf | null | null | andreaskoepf/pythia-1.4b-gpt4all-pretrain | 6 | 1,763 | transformers | 2023-04-04T15:15:30 | ---
license: apache-2.0
---
wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/t2adm3wu
checkpoint: 11000 steps (2 epochs)
datasets:
```
pretrain:
weight_decay: 0.01
use_custom_sampler: true
sort_by_length: false
datasets:
- joke
- webgpt:
val_split: 0.1
- gpt4all:
val_split: 0.01
- alpaca:
val_split: 0.025
- code_alpaca:
val_split: 0.05
- minimath
- humaneval_mbpp_codegen_qa
- humaneval_mbpp_testgen_qa
- grade_school_math_instructions
- recipes
- cmu_wiki_qa
- oa_wiki_qa_bart_10000row
- prosocial_dialogue:
fraction: 0.1
- explain_prosocial:
fraction: 0.05
```
pythia:
```
pythia-1.4b-pretrain:
dtype: fp16
learning_rate: 6e-6
model_name: EleutherAI/pythia-1.4b-deduped
deepspeed_config: configs/zero_config_pretrain.json
weight_decay: 0.0
max_length: 2048
use_flash_attention: true
warmup_steps: 50
gradient_checkpointing: false
gradient_accumulation_steps: 1
per_device_train_batch_size: 16
per_device_eval_batch_size: 16
num_train_epochs: 2
save_total_limit: 2
```
command: `deepspeed trainer_sft.py --configs defaults pretrain pythia-1.4b-pretrain --cache_dir .cache/ --output_dir .saved_models/pythia-1.4b-pre --residual_dropout 0.0 --deepspeed`
| 1,317 | [embedding vector truncated] |
Jenica/the-barn-owl | 2023-10-29T07:36:17.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | Jenica | null | null | Jenica/the-barn-owl | 1 | 1,763 | diffusers | 2023-10-29T07:31:40 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### The-barn-owl Dreambooth model trained by Jenica following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SFIT-80
Sample pictures of this concept:

| 393 | [embedding vector truncated] |
timm/efficientvit_b1.r224_in1k | 2023-08-18T22:44:46.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2205.14756",
"license:apache-2.0",
"has_space",
"region:us"
] | image-classification | timm | null | null | timm/efficientvit_b1.r224_in1k | 0 | 1,762 | timm | 2023-08-18T22:44:38 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientvit_b1.r224_in1k
An EfficientViT (MIT) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 9.1
- GMACs: 0.5
- Activations (M): 7.3
- Image size: 224 x 224
- **Papers:**
- EfficientViT: Lightweight Multi-Scale Attention for On-Device Semantic Segmentation: https://arxiv.org/abs/2205.14756
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mit-han-lab/efficientvit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_b1.r224_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
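For reference, the final line above computes a softmax over the 1000 class logits and keeps the five most probable indices. The same computation in dependency-free Python (illustrative only — in practice use `torch.topk` as shown):

```python
import math

def softmax_topk(logits, k=5):
    """Return the k most probable (index, probability) pairs for raw logits."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:k]
    return [(i, probs[i]) for i in top]
```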
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b1.r224_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 56, 56])
# torch.Size([1, 64, 28, 28])
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 256, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b1.r224_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 256, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
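The pooled output is an ordinary feature vector, so comparing two images reduces to a vector similarity. A minimal cosine-similarity sketch (in practice you would call `torch.nn.functional.cosine_similarity` on the two output tensors):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```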
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{cai2022efficientvit,
title={Efficientvit: Enhanced linear attention for high-resolution low-computation visual recognition},
author={Cai, Han and Gan, Chuang and Han, Song},
journal={arXiv preprint arXiv:2205.14756},
year={2022}
}
```
| 3,663 | [embedding vector truncated] |
fnlp/SpeechGPT-7B-cm | 2023-09-15T11:06:00.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2305.11000",
"arxiv:2308.16692",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | fnlp | null | null | fnlp/SpeechGPT-7B-cm | 1 | 1,760 | transformers | 2023-09-14T13:43:16 | # SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
<a href='https://0nutation.github.io/SpeechGPT.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2305.11000'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> [](https://huggingface.co/datasets/fnlp/SpeechInstruct)
<p align="center">
<img src="Pictures/logo.png" width="20%"> <br>
</p>
## Introduction
SpeechGPT is a large language model with **intrinsic cross-modal conversational abilities**, capable of perceiving and generating multi-modal content following human instructions. With discrete speech representations, we first construct **SpeechInstruct**, a large-scale cross-modal speech instruction dataset. Additionally, we employ a three-stage training strategy that includes **modality-adaptation pre-training**, **cross-modal instruction fine-tuning**, and **chain-of-modality instruction fine-tuning**. The experimental results demonstrate that SpeechGPT has an impressive capacity to follow multi-modal human instructions and highlight the potential of handling multiple modalities with one model. <br>
SpeechGPT demos are shown in our [project page](https://0nutation.github.io/SpeechGPT.github.io/). As shown in the demos, SpeechGPT has strong cross-modal instruction-following ability and spoken dialogue ability. SpeechGPT can be **a talking encyclopedia, your personal assistant, your chat partner, a poet, a psychologist and your educational assistant**...
<br>
<br>
<p align="center">
<img src="Pictures/speechgpt-intro.png" width="95%"> <br>
SpeechGPT’s capabilities to tackle multiple cross-modal tasks
</p>
<br>
<br>
<p align="center">
<img src="Pictures/SpeechGPT-main.png" width="95%"> <br>
Left: SpeechInstruct construction process. Right: SpeechGPT model structure
</p>
## Release
- **[2023/9/15]** We released SpeechGPT code and checkpoints and SpeechInstruct dataset.
- **[2023/9/1]** We proposed **SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models**. We released the code and checkpoints of SpeechTokenizer. Check out the [paper](https://arxiv.org/abs/2308.16692), [demo](https://0nutation.github.io/SpeechTokenizer.github.io/) and [github](https://github.com/ZhangXInFD/SpeechTokenizer).
- **[2023/5/18]** We released **SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities**. We propose SpeechGPT, the first multi-modal LLM capable of perceiving and generating multi-modal contents following multi-modal human instructions. Check out the [paper](https://arxiv.org/abs/2305.11000) and [demo](https://0nutation.github.io/SpeechGPT.github.io/).
## Table of Contents
- [Open-source list](#open-source-list)
- [Talk with SpeechGPT](#talk-with-speechgpt)
- [Train SpeechGPT](#train-speechgpt)
- [Finetune SpeechGPT](#finetune-speechgpt)
## Open-source list
### Models
- [**SpeechGPT-7B-ma**](https://huggingface.co/fnlp/SpeechGPT-7B-ma): The model obtained after the first-stage modality-adaptation pre-training, which was initialized with LLaMA-7B and further pre-trained on LibriLight speech units.
- [**SpeechGPT-7B-cm**](https://huggingface.co/fnlp/SpeechGPT-7B-cm): The model obtained after the second-stage cross-modal instruction finetuning, which was initialized with SpeechGPT-7B-ma and further finetuned on SpeechInstruct Cross-Modal Instruction set. This is a powerful foundational model that aligns speech and text.
- [**SpeechGPT-7B-com**](https://huggingface.co/fnlp/SpeechGPT-7B-com): The model obtained after the third-stage chain-of-modality instruction finetuning, which was initialized with SpeechGPT-7B-cm and further lora-finetuned on SpeechInstruct Chain-of-Modality Instruction set. This is an adapter-model of SpeechGPT-7B-cm for spoken dialogue.
### Datasets
- [**SpeechInstruct-cross-modal**](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl): The cross-modal instruction set, about 9 million unit-text data pairs tokenized by mHuBERT from large-scale English ASR datasets.
- [**SpeechInstruct-chain-of-modality**](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/chain_of_modality_instruction.jsonl): The chain-of-thought style instructions for four input-output formats, namely Speech Instruction-Speech Response, Speech Instruction-Text Response, Text Instruction-Speech Response, and Text Instruction-Text Response.
SpeechInstruct-cross-modal data format:
```
[
{
"prefix": "You are an AI assistant whose name is SpeechGPT.\n- SpeechGPT is a intrinsic cross-modal conversational language model that is developed by Fudan University. SpeechGPT can understand and communicate fluently with human through speech or text chosen by the user.\n- It can perceive cross-modal inputs and generate cross-modal outputs.\n",
"plain_text": "[Human]: Try to speak out this sentence, please. This is input: The alchemist rode in front, with the falcon on his shoulder.<eoh> [SpeechGPT]: <sosp><661><588><604><157><596><499><596><106><596><189><63><189><665><991><162><202><393><946><327><905><907><597><660><351><557><794><788><59><754><12><977><877><333><873><835><67><940><118><686><613><169><72><644><553><535><935><101><741><384><173><894><787><380><787><196><555><721><944><250><56><812><222><915><143><390><479><330><435><647><246><650><816><325><506><686><208><613><417><755><193><411><452><111><735><6><735><63><665><644><991><535><271><333><196><918><29><202><393><946><734><390><479><330><776><167><761><907><597><660><351><557><794><75><788><15><366><896><627><168><654><659><177><183><609><710><187><493><361><470><821><59><56><198><912><742><840><431><531><76><668><576><803><791><380><660><325><801><549><366><377><164><309><584><605><193><71><39><eosp><eoa> "
},
]
```
SpeechInstruct-chain-of-modality data format:
```
[
{
"prefix": "You are an AI assistant whose name is SpeechGPT.\n- SpeechGPT is a intrinsic cross-modal conversational language model that is developed by Fudan University. SpeechGPT can understand and communicate fluently with human through speech or text chosen by the user.\n- It can perceive cross-modal inputs and generate cross-modal outputs.\n",
"plain_text": "[Human]: <sosp><661><987><511><732><951><997><111><982><189><63><665><991><535><101><741><173><945><944><503><641><124><565><734><870><290><978><833><238><761><907><430><901><185><403><557><244><583><788><663><969><896><627><143><515><663><969><660><691><251><412><260><41><740><677><253><380><382><268><506><876><417><755><16><819><80><651><80><651><80><987><588><eosp><eoh>. [SpeechGPT]: What is a bad term for poop?; [ta] A bad term for poop is excrement. It is usually used as a polite way to refer to fecal waste.; [ua] <sosp><497><63><264><644><710><823><565><577><154><331><384><173><945><29><244><326><583><728><576><663><969><896><627><143><38><515><663><24><382><251><676><412><260><41><740><677><253><382><268><876><233><878><609><389><771><865><641><124><878><609><423><384><879><487><219><522><589><337><126><119><663><748><12><671><877><377><385><902><819><619><842><419><997><829><111><666><42><277><63><665><644><389><771><685><437><641><124><258><436><139><340><11><59><518><56><948><86><258><436><139><340><347><376><940><118><944><878><173><641><124><362><734><179><961><931><878><609><423><384><879><219><522><866><337><243><935><101><741><822><89><194><630><86><555><105><79><868><220><156><824><998><870><390><422><330><776><663><969><523><105><79><799><220><357><390><479><422><330><776><485><165><86><501><119><716><205><521><787><935><101><741><89><194><664><835><67><940><118><613><417><755><902><415><772><497><eosp><eoa>."
},
]
```
## Talk with SpeechGPT
**Due to limited training data and resources, the performance of the open-source SpeechGPT is currently not optimal. Problems such as task recognition errors and inaccuracies in speech recognition may occur. As this project is primarily an exploration in research, we have not increased the amount of pretraining and sft data or training steps to enhance performance. Our hope is that SpeechGPT can serve as a foundational model to encourage research and exploration in the field of speech language models.**
### Installation
```bash
git clone https://github.com/0nutation/SpeechGPT
cd SpeechGPT
conda create --name SpeechGPT python=3.8
conda activate SpeechGPT
pip install -r requirements.txt
```
### Download
To talk with SpeechGPT, you should download [SpeechGPT-7B-cm](https://huggingface.co/fnlp/SpeechGPT-7B-cm) and [SpeechGPT-7B-com](https://huggingface.co/fnlp/SpeechGPT-7B-com) locally.
You should download the mHuBERT model to ```utils/speech2unit/```. Please see [Speech2unit](https://github.com/0nutation/SpeechGPT/utils/speech2unit/README_DATA.md) for details.
```bash
s2u_dir="utils/speech2unit"
cd ${s2u_dir}
wget https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_en_es_fr_it3.pt
wget https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_en_es_fr_it3_L11_km1000.bin
```
You should download the unit-vocoder to ```utils/vocoder/```. Please see [vocoder](https://github.com/0nutation/SpeechGPT/utils/vocoder/README_DATA.md) for details.
```bash
vocoder_dir="utils/vocoder/"
cd ${vocoder_dir}
wget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/config.json -O config.json
wget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/g_00500000 -O vocoder.pt
```
### CLI Inference
```bash
python3 speechgpt/src/infer/cli_infer.py \
--model-name-or-path "path/to/SpeechGPT-7B-cm" \
--lora-weights "path/to/SpeechGPT-7B-com" \
--s2u-dir "${s2u_dir}" \
--vocoder-dir "${vocoder_dir}" \
--output-dir "output"
```
**Notes**
For speech input, you can provide the path to the audio file. For ASR or TTS tasks, you must prefix the speech or text with ```this is input: ```, otherwise, it may be recognized incorrectly.
The speech response will be saved to a ```.wav``` file, and detailed responses will be saved in a JSON file. The paths to these files will be indicated in the response.
Here are some examples of talking with SpeechGPT:
**Textual dialogue example**
```
Please talk with SpeechGPT:
Who is Lebron James?
Response:
Lebron James is an American professional basketball player for the Los Angeles Lakers of the National Basketball Association (NBA). He is considered one of the greatest basketball players of all time and is known for his athleticism, scoring ability, and leadership skills. He is a four-time NBA MVP, a 14-time NBA All-Star, a 13-time All-NBA selection, and a two-time Olympic gold medalist.
Response json is saved in output/responses.json
```
**Spoken dialogue example**
```
Please talk with SpeechGPT:
prompts/0.wav
Transcript: What are the main causes of climate change?
Text response: The main causes of climate change are human activities such as burning fossil fuels, deforestation, and agricultural practices. These activities release greenhouse gases, like carbon dioxide and Methane, into the atmosphere which trap heat and cause the Earth's temperature to rise.
Speech response is saved in output/wav/answer_0.wav
Response json is saved in output/responses.json
```
**ASR example**
```
Please talk with SpeechGPT:
Recognize this speech, this is input: prompts/1.wav
Response:
today is a sunny day.
Response json is saved in output/responses.json
```
**TTS example**
```
Please talk with SpeechGPT:
Read this sentence aloud, this is input: Today is a sunny day.
Response:
<sosp> <661> <987> <520> <982> <681> <982> <681> <982> <681> <982> <681> <982> <189> <63> <662> <79> <868> <220> <196> <166> <549> <822> <89> <194> <633> <14> <855> <183> <609> <389> <771> <865> <641> <124> <362> <734> <742> <98> <519> <26> <204> <280> <668> <167> <104> <650> <179> <961> <428> <950> <82> <165> <196> <166> <549> <822> <89> <194> <458> <726> <603> <819> <651> <133> <651> <133> <186> <133> <186> <133> <186> <511> <186> <511> <eosp>
Speech response is saved in output/wav/answer_1.wav
Response json is saved in output/responses.json
```
### Gradio Web UI
```bash
python3 speechgpt/src/infer/web_infer.py \
--model-name-or-path "path/to/SpeechGPT-7B-cm" \
--lora-weights "path/to/SpeechGPT-7B-com" \
--s2u-dir "${s2u_dir}" \
--vocoder-dir "${vocoder_dir}" \
--output-dir "output/"
```
## Train SpeechGPT
### Stage1: Modality-adaptation Pre-training
First, utilize mHuBERT for discretizing the LibriLight dataset to obtain discrete unit sequences for stage1 training. You can refer to the data processing methods in [Speech2unit](https://github.com/0nutation/SpeechGPT/utils/speech2unit/README_DATA.md).
Second, divide the discrete units into a training set and a development set, and save them in the following format in the files ```data/stage1/train.txt``` and ```data/stage1/dev.txt```:
```
<sosp><189><247><922><991><821><258><485><974><284><466><969><523><196><202><881><331><822><853><432><32><742><98><519><26><204><280><576><384><879><901><555><944><366><641><124><362><734><156><824><462><761><907><430><81><597><716><205><521><470><821><677><355><483><641><124><243><290><978><82><620><915><470><821><576><384><466><398><212><455><931><579><969><778><45><914><445><469><576><803><6><803><791><377><506><835><67><940><613><417><755><237><224><452><121><736><eosp>
<sosp><300><189><63><6><665><991><881><331><6><384><879><945><29><244><583><874><655><837><81><627><545><124><337><850><412><213><260><41><740><797><211><488><961><428><6><196><555><944><873><32><683><700><955><812><328><915><166><250><56><903><86><233><479><330><776><167><104><764><259><921><366><663><432><431><531><976><314><822><89><664><377><611><479><417><eosp>
<sosp><189><735><991><39><565><734><32><742><98><519><26><204><280><668><576><803><791><660><555><233><787><101><741><466><969><219><107><459><491><556><384><733><219><501><445><137><910><523><793><50><981><230><534><321><948><86><116><281><62><462><104><70><918><743><15><212><455><143><836><173><944><958><390><422><66><776><258><436><139><663><432><742><98><519><589><243><126><260><41><444><6><655><764><969><219><727><85><297><700><362><493><6><493><361><393><946><6><470><821><246><655><837><81><969><916><584><819><544><452><158><452><736><eosp>
```
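Each line above is one utterance: a ```<sosp>```/```<eosp>```-wrapped sequence with one discrete unit per ```<...>``` token. A small helper to recover the integer unit IDs might look like this (a sketch, not part of the official codebase):

```python
import re

def parse_units(line):
    """Extract integer unit IDs from a '<sosp><189>...<eosp>' sequence."""
    tokens = re.findall(r"<([^<>]+)>", line)
    if len(tokens) < 2 or tokens[0] != "sosp" or tokens[-1] != "eosp":
        raise ValueError("expected a <sosp>...<eosp> wrapped sequence")
    return [int(t) for t in tokens[1:-1]]
```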
Third, you should download LLaMA 7B (HuggingFace) to ```llama/hf/7B```.
Now you can start stage1 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/ma_pretrain.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
### Stage 2: Cross-modal Instruction Finetuning
You should download [SpeechInstruct Cross-modal Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl) to ```data/stage2/```.
If you want to skip stage1 training, you can download ```SpeechGPT-7B-ma``` to ```output/stage1/```.
Now you can start stage2 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/cm_sft.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
### Stage 3: Chain-of-modality Instruction Finetuning
You should download [SpeechInstruct Chain-of-modality Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/chain_of_modality_instruction.jsonl) to ```data/stage3/```.
If you want to skip stage1 and stage2, you can download ```SpeechGPT-7B-cm``` to ```output/stage2/```.
Now you can start stage3 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/com_sft.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
## Finetune SpeechGPT
```SpeechGPT-7B-cm``` is a foundational model with strong alignment between speech and text. We encourage fine-tuning SpeechGPT based on this model.
Step 1: Prepare your data following the format in [SpeechInstruct Cross-modal Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl).
Step 2: Download [SpeechGPT-7B-cm](https://huggingface.co/fnlp/SpeechGPT-7B-cm) locally.
Step 3: Modify the ```METAROOT```, ```DATAROOT```, and ```OUTROOT``` parameters in the ```scripts/cm_sft.sh``` script to your own paths and then run it. For LoRA fine-tuning, update the ```METAROOT```, ```DATAROOT```, and ```OUTROOT``` parameters in the ```scripts/com_sft.sh``` script and run it.
## Acknowledgements
- [MOSS](https://github.com/OpenLMLab/MOSS): We use moss-sft-002-data.
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca): The codebase we built upon.
## Citation
If you find SpeechGPT useful for your research and applications, please cite using the following BibTeX:
```
@misc{zhang2023speechgpt,
title={SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities},
author={Dong Zhang and Shimin Li and Xin Zhang and Jun Zhan and Pengyu Wang and Yaqian Zhou and Xipeng Qiu},
year={2023},
eprint={2305.11000},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 17,460 | [embedding vector truncated] |
nickmuchi/segformer-b4-finetuned-segments-sidewalk | 2022-03-21T07:32:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"dataset:segments/sidewalk-semantic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"has_space"
] | image-segmentation | nickmuchi | null | null | nickmuchi/segformer-b4-finetuned-segments-sidewalk | 5 | 1,759 | transformers | 2022-03-20T06:54:20 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
widget:
- src: https://drive.google.com/uc?id=1-ae6Vtvs-fO1j0D2kxEDX4rKxRipda2j
  example_title: Sidewalk with traffic
- src: https://drive.google.com/uc?id=1-dwxxF6LzbEvATr_mwvrAjot-DdBLAM4
  example_title: Sidewalk with buildings
datasets:
- segments/sidewalk-semantic
model-index:
- name: segformer-b4-finetuned-segments-sidewalk
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6463
- Mean Accuracy: 0.5168
- Mean Iou: 0.4317
- Overall Accuracy: 0.8895
- Per Category Accuracy: [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0]
- Per Category Iou: [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0]
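The headline Mean Accuracy and Mean Iou are consistent with averaging the per-category lists above while skipping the `nan` entries (categories absent from the evaluation labels). A minimal sketch of that reduction, demonstrated on a tiny synthetic list rather than the card's 34-category values:

```python
import math

def mean_ignoring_nan(values):
    """Average a per-category metric list, skipping nan placeholders."""
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid)

# Tiny synthetic example (not the card's per-category values):
print(mean_ignoring_nan([0.5, float("nan"), 0.25]))  # 0.375
```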
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
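The `linear` scheduler above decays the learning rate from 6e-05 toward zero over the full run; no warmup steps are listed, so none is assumed in this sketch:

```python
def linear_lr(step, total_steps, base_lr=6e-05):
    """Learning rate after `step` optimizer steps under linear decay to zero."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

# Per the results table, one epoch is 400 steps, so 25 epochs = 10,000 steps.
print(linear_lr(0, 10_000))      # 6e-05
print(linear_lr(5_000, 10_000))  # 3e-05
```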
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy | Per Category Accuracy | Per Category Iou |
|:-------------:|:-----:|:-----:|:---------------:|:-------------:|:--------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.0086 | 0.25 | 100 | 0.9195 | 0.2302 | 0.1742 | 0.7405 | [nan, 0.754391784765388, 0.8738098328493714, 0.0, 0.6095047025690915, 0.04406067496837279, nan, 0.11344860810198232, 0.03344878303363856, 0.0, 0.9451322667227594, 0.0, 0.0, 0.0, 0.0, 8.118464635968046e-06, 0.0, 0.0, 0.8406900175689528, 0.0, 0.33313290995723815, 0.007980320315659196, 0.0, nan, 0.0, 0.01001465431517245, 0.0, 0.0, 0.9094842682836028, 0.9104621468677264, 0.9500069670140131, 0.0, 0.0, 0.030522857924993155, 0.0] | [nan, 0.5181348731869903, 0.7666613623083653, 0.0, 0.3145052392920833, 0.040279298027504136, nan, 0.09896279300890763, 0.0332534621335044, 0.0, 0.707185048053476, 0.0, 0.0, 0.0, 0.0, 8.11839872703508e-06, 0.0, 0.0, 0.6129636976206597, 0.0, 0.21304181051016494, 0.007979819175153202, 0.0, nan, 0.0, 0.009972716399085856, 0.0, 0.0, 0.8032595523715207, 0.5644424403160349, 0.8548000615746258, 0.0, 0.0, 0.02810796628175876, 0.0] |
| 0.6465 | 0.5 | 200 | 0.7250 | 0.2963 | 0.2416 | 0.7963 | [nan, 0.8965158332325365, 0.9203420775747997, 0.0005677570093457944, 0.42947876549598557, 0.20108992228390948, nan, 0.6149826174335852, 0.6106893770460692, 0.0, 0.9320756176369465, 0.0, 0.0, 0.0, 0.0, 0.23413652010131844, 0.0, 0.0, 0.9437607244807804, 0.0, 0.2033741348512844, 0.2597617238717267, 0.0, nan, 0.0, 0.21746480347516617, 0.0, 0.0, 0.8793454644762622, 0.8380851985041863, 0.9445753860505853, 0.0, 0.0, 0.35629926758549024, 0.0] | [nan, 0.6645359168510458, 0.8064416600263559, 0.000566105647428005, 0.4116417722563792, 0.17504073239500048, nan, 0.34611894249410324, 0.4768988514264542, 0.0, 0.7872815412923856, 0.0, 0.0, 0.0, 0.0, 0.22760454893418883, 0.0, 0.0, 0.6497218142931416, 0.0, 0.16433182458127107, 0.24025960226620707, 0.0, nan, 0.0, 0.1865917623179034, 0.0, 0.0, 0.8237045305017561, 0.6485287252686867, 0.8916263487480074, 0.0, 0.0, 0.23161660227979464, 0.0] |
| 0.6777 | 1.0 | 400 | 0.6645 | 0.3343 | 0.2755 | 0.8205 | [nan, 0.8955600256602996, 0.9528284776336102, 0.20619042056074766, 0.4578573681184769, 0.34171859852352976, nan, 0.5150824142204389, 0.8000759121317076, 0.0, 0.9308408861203066, 0.0, 0.0, 0.0, 0.0, 0.8202247191011236, 0.0, 0.0, 0.931415684238172, 0.0, 0.22729327499111263, 0.2807173404242283, 0.0, nan, 0.0, 0.3332993143873973, 0.0, 0.0, 0.904612735522824, 0.9085503237620377, 0.9531456202767545, 0.0, 0.0, 0.2395403274915222, 0.0] | [nan, 0.7091852218081763, 0.8215012473174504, 0.20316384883142716, 0.449169741519482, 0.2820828827399737, nan, 0.4034439348068946, 0.5801054036574794, 0.0, 0.8406284073872154, 0.0, 0.0, 0.0, 0.0, 0.5491287380561565, 0.0, 0.0, 0.6833033543785748, 0.0, 0.196701947180513, 0.26816266986235426, 0.0, nan, 0.0, 0.2624543573765898, 0.0, 0.0, 0.8319417451247856, 0.6328739755697549, 0.9148380247362377, 0.0, 0.0, 0.18610354253000033, 0.0] |
| 0.4931 | 1.25 | 500 | 0.6513 | 0.3693 | 0.2930 | 0.8232 | [nan, 0.8195930838546497, 0.9565826472101743, 0.3660338785046729, 0.502483997738174, 0.5101274819814215, nan, 0.6120499018406388, 0.8168524932390757, 0.0, 0.9680832750475287, 0.0, 0.0, 0.0, 0.0, 0.7678687406637656, 0.0, 0.0, 0.9132467503439181, 0.07463699730127982, 0.3080053777834345, 0.3700341269744017, 0.0, nan, 0.0, 0.3144554351808238, 0.0, 0.0, 0.8719933435243034, 0.9280312013943278, 0.9461371807749148, 0.0, 0.3623930581804142, 0.40862556355693114, 0.0] | [nan, 0.7255301419742964, 0.8322765227346863, 0.3328323011716717, 0.4866977152337555, 0.31646114214929966, nan, 0.4116248877039441, 0.584768070212383, 0.0, 0.7940437031847611, 0.0, 0.0, 0.0, 0.0, 0.5384221282312557, 0.0, 0.0, 0.7148576049798162, 0.06922710729587371, 0.23689839512021127, 0.330131038978254, 0.0, nan, 0.0, 0.25964434649208096, 0.0, 0.0, 0.8276496500163791, 0.5924934568973941, 0.9145898275185997, 0.0, 0.10460157785142388, 0.3046522912622977, 0.0] |
| 0.1718 | 2.0 | 800 | 0.5338 | 0.3766 | 0.3117 | 0.8521 | [nan, 0.9149980619048741, 0.9439616375983239, 0.49970093457943926, 0.7343188057936092, 0.4654595153245685, nan, 0.4401632944315461, 0.7951368790624852, 0.0, 0.9516775700030986, 0.0, 0.0, 0.0, 0.0, 0.7842599207637851, 0.0, 0.0, 0.9120325078402151, 0.0, 0.5436783980174178, 0.289193941696178, 0.0, nan, 0.0, 0.4040691893023499, 0.04438191043850125, 0.0, 0.9289921718405059, 0.9105179916825697, 0.9579859465374478, 0.0, 0.00014225040134934668, 0.5310102962619485, 0.0] | [nan, 0.7682867926029272, 0.863978713337328, 0.3619354489331745, 0.619807980106986, 0.4001297195410576, nan, 0.37693255173950874, 0.6055069405805374, 0.0, 0.8443884543167844, 0.0, 0.0, 0.0, 0.0, 0.5757144134211389, 0.0, 0.0, 0.7512958252799772, 0.0, 0.35684944134400076, 0.2822025918120264, 0.0, nan, 0.0, 0.3086991377431782, 0.04423000485801351, 0.0, 0.8578322873273115, 0.6920597473565505, 0.9258143343645202, 0.0, 0.00013209541062801931, 0.3399454223242722, 0.0] |
| 1.7925 | 2.25 | 900 | 0.5745 | 0.3877 | 0.3157 | 0.8463 | [nan, 0.9373443718928436, 0.8936817705653165, 0.5237184579439252, 0.785620810686892, 0.5932309765570626, nan, 0.5731998228133042, 0.7751909664563268, 0.0, 0.9330254836699918, 0.0, 0.0, 0.0, 0.0, 0.8874780801454829, 0.0, 0.0, 0.9253989025665076, 0.0, 0.49743326413606553, 0.3720606075459213, 0.0, nan, 0.0, 0.362670748940179, 0.2263189382021227, 0.0, 0.9355852115710428, 0.9121195658169062, 0.9653801272784691, 0.0, 0.09587677050945966, 0.21074794549629322, 0.0] | [nan, 0.7666762008063966, 0.8459820722288737, 0.35589376130270695, 0.6602856629180212, 0.391087786259542, nan, 0.4283483218139711, 0.618615992154992, 0.0, 0.8563419873974479, 0.0, 0.0, 0.0, 0.0, 0.4695442264821982, 0.0, 0.0, 0.7387838557909564, 0.0, 0.3568544684209477, 0.3548962568907604, 0.0, nan, 0.0, 0.28509334019028026, 0.21794051124482566, 0.0, 0.8588025306782998, 0.6960344960020876, 0.927551192360457, 0.0, 0.09183812508516147, 0.18221393560509547, 0.0] |
| 0.4287 | 2.5 | 1000 | 0.5140 | 0.4156 | 0.3337 | 0.8596 | [nan, 0.9114284539509796, 0.9599424299786812, 0.3729602803738318, 0.6955020648206622, 0.6337076451002155, nan, 0.648796319756489, 0.9076149357119134, 0.0, 0.9333320442069727, 0.0, 0.0, 0.0, 0.0, 0.837638825745275, 0.0, 0.0, 0.8487128760410935, 0.14962168247818672, 0.7450834097721757, 0.4416333770387344, 0.0, nan, 0.005162707675408485, 0.4304364892447794, 0.29855310097272386, 0.0, 0.9243997842101966, 0.9100753698167738, 0.9780073694330464, 0.0, 0.3377837387469772, 0.3283183517042185, 0.0] | [nan, 0.8056652041667661, 0.868478873207236, 0.36872340720413566, 0.648560287656455, 0.4227995307199668, nan, 0.5211383920382058, 0.5417303836612635, 0.0, 0.8614512323591124, 0.0, 0.0, 0.0, 0.0, 0.4902451772308277, 0.0, 0.0, 0.7414797203702529, 0.1034994187677877, 0.37103542329614997, 0.38941938864817555, 0.0, nan, 0.004775330844065127, 0.3339817219387496, 0.27392303157209946, 0.0, 0.8695462814099766, 0.7123344518279238, 0.9249476057387171, 0.0, 0.15441354067963511, 0.2686663032210652, 0.0] |
| 0.2477 | 2.75 | 1100 | 0.5852 | 0.3976 | 0.3245 | 0.8501 | [nan, 0.9240898770490549, 0.9130342916084687, 0.5360268691588785, 0.6767027987344469, 0.5151102302165186, nan, 0.6523417772790812, 0.8782321962328604, 0.0, 0.9459085723287141, 0.01212233473285585, 0.0, 0.0, 0.0, 0.8298613366240176, 0.0, 0.0, 0.8996769125664682, 0.0046441166244474245, 0.58637589184745, 0.4359797566385237, 0.0, nan, 0.0, 0.4451038886272047, 0.26994748620682013, 0.0, 0.9522730369995648, 0.9058973503358962, 0.9744264856283144, 0.024141075054913176, 0.024040317828039587, 0.315675681715336, 0.0] | [nan, 0.7635041179698989, 0.8504428879888529, 0.32134395517814934, 0.5814428391874907, 0.4398125968608028, nan, 0.5183108660060791, 0.5876442483214019, 0.0, 0.8637126471579993, 0.010904378413403684, 0.0, 0.0, 0.0, 0.5582717546245474, 0.0, 0.0, 0.7543635882159604, 0.004548919124920941, 0.3707771520336274, 0.37139606254827867, 0.0, nan, 0.0, 0.32640450731902027, 0.25674365674787153, 0.0, 0.8589069009951039, 0.7216899081490464, 0.9303705560523882, 0.023933704665274814, 0.02273469779955799, 0.24717820737291407, 0.0] |
| 0.2092 | 3.5 | 1400 | 0.5305 | 0.4215 | 0.3450 | 0.8615 | [nan, 0.8854690236777607, 0.9752597083363964, 0.4837301401869159, 0.7543174059151941, 0.32120495047431574, nan, 0.6121067808383275, 0.8640129050623903, 0.006110443680351299, 0.9472197081638014, 0.22567300568041493, 0.0, 0.0, 0.0, 0.849337533285705, 0.0, 0.0, 0.9323370763681338, 0.09924833192602527, 0.4992824257958052, 0.5897763059541461, 0.0, nan, 0.005025401620211451, 0.5194038833935207, 0.26516141898030177, 0.0, 0.9098213390526053, 0.9140251839431679, 0.9696367307434691, 0.0, 0.46129773009002417, 0.39953043905763785, 0.0] | [nan, 0.8279523588823188, 0.8503094621684615, 0.4166789099025304, 0.6531647345358885, 0.2970569371138754, nan, 0.4891076127233826, 0.6267720763107083, 0.0060749588138385505, 0.8628731375345856, 0.1638621555382868, 0.0, 0.0, 0.0, 0.5868382377688277, 0.0, 0.0, 0.766351782387915, 0.08906272053962098, 0.3548571571167739, 0.42844759670807536, 0.0, nan, 0.004661470273574813, 0.3559905085937402, 0.24649831094998764, 0.0, 0.8706735405566627, 0.7172875061476175, 0.937101627261161, 0.0, 0.18277266944717308, 0.30403604315996224, 0.0] |
| 0.1763 | 3.75 | 1500 | 0.5284 | 0.4184 | 0.3549 | 0.8725 | [nan, 0.9155522786024052, 0.9647682266779387, 0.44949532710280377, 0.7917047766525447, 0.5148885009996292, nan, 0.6544609508444807, 0.8639037813730607, 0.006400430838062886, 0.9591118988406824, 0.21581460442907713, 0.0, 0.0, 0.0, 0.8629440800155874, 0.0, 0.0, 0.9189088001847752, 0.0, 0.553022223587637, 0.46456492702831864, 0.0, nan, 0.09048469037484554, 0.4453708065107029, 0.3956482240588509, 0.0, 0.9463804808607508, 0.8827003794689641, 0.9646183286805874, 0.0, 0.10191225182385336, 0.42574316887992536, 0.0] | [nan, 0.8411073731152799, 0.8690976727110442, 0.4122661523625844, 0.6761261173524866, 0.4325420396336731, nan, 0.5235010874548043, 0.6267662599177323, 0.006377182482354398, 0.8589461626478264, 0.21441570391575504, 0.0, 0.0, 0.0, 0.5785872529434498, 0.0, 0.0, 0.7644870697544361, 0.0, 0.3931242258826368, 0.4137160566746283, 0.0, nan, 0.07477420233286435, 0.3486446014515762, 0.35308773803167826, 0.0, 0.8775350307334798, 0.7615382190401359, 0.9362335277343948, 0.0, 0.08161239401780339, 0.3123361865981938, 0.0] |
| 0.227 | 4.0 | 1600 | 0.5923 | 0.4426 | 0.3538 | 0.8544 | [nan, 0.9577374173182539, 0.9166854278467985, 0.1959217289719626, 0.7810987315371373, 0.5809225413617377, nan, 0.5835888579214346, 0.8662428239312995, 0.024607481668668958, 0.960621119945819, 0.44992590763151397, 0.0, 0.0, 0.0, 0.890757939858414, 0.0, 0.0, 0.8824976680624833, 0.23107998476795974, 0.6677916708726317, 0.5485129952087443, 0.0, nan, 0.13447755045997528, 0.4840215627780395, 0.4094524827723738, 0.0, 0.9258667409261705, 0.8784809934585728, 0.9680485743444954, 0.0, 0.5403279887825397, 0.2843078375615234, 0.0] | [nan, 0.732742632898181, 0.85248637631468, 0.1937195271972472, 0.6916132972252533, 0.4613544304478555, nan, 0.5019837033874182, 0.6339381818434339, 0.024391746227286727, 0.8507334888775837, 0.3399262956570416, 0.0, 0.0, 0.0, 0.5118086361876507, 0.0, 0.0, 0.7596215991272331, 0.14059847786558677, 0.3924964359231432, 0.4511581321221818, 0.0, nan, 0.11381225741975969, 0.3543174804464886, 0.36413975210357263, 0.0, 0.8783724167054704, 0.7445500851078998, 0.9377100490542223, 0.0, 0.1494074611014649, 0.24185599444907813, 0.0] | |
| 0.3219 | 4.75 | 1900 | 0.5306 | 0.4360 | 0.3684 | 0.8771 | [nan, 0.9383015101174155, 0.9581139041020363, 0.4607803738317757, 0.811509517207101, 0.6291153866526402, nan, 0.6505845609717001, 0.814323670351568, 0.021541903144289325, 0.9406027168809682, 0.41314727916357946, 0.0, 0.0, 0.0, 0.8354955510813795, 0.0, 0.0, 0.9418887586641801, 0.05121773539297008, 0.6343575406735104, 0.518250578994449, 0.0, nan, 0.027131676506933957, 0.4585466059559324, 0.39812988854667525, 0.0, 0.9202410996786, 0.895342680330491, 0.9736189575948254, 0.00016059513448547392, 0.336889593367067, 0.32415208076113006, 0.0] | [nan, 0.8286943759948178, 0.8911330146359255, 0.44085585238189445, 0.7563455702043241, 0.44281982228819555, nan, 0.5389345827619121, 0.6390151642075557, 0.02125355077350663, 0.8721853143259732, 0.34406869718732325, 0.0, 0.0, 0.0, 0.6106328062420269, 0.0, 0.0, 0.7642481786905918, 0.04822404265103627, 0.40217085841005906, 0.4365575304022451, 0.0, nan, 0.02300777793302594, 0.35943746679548483, 0.36207556675062974, 0.0, 0.8758467465629671, 0.7286601531442717, 0.9422882468777368, 0.00016028416831905857, 0.18664925297515172, 0.274341743647937, 0.0] | |
| 0.3758 | 5.25 | 2100 | 0.5413 | 0.4400 | 0.3618 | 0.8749 | [nan, 0.9446099997724584, 0.9535776804748952, 0.5333586448598131, 0.7118822151738956, 0.5725146926401914, nan, 0.637704053404208, 0.8958248327560848, 0.02011268072413936, 0.9449676672959805, 0.4536305260558163, 0.0, 0.0, 0.0, 0.8527716438267194, 0.0, 0.0, 0.9263943868758329, 0.13527541846719315, 0.6231382204452325, 0.5343291629394538, 0.0, nan, 0.07845667993958534, 0.48360548490082167, 0.39496133478097095, 0.0, 0.9342636737434504, 0.9081380373512183, 0.9754223113378334, 0.0, 0.0686053364221992, 0.4949887428280921, 0.0] | [nan, 0.8421459412186475, 0.884886678991681, 0.3243137842681656, 0.6975183850797184, 0.4470212561315764, nan, 0.5491953906967838, 0.5880944000946866, 0.01971493543409405, 0.8720965863289499, 0.2829941580535405, 0.0, 0.0, 0.0, 0.5648458841496203, 0.0, 0.0, 0.7876641278543601, 0.11773309221380866, 0.4507472099997672, 0.4306682617343027, 0.0, nan, 0.053795025325274436, 0.35687388479928317, 0.3506028598965402, 0.0, 0.8763044901374653, 0.7342806685419377, 0.9417441335611155, 0.0, 0.05263732322996086, 0.3527909231538019, 0.0] |
| 0.1962 | 6.0 | 2400 | 0.5252 | 0.4591 | 0.3755 | 0.8678 | [nan, 0.8788767058796604, 0.9301585587737999, 0.5368457943925233, 0.8328600223823257, 0.6594750437607246, nan, 0.7274099889861577, 0.8314845566257058, 0.20671941671154564, 0.9452567774639331, 0.5536552235119783, 0.0, 0.0, 0.0, 0.8969685653049295, 0.0, 0.0, 0.9273548947094251, 0.04859351976026093, 0.6165535079211122, 0.5024186037962429, 0.0, nan, 0.07840175751750653, 0.49256293504998166, 0.4105160532671556, 0.0, 0.928572042963352, 0.9119196275909236, 0.976082967184019, 0.09759262712918065, 0.23430673250828102, 0.4679128700481014, 0.0] | [nan, 0.8020983983063393, 0.8683865888896747, 0.4544978013913642, 0.6680523786513721, 0.4517445785165809, nan, 0.5857034011566181, 0.6746845091894639, 0.18334129404416358, 0.8638403093611754, 0.3497406295097313, 0.0, 0.0, 0.0, 0.5136113874503752, 0.0, 0.0, 0.7818072530904586, 0.04626054062573883, 0.40338464571865573, 0.41853055526845995, 0.0, nan, 0.05885020509966401, 0.3764221220090192, 0.37385233165849424, 0.0, 0.8760216287329546, 0.7184759765101966, 0.9447723343539753, 0.07888984275215143, 0.17396158662623154, 0.3506487661563549, 0.0] |
| 0.2721 | 6.25 | 2500 | 0.5120 | 0.4726 | 0.3905 | 0.8834 | [nan, 0.9352277032235452, 0.9553332100455781, 0.5201098130841122, 0.8315588432600179, 0.6507746356557826, nan, 0.7171028251625792, 0.8676946434502064, 0.12399022329011143, 0.9414992885437384, 0.5631225817074175, 0.0, 0.0, 0.0, 0.8815434824965902, 0.0, 0.0, 0.9265160801760165, 0.12371893574396928, 0.6983379489227609, 0.496123187961817, 0.0, nan, 0.1353837704242757, 0.5335426806929398, 0.5267111298220735, 0.0, 0.9267000099723489, 0.9157963608485102, 0.9708294620227798, 0.0039371710389987154, 0.44802779979272084, 0.43061615557802646, 0.0] | [nan, 0.847290915944923, 0.8918843187400161, 0.4215259288995603, 0.7694117638497967, 0.498788432969163, nan, 0.5567520477680967, 0.6726198795136411, 0.11618337797445752, 0.8753637372987935, 0.42321077786886513, 0.0, 0.0, 0.0, 0.581673157378788, 0.0, 0.0, 0.7933263418076343, 0.10532064834390416, 0.437053368284101, 0.4288208971032145, 0.0, nan, 0.09955372468245795, 0.3973712316699539, 0.442531089433316, 0.0, 0.880946087123613, 0.7345359613309864, 0.9452321649786941, 0.003849095209395844, 0.23329171252010497, 0.3386007935784502, 0.0] |
| 0.2409 | 6.5 | 2600 | 0.5224 | 0.4636 | 0.3840 | 0.8786 | [nan, 0.8731382676849351, 0.9738163801183563, 0.5331343457943926, 0.8196854363098576, 0.6540081867354192, nan, 0.6300072908533401, 0.8875978554822792, 0.13449190107295247, 0.955765201040042, 0.6083600889108421, 0.0, 0.03281733746130031, 0.0, 0.8703400012989544, 0.0, 0.0, 0.9262836625295774, 0.08389211741916257, 0.6663345782989761, 0.5452994228436286, 0.0, nan, 0.13288480021968968, 0.47811535039514313, 0.4147924929649243, 0.0, 0.9382028859601423, 0.8756597961457425, 0.965266610679491, 0.010467176426706453, 0.4342701538336483, 0.3917412023665201, 0.0] | [nan, 0.8209592404927408, 0.8860938595226477, 0.41218836114746504, 0.7196016259460952, 0.4954368536125842, nan, 0.545313357840212, 0.6491223200313668, 0.12371625097650668, 0.8633659080664855, 0.4708871648638746, 0.0, 0.03281733746130031, 0.0, 0.5802203868677137, 0.0, 0.0, 0.7907500494259085, 0.06952381605757291, 0.447113968783744, 0.44327869995554786, 0.0, nan, 0.08728984775236309, 0.38119151688382136, 0.37855655092920265, 0.0, 0.8832564638909316, 0.7526222693644393, 0.9416404778849121, 0.009589327157183334, 0.18190330268981955, 0.32252322488728213, 0.0] | |
| 0.1524 | 10.5 | 4200 | 0.5353 | 0.5128 | 0.4237 | 0.8872 | [nan, 0.9268517790355991, 0.9602839791773874, 0.537267523364486, 0.8456677302072528, 0.6567083558655384, nan, 0.7076703913792123, 0.8633391848934858, 0.3143875056961763, 0.9515964493686976, 0.6206264921379765, 0.0, 0.7490196078431373, 0.08954470929499306, 0.8721747743066831, 0.0, 0.005131830440133009, 0.9147190737070242, 0.11450520703985165, 0.6915674424660561, 0.5259122991900205, 0.0019833510251969382, nan, 0.2044761773994233, 0.5593918459203433, 0.4851432496510159, 0.0, 0.9463960710558084, 0.8834918590669917, 0.9670624325154579, 0.012832069294210286, 0.5599179011969355, 0.44183701402816805, 0.0] | [nan, 0.8497898154944094, 0.8911284588944798, 0.4558941463477496, 0.7715538102169041, 0.5041805687956784, nan, 0.5916295134976238, 0.6664176289411136, 0.25352865518566153, 0.8836310493548173, 0.5013133395398324, 0.0, 0.6053882725832013, 0.05452311472892029, 0.5946321429362145, 0.0, 0.005111887747118043, 0.802846410488875, 0.09434940383618455, 0.47282749487636766, 0.44441582446257716, 0.001977936260307555, nan, 0.14078808047194072, 0.4107132907440319, 0.42875046507529324, 0.0, 0.8865359213150946, 0.7513094837462199, 0.9478585417349973, 0.011508324602586469, 0.19474424489161243, 0.34180230893483227, 0.0] |
| 0.052 | 10.75 | 4300 | 0.5611 | 0.5030 | 0.4222 | 0.8855 | [nan, 0.932148839850802, 0.9568949634271852, 0.5225233644859814, 0.8511642191077112, 0.6031687568751455, nan, 0.7201923889006668, 0.8793424111590834, 0.1743029951530718, 0.9511564170902311, 0.5728369144644768, 0.018116900290928325, 0.7155830753353973, 0.08790515827973262, 0.8945492628434111, 0.0, 0.0, 0.9018928482213427, 0.19409261742744086, 0.6978142148450815, 0.5187192887865012, 0.004106374657802112, nan, 0.18591239873678428, 0.5679096666143298, 0.48372515565797347, 0.0, 0.9465148790940053, 0.8887757437702006, 0.9729464658947179, 0.03061668531642422, 0.3269727082444268, 0.4968253657882534, 0.0] | [nan, 0.8544673632153686, 0.8915093314898118, 0.4824501321862451, 0.7281104549174552, 0.4796578889108752, nan, 0.5955885392390377, 0.6806501724220245, 0.15806082007550856, 0.8869557339277052, 0.5018390970394144, 0.017487873372478938, 0.5719234576047509, 0.08299595141700405, 0.5743453150410742, 0.0, 0.0, 0.7988127196821454, 0.14769412965284384, 0.4636640495670947, 0.44194705232908676, 0.004079706927175844, nan, 0.14373978216098007, 0.4138202592132837, 0.4263783910470499, 0.0, 0.8825003483580057, 0.7459231292221788, 0.9497549296351595, 0.022555788364877087, 0.19864442770898405, 0.36609089056617755, 0.0] |
| 0.0897 | 11.0 | 4400 | 0.5797 | 0.4966 | 0.4137 | 0.8864 | [nan, 0.9266090680496935, 0.9675701132103213, 0.5286179906542056, 0.8135055236213754, 0.6141498963415911, nan, 0.7310209435363914, 0.8153911847037054, 0.24547412900285845, 0.9446611067589995, 0.6598542850086441, 0.0, 0.5599071207430341, 0.13658721150208097, 0.8912937585243879, 0.0, 0.004870002356452753, 0.9252981123672058, 0.10847033891289591, 0.6586394910124014, 0.4795176884335903, 0.01181630258673669, nan, 0.18618701084717837, 0.5559088292248914, 0.4992355587068755, 0.0, 0.9406880436912528, 0.9118086274033954, 0.9573602602596679, 0.003960483235940155, 0.3327033672702148, 0.4804871031358067, 0.0] | [nan, 0.8565575968459415, 0.8928102104157912, 0.43275555700074025, 0.7654702047573079, 0.47074416606474334, nan, 0.6054622841435586, 0.6863363711152467, 0.21403286978508218, 0.8828456438079144, 0.4322928605137194, 0.0, 0.4530688935281837, 0.09709521247982786, 0.5749041704195555, 0.0, 0.004865289040020926, 0.7951008940737603, 0.09395592969976839, 0.4548604901862724, 0.41665801557197046, 0.011736958934517204, nan, 0.1216732767438939, 0.41094472698150475, 0.430227229329769, 0.0, 0.8867287999971621, 0.7466484878252573, 0.9415279772911855, 0.0036285882442284325, 0.19204917359734425, 0.36246293958863207, 0.0] |
| 0.0936 | 11.25 | 4500 | 0.5731 | 0.5011 | 0.4193 | 0.8864 | [nan, 0.9324196276009762, 0.9569564158641476, 0.5246004672897197, 0.8364710008894733, 0.6578250088383729, nan, 0.7038215792022807, 0.8665369834416663, 0.21309913418120055, 0.9410960435297098, 0.49318761834197744, 0.028167151547209734, 0.5808565531475748, 0.11010215664018161, 0.8849288822497889, 0.0, 0.0565548660749352, 0.9216694582309478, 0.11269226311693903, 0.6871508134702065, 0.5262584704743466, 0.01969383764456115, nan, 0.2076616778799945, 0.571397916993772, 0.476856262879174, 0.0, 0.9377623285515337, 0.907275545210859, 0.973954665451519, 0.050830950308757096, 0.38818102379646, 0.4678081196891568, 0.0] | [nan, 0.858380886499719, 0.8914561596816896, 0.45129869803574746, 0.786844102694609, 0.48464472942061587, nan, 0.6094618696875397, 0.6854209198991233, 0.18657623184200503, 0.8857526637100221, 0.394797106941035, 0.023946037099494097, 0.49684424239749303, 0.062077792789589706, 0.5615273263032089, 0.0, 0.055464256368118324, 0.7962485307269822, 0.09311408578835408, 0.4733745462314789, 0.44196131097098196, 0.019312422955759485, nan, 0.14722087024238295, 0.4185961804636968, 0.4181839379748557, 0.0, 0.8886792481667263, 0.7473472827679579, 0.9501856968302422, 0.031198480139267574, 0.2030701847638892, 0.3556589318498682, 0.0] |
| 0.033 | 14.25 | 5700 | 0.5935 | 0.5181 | 0.4292 | 0.8880 | [nan, 0.9232290780535377, 0.9550432923803572, 0.5331775700934579, 0.8469649770868216, 0.6796985960845084, nan, 0.7591958688611619, 0.8564643924657209, 0.21028211607771655, 0.9524029393967549, 0.6051700008232486, 0.0, 0.6860681114551084, 0.21654685332324378, 0.8960592972657011, 0.0, 0.03558243657214673, 0.9155229117646998, 0.140697693670425, 0.711005584058588, 0.5227324249145294, 0.037180848092072186, nan, 0.2080186736235068, 0.5726225990474695, 0.5346435930956549, 0.0, 0.9410130186192625, 0.9154633602859255, 0.9760592954761752, 0.01645064030834266, 0.4608913003718832, 0.4701447510293469, 0.0] | [nan, 0.8573293198744064, 0.8916240779976521, 0.48186665258934697, 0.7676170029872194, 0.4823511054134466, nan, 0.6260715377125842, 0.6901341142647419, 0.1894206549118388, 0.8862935130575381, 0.49201833941300493, 0.0, 0.5435813573180703, 0.1092586700604518, 0.5822497006272321, 0.0, 0.035439538946984116, 0.8016860332567224, 0.11209233305853257, 0.4701563285996208, 0.45173968006036097, 0.03573442156415282, nan, 0.1250185671139278, 0.43006031638093856, 0.44816121842496287, 0.0, 0.8878007481353359, 0.7386750898148962, 0.9519721480330992, 0.013876810802543318, 0.25855582662623405, 0.3720678838361397, 0.0] |
| 0.0548 | 14.5 | 5800 | 0.5902 | 0.5151 | 0.4174 | 0.8882 | [nan, 0.9249082282350853, 0.9577153821767257, 0.5438259345794393, 0.8625692959476665, 0.6265525664540941, nan, 0.7491911978889274, 0.8432461925321441, 0.249306102158333, 0.951930364538209, 0.6013830575450728, 0.0, 0.7704850361197111, 0.20002522386177324, 0.8704780151977658, 0.0, 0.0013615060351373288, 0.9208633435979287, 0.11193893938641368, 0.6970564096712325, 0.4979168453686571, 0.03908039555282418, nan, 0.18904297679527668, 0.5623985973726906, 0.5131506060136048, 0.0, 0.9399214361687687, 0.9123994793332818, 0.9756660223299524, 0.04515831571967342, 0.4303481070535878, 0.49404040291178064, 0.0] | [0.0, 0.8607762479438139, 0.8922939816555095, 0.45337232891467816, 0.7416336434657338, 0.4957900790517687, nan, 0.6227225352163122, 0.6905205002583658, 0.2142437565638406, 0.8883435707029895, 0.4944664432937354, 0.0, 0.5822804554671658, 0.1227364185110664, 0.6143083859952676, 0.0, 0.0013572770933389015, 0.7986526753983755, 0.09318127002721979, 0.47663610300281495, 0.44101175423554057, 0.037423427761281866, nan, 0.14246983588236511, 0.42780903014161104, 0.4432599000899573, 0.0, 0.8868797486244817, 0.7354235169834137, 0.9525392249964284, 0.03855126495647117, 0.2526545610728006, 0.37165059315614124, 0.0] |
| 0.1047 | 14.75 | 5900 | 0.5997 | 0.5159 | 0.4159 | 0.8881 | [nan, 0.9210892560336101, 0.9617335675034919, 0.5317464953271028, 0.8683264925417152, 0.6381114337134347, nan, 0.7416693813461018, 0.862755610380984, 0.2719665271966527, 0.9489817238040484, 0.570408331275212, 0.0005289605924358636, 0.6938596491228071, 0.22575356287047546, 0.8948821198934858, 0.0, 0.011022962322938758, 0.9258684979714679, 0.17593834335005545, 0.6548460763101033, 0.4725421838812847, 0.04097994301357618, nan, 0.22218865851984074, 0.5752629926205056, 0.5366821032106535, 0.0, 0.936931478673554, 0.9021336855923136, 0.9725860103434604, 0.020141738157403954, 0.43632262391026033, 0.4934216774582814, 0.0] | [0.0, 0.8607109591035689, 0.8928295853674818, 0.4670190706507743, 0.7523185639791471, 0.4845338501499847, nan, 0.6282224979925543, 0.6928170564904808, 0.23142272983643541, 0.8873278318309525, 0.46953884728763595, 0.0005215803885773895, 0.5542412002308136, 0.10845198424719782, 0.5869154300379641, 0.0, 0.010907018316536697, 0.793456051943224, 0.12649239962384984, 0.4589822701689517, 0.42143872921678477, 0.03893105461493551, nan, 0.13440869146302972, 0.4245448084603441, 0.46174816509389, 0.0, 0.8878226827336242, 0.7447736277446672, 0.951929183073613, 0.018382891806658124, 0.25878028202964926, 0.37484668044597425, 0.0] |
| 0.1363 | 15.0 | 6000 | 0.6052 | 0.5193 | 0.4155 | 0.8887 | [nan, 0.9281772418265013, 0.9663767872895684, 0.5342161214953272, 0.8447924129735698, 0.6015187219527939, nan, 0.7291077408868643, 0.8812164919106135, 0.23211400637971746, 0.9479408328730995, 0.633386844488351, 0.0030415234065062154, 0.789422084623323, 0.21314163198385672, 0.8954179385594596, 0.0, 0.0066242505171104655, 0.9164480291997693, 0.1360949684597427, 0.6964961019847766, 0.4960711090960334, 0.03860550868763618, nan, 0.19802279280516272, 0.5609541005914063, 0.5661075535662848, 0.0, 0.9376398917610389, 0.9059173441584945, 0.9782134208899593, 0.041454266650089104, 0.43892377410636263, 0.49969692229478707, 0.0] | [0.0, 0.8633930449091305, 0.8952460293484353, 0.42706756384454103, 0.7593774610091322, 0.47377891058119026, nan, 0.6217821374684249, 0.6898326802726141, 0.20124995510218743, 0.8868864734587292, 0.4952526552944963, 0.0028388052332757345, 0.6066698390038862, 0.10356026717323365, 0.5863739068024136, 0.0, 0.00656256484747873, 0.7990222508044155, 0.11130896362146828, 0.4768559231889487, 0.4358850122678166, 0.03689958080794596, nan, 0.14020726799012267, 0.42208907144066693, 0.46374312526092243, 0.0, 0.889531203939725, 0.7432560391610733, 0.952160090573041, 0.03558025789239662, 0.21245893254116582, 0.3712419453581397, 0.0] |
| 0.0804 | 15.25 | 6100 | 0.6205 | 0.5110 | 0.4268 | 0.8877 | [nan, 0.9338093608996594, 0.9656453309931633, 0.5360116822429907, 0.8032054069910557, 0.6059132718486427, nan, 0.7301936126609202, 0.8766143189258433, 0.22587928248891834, 0.9574923159422327, 0.619350456902939, 0.0011901613329806928, 0.7703818369453045, 0.07655442048177576, 0.8504335260115607, 0.0, 0.020239310868483754, 0.9198111518664089, 0.12485306048113379, 0.7319227623900414, 0.495000428884777, 0.03547684228169171, nan, 0.1875600713991487, 0.5538912440466844, 0.5455451906671689, 0.0, 0.9362906678973961, 0.9101525873385327, 0.9729007364591106, 0.02293143105806291, 0.4597532971610884, 0.48345782331547454, 0.0] | [nan, 0.856464729269542, 0.8942823604125036, 0.4347924144963024, 0.7282825257603309, 0.4836585626064097, nan, 0.6163747573889081, 0.6892970262677814, 0.20072891932188414, 0.888225522138808, 0.5066929332727181, 0.0011893749174045195, 0.6024777046931117, 0.05147557666214383, 0.6220782459974346, 0.0, 0.020031615227137266, 0.7981944383082095, 0.09975989363883506, 0.476298280003313, 0.4345003764655265, 0.03419217618393775, nan, 0.1330243066375818, 0.42041703246719714, 0.45861972618049734, 0.0, 0.8892991369897043, 0.7440154875361404, 0.9524152608652374, 0.021443727473549588, 0.22949422815524131, 0.36944182958821886, 0.0] |
| 0.0627 | 15.5 | 6200 | 0.6244 | 0.5088 | 0.4226 | 0.8864 | [nan, 0.9363099227676078, 0.9557843398515034, 0.5258376168224299, 0.8250218829308421, 0.6537759869721766, nan, 0.7370216777925434, 0.8573990605873701, 0.24421061352997225, 0.944441326435564, 0.6453651107269285, 0.0, 0.574406604747162, 0.202547610039097, 0.9001834773007729, 0.0, 0.08682219254837274, 0.9295308868150898, 0.08372655176410206, 0.6741101275248591, 0.4846229490117269, 0.03799094921503995, nan, 0.18766991624330634, 0.5747971947453813, 0.5357957944650019, 0.0, 0.9393777953152539, 0.9065412893119918, 0.9711350422513085, 0.01408833768494343, 0.423479444817005, 0.43092900998340755, 0.0] | [nan, 0.8597774723874926, 0.8905873458192073, 0.4468008441348313, 0.7358981742624778, 0.4808541172889169, nan, 0.6284059730270303, 0.6908370828825592, 0.2063894967177243, 0.8877064612239235, 0.5085303752716421, 0.0, 0.4786515887689728, 0.07696731524968849, 0.5910784632525015, 0.0, 0.08625308882819613, 0.7927730663764808, 0.07191564097641445, 0.4573643410852713, 0.43199170940310977, 0.036449399656946824, nan, 0.12474672799956191, 0.42888997799442735, 0.45055805027110624, 0.0, 0.8884059722861457, 0.7421115189770542, 0.9513756980737487, 0.012830765528906378, 0.21910649885920366, 0.3464300992446894, 0.0] |
| 0.0906 | 15.75 | 6300 | 0.6277 | 0.5077 | 0.4232 | 0.8874 | [nan, 0.9291486180310576, 0.9587963707454238, 0.5362032710280373, 0.8561640657502444, 0.6342631999714216, nan, 0.7070024940578683, 0.8671632585282536, 0.2429056713202701, 0.9448969225566771, 0.5583271589692929, 0.0010579211848717272, 0.6710010319917441, 0.23294236347584815, 0.9067513151912711, 0.0, 0.020684418610740187, 0.9250756288677204, 0.07677279425156046, 0.6503387447644879, 0.5319197495312902, 0.03860550868763618, nan, 0.18569270904846905, 0.5416470403517035, 0.5072344951363807, 0.0, 0.9414354322663816, 0.9037269864207472, 0.9731874869200364, 0.013277591280202247, 0.39988619967892053, 0.4915501377118052, 0.0] | [nan, 0.8573471144295101, 0.892101583588469, 0.4449642809016976, 0.7400242676373722, 0.48442379031764893, nan, 0.6140014998720169, 0.6924650683478314, 0.21178574008524165, 0.8871035802257583, 0.4782118177972077, 0.00099601593625498, 0.5315565729234794, 0.08438028233359221, 0.5871221081515825, 0.0, 0.020441960358122443, 0.7966462351239197, 0.06850549580427845, 0.4652701824381677, 0.4532145005879428, 0.03686906413403052, nan, 0.1488673139158576, 0.4142177021859072, 0.4423489401170992, 0.0, 0.888882064716084, 0.7468477974750474, 0.9515378343546987, 0.012387656809223801, 0.2237051521076804, 0.3671609871108074, 0.0] |
| 0.0798 | 16.0 | 6400 | 0.6190 | 0.5286 | 0.4172 | 0.8869 | [nan, 0.926680657145317, 0.9583277241233551, 0.5414509345794393, 0.8395448350384849, 0.6163055970613488, nan, 0.729106879083869, 0.8763296484319401, 0.26653962467376446, 0.94462856417892, 0.6354449658351856, 0.0, 0.7736326109391125, 0.21591625677891285, 0.8849045268558811, 0.34363411619283063, 0.10316026497002069, 0.9218656576332847, 0.10944717627775294, 0.7009902670312324, 0.5122599776979916, 0.038968657466897594, nan, 0.1919538651654538, 0.5525226356832574, 0.538875717356141, 0.0, 0.9457572762531493, 0.901183634297817, 0.9780756945897774, 0.023115338389489825, 0.3853969802271942, 0.4585034944719744, 0.0] | [0.0, 0.8564334135192141, 0.8938306198574103, 0.41026489890361634, 0.7353951913707414, 0.47809949912634986, nan, 0.6215698951590981, 0.6951678039270297, 0.23431724238396126, 0.8861469346690092, 0.5033256170323759, 0.0, 0.5823655078656049, 0.06725329981143935, 0.60684460181721, 0.013995167136528394, 0.10232968859569384, 0.80017144909153, 0.09089721553798556, 0.48491411153457703, 0.44620918590626235, 0.03736540418921091, nan, 0.14435885256397019, 0.42539846918525115, 0.4624629192971781, 0.0, 0.8873440144497453, 0.7475156108906514, 0.9524719380738451, 0.01972869725160058, 0.22189851053623036, 0.35861227450389216, 0.0] |
| 0.0901 | 16.25 | 6500 | 0.5917 | 0.5200 | 0.4299 | 0.8896 | [nan, 0.9258199912150333, 0.9603701848856869, 0.5186892523364486, 0.8721793039773063, 0.647948819969426, nan, 0.7465402918754385, 0.8815201404374436, 0.21442478975931065, 0.9491194402298921, 0.6424219972009549, 0.00039672044432689763, 0.7311661506707946, 0.1943498549627948, 0.8921543157758005, 0.15327564894932014, 0.07967428586390177, 0.9293905669893677, 0.12015927416016821, 0.6698895330720515, 0.5201315450880439, 0.040560925191351474, nan, 0.17654812577234655, 0.5835060449050087, 0.5231215794021847, 0.0, 0.9400508616673928, 0.8957790972168599, 0.9722137189382809, 0.011464420406979153, 0.38557987360035767, 0.46186248931546336, 0.0] | [nan, 0.866351138156412, 0.8939541036386832, 0.46360912979965524, 0.7507890322152613, 0.48660598648618647, nan, 0.6225598103833513, 0.6911588008377322, 0.19347001326929186, 0.887840691207522, 0.5082802755206722, 0.00036527456471447707, 0.5638678869876641, 0.0832837918175431, 0.6045529063562446, 0.006450606044842116, 0.07925304719241588, 0.7975401296695107, 0.09911841629051973, 0.4713279486495917, 0.45141671341630396, 0.03856573705179283, nan, 0.12819285757013818, 0.4279405668488608, 0.45535903716704923, 0.0, 0.8891564381205536, 0.7534260714863522, 0.9520390401591446, 0.010587073054631307, 0.21693992819738858, 0.3621346900827125, 0.0] |
| 0.0653 | 16.5 | 6600 | 0.6069 | 0.5188 | 0.4270 | 0.8875 | [nan, 0.9290124922971863, 0.9589720557965155, 0.5377873831775701, 0.8408719669628694, 0.6464453726960179, nan, 0.7621001449552638, 0.8857807088295299, 0.2068851236588094, 0.9480908117204224, 0.6177862846793447, 0.0, 0.7590299277605779, 0.18791777021061926, 0.9075956355134117, 0.0, 0.058230565810488834, 0.9227427600247443, 0.14023410983625556, 0.6694696680432973, 0.503836987023172, 0.03972288954690206, nan, 0.19629273650968007, 0.5403046004082274, 0.5528350801001529, 0.0, 0.9376581699207615, 0.901014031526811, 0.9752275577414824, 0.015813440258609972, 0.5130362332093723, 0.44827147941026946, 0.0] | [nan, 0.8616804147441266, 0.8938918495590652, 0.4436595217282778, 0.7588707802865634, 0.4758728817247983, nan, 0.628730181301102, 0.688001179245283, 0.18745190773792766, 0.8877420745200684, 0.49290617097441625, 0.0, 0.5890833366705378, 0.07141145458902469, 0.5823605098793022, 0.0, 0.05773773981671383, 0.7947286013642479, 0.11004573329175761, 0.45664170004530313, 0.44804481905654414, 0.037985842126352344, nan, 0.1362925675933341, 0.4181863845162963, 0.46249953657361065, 0.0, 0.888743313770925, 0.7487091113564399, 0.952506386954324, 0.013629087889199198, 0.23068137169799252, 0.34552559761867596, 0.0] |
| 0.0946 | 16.75 | 6700 | 0.6065 | 0.5143 | 0.4299 | 0.8883 | [nan, 0.9366806425081413, 0.9542471674446813, 0.5289754672897197, 0.8420186089455377, 0.6348452391657562, nan, 0.7554582292706217, 0.8872989514636808, 0.24603338994987364, 0.95065695923075, 0.5426442743064132, 0.0, 0.6714138286893705, 0.17089166351368396, 0.8694632071182697, 0.0, 0.019113450108658656, 0.9217120922782911, 0.13903375883706684, 0.6740194249750934, 0.5118203708015244, 0.03178948544611431, nan, 0.20950157901963476, 0.5704453865075627, 0.5623407413972658, 0.0, 0.9411122045154043, 0.9100815747962009, 0.9743145830094165, 0.0857785237680799, 0.4308967871730781, 0.48645508025274165, 0.0] | [nan, 0.8651947384722789, 0.8930717543250574, 0.4526545293143849, 0.7524401466986995, 0.4887861010723328, nan, 0.6214073859834178, 0.6850152009083916, 0.21553648224427951, 0.8870252213407757, 0.45774305555555556, 0.0, 0.5674414547991802, 0.07292395457725634, 0.6296601151175575, 0.0, 0.018957592126106943, 0.7990749594007368, 0.11146433406780111, 0.4733450112755498, 0.44892412444043184, 0.03086520206129645, nan, 0.14343460931037075, 0.423674789416196, 0.4623610858079796, 0.0, 0.8878002154581935, 0.7401265142858424, 0.9527410923966566, 0.060905676756307404, 0.2440383021821195, 0.37124052036090577, 0.0] |
| 0.0849 | 17.0 | 6800 | 0.6239 | 0.5140 | 0.4277 | 0.8874 | [nan, 0.9305970330977147, 0.9554562297838712, 0.5320046728971962, 0.8489963736857462, 0.6542095907740937, nan, 0.7229605001215142, 0.8664610713099588, 0.28969717055387545, 0.9528962660454964, 0.4980859471474438, 0.0, 0.7176470588235294, 0.20759238239374447, 0.8862034811976359, 0.0, 0.031864477783887096, 0.9191836449171626, 0.12003509991887283, 0.6955934653201726, 0.5165258494982048, 0.04092407397061288, nan, 0.19217355485376905, 0.5895090804417229, 0.503489840686003, 0.0, 0.9408365537389992, 0.904218558679801, 0.9778653391859837, 0.011972108251481619, 0.48105021439167633, 0.4599672061542931, 0.0] | [nan, 0.8636437394553574, 0.8929500733790351, 0.4345244853931126, 0.7599993804727837, 0.46696218452852767, nan, 0.6206510046358703, 0.6983976442693793, 0.2497009515987931, 0.8874926753329814, 0.43156730923551545, 0.0, 0.5706314364255529, 0.11078207026517702, 0.6145475017593244, 0.0, 0.03131271548397056, 0.8003820861050736, 0.10237293400828867, 0.4670301606353909, 0.4459244664251144, 0.038865601952565394, nan, 0.13528195016335132, 0.4290314962729347, 0.43912572952498746, 0.0, 0.8877216097613865, 0.738180307717246, 0.9528556585267144, 0.010467599586006663, 0.24685847767824554, 0.3594826033565289, 0.0] |
| 0.0623 | 17.25 | 6900 | 0.6172 | 0.5119 | 0.4289 | 0.8887 | [nan, 0.9328785695913208, 0.9578098581195325, 0.5317383177570093, 0.8561058685577084, 0.6304827168234579, nan, 0.7396010541574238, 0.8636618114532428, 0.2868801524503915, 0.9518605630620964, 0.4947929529925084, 0.0009256810367627612, 0.7112487100103199, 0.18766553159288688, 0.8812836916282393, 0.0, 0.01743775037310502, 0.9291997485832975, 0.11260120200665574, 0.6826961479212292, 0.49109604568235565, 0.042125258394323704, nan, 0.18536317451599615, 0.5637959909980635, 0.5345549622210897, 0.0, 0.9375897612200349, 0.9104269853176398, 0.9785152351649676, 0.016857308632765553, 0.471885224247597, 0.4792468588859031, 0.0] | [nan, 0.8649230898296971, 0.8934913832615394, 0.4476893494179728, 0.7525214888224941, 0.47904609433387446, nan, 0.6239313691633799, 0.6925921698436251, 0.24592492631130367, 0.887597908356459, 0.43200359389038634, 0.000914435009797518, 0.5808680994521702, 0.10441372535260683, 0.6200052546206393, 0.0, 0.01701975415910659, 0.7967171468468032, 0.09773096322694678, 0.46324810420871126, 0.4373241271317872, 0.03999681722939819, nan, 0.13242564545240523, 0.42549338304851775, 0.45084188297733174, 0.0, 0.888754441570771, 0.7411121674604253, 0.9532170914369867, 0.015176070871411481, 0.2681904277926638, 0.37097400203468917, 0.0] |
| 0.087 | 17.5 | 7000 | 0.5958 | 0.5165 | 0.4323 | 0.8903 | [nan, 0.9358029442279695, 0.9581817889436154, 0.5173516355140186, 0.8565989717971686, 0.667348278703771, nan, 0.7453587599689061, 0.8783982540209707, 0.2597456398359501, 0.9499820544177967, 0.5674240553223018, 0.0, 0.7777605779153767, 0.14150586454786226, 0.8944761966616873, 0.0, 0.04935459377372817, 0.9190064859631538, 0.13516780079140384, 0.6902990697136872, 0.5223050718688348, 0.039750824068383706, nan, 0.1931621584511877, 0.5658763803841524, 0.501960958099754, 0.0, 0.9402762475045608, 0.9019702878007346, 0.9759436269037568, 0.012736230262339924, 0.4254506289499888, 0.5057514930417828, 0.0] | [nan, 0.8672982432946728, 0.8947683772895187, 0.45221659685446863, 0.7622893195763734, 0.4902560352855047, nan, 0.6223052874324095, 0.6932109212359029, 0.22966612333107453, 0.8909383965244376, 0.46376665320952765, 0.0, 0.5938460326215428, 0.08434187777193114, 0.602773750581284, 0.0, 0.048440150074523305, 0.8000458716174862, 0.11235893201211121, 0.479082966550413, 0.45730325325150806, 0.03797907547774101, nan, 0.13441877352901832, 0.42968388297967464, 0.43185024209844064, 0.0, 0.8885136898541194, 0.7448990572757507, 0.9530770665482792, 0.011476439106252173, 0.27282086031874275, 0.3826734258440253, 0.0] |
| 0.0493 | 17.75 | 7100 | 0.6044 | 0.5187 | 0.4325 | 0.8897 | [nan, 0.9240685866116948, 0.9622943353488201, 0.5353317757009346, 0.853514520592762, 0.6373741840672775, nan, 0.7478235165354141, 0.8836883806993405, 0.21751108165209826, 0.9509281473980792, 0.5420474191158311, 0.0, 0.7930340557275541, 0.22083490982469417, 0.8908310060401377, 0.0, 0.0858534286387558, 0.9207060529378274, 0.1411447209390884, 0.681761326480902, 0.5542661781464825, 0.03930387172467736, nan, 0.1931621584511877, 0.5752080389386088, 0.49312002836187985, 0.0, 0.9390712329452002, 0.9078367511279274, 0.9729394719810368, 0.022296821252434828, 0.4083602593021602, 0.5050154471862657, 0.0] | [nan, 0.8665364871726114, 0.892965816013915, 0.4547348114599635, 0.7642413653965189, 0.4857421136997843, nan, 0.6253954022706847, 0.6870444418213474, 0.19578268327242895, 0.8874360309454634, 0.462182366980205, 0.0, 0.6077345881608605, 0.08939146416173167, 0.6003337345442609, 0.0, 0.0839241381075478, 0.8010272384750775, 0.11626241894020498, 0.4793339806464354, 0.46760060321222136, 0.03759519038076152, nan, 0.13732648718299134, 0.4276941756073643, 0.42612058896739236, 0.0, 0.8882284916106664, 0.7388891943971531, 0.9525770980335972, 0.01913195000088903, 0.25993428881875097, 0.3840528604415517, 0.0] |
| 0.0609 | 18.0 | 7200 | 0.6040 | 0.5216 | 0.4331 | 0.8892 | [nan, 0.9227158454479248, 0.9619075870212453, 0.5316542056074767, 0.8629644863429278, 0.6514016366079864, nan, 0.7428586694795917, 0.8715519286425962, 0.2045030862918928, 0.9466966687245525, 0.5841977442990038, 0.005950806664903465, 0.7702786377708978, 0.22789759112120064, 0.8969036175878418, 0.0, 0.10873720315241013, 0.9154051507310187, 0.16112021722213943, 0.6850397847716271, 0.5074181749114659, 0.04494664506397005, nan, 0.19590827955512838, 0.5833045480713874, 0.5258912942323458, 0.0, 0.940934664449275, 0.8882331527914135, 0.9774381724580755, 0.014391396245182146, 0.43477819098132453, 0.5255548975681157, 0.0] | [nan, 0.8627327541149343, 0.8943888286230383, 0.44826842363954605, 0.7637335274754071, 0.48244240753868006, nan, 0.625331534198079, 0.6944541055496749, 0.18654700047236655, 0.8893611006867107, 0.4845014167207183, 0.005280450598451068, 0.5995903120857935, 0.10169968482665466, 0.5777541863213714, 0.0, 0.10625831542319107, 0.8006913747953047, 0.12712606139777924, 0.4783386384345389, 0.44333322627096416, 0.042293134265587215, nan, 0.148674558186062, 0.4270657907089471, 0.4375414792419438, 0.0, 0.8881646826265218, 0.746841100561318, 0.9521439225045568, 0.01294715575036877, 0.24666520631333802, 0.38409386690619945, 0.0] |
| 0.0594 | 18.25 | 7300 | 0.6184 | 0.5184 | 0.4328 | 0.8884 | [nan, 0.9404973526006469, 0.9537239028155554, 0.5275303738317757, 0.8254461719223712, 0.6778219046293364, nan, 0.7472383523016173, 0.8659581534373962, 0.2943783918140768, 0.9543757743601257, 0.5650160533465053, 0.0, 0.7537667698658411, 0.19283642325640055, 0.8840439696044684, 0.0, 0.053517660304244236, 0.9223867864255677, 0.14299077799301313, 0.6933990487935829, 0.5170742093202789, 0.040644728755796417, nan, 0.19868186187010847, 0.5769927251792537, 0.5184906162061554, 0.005237711522965351, 0.936523983230326, 0.8965774712364731, 0.9780089834131267, 0.013717932777984998, 0.4056981446483367, 0.5054707620798113, 0.0] | [nan, 0.8646951423015076, 0.8916557550473645, 0.4456280068092665, 0.7798208455321158, 0.4668012972723517, nan, 0.6275296552822227, 0.693191442493572, 0.24416726797924612, 0.8882015249296725, 0.4734908589168679, 0.0, 0.6010533245556287, 0.10449699289229086, 0.6037870806764625, 0.0, 0.0522041170761608, 0.8024731726060429, 0.12131790023739622, 0.47577199080928667, 0.44858497899759875, 0.038707102952913006, nan, 0.1414826837710464, 0.42720162129381883, 0.43218883327484625, 0.005164878823996822, 0.8886286814206171, 0.7396195316490108, 0.952706951959097, 0.011655776057680246, 0.24503522596165647, 0.3835704565398948, 0.0] |
| 0.0616 | 18.5 | 7400 | 0.6177 | 0.5082 | 0.4272 | 0.8887 | [nan, 0.9388723599691342, 0.9564944313754319, 0.5251226635514019, 0.8417103211148066, 0.6482573931295971, nan, 0.7321895483979944, 0.8855861839920293, 0.2417250093210158, 0.9506753528629689, 0.5459990121017535, 0.0, 0.656656346749226, 0.11275066212637155, 0.8765912190686498, 0.0, 0.07320713219699945, 0.9230813488667519, 0.11395056209539893, 0.703570900866502, 0.5234722511549255, 0.043466115425442764, nan, 0.1751201427982974, 0.5677919087245512, 0.4888879041013937, 0.00040290088638195, 0.9391572478144832, 0.8977247029883181, 0.9766107386702634, 0.018289713622611795, 0.4217114755430917, 0.4846827041793997, 0.0] | [nan, 0.8641564182971058, 0.8921133993393542, 0.4501424016407233, 0.7647378890792713, 0.4769587373086239, nan, 0.6209624017506187, 0.6859163987138264, 0.20884410959394406, 0.8903311694707657, 0.45434149683164926, 0.0, 0.5354933726067747, 0.07164035579774021, 0.6122940826221327, 0.0, 0.06951938138690669, 0.8003213370838211, 0.09716584900998836, 0.4828652554046836, 0.45382137270368395, 0.04121417598135297, nan, 0.13381035314854062, 0.43221966358833797, 0.42342013855571975, 0.00040160642570281126, 0.8881950211846364, 0.7398417591158966, 0.9530845970447974, 0.014810386777414213, 0.2365547272188405, 0.37402163767775426, 0.0] |
| 0.0611 | 18.75 | 7500 | 0.6099 | 0.5177 | 0.4324 | 0.8902 | [nan, 0.9345079533755389, 0.9638643589649342, 0.5356553738317757, 0.8422997643013702, 0.6257334001805861, nan, 0.7471220088972541, 0.8814537173221996, 0.2763370479307345, 0.9466207360377004, 0.6049436074750967, 0.0, 0.7059855521155831, 0.14970361962416445, 0.8782149119958433, 0.0, 0.0958028958186055, 0.9234898906602255, 0.14089637245649764, 0.6854742792438918, 0.5173606430820885, 0.04232080004469523, nan, 0.19343677056158176, 0.5813811692050034, 0.5071015488245331, 0.00040290088638195, 0.9400356746670351, 0.8951641148114238, 0.9764509546423178, 0.03372756848605413, 0.4723729399093662, 0.4701335776577261, 0.0] | [nan, 0.8647971283970989, 0.8977857991553266, 0.4345779290016539, 0.7684148484664771, 0.4855945598832977, nan, 0.6259089780170273, 0.686933822387541, 0.2366516479228013, 0.8888089337936385, 0.48289741736216074, 0.0, 0.5985650538104821, 0.061681563084597796, 0.6094675222969052, 0.0, 0.09345866005976859, 0.7993214394154491, 0.11438556403104944, 0.4762232900770807, 0.45242021144786737, 0.04009209272785011, nan, 0.14212501513256123, 0.43339055459103054, 0.4277836968915307, 0.00040032025620496394, 0.8873505568836287, 0.7422385564869821, 0.9528040989243474, 0.029041136219678652, 0.23652292476444373, 0.3661642120469451, 0.0] |
| 0.0526 | 19.0 | 7600 | 0.6228 | 0.5108 | 0.4297 | 0.8909 | [nan, 0.9405315503656566, 0.9623814025398809, 0.5330642523364486, 0.8317861268903274, 0.6622725273804787, nan, 0.7263120519701678, 0.8674004839398396, 0.27552922656282364, 0.9455175897361646, 0.5819338108174859, 0.0, 0.6111971104231166, 0.16710808424769832, 0.8864145612781711, 0.0, 0.0827900400596968, 0.930233313789279, 0.11843739134753886, 0.6995346374019279, 0.5042107294717365, 0.042153192915805354, nan, 0.18371550185363175, 0.5630920605013869, 0.5005871795439941, 0.0056406124093473006, 0.9407823912509976, 0.8985265242187241, 0.9751204970628252, 0.012990074184591156, 0.42681216850576115, 0.4687243361620586, 0.0] | [nan, 0.8642299686902748, 0.8983701844671692, 0.4505770666371748, 0.7744797343632894, 0.49247659714013137, nan, 0.623426329007179, 0.696151825084343, 0.23867367627796818, 0.8898312419634539, 0.48430193720774883, 0.0, 0.5244863620262132, 0.07708866651151966, 0.5993412927130506, 0.0, 0.08080962968642183, 0.7977044198782267, 0.10166926045153175, 0.47672785170429793, 0.4451483954200063, 0.04006265597621197, nan, 0.1264172335600907, 0.43160647951283304, 0.42598284151975113, 0.00554016620498615, 0.8878311660408268, 0.74270285241124, 0.9536917187049466, 0.011887351052557973, 0.24007269734586106, 0.3689853153957455, 0.0] |
| 0.054 | 19.25 | 7700 | 0.6199 | 0.5112 | 0.4157 | 0.8897 | [nan, 0.9383711032345364, 0.9577791893332354, 0.532998831775701, 0.8352225138198671, 0.6740592830016223, nan, 0.7513879337239024, 0.8669212886084358, 0.21351340154935997, 0.9451751851979368, 0.5077796986910348, 0.0, 0.7028895768833849, 0.18400807163576743, 0.8914236539585634, 0.0, 0.1072709658838007, 0.9291372462420467, 0.11183132171062435, 0.6577470949582549, 0.5160479493180732, 0.04262807978099335, nan, 0.1900590416037347, 0.5664154498351389, 0.5106689415257805, 0.0012087026591458502, 0.9410463493811095, 0.8949234994980861, 0.9775344732695309, 0.011246839902192383, 0.42160986811355644, 0.47790186427705494, 0.0] | [0.0, 0.8647432445871411, 0.896112476860621, 0.45036567465468447, 0.76789556797279, 0.4910576591298745, nan, 0.6249728507663073, 0.6958387758910245, 0.19385049365303245, 0.8887827463711233, 0.4413911550021468, 0.0, 0.5792159197210647, 0.08409221902017291, 0.5936591009850886, 0.0, 0.10176353700943865, 0.7979000623472865, 0.09749989173896098, 0.46787846117983983, 0.45133395403669296, 0.04032236755185625, nan, 0.1322593590552084, 0.4340972401884397, 0.4265909006774516, 0.0011904761904761906, 0.8880726081330668, 0.743872268803543, 0.953516990645358, 0.009541850530053972, 0.23069652626428858, 0.3703797514940341, 0.0] |
| 0.0671 | 19.5 | 7800 | 0.6217 | 0.5094 | 0.4146 | 0.8892 | [nan, 0.9331891438463118, 0.9574927175990591, 0.5350619158878505, 0.834028291700058, 0.6744756411977813, nan, 0.7431025597272566, 0.8738719931679082, 0.2327354074319566, 0.9446516741270925, 0.5379723388490986, 0.0, 0.669969040247678, 0.18249463992937318, 0.8913668247061116, 0.0, 0.09954703741523316, 0.9238793920053711, 0.0888259739399659, 0.6886532573187448, 0.5368212898403323, 0.03941560981060394, nan, 0.18061238500617877, 0.5652404877793479, 0.5268662338525626, 0.0060435132957292505, 0.9420171078199074, 0.9042006331836784, 0.9732816357580515, 0.009485473911061379, 0.3114064500396269, 0.49469125180868956, 0.0] | [0.0, 0.8617017485872825, 0.8957626230741332, 0.4508312580591182, 0.7683050299189929, 0.4878950714613818, nan, 0.624948812708509, 0.6911476098809349, 0.20973251451290761, 0.8882723484572987, 0.46124933827421916, 0.0, 0.5501928047798635, 0.07156988821841923, 0.5965012359764214, 0.0, 0.09680704791974334, 0.7988314631673791, 0.07901907356948229, 0.4711932405689982, 0.46080549284533756, 0.03769502030348365, nan, 0.13494050061551088, 0.43071416464770335, 0.43780380026513477, 0.005912495072920773, 0.8877312783085815, 0.7390862578001592, 0.9533931934816451, 0.008087813065948142, 0.20454363437358178, 0.3783462459982845, 0.0] |
| 0.0512 | 19.75 | 7900 | 0.6300 | 0.5080 | 0.4263 | 0.8887 | [nan, 0.9391756156362827, 0.957153465687716, 0.531875, 0.8363349452907067, 0.6442373192444947, nan, 0.7406369413577534, 0.8858234094036154, 0.26463399478023114, 0.9530349257345309, 0.5036634559973656, 0.0, 0.6101651186790505, 0.1925841846386682, 0.8746996168084692, 0.0, 0.0674207315476658, 0.9178750280173988, 0.11324690806139175, 0.6909895794473874, 0.5175153479480927, 0.042963294038773116, nan, 0.2016476726623644, 0.5813497671010625, 0.5020052735370366, 0.008058017727639, 0.9412167663408764, 0.897734355178538, 0.9747767193057303, 0.01633407932363546, 0.3496514865166941, 0.49998742995692663, 0.0] | [nan, 0.8625082043880324, 0.8957494129402008, 0.43782876705742063, 0.7496431303023787, 0.48514174134060595, nan, 0.6274006504670441, 0.6871961161760971, 0.2302687309626372, 0.8882991958037961, 0.4373045513839996, 0.0, 0.5170981283890153, 0.08045310853530031, 0.6189258899694966, 0.0, 0.06474078543772313, 0.7999986290910134, 0.09763826734899257, 0.47261393142851427, 0.4453505921742053, 0.040873817370043586, nan, 0.1437999373335422, 0.43193558986563074, 0.42771380026430056, 0.007840062720501764, 0.887320160440498, 0.7455157136812743, 0.9534156947680599, 0.013436060460141392, 0.21404224616226705, 0.3788044726196485, 0.0] |
| 0.0535 | 20.0 | 8000 | 0.6326 | 0.5129 | 0.4292 | 0.8889 | [nan, 0.9375849538350132, 0.9591767441005661, 0.5300221962616822, 0.8259597228240738, 0.6596635135950806, nan, 0.7492101575548236, 0.8658110736822129, 0.2693152160404325, 0.9484445354169388, 0.5863176092862435, 0.0, 0.6744066047471621, 0.20784462101147685, 0.883142820029876, 0.0, 0.07781530646977194, 0.9271092315337143, 0.10147518998658918, 0.678314629589805, 0.497267391277709, 0.043242639253589586, nan, 0.18442949334065634, 0.576354215732454, 0.5145022268507234, 0.007252215954875101, 0.939646591781763, 0.9018448093278766, 0.9767371671098836, 0.012725869285921506, 0.41707817675628445, 0.45857891473041446, 0.0] | [nan, 0.8619435562270654, 0.8965635233177199, 0.4407369269775891, 0.7663725441548623, 0.48239880840583743, nan, 0.6305089171096815, 0.6940516487277982, 0.23291892085557667, 0.8902205646366161, 0.48581173260572985, 0.0, 0.5452649144764289, 0.09688988182726792, 0.6044686963431372, 0.0, 0.07672845562038519, 0.7962772336784573, 0.08572747363415112, 0.4690486788330029, 0.43758222088032955, 0.04117568825641708, nan, 0.13543326140878018, 0.4322105242501251, 0.4339781328847771, 0.007067137809187279, 0.8877484539815808, 0.7395098273111396, 0.9530623665306688, 0.010661406489721605, 0.2371072088724584, 0.3613527133617203, 0.0] |
| 0.0467 | 20.25 | 8100 | 0.6268 | 0.5170 | 0.4303 | 0.8886 | [nan, 0.9395265086570245, 0.956900821509961, 0.5300023364485982, 0.8314043061203785, 0.6477819071422676, nan, 0.7464739330448017, 0.8916828770697918, 0.24499772152947513, 0.9451416993546665, 0.549950605087676, 0.0, 0.687203302373581, 0.1523521251103544, 0.8917889848671819, 0.0, 0.08004084518105412, 0.915062008738324, 0.1551515753572079, 0.6881485415176292, 0.526278382981852, 0.04472316889211688, nan, 0.18451187697377455, 0.5879677605066206, 0.549156898805699, 0.007655116841257051, 0.940224100990058, 0.9054685173132715, 0.9762965505479732, 0.02776741680135936, 0.449734804608913, 0.49033782689095345, 0.0] | [nan, 0.8644696780108341, 0.8944980656632955, 0.440104340976533, 0.7641389998117053, 0.4770745740308388, nan, 0.6297284505666034, 0.6844286473848664, 0.21773065311832707, 0.8890008282328474, 0.46004855121119775, 0.0, 0.5750680081177943, 0.06133536430566133, 0.6000371448704572, 0.0, 0.07885979620791951, 0.8006806868947128, 0.1252363801594355, 0.4706566275608475, 0.45444853884552, 0.04241284306453322, nan, 0.13328969033307544, 0.4323046138453842, 0.45063456852976475, 0.007448059584476676, 0.888463849852071, 0.7450400534159003, 0.9535229169698916, 0.021638336996913712, 0.23653075402126864, 0.371412309599829, 0.0] |
| 0.0566 | 20.5 | 8200 | 0.6333 | 0.5121 | 0.4287 | 0.8890 | [nan, 0.9382327153916955, 0.9575874232706021, 0.5340771028037383, 0.8342787755625269, 0.6541523107263972, nan, 0.7406429739787204, 0.8870285144944726, 0.2079415054476159, 0.9479172512933317, 0.5500535111550177, 0.0, 0.7218266253869969, 0.17152226005801488, 0.8854728193803988, 0.0, 0.06920116251669153, 0.9246219694901651, 0.12077186708389212, 0.6759797704055135, 0.5097310892447952, 0.045561204536566285, nan, 0.1750377591651792, 0.5736405505835558, 0.5156101127827879, 0.00684931506849315, 0.9398823262828916, 0.9029458484550981, 0.9765633952545758, 0.017017903767251024, 0.4133390233493873, 0.48943837047548283, 0.0] | [nan, 0.8643736263008805, 0.8951902105356352, 0.44089650982245326, 0.7609522214327652, 0.4848458703216258, nan, 0.6265179780801705, 0.6811413623628766, 0.1878590542487696, 0.887796763348636, 0.46558542236468475, 0.0, 0.5934331650617232, 0.06971498872257535, 0.6047629609093429, 0.0, 0.06810626948746361, 0.7983954196511591, 0.10178182731484066, 0.4720678124715856, 0.44954610542241913, 0.0431413003227001, nan, 0.12741374485267662, 0.432512153928718, 0.4367328553732968, 0.006685017695635077, 0.8879940574069723, 0.7494547941207608, 0.9536808104413358, 0.013580974233357105, 0.23932508912918143, 0.374424364423531, 0.0] |
| 0.0445 | 20.75 | 8300 | 0.6446 | 0.5134 | 0.4274 | 0.8856 | [nan, 0.9405399334753671, 0.9458917035764169, 0.5273960280373832, 0.8282526135651365, 0.6846166732980127, nan, 0.7372879749180856, 0.8847701285761731, 0.2182567629147852, 0.9486374327394391, 0.565180703054252, 0.0, 0.6657378740970072, 0.14856854584436877, 0.8831509384945119, 0.0, 0.06705417223051345, 0.9206841150299712, 0.12586301097700292, 0.6806553405515008, 0.5199094440427905, 0.04444382367730041, nan, 0.17805849237951393, 0.5833280996493432, 0.5248720391748466, 0.007252215954875101, 0.9356924613611799, 0.9010464353082633, 0.9759161892423923, 0.023617845745783083, 0.4449998983925705, 0.5172488924395381, 0.0] | [nan, 0.8666434932726657, 0.8860462410088557, 0.4516813574923211, 0.7742782740775649, 0.4555874524449895, nan, 0.6267926037830955, 0.6896407624091181, 0.1957204153277486, 0.8882182070612508, 0.46149838666308146, 0.0, 0.5469962267350659, 0.06421718273004798, 0.6011771207515888, 0.0, 0.06543011164763292, 0.79986647852113, 0.10526898843730527, 0.4713830230218466, 0.45188595346756627, 0.04203767801939388, nan, 0.1276553855846278, 0.42972506139948413, 0.441923808813104, 0.007075471698113208, 0.8884781477624152, 0.7456781431206605, 0.9535186762124032, 0.016432559463950374, 0.2430653450400151, 0.37996353686275436, 0.0] |
| 0.0523 | 21.0 | 8400 | 0.6334 | 0.5087 | 0.4256 | 0.8903 | [nan, 0.933221079502352, 0.9637948085900169, 0.5297546728971962, 0.8356436570172051, 0.6448230539257773, nan, 0.7465713167832686, 0.8749679745694359, 0.2327354074319566, 0.9465962111947419, 0.5354408495924919, 0.0, 0.6270897832817337, 0.14024467145920042, 0.8939972072481652, 0.009888751545117428, 0.05998481397114654, 0.9259419692666467, 0.10259275815824766, 0.6911110038285254, 0.5109028637249255, 0.044248282026928876, nan, 0.19286008512975422, 0.5704035170356414, 0.5006314949812767, 0.0, 0.9387582194599503, 0.9072224581646499, 0.9775237134023292, 0.011000766712254964, 0.4426019630555386, 0.48799979887931083, 0.0] | [nan, 0.8627899844290204, 0.898045292380419, 0.4429741700156492, 0.7733528050732301, 0.48122023215814036, nan, 0.6285033134107889, 0.6922586045743415, 0.2067303269489062, 0.888126363728484, 0.4555339601828019, 0.0, 0.512374046123361, 0.062230678829257376, 0.5926462119703566, 0.00044943820224719103, 0.05796624750145485, 0.8002256522783529, 0.08795100349163994, 0.4798915494731881, 0.45172247073689, 0.0420103434557751, nan, 0.13598869181318254, 0.4315342675118884, 0.4297071129707113, 0.0, 0.8889534278458562, 0.7430008362351238, 0.9537407288817968, 0.009678051537276564, 0.23964350552896518, 0.3711983987778357, 0.0] |
| 0.0715 | 21.25 | 8500 | 0.6366 | 0.5151 | 0.4287 | 0.8894 | [nan, 0.9370145031789949, 0.9615540919282511, 0.5349906542056074, 0.8234293246215806, 0.6427307923986297, nan, 0.7520265297434068, 0.877506286473407, 0.2407929077426571, 0.9458038701145451, 0.5871614390384458, 0.0, 0.6843137254901961, 0.1972505990667171, 0.8854890563096707, 0.054388133498145856, 0.06252454638284502, 0.9220868993644009, 0.11473699895693637, 0.6793299129694406, 0.505244648130675, 0.04341024638247947, nan, 0.19102018399011397, 0.5753257968283875, 0.5107132569630631, 0.0, 0.9400241164189752, 0.9050651936505135, 0.9789779094546415, 0.014533859670935389, 0.41945579060740923, 0.49523735034665384, 0.0] | [nan, 0.8636190041686136, 0.8961979040679402, 0.44008160621637177, 0.7735135302856915, 0.47552992149378714, nan, 0.6295369121222396, 0.6946632262523146, 0.2137970353477765, 0.8882677382290695, 0.4793581450054608, 0.0, 0.555406650473239, 0.08438545376065609, 0.5980720618958058, 0.002378506946321423, 0.06108823002737203, 0.7997681127577295, 0.0970839783417272, 0.47365876347968716, 0.44734126160727244, 0.041260653691952316, nan, 0.13688871396241267, 0.4310366799265186, 0.42952982613070945, 0.0, 0.8887487055026462, 0.7433844306901257, 0.9533070831491001, 0.012093141544284045, 0.23472485984284203, 0.3736148179836323, 0.0] |
| 0.0856 | 21.5 | 8600 | 0.6332 | 0.5104 | 0.4282 | 0.8891 | [nan, 0.9354302285089335, 0.9598914301992207, 0.5326285046728972, 0.8348257505275104, 0.6418013774311685, nan, 0.7519851631996333, 0.8757413294112065, 0.2316790256431501, 0.9473149777460632, 0.5441672841030707, 0.0, 0.6676986584107327, 0.19119687224114013, 0.8908797168279535, 0.0, 0.05576938182389443, 0.9230974918555517, 0.1150019040050332, 0.6832652332737915, 0.5057945396840957, 0.04410860941952064, nan, 0.19250308938624194, 0.5698984665305908, 0.50395515277747, 0.0040290088638195, 0.9408126308534799, 0.8986623443239606, 0.9766785258336341, 0.01867306975009325, 0.40035359385478264, 0.4951898635172656, 0.0] | [nan, 0.8652175117062043, 0.8949487144681932, 0.4437434730009742, 0.7611759319446382, 0.47865894832193984, nan, 0.6331643341293494, 0.6931150372692965, 0.2068423485899214, 0.8889820786499946, 0.4611976486594917, 0.0, 0.5675936485656636, 0.08603859250851305, 0.595085736597217, 0.0, 0.05421502748930971, 0.799696203512091, 0.09667497111998775, 0.4707822447654798, 0.4485026865801383, 0.041887733446519526, nan, 0.13581323258742614, 0.4329091328339933, 0.42695701145109816, 0.003957261574990107, 0.8887286680634571, 0.7476012702986532, 0.953293396822863, 0.014771330218834523, 0.23667139184546263, 0.3740649694565481, 0.0] |
| 0.0426 | 22.25 | 8900 | 0.6388 | 0.5153 | 0.4321 | 0.8907 | [nan, 0.9365843032790866, 0.9619280328787767, 0.5323341121495327, 0.832118008177492, 0.6589330390083284, nan, 0.7530012289310712, 0.8876025999905109, 0.2356145656406645, 0.9495151391383951, 0.5967728657281633, 0.0, 0.6851909184726522, 0.16698196493883213, 0.8856433071377541, 0.0, 0.046160291152829054, 0.9249913955800083, 0.14087981589099158, 0.6780864102710397, 0.5070796622838727, 0.043214704732107936, nan, 0.19390361114925167, 0.577557963050191, 0.5263122908865303, 0.009266720386784852, 0.9401577082628303, 0.9045005405226523, 0.9759350190099954, 0.014261884039951924, 0.44343514397772765, 0.48190053464583205, 0.0] | [nan, 0.8638275353000382, 0.8975929370440341, 0.44847327680807825, 0.7680456934961463, 0.4896127563059361, nan, 0.6344922288860472, 0.6906430201049919, 0.21071058091286307, 0.8908914064913077, 0.4893922260291313, 0.0, 0.5741773684438103, 0.0915502696722445, 0.6133303348044865, 0.0, 0.045543787135107205, 0.799706519605589, 0.11493135050077327, 0.47303106132662764, 0.44896719237169413, 0.04119511090991399, nan, 0.13769769301273427, 0.43323479414732197, 0.4435750434181777, 0.008966861598440545, 0.8892865533176849, 0.7464162172003368, 0.9537521470921787, 0.012501163611760084, 0.24370386088743454, 0.37164396457569027, 0.0] |
| 0.0544 | 22.5 | 9000 | 0.6275 | 0.5126 | 0.4297 | 0.8902 | [nan, 0.9362912936349177, 0.962198079008307, 0.5305654205607476, 0.829452734049054, 0.6501778145136554, nan, 0.7606583485441561, 0.8785880343502396, 0.2379137495339492, 0.9477460490242178, 0.5748332921709064, 0.0, 0.6779153766769865, 0.15399167612561482, 0.8968792621939339, 0.0, 0.062053255832220565, 0.9268894385323623, 0.11712114438980778, 0.6830882170073133, 0.515366328868847, 0.046119894966199226, nan, 0.1939585335713305, 0.5666535824566913, 0.5097161596242051, 0.0064464141821112, 0.9399919952412273, 0.8983810519232679, 0.9745475341343337, 0.015694289029798168, 0.43490011989676686, 0.47604289457365206, 0.0] | [nan, 0.8648796447130465, 0.8972780355218145, 0.44448663694053075, 0.7723828909831303, 0.4856595115662902, nan, 0.6367705951823552, 0.693571040656192, 0.2097133467226584, 0.8885713515050402, 0.47493538294109644, 0.0, 0.5753448653382964, 0.07485745815707191, 0.589861603519713, 0.0, 0.060925449871465295, 0.7986432258569581, 0.09907840555757864, 0.4719490094091225, 0.45171147174755927, 0.04363338442835245, nan, 0.13716960245479792, 0.4304074481173985, 0.4370060790273556, 0.00631163708086785, 0.8878797422918536, 0.748175287257327, 0.9535688641919678, 0.013234083170064194, 0.2360317635381052, 0.36728912241605793, 0.0] |
| 0.0701 | 22.75 | 9100 | 0.6508 | 0.5132 | 0.4302 | 0.8902 | [nan, 0.9420095059141509, 0.9626173339520694, 0.5384521028037383, 0.8237863722622742, 0.6345902505663333, nan, 0.7493342571861443, 0.8728092233240025, 0.24462488089813164, 0.9462424874982255, 0.5649748909195687, 0.0, 0.6890092879256966, 0.18148568545844368, 0.8978859518087939, 0.0, 0.06417406331003063, 0.926905788482557, 0.10334608188877299, 0.6837845785184178, 0.5068636881640055, 0.044555561763226996, nan, 0.19329946450638474, 0.5856309206050139, 0.5353969555294587, 0.008058017727639, 0.9389002783925003, 0.9000722535382172, 0.9752872750044519, 0.01801255750341912, 0.4159604950313967, 0.4749814242696805, 0.0] | [nan, 0.8667971887550201, 0.8964523921395798, 0.43883250929953793, 0.7789739251684871, 0.4822597903246794, nan, 0.6338344499902683, 0.6949882507612449, 0.21506355392067597, 0.8897027195058894, 0.47454492022058187, 0.0, 0.5744214058332616, 0.09034404821697639, 0.5890266504761296, 0.0, 0.06334315397736083, 0.7983683031468644, 0.08797806890816708, 0.47160166966502776, 0.4468892814313033, 0.04230993686667728, nan, 0.13598253612549263, 0.43447527412791603, 0.442910823939144, 0.007836990595611285, 0.8890303591865106, 0.7479650947941834, 0.9538041433738902, 0.014260666277030976, 0.23761100470137558, 0.3677322595225377, 0.0] |
| 0.0588 | 23.0 | 9200 | 0.6510 | 0.5156 | 0.4306 | 0.8898 | [nan, 0.9386450845503147, 0.9615407102293612, 0.5321039719626168, 0.8252994992682097, 0.646236577683447, nan, 0.7500099107344458, 0.8891493096740523, 0.2356145656406645, 0.948320024675765, 0.5611467852144563, 0.0, 0.7061919504643963, 0.15790137470046664, 0.8929012145223095, 0.0, 0.06268164323305318, 0.9247904360655894, 0.12226195797943674, 0.6746470281016981, 0.5158947761834156, 0.04522599027878652, nan, 0.1926953178635178, 0.5791620871931753, 0.5486694289955906, 0.014504431909750202, 0.9393220200484532, 0.9030809791181759, 0.9764800062837624, 0.014337001118985454, 0.46371598691296306, 0.476005184444432, 0.0] | [nan, 0.8636880663267268, 0.8963496684957871, 0.4393286431075093, 0.7694031519559503, 0.48618816019454364, nan, 0.6323091767222339, 0.6843731284418411, 0.20910695246148756, 0.8901931512501616, 0.4713865836791148, 0.0, 0.594294150853272, 0.07763859605605854, 0.5971841386537511, 0.0, 0.061455525606469004, 0.799169285452784, 0.10285033809898536, 0.4708681854568623, 0.4517361674617981, 0.04280237937871778, nan, 0.1379100253532753, 0.432983014903532, 0.45285296269202635, 0.013830195927775643, 0.8892098290384068, 0.7459428984706676, 0.9536680185853351, 0.012051498108992573, 0.23353802067342136, 0.36591936147117593, 0.0] |
| 0.067 | 23.25 | 9300 | 0.6275 | 0.5128 | 0.4311 | 0.8905 | [nan, 0.9372797021893622, 0.9638153118797325, 0.5312441588785046, 0.8278251787794161, 0.6422768634184979, nan, 0.7515353020360958, 0.8786212459078616, 0.24139359542648825, 0.9490656742280216, 0.5420885815427677, 0.0, 0.7038183694530443, 0.17707150964812712, 0.8822822627784633, 0.0, 0.06734218312256172, 0.9252767953435341, 0.10501829500488419, 0.6879495810858851, 0.5059293320425944, 0.04416447846248394, nan, 0.19404091720444872, 0.5719029674988224, 0.5293478983403869, 0.008058017727639, 0.9393905631474131, 0.9031768115782158, 0.9770540451989742, 0.01500269385386879, 0.4205734723322969, 0.4884174036436365, 0.0] | [nan, 0.8641485198316792, 0.897149130251509, 0.4431534355853929, 0.7712457425720085, 0.4882715323914724, nan, 0.6318488634618116, 0.69528994349434, 0.21461061083181407, 0.890398769558611, 0.46117346313448776, 0.0, 0.5855585129217824, 0.08629909644108427, 0.608788204714529, 0.0, 0.0658912742737101, 0.7992632312490636, 0.09043857647998176, 0.47160302909046053, 0.44752081120336445, 0.04198645598194131, nan, 0.13798894682367646, 0.43383933729163815, 0.44664223751121745, 0.007836990595611285, 0.8889539638268134, 0.7463182889742939, 0.9538402391601662, 0.01284986599932556, 0.2406063988095238, 0.3716953276213374, 0.0] |
| 0.0513 | 23.5 | 9400 | 0.6472 | 0.5144 | 0.4306 | 0.8897 | [nan, 0.938401309042541, 0.9600648179629494, 0.5333469626168225, 0.832045261686822, 0.6450022850427629, nan, 0.7455948939896135, 0.883593490534706, 0.23551099879862464, 0.9506135691239773, 0.5523380258500041, 0.0, 0.6968524251805985, 0.18312523647370413, 0.8904413197376112, 0.0, 0.06160814808996413, 0.9256348385566595, 0.12978691700193712, 0.6801915871922148, 0.5208407367015084, 0.04416447846248394, nan, 0.1951942880681038, 0.5735463442717329, 0.5357736367463606, 0.010072522159548751, 0.9380115028759878, 0.9056712133078884, 0.9770508172388136, 0.017681006258029756, 0.4195573980369445, 0.4783152790270228, 0.0] | [nan, 0.8645788687513425, 0.8959992534632647, 0.44551363683824813, 0.7647562903055005, 0.48403962995403316, nan, 0.6342904860496079, 0.6900071507171095, 0.2094308344078099, 0.8896775711392028, 0.4683431642874594, 0.0, 0.5778034484233945, 0.08829968377523717, 0.5990191205946445, 0.0, 0.060376680693831467, 0.7987594181280973, 0.10780592458123607, 0.47080665968645763, 0.45253694794349175, 0.04196862307876085, nan, 0.13750677087363616, 0.4326699094290159, 0.44833404409174343, 0.009754194303550527, 0.8891644113783483, 0.7456061236432407, 0.9539508207140677, 0.014409173235161254, 0.23587072008774035, 0.3678274990977986, 0.0] |
| 0.0514 | 23.75 | 9500 | 0.6439 | 0.5126 | 0.4298 | 0.8893 | [nan, 0.9377822895762951, 0.9605358193045652, 0.5385, 0.8340916008081545, 0.6271635536295225, nan, 0.7452691324573968, 0.884822318166722, 0.22701851775135673, 0.9488086350085531, 0.537766526714415, 0.0, 0.6666150670794634, 0.20002522386177324, 0.8838085341300254, 0.0, 0.05781164087660042, 0.9238019884436897, 0.11829666054073742, 0.6694155391023081, 0.5142496967171933, 0.043549918989887706, nan, 0.19379376630509407, 0.5833176322813628, 0.5375905696749462, 0.014101531023368252, 0.9389680151020606, 0.9049790133806934, 0.9761012589582619, 0.02082556260101952, 0.414029953870227, 0.5005852053386369, 0.0] | [nan, 0.863411965165267, 0.894931428278196, 0.4402552004737254, 0.7611011560258087, 0.4837046157587918, nan, 0.6314089786667951, 0.6898753375504013, 0.2022476056909819, 0.8895664124405706, 0.4596777031068576, 0.0, 0.5673444293179922, 0.08523215821152193, 0.6083079089415631, 0.0, 0.056674965989886805, 0.7993862287218525, 0.09987768652804473, 0.4710007534678047, 0.450200875376809, 0.041379127295891285, nan, 0.1393342283999368, 0.4316562226473846, 0.44881423656073105, 0.013539651837524178, 0.8892954904899649, 0.7457058534465373, 0.9537927510495554, 0.016624966398544282, 0.24126375122858124, 0.37717282181124784, 0.0] |
| 0.0396 | 24.0 | 9600 | 0.6535 | 0.5114 | 0.4293 | 0.8894 | [nan, 0.9355970923117436, 0.9613217787436595, 0.5374941588785047, 0.8288621111896686, 0.642493049404965, nan, 0.7527694039253403, 0.878070882952982, 0.22343510501677782, 0.9446323372316829, 0.5478719025273731, 0.0, 0.6478844169246646, 0.1983856728465128, 0.8865769305708905, 0.0, 0.07386170240620009, 0.92611209153323, 0.1052169737909568, 0.6754384809956214, 0.5089943264670923, 0.04279568690988323, nan, 0.19272277907455718, 0.5795022766525357, 0.533735126631362, 0.008058017727639, 0.9392768622420797, 0.9018779025514876, 0.9758392561919, 0.014779932860872808, 0.4110833384137048, 0.4900487159002665, 0.0] | [nan, 0.8639528354166897, 0.8950065886128323, 0.44207385913246505, 0.7660355663095111, 0.48472638815638147, nan, 0.632634318964356, 0.6931134697057083, 0.20094633110411506, 0.8905903659512103, 0.4648726053472574, 0.0, 0.5535911115030201, 0.08658556723729839, 0.604755865918694, 0.0, 0.0724857392466211, 0.7980282230680995, 0.09017126154632008, 0.4707250951496855, 0.44738482499754295, 0.04074793201585233, nan, 0.13850404578646142, 0.43285457950063133, 0.4469182529964006, 0.007840062720501764, 0.8885988668670501, 0.746866946124605, 0.9537924535842215, 0.012023161337086795, 0.24114295250810605, 0.37191019096397804, 0.0] |
| 0.0572 | 24.25 | 9700 | 0.6468 | 0.5169 | 0.4312 | 0.8893 | [nan, 0.9401996856733055, 0.9583929096522826, 0.5344988317757009, 0.8275082400146594, 0.6494017622545427, nan, 0.7543103076809053, 0.8711154338852778, 0.24802187331703882, 0.9453213909924968, 0.5670947559068082, 0.0, 0.7040763673890609, 0.20204313280363223, 0.8891017730726765, 0.0, 0.06668761291336109, 0.9255172844843733, 0.1113677378764549, 0.6754443327730256, 0.5202249807001851, 0.044248282026928876, nan, 0.19305231360703007, 0.5827890301983566, 0.55261350291374, 0.014101531023368252, 0.9394324953961886, 0.9048990380903004, 0.9755035483352065, 0.0154197231547101, 0.45343331504399603, 0.47399118420979125, 0.0] | [nan, 0.863689319961114, 0.895499199129711, 0.4429491151299229, 0.765606502579043, 0.48571154804691785, nan, 0.6324972973597951, 0.6956526681114833, 0.21654760828284655, 0.8900625950293436, 0.47545424740738185, 0.0, 0.5803666368933691, 0.08725014977397745, 0.5992339680455242, 0.0, 0.06544361365913821, 0.7982999807741021, 0.09452243441114062, 0.4717078672807595, 0.4521680319629779, 0.04200588718873478, nan, 0.13927135130851676, 0.4339583670272156, 0.4507663389242337, 0.01348747591522158, 0.8884945203133995, 0.7465496843182982, 0.9537005332798949, 0.012399112712579277, 0.24028127759471044, 0.3662329926099869, 0.0] |
| 0.1 | 24.5 | 9800 | 0.6434 | 0.5135 | 0.4300 | 0.8895 | [nan, 0.9377224102212196, 0.9606645248290818, 0.5361588785046729, 0.8331230894215592, 0.6375564947567199, nan, 0.7494747310743753, 0.8814869288798216, 0.23789303616554125, 0.9491298161249899, 0.5208281880299662, 0.0, 0.7291537667698659, 0.1923319460209358, 0.8872670000649477, 0.0, 0.058754221977849345, 0.9251466166261608, 0.10029967383565953, 0.684280516653427, 0.5108906098741529, 0.04338231186099782, nan, 0.1931896196622271, 0.581302663945151, 0.5429748953047794, 0.014101531023368252, 0.939044218900316, 0.9053540699149504, 0.9762874046608516, 0.016517986655062374, 0.4174033205307972, 0.4717006430275368, 0.0] | [nan, 0.8641608155359141, 0.8958643122776131, 0.4417664033758718, 0.7644541831979321, 0.4846296892790795, nan, 0.6335999382179972, 0.6905137105945841, 0.21054850773630565, 0.8890883354259757, 0.44958072768618534, 0.0, 0.6023700925018117, 0.08546290069491146, 0.6030192343768966, 0.0, 0.057282891713891865, 0.7981027891830667, 0.08634672672073433, 0.470738722708764, 0.44815859378883993, 0.04122753457750405, nan, 0.1376066035521477, 0.4340720968586592, 0.4532255678035067, 0.01352918438345574, 0.888563607775072, 0.7458284701692807, 0.9538944088343424, 0.01350879014029907, 0.2349899322716456, 0.3667384437299315, 0.0] |
| 0.0547 | 24.75 | 9900 | 0.6482 | 0.5155 | 0.4313 | 0.8898 | [nan, 0.9397340904212859, 0.9603330836947732, 0.5307733644859813, 0.8309005858255233, 0.6429241895489165, nan, 0.7515697741559071, 0.8821369265075675, 0.23520029827250508, 0.948613379528076, 0.5628961883592657, 0.0, 0.7383384932920537, 0.19170134947660486, 0.8888176268104176, 0.0, 0.06747309716440185, 0.9241314709843229, 0.1176757893342605, 0.6804680836745651, 0.509839842170402, 0.04290742499580982, nan, 0.19313469724014828, 0.5775631967341812, 0.5366821032106535, 0.009669621273166801, 0.9403802717370998, 0.9035215326574961, 0.9734618635336802, 0.012358054623067678, 0.41701721229856326, 0.48626373626373626, 0.0] | [nan, 0.8640778611527823, 0.8958137823018933, 0.4460626314967881, 0.7641756445447411, 0.4858917928580605, nan, 0.6328187132466054, 0.6908867956078256, 0.20850548118768247, 0.8893168906380365, 0.47044860327507915, 0.0, 0.6030682345007797, 0.08536927829261444, 0.6011740028114567, 0.0, 0.06583048076431819, 0.7992350659678636, 0.09887388797306791, 0.4713607906006725, 0.44755617108819296, 0.040873892333484124, nan, 0.13801020408163264, 0.4335135793399971, 0.45185060816356987, 0.0093603744149766, 0.8886009280250379, 0.7464543006342957, 0.9536265277974683, 0.010431767147039596, 0.2352570275599578, 0.3719794479055262, 0.0] |
| 0.0627 | 25.0 | 10000 | 0.6463 | 0.5168 | 0.4317 | 0.8895 | [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0] | [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 91,893 | [
[
-0.046600341796875,
-0.0528564453125,
0.0192108154296875,
0.021148681640625,
0.00670623779296875,
-0.0011796951293945312,
0.00799560546875,
-0.0026226043701171875,
0.060577392578125,
0.02740478515625,
-0.015625,
-0.0222015380859375,
-0.046661376953125,
-0.00... |
maywell/Synatra-7B-Instruct-v0.2 | 2023-10-24T12:56:23.000Z | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | maywell | null | null | maywell/Synatra-7B-Instruct-v0.2 | 6 | 1,757 | transformers | 2023-10-12T02:29:48 | ---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# **Synatra-7B-Instruct-v0.2**
Made by StableFluffy
**Contact (Do not Contact for personal things.)**
Discord : is.maywell
Telegram : AlzarTakkarsen
## License
This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only.
The "Model" (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the included **cc-by-nc-4.0** license and the non-commercial-use clause remain in any parent repository, regardless of other models' licenses.
The license may change when a new model is released. If you want to use this model for commercial purposes, contact me.
## Model Details
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
**Trained On**
A6000 48GB * 8
## TODO
- Build an RP (roleplay)-focused fine-tuned model
- Refine the dataset
- Improve language-understanding ability
- Strengthen common-sense knowledge
- Replace the tokenizer
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] 아이작 뉴턴의 업적을 알려줘. [/INST]"
```
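The formatting rule above can be sketched as a small helper. This is illustrative only — in practice the tokenizer's built-in `chat_template` applies this format for you:

```python
def format_prompt(messages, bos="<s>", eos="</s>"):
    """Wrap {role, content} messages in Mistral-style [INST] ... [/INST]
    tags. Only the very first instruction is preceded by the
    beginning-of-sentence token; assistant turns end with eos."""
    out = bos
    for m in messages:
        if m["role"] == "user":
            out += f"[INST] {m['content']} [/INST]"
        else:
            out += f" {m['content']}{eos}"
    return out

print(format_prompt([{"role": "user", "content": "아이작 뉴턴의 업적을 알려줘."}]))
# <s>[INST] 아이작 뉴턴의 업적을 알려줘. [/INST]
```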
# **Model Benchmark**
## Ko-LLM-Leaderboard
| Model | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | Avg
| --- | --- | --- | --- | --- | --- | ---
| kyujinpy/KoT-platypus2-13B(No.1 at 2023/10/12) | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 | 49.55
| Synatra-V0.1-7B-Instruct | 41.72 | 49.28 | 43.27 | 43.75 | 39.32 | 43.47
| **Synatra-7B-Instruct-v0.2** | **41.81** | **49.35** | **43.99** | **45.77** | **42.96** | **44.78**
The model is stronger on Ko-MMLU but shows a clear weakness on Ko-CommonGen V2.
# **Implementation Code**
Since the `chat_template` already contains the instruction format described above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
If you run it in oobabooga, your prompt would look like this.
```
[INST] 링컨에 대해서 알려줘. [/INST]
```
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
--- | 2,818 | [
[
-0.018035888671875,
-0.06072998046875,
0.00749969482421875,
0.026031494140625,
-0.033721923828125,
-0.015472412109375,
-0.0103302001953125,
-0.018646240234375,
0.021636962890625,
0.0178070068359375,
-0.05963134765625,
-0.046875,
-0.046783447265625,
0.0059509... |
TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ | 2023-09-27T12:51:12.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ | 1 | 1,754 | transformers | 2023-09-19T09:34:18 | ---
license: llama2
model_name: Llama 2 70B LoRA Assemble v2
base_model: oh-yeontaek/llama-2-70B-LoRA-assemble-v2
inference: false
model_creator: oh-yeontaek
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 70B LoRA Assemble v2 - AWQ
- Model creator: [oh-yeontaek](https://huggingface.co/oh-yeontaek)
- Original model: [Llama 2 70B LoRA Assemble v2](https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v2)
<!-- description start -->
## Description
This repo contains AWQ model files for [oh-yeontaek's Llama 2 70B LoRA Assemble v2](https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v2).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
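As a rough illustration of what group-wise low-bit weight quantization does, here is a toy sketch of 4-bit round-to-nearest quantization with one scale/offset per group. This is *not* the actual AWQ algorithm — AWQ additionally rescales salient weight channels using activation statistics before quantizing — but it shows why the memory footprint shrinks roughly 4x versus fp16:

```python
import numpy as np

def quantize_groupwise(w, bits=4, group_size=128):
    """Toy asymmetric round-to-nearest quantization with one scale and
    offset per group of `group_size` weights (len(w) must be divisible
    by group_size). Returns the dequantized approximation of w."""
    qmax = 2**bits - 1
    groups = w.reshape(-1, group_size)
    lo = groups.min(axis=1, keepdims=True)
    hi = groups.max(axis=1, keepdims=True)
    scale = (hi - lo) / qmax
    q = np.clip(np.round((groups - lo) / scale), 0, qmax)  # 4-bit ints
    return (q * scale + lo).reshape(-1)                    # dequantize

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
err = np.abs(quantize_groupwise(w) - w).max()
print(f"max abs error: {err:.4f}")  # small relative to the weight range
```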
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-LoRA-Assemble-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-LoRA-Assemble-v2-GGUF)
* [oh-yeontaek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.61 GB
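The ~36.6 GB file size is close to what a back-of-envelope estimate predicts. Assumptions (illustrative only): roughly 70B quantized weights at 4 bits each, plus one fp16 scale and one fp16 zero-point per 128-weight group; real files differ slightly because some tensors (e.g. embeddings) are left unquantized and zero-points may be packed differently:

```python
params = 70e9                # approximate weight count of a 70B model
group_size = 128
payload = params * 4 / 8                  # 4-bit packed weights, in bytes
overhead = (params / group_size) * 2 * 2  # fp16 scale + zero per group
total_gb = (payload + overhead) / 1e9
print(f"~{total_gb:.1f} GB")  # → ~37.2 GB, vs. the published 36.61 GB
```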
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: oh-yeontaek's Llama 2 70B LoRA Assemble v2
No original model card was available.
| 10,254 | [
[
-0.039306640625,
-0.060791015625,
0.0289306640625,
0.00321197509765625,
-0.020263671875,
-0.01276397705078125,
0.013946533203125,
-0.042999267578125,
0.0023899078369140625,
0.027313232421875,
-0.049224853515625,
-0.036956787109375,
-0.02239990234375,
-0.0083... |
llm-book/bert-base-japanese-v3-marc_ja | 2023-07-24T06:49:13.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ja",
"dataset:llm-book/JGLUE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | llm-book | null | null | llm-book/bert-base-japanese-v3-marc_ja | 1 | 1,753 | transformers | 2023-06-01T14:29:06 | ---
language:
- ja
license: apache-2.0
library_name: transformers
datasets:
- llm-book/JGLUE
pipeline_tag: text-classification
---
# bert-base-japanese-v3-marc_ja
This is the sentiment-analysis model introduced in Chapter 5 of "[大規模言語モデル入門](https://www.amazon.co.jp/dp/4297136333)" (Introduction to Large Language Models).
It was built by fine-tuning [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) on the MARC-ja dataset from [JGLUE](https://huggingface.co/datasets/llm-book/JGLUE).
## Related links
* [GitHub repository](https://github.com/ghmagazine/llm-book)
* [Colab notebook (training)](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter5/5-2-sentiment-analysis-finetuning.ipynb)
* [Colab notebook (inference)](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter5/5-3-sentiment-analysis-analysis.ipynb)
* [Dataset](https://huggingface.co/datasets/llm-book/JGLUE)
* [大規模言語モデル入門 (Amazon.co.jp)](https://www.amazon.co.jp/dp/4297136333/)
* [大規模言語モデル入門 (gihyo.jp)](https://gihyo.jp/book/2023/978-4-297-13633-8)
## Usage
```python
from transformers import pipeline
text_classification_pipeline = pipeline(model="llm-book/bert-base-japanese-v3-marc_ja")
print(text_classification_pipeline("世界には言葉がわからなくても感動する音楽がある。")[0])
# {'label': 'positive', 'score': 0.9993619322776794}
```
## License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | 1,314 | [
[
-0.036376953125,
-0.047760009765625,
0.0165863037109375,
0.0355224609375,
-0.041259765625,
-0.002437591552734375,
-0.0161590576171875,
-0.029541015625,
0.02911376953125,
0.0299224853515625,
-0.05889892578125,
-0.06524658203125,
-0.040618896484375,
0.01513671... |
Salesforce/codet5p-16b | 2023-08-04T02:44:48.000Z | [
"transformers",
"pytorch",
"codet5p",
"text2text-generation",
"custom_code",
"arxiv:2305.07922",
"license:bsd-3-clause",
"autotrain_compatible",
"has_space",
"region:us"
] | text2text-generation | Salesforce | null | null | Salesforce/codet5p-16b | 53 | 1,752 | transformers | 2023-05-17T02:23:58 | ---
license: bsd-3-clause
---
# CodeT5+ 16B
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `AutoModelForSeq2SeqLM` functionality and employs the same tokenizer as [CodeGen](https://github.com/salesforce/CodeGen).
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "Salesforce/codet5p-16b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
trust_remote_code=True).to(device)
encoding = tokenizer("def print_hello_world():", return_tensors="pt").to(device)
# Seed the decoder with the prompt so generation continues the input code.
encoding['decoder_input_ids'] = encoding['input_ids'].clone()
outputs = model.generate(**encoding, max_length=15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
## Training procedure
This checkpoint is initialized from off-the-shelf LLMs, i.e. its encoder is initialized from [CodeGen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) and its decoder is initialized from [CodeGen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono).
During first-stage pretraining, it is trained on unimodal code data with a diverse set of objectives, including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation.
Please refer to the paper for more details.
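Span denoising, as used in first-stage pretraining, replaces contiguous token spans in the input with sentinel markers and trains the model to reconstruct the removed spans. A toy sketch of the corruption step (the tokenization, sentinel format, and span sampling here are illustrative, not the exact CodeT5+ preprocessing):

```python
import random

def corrupt_spans(tokens, span_len=2, num_spans=1, seed=0):
    """Replace contiguous spans with sentinel tokens (T5-style span denoising)."""
    rng = random.Random(seed)
    tokens = list(tokens)
    target = []
    for i in range(num_spans):
        start = rng.randrange(len(tokens) - span_len)
        sentinel = f"<extra_id_{i}>"
        # Decoder target spells out each sentinel followed by the removed span.
        target.extend([sentinel, *tokens[start:start + span_len]])
        tokens[start:start + span_len] = [sentinel]
    # Encoder input keeps the sentinels in place of the removed tokens.
    return tokens, target

source, target = corrupt_spans("def add ( a , b ) : return a + b".split())
```

The encoder sees `source` with a `<extra_id_0>` placeholder; the decoder learns to emit the placeholder followed by the two tokens that were removed.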
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
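The pass@k figures above are typically computed with the unbiased estimator introduced with HumanEval: given n generated samples per problem of which c pass the unit tests, pass@k = 1 - C(n-c, k)/C(n, k). A minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k sample must contain a passing solution
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the plain pass rate c/n.
print(pass_at_k(n=20, c=5, k=1))  # 0.25
```

As k grows toward n, the estimate rises toward 1 whenever at least one sample passes, which is why pass@10 is always at least as large as pass@1.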
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` | 5,242 | [
[
-0.031005859375,
-0.0355224609375,
0.00928497314453125,
0.0261993408203125,
-0.011077880859375,
0.0074920654296875,
-0.0313720703125,
-0.04656982421875,
-0.018951416015625,
0.0184478759765625,
-0.033447265625,
-0.0455322265625,
-0.034881591796875,
0.01286315... |
TaylorAI/bge-micro | 2023-10-07T06:59:56.000Z | [
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | TaylorAI | null | null | TaylorAI/bge-micro | 13 | 1,751 | sentence-transformers | 2023-10-07T06:46:18 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge_micro
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.26865671641792
- type: ap
value: 28.174006539079688
- type: f1
value: 59.724963358211035
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 75.3691
- type: ap
value: 69.64182876373573
- type: f1
value: 75.2906345000088
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 35.806
- type: f1
value: 35.506516495961904
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.24
- type: map_at_10
value: 42.832
- type: map_at_100
value: 43.797000000000004
- type: map_at_1000
value: 43.804
- type: map_at_3
value: 38.134
- type: map_at_5
value: 40.744
- type: mrr_at_1
value: 27.951999999999998
- type: mrr_at_10
value: 43.111
- type: mrr_at_100
value: 44.083
- type: mrr_at_1000
value: 44.09
- type: mrr_at_3
value: 38.431
- type: mrr_at_5
value: 41.019
- type: ndcg_at_1
value: 27.24
- type: ndcg_at_10
value: 51.513
- type: ndcg_at_100
value: 55.762
- type: ndcg_at_1000
value: 55.938
- type: ndcg_at_3
value: 41.743
- type: ndcg_at_5
value: 46.454
- type: precision_at_1
value: 27.24
- type: precision_at_10
value: 7.93
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 17.402
- type: precision_at_5
value: 12.731
- type: recall_at_1
value: 27.24
- type: recall_at_10
value: 79.303
- type: recall_at_100
value: 98.151
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 52.205
- type: recall_at_5
value: 63.656
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.59766397469585
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.480143023109626
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.09326229984527
- type: mrr
value: 72.18429846546191
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.47582391622187
- type: cos_sim_spearman
value: 83.41635852964214
- type: euclidean_pearson
value: 84.21969728559216
- type: euclidean_spearman
value: 83.46575724558684
- type: manhattan_pearson
value: 83.83107014910223
- type: manhattan_spearman
value: 83.13321954800792
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.58116883116882
- type: f1
value: 80.53335622619781
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.13458676004344
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.720429607514898
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.051000000000002
- type: map_at_10
value: 36.291000000000004
- type: map_at_100
value: 37.632
- type: map_at_1000
value: 37.772
- type: map_at_3
value: 33.288000000000004
- type: map_at_5
value: 35.035
- type: mrr_at_1
value: 33.333
- type: mrr_at_10
value: 42.642
- type: mrr_at_100
value: 43.401
- type: mrr_at_1000
value: 43.463
- type: mrr_at_3
value: 40.272000000000006
- type: mrr_at_5
value: 41.753
- type: ndcg_at_1
value: 33.333
- type: ndcg_at_10
value: 42.291000000000004
- type: ndcg_at_100
value: 47.602
- type: ndcg_at_1000
value: 50.109
- type: ndcg_at_3
value: 38.033
- type: ndcg_at_5
value: 40.052
- type: precision_at_1
value: 33.333
- type: precision_at_10
value: 8.254999999999999
- type: precision_at_100
value: 1.353
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 18.884
- type: precision_at_5
value: 13.447999999999999
- type: recall_at_1
value: 26.051000000000002
- type: recall_at_10
value: 53.107000000000006
- type: recall_at_100
value: 76.22
- type: recall_at_1000
value: 92.92399999999999
- type: recall_at_3
value: 40.073
- type: recall_at_5
value: 46.327
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.698999999999998
- type: map_at_10
value: 26.186
- type: map_at_100
value: 27.133000000000003
- type: map_at_1000
value: 27.256999999999998
- type: map_at_3
value: 24.264
- type: map_at_5
value: 25.307000000000002
- type: mrr_at_1
value: 24.712999999999997
- type: mrr_at_10
value: 30.703999999999997
- type: mrr_at_100
value: 31.445
- type: mrr_at_1000
value: 31.517
- type: mrr_at_3
value: 28.992
- type: mrr_at_5
value: 29.963
- type: ndcg_at_1
value: 24.712999999999997
- type: ndcg_at_10
value: 30.198000000000004
- type: ndcg_at_100
value: 34.412
- type: ndcg_at_1000
value: 37.174
- type: ndcg_at_3
value: 27.148
- type: ndcg_at_5
value: 28.464
- type: precision_at_1
value: 24.712999999999997
- type: precision_at_10
value: 5.489999999999999
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 12.803
- type: precision_at_5
value: 8.981
- type: recall_at_1
value: 19.698999999999998
- type: recall_at_10
value: 37.595
- type: recall_at_100
value: 55.962
- type: recall_at_1000
value: 74.836
- type: recall_at_3
value: 28.538999999999998
- type: recall_at_5
value: 32.279
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.224
- type: map_at_10
value: 44.867000000000004
- type: map_at_100
value: 45.944
- type: map_at_1000
value: 46.013999999999996
- type: map_at_3
value: 42.009
- type: map_at_5
value: 43.684
- type: mrr_at_1
value: 39.436
- type: mrr_at_10
value: 48.301
- type: mrr_at_100
value: 49.055
- type: mrr_at_1000
value: 49.099
- type: mrr_at_3
value: 45.956
- type: mrr_at_5
value: 47.445
- type: ndcg_at_1
value: 39.436
- type: ndcg_at_10
value: 50.214000000000006
- type: ndcg_at_100
value: 54.63
- type: ndcg_at_1000
value: 56.165
- type: ndcg_at_3
value: 45.272
- type: ndcg_at_5
value: 47.826
- type: precision_at_1
value: 39.436
- type: precision_at_10
value: 8.037999999999998
- type: precision_at_100
value: 1.118
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 20.125
- type: precision_at_5
value: 13.918
- type: recall_at_1
value: 34.224
- type: recall_at_10
value: 62.690999999999995
- type: recall_at_100
value: 81.951
- type: recall_at_1000
value: 92.93299999999999
- type: recall_at_3
value: 49.299
- type: recall_at_5
value: 55.533
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.375
- type: map_at_10
value: 28.366000000000003
- type: map_at_100
value: 29.363
- type: map_at_1000
value: 29.458000000000002
- type: map_at_3
value: 26.247
- type: map_at_5
value: 27.439000000000004
- type: mrr_at_1
value: 22.938
- type: mrr_at_10
value: 30.072
- type: mrr_at_100
value: 30.993
- type: mrr_at_1000
value: 31.070999999999998
- type: mrr_at_3
value: 28.004
- type: mrr_at_5
value: 29.179
- type: ndcg_at_1
value: 22.938
- type: ndcg_at_10
value: 32.516
- type: ndcg_at_100
value: 37.641999999999996
- type: ndcg_at_1000
value: 40.150999999999996
- type: ndcg_at_3
value: 28.341
- type: ndcg_at_5
value: 30.394
- type: precision_at_1
value: 22.938
- type: precision_at_10
value: 5.028
- type: precision_at_100
value: 0.8
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 12.052999999999999
- type: precision_at_5
value: 8.497
- type: recall_at_1
value: 21.375
- type: recall_at_10
value: 43.682
- type: recall_at_100
value: 67.619
- type: recall_at_1000
value: 86.64699999999999
- type: recall_at_3
value: 32.478
- type: recall_at_5
value: 37.347
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.95
- type: map_at_10
value: 21.417
- type: map_at_100
value: 22.525000000000002
- type: map_at_1000
value: 22.665
- type: map_at_3
value: 18.684
- type: map_at_5
value: 20.275000000000002
- type: mrr_at_1
value: 18.159
- type: mrr_at_10
value: 25.373
- type: mrr_at_100
value: 26.348
- type: mrr_at_1000
value: 26.432
- type: mrr_at_3
value: 22.698999999999998
- type: mrr_at_5
value: 24.254
- type: ndcg_at_1
value: 18.159
- type: ndcg_at_10
value: 26.043
- type: ndcg_at_100
value: 31.491999999999997
- type: ndcg_at_1000
value: 34.818
- type: ndcg_at_3
value: 21.05
- type: ndcg_at_5
value: 23.580000000000002
- type: precision_at_1
value: 18.159
- type: precision_at_10
value: 4.938
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.611999999999999
- type: recall_at_1
value: 14.95
- type: recall_at_10
value: 36.285000000000004
- type: recall_at_100
value: 60.431999999999995
- type: recall_at_1000
value: 84.208
- type: recall_at_3
value: 23.006
- type: recall_at_5
value: 29.304999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.580000000000002
- type: map_at_10
value: 32.906
- type: map_at_100
value: 34.222
- type: map_at_1000
value: 34.346
- type: map_at_3
value: 29.891000000000002
- type: map_at_5
value: 31.679000000000002
- type: mrr_at_1
value: 28.778
- type: mrr_at_10
value: 37.783
- type: mrr_at_100
value: 38.746
- type: mrr_at_1000
value: 38.804
- type: mrr_at_3
value: 35.098
- type: mrr_at_5
value: 36.739
- type: ndcg_at_1
value: 28.778
- type: ndcg_at_10
value: 38.484
- type: ndcg_at_100
value: 44.322
- type: ndcg_at_1000
value: 46.772000000000006
- type: ndcg_at_3
value: 33.586
- type: ndcg_at_5
value: 36.098
- type: precision_at_1
value: 28.778
- type: precision_at_10
value: 7.151000000000001
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.105
- type: precision_at_5
value: 11.704
- type: recall_at_1
value: 23.580000000000002
- type: recall_at_10
value: 50.151999999999994
- type: recall_at_100
value: 75.114
- type: recall_at_1000
value: 91.467
- type: recall_at_3
value: 36.552
- type: recall_at_5
value: 43.014
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.669999999999998
- type: map_at_10
value: 28.687
- type: map_at_100
value: 30.061
- type: map_at_1000
value: 30.197000000000003
- type: map_at_3
value: 26.134
- type: map_at_5
value: 27.508
- type: mrr_at_1
value: 26.256
- type: mrr_at_10
value: 34.105999999999995
- type: mrr_at_100
value: 35.137
- type: mrr_at_1000
value: 35.214
- type: mrr_at_3
value: 31.791999999999998
- type: mrr_at_5
value: 33.145
- type: ndcg_at_1
value: 26.256
- type: ndcg_at_10
value: 33.68
- type: ndcg_at_100
value: 39.7
- type: ndcg_at_1000
value: 42.625
- type: ndcg_at_3
value: 29.457
- type: ndcg_at_5
value: 31.355
- type: precision_at_1
value: 26.256
- type: precision_at_10
value: 6.2330000000000005
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 14.193
- type: precision_at_5
value: 10.113999999999999
- type: recall_at_1
value: 20.669999999999998
- type: recall_at_10
value: 43.254999999999995
- type: recall_at_100
value: 69.118
- type: recall_at_1000
value: 89.408
- type: recall_at_3
value: 31.135
- type: recall_at_5
value: 36.574
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.488833333333336
- type: map_at_10
value: 29.025416666666665
- type: map_at_100
value: 30.141249999999992
- type: map_at_1000
value: 30.264083333333335
- type: map_at_3
value: 26.599333333333337
- type: map_at_5
value: 28.004666666666665
- type: mrr_at_1
value: 25.515
- type: mrr_at_10
value: 32.8235
- type: mrr_at_100
value: 33.69958333333333
- type: mrr_at_1000
value: 33.77191666666668
- type: mrr_at_3
value: 30.581000000000003
- type: mrr_at_5
value: 31.919666666666668
- type: ndcg_at_1
value: 25.515
- type: ndcg_at_10
value: 33.64241666666666
- type: ndcg_at_100
value: 38.75816666666667
- type: ndcg_at_1000
value: 41.472166666666666
- type: ndcg_at_3
value: 29.435083333333335
- type: ndcg_at_5
value: 31.519083333333338
- type: precision_at_1
value: 25.515
- type: precision_at_10
value: 5.89725
- type: precision_at_100
value: 0.9918333333333335
- type: precision_at_1000
value: 0.14075
- type: precision_at_3
value: 13.504000000000001
- type: precision_at_5
value: 9.6885
- type: recall_at_1
value: 21.488833333333336
- type: recall_at_10
value: 43.60808333333333
- type: recall_at_100
value: 66.5045
- type: recall_at_1000
value: 85.70024999999998
- type: recall_at_3
value: 31.922166666666662
- type: recall_at_5
value: 37.29758333333334
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.781
- type: map_at_10
value: 27.173000000000002
- type: map_at_100
value: 27.967
- type: map_at_1000
value: 28.061999999999998
- type: map_at_3
value: 24.973
- type: map_at_5
value: 26.279999999999998
- type: mrr_at_1
value: 23.773
- type: mrr_at_10
value: 29.849999999999998
- type: mrr_at_100
value: 30.595
- type: mrr_at_1000
value: 30.669
- type: mrr_at_3
value: 27.761000000000003
- type: mrr_at_5
value: 29.003
- type: ndcg_at_1
value: 23.773
- type: ndcg_at_10
value: 31.033
- type: ndcg_at_100
value: 35.174
- type: ndcg_at_1000
value: 37.72
- type: ndcg_at_3
value: 26.927
- type: ndcg_at_5
value: 29.047
- type: precision_at_1
value: 23.773
- type: precision_at_10
value: 4.8469999999999995
- type: precision_at_100
value: 0.75
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 11.452
- type: precision_at_5
value: 8.129
- type: recall_at_1
value: 20.781
- type: recall_at_10
value: 40.463
- type: recall_at_100
value: 59.483
- type: recall_at_1000
value: 78.396
- type: recall_at_3
value: 29.241
- type: recall_at_5
value: 34.544000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.074000000000002
- type: map_at_10
value: 20.757
- type: map_at_100
value: 21.72
- type: map_at_1000
value: 21.844
- type: map_at_3
value: 18.929000000000002
- type: map_at_5
value: 19.894000000000002
- type: mrr_at_1
value: 18.307000000000002
- type: mrr_at_10
value: 24.215
- type: mrr_at_100
value: 25.083
- type: mrr_at_1000
value: 25.168000000000003
- type: mrr_at_3
value: 22.316
- type: mrr_at_5
value: 23.36
- type: ndcg_at_1
value: 18.307000000000002
- type: ndcg_at_10
value: 24.651999999999997
- type: ndcg_at_100
value: 29.296
- type: ndcg_at_1000
value: 32.538
- type: ndcg_at_3
value: 21.243000000000002
- type: ndcg_at_5
value: 22.727
- type: precision_at_1
value: 18.307000000000002
- type: precision_at_10
value: 4.446
- type: precision_at_100
value: 0.792
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 9.945
- type: precision_at_5
value: 7.123
- type: recall_at_1
value: 15.074000000000002
- type: recall_at_10
value: 33.031
- type: recall_at_100
value: 53.954
- type: recall_at_1000
value: 77.631
- type: recall_at_3
value: 23.253
- type: recall_at_5
value: 27.218999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.04
- type: map_at_10
value: 28.226000000000003
- type: map_at_100
value: 29.337999999999997
- type: map_at_1000
value: 29.448999999999998
- type: map_at_3
value: 25.759
- type: map_at_5
value: 27.226
- type: mrr_at_1
value: 24.067
- type: mrr_at_10
value: 31.646
- type: mrr_at_100
value: 32.592999999999996
- type: mrr_at_1000
value: 32.668
- type: mrr_at_3
value: 29.26
- type: mrr_at_5
value: 30.725
- type: ndcg_at_1
value: 24.067
- type: ndcg_at_10
value: 32.789
- type: ndcg_at_100
value: 38.253
- type: ndcg_at_1000
value: 40.961
- type: ndcg_at_3
value: 28.189999999999998
- type: ndcg_at_5
value: 30.557000000000002
- type: precision_at_1
value: 24.067
- type: precision_at_10
value: 5.532
- type: precision_at_100
value: 0.928
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 12.5
- type: precision_at_5
value: 9.16
- type: recall_at_1
value: 21.04
- type: recall_at_10
value: 43.167
- type: recall_at_100
value: 67.569
- type: recall_at_1000
value: 86.817
- type: recall_at_3
value: 31.178
- type: recall_at_5
value: 36.730000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.439
- type: map_at_10
value: 28.531000000000002
- type: map_at_100
value: 29.953999999999997
- type: map_at_1000
value: 30.171
- type: map_at_3
value: 26.546999999999997
- type: map_at_5
value: 27.71
- type: mrr_at_1
value: 26.087
- type: mrr_at_10
value: 32.635
- type: mrr_at_100
value: 33.629999999999995
- type: mrr_at_1000
value: 33.71
- type: mrr_at_3
value: 30.731
- type: mrr_at_5
value: 31.807999999999996
- type: ndcg_at_1
value: 26.087
- type: ndcg_at_10
value: 32.975
- type: ndcg_at_100
value: 38.853
- type: ndcg_at_1000
value: 42.158
- type: ndcg_at_3
value: 29.894
- type: ndcg_at_5
value: 31.397000000000002
- type: precision_at_1
value: 26.087
- type: precision_at_10
value: 6.2059999999999995
- type: precision_at_100
value: 1.298
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 14.097000000000001
- type: precision_at_5
value: 9.959999999999999
- type: recall_at_1
value: 21.439
- type: recall_at_10
value: 40.519
- type: recall_at_100
value: 68.073
- type: recall_at_1000
value: 89.513
- type: recall_at_3
value: 31.513
- type: recall_at_5
value: 35.702
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.983
- type: map_at_10
value: 24.898
- type: map_at_100
value: 25.836
- type: map_at_1000
value: 25.934
- type: map_at_3
value: 22.467000000000002
- type: map_at_5
value: 24.019
- type: mrr_at_1
value: 20.333000000000002
- type: mrr_at_10
value: 26.555
- type: mrr_at_100
value: 27.369
- type: mrr_at_1000
value: 27.448
- type: mrr_at_3
value: 24.091
- type: mrr_at_5
value: 25.662000000000003
- type: ndcg_at_1
value: 20.333000000000002
- type: ndcg_at_10
value: 28.834
- type: ndcg_at_100
value: 33.722
- type: ndcg_at_1000
value: 36.475
- type: ndcg_at_3
value: 24.08
- type: ndcg_at_5
value: 26.732
- type: precision_at_1
value: 20.333000000000002
- type: precision_at_10
value: 4.603
- type: precision_at_100
value: 0.771
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 9.982000000000001
- type: precision_at_5
value: 7.6160000000000005
- type: recall_at_1
value: 18.983
- type: recall_at_10
value: 39.35
- type: recall_at_100
value: 62.559
- type: recall_at_1000
value: 83.623
- type: recall_at_3
value: 26.799
- type: recall_at_5
value: 32.997
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.621
- type: map_at_10
value: 17.298
- type: map_at_100
value: 18.983
- type: map_at_1000
value: 19.182
- type: map_at_3
value: 14.552999999999999
- type: map_at_5
value: 15.912
- type: mrr_at_1
value: 23.453
- type: mrr_at_10
value: 33.932
- type: mrr_at_100
value: 34.891
- type: mrr_at_1000
value: 34.943000000000005
- type: mrr_at_3
value: 30.770999999999997
- type: mrr_at_5
value: 32.556000000000004
- type: ndcg_at_1
value: 23.453
- type: ndcg_at_10
value: 24.771
- type: ndcg_at_100
value: 31.738
- type: ndcg_at_1000
value: 35.419
- type: ndcg_at_3
value: 20.22
- type: ndcg_at_5
value: 21.698999999999998
- type: precision_at_1
value: 23.453
- type: precision_at_10
value: 7.785
- type: precision_at_100
value: 1.5270000000000001
- type: precision_at_1000
value: 0.22
- type: precision_at_3
value: 14.962
- type: precision_at_5
value: 11.401
- type: recall_at_1
value: 10.621
- type: recall_at_10
value: 29.726000000000003
- type: recall_at_100
value: 53.996
- type: recall_at_1000
value: 74.878
- type: recall_at_3
value: 18.572
- type: recall_at_5
value: 22.994999999999997
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.819
- type: map_at_10
value: 14.188
- type: map_at_100
value: 19.627
- type: map_at_1000
value: 20.757
- type: map_at_3
value: 10.352
- type: map_at_5
value: 12.096
- type: mrr_at_1
value: 54.25
- type: mrr_at_10
value: 63.798
- type: mrr_at_100
value: 64.25
- type: mrr_at_1000
value: 64.268
- type: mrr_at_3
value: 61.667
- type: mrr_at_5
value: 63.153999999999996
- type: ndcg_at_1
value: 39.5
- type: ndcg_at_10
value: 31.064999999999998
- type: ndcg_at_100
value: 34.701
- type: ndcg_at_1000
value: 41.687000000000005
- type: ndcg_at_3
value: 34.455999999999996
- type: ndcg_at_5
value: 32.919
- type: precision_at_1
value: 54.25
- type: precision_at_10
value: 25.4
- type: precision_at_100
value: 7.79
- type: precision_at_1000
value: 1.577
- type: precision_at_3
value: 39.333
- type: precision_at_5
value: 33.6
- type: recall_at_1
value: 6.819
- type: recall_at_10
value: 19.134
- type: recall_at_100
value: 41.191
- type: recall_at_1000
value: 64.699
- type: recall_at_3
value: 11.637
- type: recall_at_5
value: 14.807
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 42.474999999999994
- type: f1
value: 37.79154895614037
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 53.187
- type: map_at_10
value: 64.031
- type: map_at_100
value: 64.507
- type: map_at_1000
value: 64.526
- type: map_at_3
value: 61.926
- type: map_at_5
value: 63.278999999999996
- type: mrr_at_1
value: 57.396
- type: mrr_at_10
value: 68.296
- type: mrr_at_100
value: 68.679
- type: mrr_at_1000
value: 68.688
- type: mrr_at_3
value: 66.289
- type: mrr_at_5
value: 67.593
- type: ndcg_at_1
value: 57.396
- type: ndcg_at_10
value: 69.64
- type: ndcg_at_100
value: 71.75399999999999
- type: ndcg_at_1000
value: 72.179
- type: ndcg_at_3
value: 65.66199999999999
- type: ndcg_at_5
value: 67.932
- type: precision_at_1
value: 57.396
- type: precision_at_10
value: 9.073
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 26.133
- type: precision_at_5
value: 16.943
- type: recall_at_1
value: 53.187
- type: recall_at_10
value: 82.839
- type: recall_at_100
value: 92.231
- type: recall_at_1000
value: 95.249
- type: recall_at_3
value: 72.077
- type: recall_at_5
value: 77.667
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.957
- type: map_at_10
value: 18.427
- type: map_at_100
value: 19.885
- type: map_at_1000
value: 20.088
- type: map_at_3
value: 15.709000000000001
- type: map_at_5
value: 17.153
- type: mrr_at_1
value: 22.377
- type: mrr_at_10
value: 30.076999999999998
- type: mrr_at_100
value: 31.233
- type: mrr_at_1000
value: 31.311
- type: mrr_at_3
value: 27.521
- type: mrr_at_5
value: 29.025000000000002
- type: ndcg_at_1
value: 22.377
- type: ndcg_at_10
value: 24.367
- type: ndcg_at_100
value: 31.04
- type: ndcg_at_1000
value: 35.106
- type: ndcg_at_3
value: 21.051000000000002
- type: ndcg_at_5
value: 22.231
- type: precision_at_1
value: 22.377
- type: precision_at_10
value: 7.005999999999999
- type: precision_at_100
value: 1.3599999999999999
- type: precision_at_1000
value: 0.208
- type: precision_at_3
value: 13.991999999999999
- type: precision_at_5
value: 10.833
- type: recall_at_1
value: 10.957
- type: recall_at_10
value: 30.274
- type: recall_at_100
value: 55.982
- type: recall_at_1000
value: 80.757
- type: recall_at_3
value: 19.55
- type: recall_at_5
value: 24.105999999999998
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.526999999999997
- type: map_at_10
value: 40.714
- type: map_at_100
value: 41.655
- type: map_at_1000
value: 41.744
- type: map_at_3
value: 38.171
- type: map_at_5
value: 39.646
- type: mrr_at_1
value: 59.055
- type: mrr_at_10
value: 66.411
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88300000000001
- type: mrr_at_3
value: 64.846
- type: mrr_at_5
value: 65.824
- type: ndcg_at_1
value: 59.055
- type: ndcg_at_10
value: 49.732
- type: ndcg_at_100
value: 53.441
- type: ndcg_at_1000
value: 55.354000000000006
- type: ndcg_at_3
value: 45.551
- type: ndcg_at_5
value: 47.719
- type: precision_at_1
value: 59.055
- type: precision_at_10
value: 10.366
- type: precision_at_100
value: 1.328
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 28.322999999999997
- type: precision_at_5
value: 18.709
- type: recall_at_1
value: 29.526999999999997
- type: recall_at_10
value: 51.83
- type: recall_at_100
value: 66.42099999999999
- type: recall_at_1000
value: 79.176
- type: recall_at_3
value: 42.485
- type: recall_at_5
value: 46.772000000000006
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 70.69959999999999
- type: ap
value: 64.95539314492567
- type: f1
value: 70.5554935943308
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 13.153
- type: map_at_10
value: 22.277
- type: map_at_100
value: 23.462
- type: map_at_1000
value: 23.546
- type: map_at_3
value: 19.026
- type: map_at_5
value: 20.825
- type: mrr_at_1
value: 13.539000000000001
- type: mrr_at_10
value: 22.753
- type: mrr_at_100
value: 23.906
- type: mrr_at_1000
value: 23.982999999999997
- type: mrr_at_3
value: 19.484
- type: mrr_at_5
value: 21.306
- type: ndcg_at_1
value: 13.553
- type: ndcg_at_10
value: 27.848
- type: ndcg_at_100
value: 33.900999999999996
- type: ndcg_at_1000
value: 36.155
- type: ndcg_at_3
value: 21.116
- type: ndcg_at_5
value: 24.349999999999998
- type: precision_at_1
value: 13.553
- type: precision_at_10
value: 4.695
- type: precision_at_100
value: 0.7779999999999999
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 9.207
- type: precision_at_5
value: 7.155
- type: recall_at_1
value: 13.153
- type: recall_at_10
value: 45.205
- type: recall_at_100
value: 73.978
- type: recall_at_1000
value: 91.541
- type: recall_at_3
value: 26.735
- type: recall_at_5
value: 34.493
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.2530779753762
- type: f1
value: 89.59402328284126
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.95029639762883
- type: f1
value: 48.99988836758662
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.77740416946874
- type: f1
value: 66.21341120969817
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.03631472763955
- type: f1
value: 72.5779336237941
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.98182669158824
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.259462874407582
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.29342377286548
- type: mrr
value: 32.32805799117226
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.692
- type: map_at_10
value: 10.559000000000001
- type: map_at_100
value: 13.665
- type: map_at_1000
value: 15.082
- type: map_at_3
value: 7.68
- type: map_at_5
value: 8.844000000000001
- type: mrr_at_1
value: 38.7
- type: mrr_at_10
value: 47.864000000000004
- type: mrr_at_100
value: 48.583999999999996
- type: mrr_at_1000
value: 48.636
- type: mrr_at_3
value: 45.975
- type: mrr_at_5
value: 47.074
- type: ndcg_at_1
value: 36.378
- type: ndcg_at_10
value: 30.038999999999998
- type: ndcg_at_100
value: 28.226000000000003
- type: ndcg_at_1000
value: 36.958
- type: ndcg_at_3
value: 33.469
- type: ndcg_at_5
value: 32.096999999999994
- type: precision_at_1
value: 38.080000000000005
- type: precision_at_10
value: 22.941
- type: precision_at_100
value: 7.632
- type: precision_at_1000
value: 2.0420000000000003
- type: precision_at_3
value: 31.579
- type: precision_at_5
value: 28.235
- type: recall_at_1
value: 4.692
- type: recall_at_10
value: 14.496
- type: recall_at_100
value: 29.69
- type: recall_at_1000
value: 61.229
- type: recall_at_3
value: 8.871
- type: recall_at_5
value: 10.825999999999999
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.120000000000001
- type: map_at_10
value: 24.092
- type: map_at_100
value: 25.485999999999997
- type: map_at_1000
value: 25.557999999999996
- type: map_at_3
value: 20.076
- type: map_at_5
value: 22.368
- type: mrr_at_1
value: 15.093
- type: mrr_at_10
value: 26.142
- type: mrr_at_100
value: 27.301
- type: mrr_at_1000
value: 27.357
- type: mrr_at_3
value: 22.364
- type: mrr_at_5
value: 24.564
- type: ndcg_at_1
value: 15.093
- type: ndcg_at_10
value: 30.734
- type: ndcg_at_100
value: 37.147999999999996
- type: ndcg_at_1000
value: 38.997
- type: ndcg_at_3
value: 22.82
- type: ndcg_at_5
value: 26.806
- type: precision_at_1
value: 15.093
- type: precision_at_10
value: 5.863
- type: precision_at_100
value: 0.942
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 11.047
- type: precision_at_5
value: 8.863999999999999
- type: recall_at_1
value: 13.120000000000001
- type: recall_at_10
value: 49.189
- type: recall_at_100
value: 78.032
- type: recall_at_1000
value: 92.034
- type: recall_at_3
value: 28.483000000000004
- type: recall_at_5
value: 37.756
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.765
- type: map_at_10
value: 81.069
- type: map_at_100
value: 81.757
- type: map_at_1000
value: 81.782
- type: map_at_3
value: 78.148
- type: map_at_5
value: 79.95400000000001
- type: mrr_at_1
value: 77.8
- type: mrr_at_10
value: 84.639
- type: mrr_at_100
value: 84.789
- type: mrr_at_1000
value: 84.79100000000001
- type: mrr_at_3
value: 83.467
- type: mrr_at_5
value: 84.251
- type: ndcg_at_1
value: 77.82
- type: ndcg_at_10
value: 85.286
- type: ndcg_at_100
value: 86.86500000000001
- type: ndcg_at_1000
value: 87.062
- type: ndcg_at_3
value: 82.116
- type: ndcg_at_5
value: 83.811
- type: precision_at_1
value: 77.82
- type: precision_at_10
value: 12.867999999999999
- type: precision_at_100
value: 1.498
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.723
- type: precision_at_5
value: 23.52
- type: recall_at_1
value: 67.765
- type: recall_at_10
value: 93.381
- type: recall_at_100
value: 98.901
- type: recall_at_1000
value: 99.864
- type: recall_at_3
value: 84.301
- type: recall_at_5
value: 89.049
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.27190981742137
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 54.47444004585028
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.213
- type: map_at_10
value: 10.166
- type: map_at_100
value: 11.987
- type: map_at_1000
value: 12.285
- type: map_at_3
value: 7.538
- type: map_at_5
value: 8.606
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 30.066
- type: mrr_at_100
value: 31.290000000000003
- type: mrr_at_1000
value: 31.357000000000003
- type: mrr_at_3
value: 27.083000000000002
- type: mrr_at_5
value: 28.748
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.258000000000003
- type: ndcg_at_100
value: 24.801000000000002
- type: ndcg_at_1000
value: 30.348999999999997
- type: ndcg_at_3
value: 16.719
- type: ndcg_at_5
value: 14.145
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 8.88
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.332
- type: precision_at_3
value: 15.5
- type: precision_at_5
value: 12.1
- type: recall_at_1
value: 4.213
- type: recall_at_10
value: 17.983
- type: recall_at_100
value: 40.167
- type: recall_at_1000
value: 67.43
- type: recall_at_3
value: 9.433
- type: recall_at_5
value: 12.267999999999999
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 80.36742239848913
- type: cos_sim_spearman
value: 72.39470010828755
- type: euclidean_pearson
value: 77.26919895870947
- type: euclidean_spearman
value: 72.26534999077315
- type: manhattan_pearson
value: 77.04066349814258
- type: manhattan_spearman
value: 72.0072248699278
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 80.26991474037257
- type: cos_sim_spearman
value: 71.90287122017716
- type: euclidean_pearson
value: 76.68006075912453
- type: euclidean_spearman
value: 71.69301858764365
- type: manhattan_pearson
value: 76.72277285842371
- type: manhattan_spearman
value: 71.73265239703795
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.74371413317881
- type: cos_sim_spearman
value: 80.9279612820358
- type: euclidean_pearson
value: 80.6417435294782
- type: euclidean_spearman
value: 81.17460969254459
- type: manhattan_pearson
value: 80.51820155178402
- type: manhattan_spearman
value: 81.08028700017084
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.37085777051112
- type: cos_sim_spearman
value: 76.60308382518285
- type: euclidean_pearson
value: 79.59684787227351
- type: euclidean_spearman
value: 76.8769048249242
- type: manhattan_pearson
value: 79.55617632538295
- type: manhattan_spearman
value: 76.90186497973124
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 83.99513105301321
- type: cos_sim_spearman
value: 84.92034548133665
- type: euclidean_pearson
value: 84.70872540095195
- type: euclidean_spearman
value: 85.14591726040749
- type: manhattan_pearson
value: 84.65707417430595
- type: manhattan_spearman
value: 85.10407163865375
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 79.40758449150897
- type: cos_sim_spearman
value: 80.71692246880549
- type: euclidean_pearson
value: 80.51658552062683
- type: euclidean_spearman
value: 80.87118389043233
- type: manhattan_pearson
value: 80.41534690825016
- type: manhattan_spearman
value: 80.73925282537256
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.93617076910748
- type: cos_sim_spearman
value: 85.61118538966805
- type: euclidean_pearson
value: 85.56187558635287
- type: euclidean_spearman
value: 85.21910090757267
- type: manhattan_pearson
value: 85.29916699037645
- type: manhattan_spearman
value: 84.96820527868671
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.22294088543077
- type: cos_sim_spearman
value: 65.89748502901078
- type: euclidean_pearson
value: 66.15637850660805
- type: euclidean_spearman
value: 65.86095841381278
- type: manhattan_pearson
value: 66.80966197857856
- type: manhattan_spearman
value: 66.48325202219692
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 81.75298158703048
- type: cos_sim_spearman
value: 81.32168373072322
- type: euclidean_pearson
value: 82.3251793712207
- type: euclidean_spearman
value: 81.31655163330606
- type: manhattan_pearson
value: 82.14136865023298
- type: manhattan_spearman
value: 81.13410964028606
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.77937068780793
- type: mrr
value: 93.334709952357
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.705999999999996
- type: map_at_10
value: 60.699999999999996
- type: map_at_100
value: 61.256
- type: map_at_1000
value: 61.285000000000004
- type: map_at_3
value: 57.633
- type: map_at_5
value: 59.648
- type: mrr_at_1
value: 53.0
- type: mrr_at_10
value: 61.717999999999996
- type: mrr_at_100
value: 62.165000000000006
- type: mrr_at_1000
value: 62.190999999999995
- type: mrr_at_3
value: 59.389
- type: mrr_at_5
value: 60.922
- type: ndcg_at_1
value: 53.0
- type: ndcg_at_10
value: 65.413
- type: ndcg_at_100
value: 68.089
- type: ndcg_at_1000
value: 69.01899999999999
- type: ndcg_at_3
value: 60.327
- type: ndcg_at_5
value: 63.263999999999996
- type: precision_at_1
value: 53.0
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 23.778
- type: precision_at_5
value: 16.2
- type: recall_at_1
value: 50.705999999999996
- type: recall_at_10
value: 78.633
- type: recall_at_100
value: 91.333
- type: recall_at_1000
value: 99.0
- type: recall_at_3
value: 65.328
- type: recall_at_5
value: 72.583
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82178217821782
- type: cos_sim_ap
value: 95.30078788098801
- type: cos_sim_f1
value: 91.11549851924975
- type: cos_sim_precision
value: 89.96101364522417
- type: cos_sim_recall
value: 92.30000000000001
- type: dot_accuracy
value: 99.74851485148515
- type: dot_ap
value: 93.12383012680787
- type: dot_f1
value: 87.17171717171716
- type: dot_precision
value: 88.06122448979592
- type: dot_recall
value: 86.3
- type: euclidean_accuracy
value: 99.82673267326733
- type: euclidean_ap
value: 95.29507269622621
- type: euclidean_f1
value: 91.3151364764268
- type: euclidean_precision
value: 90.64039408866995
- type: euclidean_recall
value: 92.0
- type: manhattan_accuracy
value: 99.82178217821782
- type: manhattan_ap
value: 95.34300712110257
- type: manhattan_f1
value: 91.05367793240556
- type: manhattan_precision
value: 90.51383399209486
- type: manhattan_recall
value: 91.60000000000001
- type: max_accuracy
value: 99.82673267326733
- type: max_ap
value: 95.34300712110257
- type: max_f1
value: 91.3151364764268
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.10993894014712
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.67216071080345
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.96344255085851
- type: mrr
value: 49.816123419064596
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.580410074992177
- type: cos_sim_spearman
value: 31.155995112739966
- type: dot_pearson
value: 31.112094423048998
- type: dot_spearman
value: 31.29974829801922
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.17700000000000002
- type: map_at_10
value: 1.22
- type: map_at_100
value: 6.2170000000000005
- type: map_at_1000
value: 15.406
- type: map_at_3
value: 0.483
- type: map_at_5
value: 0.729
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 76.333
- type: mrr_at_100
value: 76.47
- type: mrr_at_1000
value: 76.47
- type: mrr_at_3
value: 75.0
- type: mrr_at_5
value: 76.0
- type: ndcg_at_1
value: 59.0
- type: ndcg_at_10
value: 52.62
- type: ndcg_at_100
value: 39.932
- type: ndcg_at_1000
value: 37.317
- type: ndcg_at_3
value: 57.123000000000005
- type: ndcg_at_5
value: 56.376000000000005
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 55.800000000000004
- type: precision_at_100
value: 41.04
- type: precision_at_1000
value: 17.124
- type: precision_at_3
value: 63.333
- type: precision_at_5
value: 62.0
- type: recall_at_1
value: 0.17700000000000002
- type: recall_at_10
value: 1.46
- type: recall_at_100
value: 9.472999999999999
- type: recall_at_1000
value: 35.661
- type: recall_at_3
value: 0.527
- type: recall_at_5
value: 0.8250000000000001
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.539
- type: map_at_10
value: 7.178
- type: map_at_100
value: 12.543000000000001
- type: map_at_1000
value: 14.126
- type: map_at_3
value: 3.09
- type: map_at_5
value: 5.008
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 32.933
- type: mrr_at_100
value: 34.176
- type: mrr_at_1000
value: 34.176
- type: mrr_at_3
value: 27.551
- type: mrr_at_5
value: 30.714000000000002
- type: ndcg_at_1
value: 15.306000000000001
- type: ndcg_at_10
value: 18.343
- type: ndcg_at_100
value: 30.076000000000004
- type: ndcg_at_1000
value: 42.266999999999996
- type: ndcg_at_3
value: 17.233999999999998
- type: ndcg_at_5
value: 18.677
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 18.367
- type: precision_at_100
value: 6.837
- type: precision_at_1000
value: 1.467
- type: precision_at_3
value: 19.048000000000002
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 1.539
- type: recall_at_10
value: 13.289000000000001
- type: recall_at_100
value: 42.480000000000004
- type: recall_at_1000
value: 79.463
- type: recall_at_3
value: 4.202999999999999
- type: recall_at_5
value: 7.9030000000000005
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.2056
- type: ap
value: 13.564165903349778
- type: f1
value: 53.303385089202656
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.71477079796264
- type: f1
value: 57.01563439439609
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 39.373040570976514
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.44757703999524
- type: cos_sim_ap
value: 65.78689843625949
- type: cos_sim_f1
value: 62.25549384206713
- type: cos_sim_precision
value: 57.39091718610864
- type: cos_sim_recall
value: 68.02110817941951
- type: dot_accuracy
value: 81.3971508612982
- type: dot_ap
value: 58.42933051967154
- type: dot_f1
value: 57.85580214198962
- type: dot_precision
value: 49.74368710841086
- type: dot_recall
value: 69.12928759894459
- type: euclidean_accuracy
value: 83.54294569946951
- type: euclidean_ap
value: 66.10612585693795
- type: euclidean_f1
value: 62.66666666666667
- type: euclidean_precision
value: 58.88631090487239
- type: euclidean_recall
value: 66.96569920844327
- type: manhattan_accuracy
value: 83.43565595756095
- type: manhattan_ap
value: 65.88532290329134
- type: manhattan_f1
value: 62.58408721874276
- type: manhattan_precision
value: 55.836092715231786
- type: manhattan_recall
value: 71.18733509234828
- type: max_accuracy
value: 83.54294569946951
- type: max_ap
value: 66.10612585693795
- type: max_f1
value: 62.66666666666667
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.02344083517679
- type: cos_sim_ap
value: 84.21589190889944
- type: cos_sim_f1
value: 76.36723039754007
- type: cos_sim_precision
value: 72.79134682484299
- type: cos_sim_recall
value: 80.31259624268556
- type: dot_accuracy
value: 87.43353902278108
- type: dot_ap
value: 82.08962394120071
- type: dot_f1
value: 74.97709923664122
- type: dot_precision
value: 74.34150772025431
- type: dot_recall
value: 75.62365260240222
- type: euclidean_accuracy
value: 87.97686963946133
- type: euclidean_ap
value: 84.20578083922416
- type: euclidean_f1
value: 76.4299182903834
- type: euclidean_precision
value: 73.51874244256348
- type: euclidean_recall
value: 79.58115183246073
- type: manhattan_accuracy
value: 88.00209570380719
- type: manhattan_ap
value: 84.14700304263556
- type: manhattan_f1
value: 76.36429345861944
- type: manhattan_precision
value: 71.95886119057349
- type: manhattan_recall
value: 81.34431783184478
- type: max_accuracy
value: 88.02344083517679
- type: max_ap
value: 84.21589190889944
- type: max_f1
value: 76.4299182903834
---
# bge-micro
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is distilled from [bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5/blob/main/config.json), keeping 1/4 of the non-embedding parameters.
It has 1/2 the parameters of all-MiniLM-L6-v2, the smallest commonly used embedding model, with similar performance.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')  # substitute this model's Hub id for {MODEL_NAME}
embeddings = model.encode(sentences)
print(embeddings)
```
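Once you have the embeddings, semantic search reduces to ranking by cosine similarity. Below is a minimal, dependency-free sketch of that ranking step (not part of the original card; the four-dimensional dummy vectors stand in for the model's 384-dimensional `model.encode(...)` output, and `sentence_transformers.util.cos_sim` provides the same operation on real embeddings):

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Dummy vectors stand in for real model.encode(...) output.
query = [1.0, 0.0, 0.0, 0.0]
corpus = [[0.9, 0.1, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
scores = [cos_sim(query, doc) for doc in corpus]
best = max(range(len(corpus)), key=lambda i: scores[i])
print(best)  # 0: the first corpus entry is closest to the query
```

With real embeddings the same ranking is applied to the rows returned by `model.encode`.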
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
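The crucial detail in `mean_pooling` above is that padded positions (attention mask 0) must not contribute to the average. A torch-free sketch of the same logic on hand-made numbers (illustration only, not from the original card):

```python
def masked_mean(token_embeddings, attention_mask):
    """Average token vectors, counting only positions where the mask is 1."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            sums = [s + v for s, v in zip(sums, vec)]
            count += 1
    return [s / max(count, 1) for s in sums]

# Two real tokens plus one padding token; the padding vector must not leak in.
tokens = [[1.0, 3.0], [3.0, 1.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(masked_mean(tokens, mask))  # [2.0, 2.0]
```

The torch version above does the same thing in a vectorized way, with `clamp(min=1e-9)` guarding against division by zero for all-padding rows.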
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 65,694 |
adhejeb/my-pet-dog-xzg | 2023-10-24T07:42:11.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | adhejeb | null | null | adhejeb/my-pet-dog-xzg | 0 | 1,751 | diffusers | 2023-10-24T07:37:33 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzg Dreambooth model trained by adhejeb following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: FXEC-19
Sample pictures of this concept:

| 397 |
microsoft/xlm-align-base | 2021-08-04T15:23:10.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | microsoft | null | null | microsoft/xlm-align-base | 7 | 1,750 | transformers | 2022-03-02T23:29:05 | # XLM-Align
**XLM-Align** (ACL 2021, [paper](https://aclanthology.org/2021.acl-long.265/), [repo](https://github.com/CZWin32768/XLM-Align), [model](https://huggingface.co/microsoft/xlm-align-base)): Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment
XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our [paper](https://aclanthology.org/2021.acl-long.265/).
## Example
```
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/xlm-align-base")
```
## Evaluation Results
XTREME cross-lingual understanding tasks:
| Model | POS | NER | XQuAD | MLQA | TyDiQA | XNLI | PAWS-X | Avg |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| XLM-R_base | 75.6 | 61.8 | 71.9 / 56.4 | 65.1 / 47.2 | 55.4 / 38.3 | 75.0 | 84.9 | 66.4 |
| XLM-Align | **76.0** | **63.7** | **74.7 / 59.0** | **68.1 / 49.8** | **62.1 / 44.8** | **76.2** | **86.8** | **68.9** |
## MD5
```
b9d214025837250ede2f69c9385f812c config.json
6005db708eb4bab5b85fa3976b9db85b pytorch_model.bin
bf25eb5120ad92ef5c7d8596b5dc4046 sentencepiece.bpe.model
eedbd60a7268b9fc45981b849664f747 tokenizer.json
```
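To check a downloaded file against the checksums above, Python's standard `hashlib` is enough. A small helper (a sketch; the commented file name assumes you saved the checkpoint under its original name):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Stream a file through MD5 in chunks so large checkpoints need not fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. md5sum("pytorch_model.bin") should equal "6005db708eb4bab5b85fa3976b9db85b"
```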
## About
Contact: chizewen\@outlook.com
BibTeX:
```
@inproceedings{xlmalign,
title = "Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment",
author={Zewen Chi and Li Dong and Bo Zheng and Shaohan Huang and Xian-Ling Mao and Heyan Huang and Furu Wei},
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.265",
doi = "10.18653/v1/2021.acl-long.265",
pages = "3418--3430",}
``` | 1,895 |
facebook/blenderbot-1B-distill | 2023-03-30T16:12:16.000Z | [
"transformers",
"pytorch",
"blenderbot",
"text2text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | conversational | facebook | null | null | facebook/blenderbot-1B-distill | 33 | 1,749 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
| 1,450 | [
[
-0.0259552001953125,
-0.06011962890625,
0.025848388671875,
0.0245361328125,
0.024505615234375,
-0.006671905517578125,
-0.032135009765625,
-0.0158843994140625,
-0.0006418228149414062,
0.050262451171875,
-0.02069091796875,
-0.019561767578125,
-0.057037353515625,
... |
apple/mobilevit-xx-small | 2022-08-29T07:57:57.000Z | [
"transformers",
"pytorch",
"tf",
"coreml",
"mobilevit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2110.02178",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | apple | null | null | apple/mobilevit-xx-small | 8 | 1,749 | transformers | 2022-05-30T12:46:35 | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileViT (extra extra small-sized model)
MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a lightweight, low-latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-xx-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-xx-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes.
## Training procedure
### Preprocessing
Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping.
To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320).
At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256.
Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
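The inference-time conventions above (resize to 288x288, center-crop to 256x256, scale pixels to [0, 1], BGR channel order) can be sketched in plain NumPy. This is an illustration, not the library's implementation: nearest-neighbour resizing stands in for the real resampling filter, and the function names are made up.

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize of an HxWxC uint8 array to size x size.
    A stand-in for the real resampling used by the feature extractor."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def preprocess(img):
    """Sketch of MobileViT inference preprocessing as described above:
    resize to 288x288, center-crop 256x256, scale to [0, 1], RGB -> BGR."""
    img = resize_nearest(img, 288)
    off = (288 - 256) // 2
    img = img[off:off + 256, off:off + 256]
    img = img.astype(np.float32) / 255.0
    return img[..., ::-1]  # reverse channel axis: RGB -> BGR
```

In practice `MobileViTFeatureExtractor` handles all of this for you; the sketch only makes the stated conventions concrete.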
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
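The schedule described above (linear warmup for 3k steps, then cosine annealing) can be sketched as a small function. The `base_lr` default here is an assumed illustrative value, not one reported for this model.

```python
import math

def learning_rate(step, total_steps, base_lr=0.002, warmup_steps=3000):
    """Linear warmup to base_lr over warmup_steps, then cosine annealing
    to zero over the remaining steps. A sketch of the schedule above;
    base_lr is an assumed value."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

Equivalent behaviour is available off the shelf, e.g. by chaining a warmup scheduler with `torch.optim.lr_scheduler.CosineAnnealingLR`.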
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|-------------------|-------------------------|-------------------------|-----------|-------------------------------------------------|
| **MobileViT-XXS** | **69.0** | **88.9** | **1.3 M** | https://huggingface.co/apple/mobilevit-xx-small |
| MobileViT-XS | 74.8 | 92.3 | 2.3 M | https://huggingface.co/apple/mobilevit-x-small |
| MobileViT-S | 78.4 | 94.1 | 5.6 M | https://huggingface.co/apple/mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
| 4,831 | [
[
-0.05047607421875,
-0.016693115234375,
-0.0169830322265625,
-0.0041656494140625,
-0.040252685546875,
-0.029815673828125,
0.003833770751953125,
-0.032958984375,
0.046417236328125,
0.01171112060546875,
-0.034637451171875,
-0.02215576171875,
-0.038726806640625,
... |
jinaai/jina-embedding-s-en-v1 | 2023-10-13T12:40:43.000Z | [
"sentence-transformers",
"pytorch",
"t5",
"finetuner",
"mteb",
"feature-extraction",
"sentence-similarity",
"custom_code",
"en",
"dataset:jinaai/negation-dataset",
"arxiv:2307.11224",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"text-generation-inference",
"region:us"
... | sentence-similarity | jinaai | null | null | jinaai/jina-embedding-s-en-v1 | 23 | 1,747 | sentence-transformers | 2023-07-06T11:37:14 | ---
pipeline_tag: sentence-similarity
tags:
- finetuner
- mteb
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- jinaai/negation-dataset
language: en
license: apache-2.0
model-index:
- name: jina-embedding-s-en-v1
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 64.82089552238806
- type: ap
value: 27.100981946230778
- type: f1
value: 58.3354886367184
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 64.282775
- type: ap
value: 60.350688924943796
- type: f1
value: 62.06346948494396
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 30.623999999999995
- type: f1
value: 29.427789186742153
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.119
- type: map_at_10
value: 35.609
- type: map_at_100
value: 36.935
- type: map_at_1000
value: 36.957
- type: map_at_3
value: 31.046000000000003
- type: map_at_5
value: 33.574
- type: mrr_at_1
value: 22.404
- type: mrr_at_10
value: 35.695
- type: mrr_at_100
value: 37.021
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 31.093
- type: mrr_at_5
value: 33.635999999999996
- type: ndcg_at_1
value: 22.119
- type: ndcg_at_10
value: 43.566
- type: ndcg_at_100
value: 49.370000000000005
- type: ndcg_at_1000
value: 49.901
- type: ndcg_at_3
value: 34.06
- type: ndcg_at_5
value: 38.653999999999996
- type: precision_at_1
value: 22.119
- type: precision_at_10
value: 6.92
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.272000000000002
- type: precision_at_5
value: 10.811
- type: recall_at_1
value: 22.119
- type: recall_at_10
value: 69.203
- type: recall_at_100
value: 95.021
- type: recall_at_1000
value: 99.075
- type: recall_at_3
value: 42.817
- type: recall_at_5
value: 54.054
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 34.1740289109719
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 23.985251383455463
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.24873612289029
- type: mrr
value: 74.65692740623489
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.22415390332444
- type: cos_sim_spearman
value: 82.9591191954711
- type: euclidean_pearson
value: 44.096317524324945
- type: euclidean_spearman
value: 42.95218351391625
- type: manhattan_pearson
value: 44.07766490545065
- type: manhattan_spearman
value: 42.78350497166606
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 74.64285714285714
- type: f1
value: 73.53680835577447
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 28.512813238490164
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 20.942214972649488
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.255999999999997
- type: map_at_10
value: 37.091
- type: map_at_100
value: 38.428000000000004
- type: map_at_1000
value: 38.559
- type: map_at_3
value: 34.073
- type: map_at_5
value: 35.739
- type: mrr_at_1
value: 34.907
- type: mrr_at_10
value: 42.769
- type: mrr_at_100
value: 43.607
- type: mrr_at_1000
value: 43.656
- type: mrr_at_3
value: 39.986
- type: mrr_at_5
value: 41.581
- type: ndcg_at_1
value: 34.907
- type: ndcg_at_10
value: 42.681000000000004
- type: ndcg_at_100
value: 48.213
- type: ndcg_at_1000
value: 50.464
- type: ndcg_at_3
value: 37.813
- type: ndcg_at_5
value: 39.936
- type: precision_at_1
value: 34.907
- type: precision_at_10
value: 7.911
- type: precision_at_100
value: 1.349
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 17.93
- type: precision_at_5
value: 12.732
- type: recall_at_1
value: 28.255999999999997
- type: recall_at_10
value: 53.49699999999999
- type: recall_at_100
value: 77.288
- type: recall_at_1000
value: 91.776
- type: recall_at_3
value: 39.18
- type: recall_at_5
value: 45.365
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.563999999999997
- type: map_at_10
value: 33.913
- type: map_at_100
value: 34.966
- type: map_at_1000
value: 35.104
- type: map_at_3
value: 31.413000000000004
- type: map_at_5
value: 32.854
- type: mrr_at_1
value: 31.72
- type: mrr_at_10
value: 39.391
- type: mrr_at_100
value: 40.02
- type: mrr_at_1000
value: 40.076
- type: mrr_at_3
value: 37.314
- type: mrr_at_5
value: 38.507999999999996
- type: ndcg_at_1
value: 31.72
- type: ndcg_at_10
value: 38.933
- type: ndcg_at_100
value: 43.024
- type: ndcg_at_1000
value: 45.556999999999995
- type: ndcg_at_3
value: 35.225
- type: ndcg_at_5
value: 36.984
- type: precision_at_1
value: 31.72
- type: precision_at_10
value: 7.248
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 16.943
- type: precision_at_5
value: 11.975
- type: recall_at_1
value: 25.563999999999997
- type: recall_at_10
value: 47.808
- type: recall_at_100
value: 65.182
- type: recall_at_1000
value: 81.831
- type: recall_at_3
value: 36.889
- type: recall_at_5
value: 41.829
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.662
- type: map_at_10
value: 44.096999999999994
- type: map_at_100
value: 45.153999999999996
- type: map_at_1000
value: 45.223
- type: map_at_3
value: 41.377
- type: map_at_5
value: 42.935
- type: mrr_at_1
value: 38.997
- type: mrr_at_10
value: 47.675
- type: mrr_at_100
value: 48.476
- type: mrr_at_1000
value: 48.519
- type: mrr_at_3
value: 45.549
- type: mrr_at_5
value: 46.884
- type: ndcg_at_1
value: 38.997
- type: ndcg_at_10
value: 49.196
- type: ndcg_at_100
value: 53.788000000000004
- type: ndcg_at_1000
value: 55.393
- type: ndcg_at_3
value: 44.67
- type: ndcg_at_5
value: 46.991
- type: precision_at_1
value: 38.997
- type: precision_at_10
value: 7.875
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 19.854
- type: precision_at_5
value: 13.605
- type: recall_at_1
value: 33.662
- type: recall_at_10
value: 60.75899999999999
- type: recall_at_100
value: 81.11699999999999
- type: recall_at_1000
value: 92.805
- type: recall_at_3
value: 48.577999999999996
- type: recall_at_5
value: 54.384
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.313
- type: map_at_10
value: 29.036
- type: map_at_100
value: 29.975
- type: map_at_1000
value: 30.063000000000002
- type: map_at_3
value: 26.878999999999998
- type: map_at_5
value: 28.005999999999997
- type: mrr_at_1
value: 23.39
- type: mrr_at_10
value: 31.072
- type: mrr_at_100
value: 31.922
- type: mrr_at_1000
value: 31.995
- type: mrr_at_3
value: 28.908
- type: mrr_at_5
value: 30.104999999999997
- type: ndcg_at_1
value: 23.39
- type: ndcg_at_10
value: 33.448
- type: ndcg_at_100
value: 38.255
- type: ndcg_at_1000
value: 40.542
- type: ndcg_at_3
value: 29.060000000000002
- type: ndcg_at_5
value: 31.023
- type: precision_at_1
value: 23.39
- type: precision_at_10
value: 5.175
- type: precision_at_100
value: 0.8049999999999999
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 12.504999999999999
- type: precision_at_5
value: 8.61
- type: recall_at_1
value: 21.313
- type: recall_at_10
value: 45.345
- type: recall_at_100
value: 67.752
- type: recall_at_1000
value: 84.937
- type: recall_at_3
value: 33.033
- type: recall_at_5
value: 37.929
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.255999999999998
- type: map_at_10
value: 20.339
- type: map_at_100
value: 21.491
- type: map_at_1000
value: 21.616
- type: map_at_3
value: 18.481
- type: map_at_5
value: 19.594
- type: mrr_at_1
value: 17.413
- type: mrr_at_10
value: 24.146
- type: mrr_at_100
value: 25.188
- type: mrr_at_1000
value: 25.273
- type: mrr_at_3
value: 22.264
- type: mrr_at_5
value: 23.302
- type: ndcg_at_1
value: 17.413
- type: ndcg_at_10
value: 24.272
- type: ndcg_at_100
value: 29.82
- type: ndcg_at_1000
value: 33.072
- type: ndcg_at_3
value: 20.826
- type: ndcg_at_5
value: 22.535
- type: precision_at_1
value: 17.413
- type: precision_at_10
value: 4.366
- type: precision_at_100
value: 0.818
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 9.866999999999999
- type: precision_at_5
value: 7.164
- type: recall_at_1
value: 14.255999999999998
- type: recall_at_10
value: 32.497
- type: recall_at_100
value: 56.592
- type: recall_at_1000
value: 80.17699999999999
- type: recall_at_3
value: 23.195
- type: recall_at_5
value: 27.392
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.709
- type: map_at_10
value: 31.377
- type: map_at_100
value: 32.536
- type: map_at_1000
value: 32.669
- type: map_at_3
value: 28.572999999999997
- type: map_at_5
value: 30.205
- type: mrr_at_1
value: 27.815
- type: mrr_at_10
value: 36.452
- type: mrr_at_100
value: 37.302
- type: mrr_at_1000
value: 37.364000000000004
- type: mrr_at_3
value: 33.75
- type: mrr_at_5
value: 35.43
- type: ndcg_at_1
value: 27.815
- type: ndcg_at_10
value: 36.84
- type: ndcg_at_100
value: 42.092
- type: ndcg_at_1000
value: 44.727
- type: ndcg_at_3
value: 31.964
- type: ndcg_at_5
value: 34.428
- type: precision_at_1
value: 27.815
- type: precision_at_10
value: 6.67
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 14.982000000000001
- type: precision_at_5
value: 10.857
- type: recall_at_1
value: 22.709
- type: recall_at_10
value: 48.308
- type: recall_at_100
value: 70.866
- type: recall_at_1000
value: 88.236
- type: recall_at_3
value: 34.709
- type: recall_at_5
value: 40.996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.348000000000003
- type: map_at_10
value: 29.427999999999997
- type: map_at_100
value: 30.499
- type: map_at_1000
value: 30.631999999999998
- type: map_at_3
value: 27.035999999999998
- type: map_at_5
value: 28.351
- type: mrr_at_1
value: 27.74
- type: mrr_at_10
value: 34.424
- type: mrr_at_100
value: 35.341
- type: mrr_at_1000
value: 35.419
- type: mrr_at_3
value: 32.401
- type: mrr_at_5
value: 33.497
- type: ndcg_at_1
value: 27.74
- type: ndcg_at_10
value: 34.136
- type: ndcg_at_100
value: 39.269
- type: ndcg_at_1000
value: 42.263
- type: ndcg_at_3
value: 30.171999999999997
- type: ndcg_at_5
value: 31.956
- type: precision_at_1
value: 27.74
- type: precision_at_10
value: 6.062
- type: precision_at_100
value: 1.014
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.079
- type: precision_at_5
value: 9.977
- type: recall_at_1
value: 22.348000000000003
- type: recall_at_10
value: 43.477
- type: recall_at_100
value: 65.945
- type: recall_at_1000
value: 86.587
- type: recall_at_3
value: 32.107
- type: recall_at_5
value: 36.974000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.688499999999998
- type: map_at_10
value: 29.164666666666665
- type: map_at_100
value: 30.22575
- type: map_at_1000
value: 30.350833333333334
- type: map_at_3
value: 26.82025
- type: map_at_5
value: 28.14966666666667
- type: mrr_at_1
value: 25.779249999999998
- type: mrr_at_10
value: 32.969
- type: mrr_at_100
value: 33.81725
- type: mrr_at_1000
value: 33.88825
- type: mrr_at_3
value: 30.831250000000004
- type: mrr_at_5
value: 32.065000000000005
- type: ndcg_at_1
value: 25.779249999999998
- type: ndcg_at_10
value: 33.73675
- type: ndcg_at_100
value: 38.635666666666665
- type: ndcg_at_1000
value: 41.353500000000004
- type: ndcg_at_3
value: 29.66283333333333
- type: ndcg_at_5
value: 31.607249999999997
- type: precision_at_1
value: 25.779249999999998
- type: precision_at_10
value: 5.861416666666667
- type: precision_at_100
value: 0.9852500000000002
- type: precision_at_1000
value: 0.14108333333333334
- type: precision_at_3
value: 13.563583333333332
- type: precision_at_5
value: 9.630333333333335
- type: recall_at_1
value: 21.688499999999998
- type: recall_at_10
value: 43.605
- type: recall_at_100
value: 65.52366666666667
- type: recall_at_1000
value: 84.69683333333332
- type: recall_at_3
value: 32.195499999999996
- type: recall_at_5
value: 37.25325
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.279
- type: map_at_10
value: 23.238
- type: map_at_100
value: 24.026
- type: map_at_1000
value: 24.13
- type: map_at_3
value: 20.730999999999998
- type: map_at_5
value: 22.278000000000002
- type: mrr_at_1
value: 19.017999999999997
- type: mrr_at_10
value: 25.188
- type: mrr_at_100
value: 25.918999999999997
- type: mrr_at_1000
value: 25.996999999999996
- type: mrr_at_3
value: 22.776
- type: mrr_at_5
value: 24.256
- type: ndcg_at_1
value: 19.017999999999997
- type: ndcg_at_10
value: 27.171
- type: ndcg_at_100
value: 31.274
- type: ndcg_at_1000
value: 34.016000000000005
- type: ndcg_at_3
value: 22.442
- type: ndcg_at_5
value: 24.955
- type: precision_at_1
value: 19.017999999999997
- type: precision_at_10
value: 4.494
- type: precision_at_100
value: 0.712
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 9.611
- type: precision_at_5
value: 7.331
- type: recall_at_1
value: 17.279
- type: recall_at_10
value: 37.464999999999996
- type: recall_at_100
value: 56.458
- type: recall_at_1000
value: 76.759
- type: recall_at_3
value: 24.659
- type: recall_at_5
value: 30.672
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.901
- type: map_at_10
value: 20.268
- type: map_at_100
value: 21.143
- type: map_at_1000
value: 21.264
- type: map_at_3
value: 18.557000000000002
- type: map_at_5
value: 19.483
- type: mrr_at_1
value: 17.997
- type: mrr_at_10
value: 23.591
- type: mrr_at_100
value: 24.387
- type: mrr_at_1000
value: 24.471
- type: mrr_at_3
value: 21.874
- type: mrr_at_5
value: 22.797
- type: ndcg_at_1
value: 17.997
- type: ndcg_at_10
value: 23.87
- type: ndcg_at_100
value: 28.459
- type: ndcg_at_1000
value: 31.66
- type: ndcg_at_3
value: 20.779
- type: ndcg_at_5
value: 22.137
- type: precision_at_1
value: 17.997
- type: precision_at_10
value: 4.25
- type: precision_at_100
value: 0.761
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 9.716
- type: precision_at_5
value: 6.909999999999999
- type: recall_at_1
value: 14.901
- type: recall_at_10
value: 31.44
- type: recall_at_100
value: 52.717000000000006
- type: recall_at_1000
value: 76.102
- type: recall_at_3
value: 22.675
- type: recall_at_5
value: 26.336
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.52
- type: map_at_10
value: 28.397
- type: map_at_100
value: 29.443
- type: map_at_1000
value: 29.56
- type: map_at_3
value: 26.501
- type: map_at_5
value: 27.375
- type: mrr_at_1
value: 25.28
- type: mrr_at_10
value: 32.102000000000004
- type: mrr_at_100
value: 33.005
- type: mrr_at_1000
value: 33.084
- type: mrr_at_3
value: 30.208000000000002
- type: mrr_at_5
value: 31.146
- type: ndcg_at_1
value: 25.28
- type: ndcg_at_10
value: 32.635
- type: ndcg_at_100
value: 37.672
- type: ndcg_at_1000
value: 40.602
- type: ndcg_at_3
value: 28.951999999999998
- type: ndcg_at_5
value: 30.336999999999996
- type: precision_at_1
value: 25.28
- type: precision_at_10
value: 5.3260000000000005
- type: precision_at_100
value: 0.8840000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.687000000000001
- type: precision_at_5
value: 8.638
- type: recall_at_1
value: 21.52
- type: recall_at_10
value: 41.955
- type: recall_at_100
value: 64.21
- type: recall_at_1000
value: 85.28099999999999
- type: recall_at_3
value: 31.979999999999997
- type: recall_at_5
value: 35.406
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.296
- type: map_at_10
value: 28.449999999999996
- type: map_at_100
value: 29.847
- type: map_at_1000
value: 30.073
- type: map_at_3
value: 25.995
- type: map_at_5
value: 27.603
- type: mrr_at_1
value: 25.296000000000003
- type: mrr_at_10
value: 32.751999999999995
- type: mrr_at_100
value: 33.705
- type: mrr_at_1000
value: 33.783
- type: mrr_at_3
value: 30.731
- type: mrr_at_5
value: 32.006
- type: ndcg_at_1
value: 25.296000000000003
- type: ndcg_at_10
value: 33.555
- type: ndcg_at_100
value: 38.891999999999996
- type: ndcg_at_1000
value: 42.088
- type: ndcg_at_3
value: 29.944
- type: ndcg_at_5
value: 31.997999999999998
- type: precision_at_1
value: 25.296000000000003
- type: precision_at_10
value: 6.542000000000001
- type: precision_at_100
value: 1.354
- type: precision_at_1000
value: 0.22599999999999998
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.593
- type: recall_at_1
value: 20.296
- type: recall_at_10
value: 42.742000000000004
- type: recall_at_100
value: 67.351
- type: recall_at_1000
value: 88.774
- type: recall_at_3
value: 32.117000000000004
- type: recall_at_5
value: 37.788
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.157999999999998
- type: map_at_10
value: 24.342
- type: map_at_100
value: 25.201
- type: map_at_1000
value: 25.317
- type: map_at_3
value: 22.227
- type: map_at_5
value: 23.372999999999998
- type: mrr_at_1
value: 19.778000000000002
- type: mrr_at_10
value: 26.066
- type: mrr_at_100
value: 26.935
- type: mrr_at_1000
value: 27.022000000000002
- type: mrr_at_3
value: 24.214
- type: mrr_at_5
value: 25.268
- type: ndcg_at_1
value: 19.778000000000002
- type: ndcg_at_10
value: 28.104000000000003
- type: ndcg_at_100
value: 32.87
- type: ndcg_at_1000
value: 35.858000000000004
- type: ndcg_at_3
value: 24.107
- type: ndcg_at_5
value: 26.007
- type: precision_at_1
value: 19.778000000000002
- type: precision_at_10
value: 4.417999999999999
- type: precision_at_100
value: 0.739
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.228
- type: precision_at_5
value: 7.172000000000001
- type: recall_at_1
value: 18.157999999999998
- type: recall_at_10
value: 37.967
- type: recall_at_100
value: 60.806000000000004
- type: recall_at_1000
value: 83.097
- type: recall_at_3
value: 27.223999999999997
- type: recall_at_5
value: 31.968000000000004
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.055
- type: map_at_10
value: 11.609
- type: map_at_100
value: 12.83
- type: map_at_1000
value: 12.995000000000001
- type: map_at_3
value: 9.673
- type: map_at_5
value: 10.761999999999999
- type: mrr_at_1
value: 15.309000000000001
- type: mrr_at_10
value: 23.655
- type: mrr_at_100
value: 24.785
- type: mrr_at_1000
value: 24.856
- type: mrr_at_3
value: 20.499000000000002
- type: mrr_at_5
value: 22.425
- type: ndcg_at_1
value: 15.309000000000001
- type: ndcg_at_10
value: 17.252000000000002
- type: ndcg_at_100
value: 22.976
- type: ndcg_at_1000
value: 26.480999999999998
- type: ndcg_at_3
value: 13.418
- type: ndcg_at_5
value: 15.084
- type: precision_at_1
value: 15.309000000000001
- type: precision_at_10
value: 5.309
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.17600000000000002
- type: precision_at_3
value: 9.62
- type: precision_at_5
value: 7.883
- type: recall_at_1
value: 7.055
- type: recall_at_10
value: 21.891
- type: recall_at_100
value: 41.979
- type: recall_at_1000
value: 62.239999999999995
- type: recall_at_3
value: 12.722
- type: recall_at_5
value: 16.81
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.909
- type: map_at_10
value: 12.844
- type: map_at_100
value: 16.435
- type: map_at_1000
value: 17.262
- type: map_at_3
value: 10.131
- type: map_at_5
value: 11.269
- type: mrr_at_1
value: 54.50000000000001
- type: mrr_at_10
value: 62.202
- type: mrr_at_100
value: 62.81
- type: mrr_at_1000
value: 62.824000000000005
- type: mrr_at_3
value: 60.5
- type: mrr_at_5
value: 61.324999999999996
- type: ndcg_at_1
value: 42.125
- type: ndcg_at_10
value: 28.284
- type: ndcg_at_100
value: 30.444
- type: ndcg_at_1000
value: 36.397
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 30.473
- type: precision_at_1
value: 54.50000000000001
- type: precision_at_10
value: 21.4
- type: precision_at_100
value: 6.192
- type: precision_at_1000
value: 1.398
- type: precision_at_3
value: 36.583
- type: precision_at_5
value: 28.799999999999997
- type: recall_at_1
value: 6.909
- type: recall_at_10
value: 17.296
- type: recall_at_100
value: 33.925
- type: recall_at_1000
value: 53.786
- type: recall_at_3
value: 11.333
- type: recall_at_5
value: 13.529
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 36.08
- type: f1
value: 33.016420191943766
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.605000000000004
- type: map_at_10
value: 63.31400000000001
- type: map_at_100
value: 63.678000000000004
- type: map_at_1000
value: 63.699
- type: map_at_3
value: 61.141
- type: map_at_5
value: 62.517999999999994
- type: mrr_at_1
value: 56.871
- type: mrr_at_10
value: 67.915
- type: mrr_at_100
value: 68.24900000000001
- type: mrr_at_1000
value: 68.262
- type: mrr_at_3
value: 65.809
- type: mrr_at_5
value: 67.171
- type: ndcg_at_1
value: 56.871
- type: ndcg_at_10
value: 69.122
- type: ndcg_at_100
value: 70.855
- type: ndcg_at_1000
value: 71.368
- type: ndcg_at_3
value: 64.974
- type: ndcg_at_5
value: 67.318
- type: precision_at_1
value: 56.871
- type: precision_at_10
value: 9.029
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 25.893
- type: precision_at_5
value: 16.838
- type: recall_at_1
value: 52.605000000000004
- type: recall_at_10
value: 82.679
- type: recall_at_100
value: 90.586
- type: recall_at_1000
value: 94.38
- type: recall_at_3
value: 71.447
- type: recall_at_5
value: 77.218
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.759
- type: map_at_10
value: 18.877
- type: map_at_100
value: 20.498
- type: map_at_1000
value: 20.682000000000002
- type: map_at_3
value: 16.159000000000002
- type: map_at_5
value: 17.575
- type: mrr_at_1
value: 22.531000000000002
- type: mrr_at_10
value: 31.155
- type: mrr_at_100
value: 32.188
- type: mrr_at_1000
value: 32.245000000000005
- type: mrr_at_3
value: 28.781000000000002
- type: mrr_at_5
value: 30.054
- type: ndcg_at_1
value: 22.531000000000002
- type: ndcg_at_10
value: 25.189
- type: ndcg_at_100
value: 31.958
- type: ndcg_at_1000
value: 35.693999999999996
- type: ndcg_at_3
value: 22.235
- type: ndcg_at_5
value: 23.044999999999998
- type: precision_at_1
value: 22.531000000000002
- type: precision_at_10
value: 7.438000000000001
- type: precision_at_100
value: 1.418
- type: precision_at_1000
value: 0.208
- type: precision_at_3
value: 15.329
- type: precision_at_5
value: 11.451
- type: recall_at_1
value: 10.759
- type: recall_at_10
value: 31.416
- type: recall_at_100
value: 56.989000000000004
- type: recall_at_1000
value: 80.33200000000001
- type: recall_at_3
value: 20.61
- type: recall_at_5
value: 24.903
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.21
- type: map_at_10
value: 38.765
- type: map_at_100
value: 39.498
- type: map_at_1000
value: 39.568
- type: map_at_3
value: 36.699
- type: map_at_5
value: 37.925
- type: mrr_at_1
value: 58.42
- type: mrr_at_10
value: 65.137
- type: mrr_at_100
value: 65.542
- type: mrr_at_1000
value: 65.568
- type: mrr_at_3
value: 63.698
- type: mrr_at_5
value: 64.575
- type: ndcg_at_1
value: 58.42
- type: ndcg_at_10
value: 47.476
- type: ndcg_at_100
value: 50.466
- type: ndcg_at_1000
value: 52.064
- type: ndcg_at_3
value: 43.986
- type: ndcg_at_5
value: 45.824
- type: precision_at_1
value: 58.42
- type: precision_at_10
value: 9.649000000000001
- type: precision_at_100
value: 1.201
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 26.977
- type: precision_at_5
value: 17.642
- type: recall_at_1
value: 29.21
- type: recall_at_10
value: 48.244
- type: recall_at_100
value: 60.041
- type: recall_at_1000
value: 70.743
- type: recall_at_3
value: 40.466
- type: recall_at_5
value: 44.105
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 58.7064
- type: ap
value: 55.36326227125519
- type: f1
value: 57.46763115215848
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 15.889000000000001
- type: map_at_10
value: 25.979000000000003
- type: map_at_100
value: 27.21
- type: map_at_1000
value: 27.284000000000002
- type: map_at_3
value: 22.665
- type: map_at_5
value: 24.578
- type: mrr_at_1
value: 16.39
- type: mrr_at_10
value: 26.504
- type: mrr_at_100
value: 27.689999999999998
- type: mrr_at_1000
value: 27.758
- type: mrr_at_3
value: 23.24
- type: mrr_at_5
value: 25.108000000000004
- type: ndcg_at_1
value: 16.39
- type: ndcg_at_10
value: 31.799
- type: ndcg_at_100
value: 38.034
- type: ndcg_at_1000
value: 39.979
- type: ndcg_at_3
value: 25.054
- type: ndcg_at_5
value: 28.463
- type: precision_at_1
value: 16.39
- type: precision_at_10
value: 5.189
- type: precision_at_100
value: 0.835
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 10.84
- type: precision_at_5
value: 8.238
- type: recall_at_1
value: 15.889000000000001
- type: recall_at_10
value: 49.739
- type: recall_at_100
value: 79.251
- type: recall_at_1000
value: 94.298
- type: recall_at_3
value: 31.427
- type: recall_at_5
value: 39.623000000000005
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.81668946648426
- type: f1
value: 88.55200075528438
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 58.611491108071135
- type: f1
value: 42.12391403999353
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.67047747141896
- type: f1
value: 62.88410885922258
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.78547410894419
- type: f1
value: 71.69467869218154
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 27.23799937752035
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 23.26502601343789
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.680711484149832
- type: mrr
value: 31.705059795117307
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.077
- type: map_at_10
value: 8.657
- type: map_at_100
value: 10.753
- type: map_at_1000
value: 11.885
- type: map_at_3
value: 6.5089999999999995
- type: map_at_5
value: 7.405
- type: mrr_at_1
value: 38.7
- type: mrr_at_10
value: 46.065
- type: mrr_at_100
value: 46.772000000000006
- type: mrr_at_1000
value: 46.83
- type: mrr_at_3
value: 44.118
- type: mrr_at_5
value: 45.015
- type: ndcg_at_1
value: 36.997
- type: ndcg_at_10
value: 25.96
- type: ndcg_at_100
value: 23.607
- type: ndcg_at_1000
value: 32.317
- type: ndcg_at_3
value: 31.06
- type: ndcg_at_5
value: 28.921000000000003
- type: precision_at_1
value: 38.7
- type: precision_at_10
value: 19.195
- type: precision_at_100
value: 6.164
- type: precision_at_1000
value: 1.839
- type: precision_at_3
value: 28.999000000000002
- type: precision_at_5
value: 25.014999999999997
- type: recall_at_1
value: 4.077
- type: recall_at_10
value: 11.802
- type: recall_at_100
value: 24.365000000000002
- type: recall_at_1000
value: 55.277
- type: recall_at_3
value: 7.435
- type: recall_at_5
value: 8.713999999999999
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.588
- type: map_at_10
value: 32.08
- type: map_at_100
value: 33.32
- type: map_at_1000
value: 33.377
- type: map_at_3
value: 28.166000000000004
- type: map_at_5
value: 30.383
- type: mrr_at_1
value: 22.161
- type: mrr_at_10
value: 34.121
- type: mrr_at_100
value: 35.171
- type: mrr_at_1000
value: 35.214
- type: mrr_at_3
value: 30.692000000000004
- type: mrr_at_5
value: 32.706
- type: ndcg_at_1
value: 22.131999999999998
- type: ndcg_at_10
value: 38.887
- type: ndcg_at_100
value: 44.433
- type: ndcg_at_1000
value: 45.823
- type: ndcg_at_3
value: 31.35
- type: ndcg_at_5
value: 35.144
- type: precision_at_1
value: 22.131999999999998
- type: precision_at_10
value: 6.8629999999999995
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 14.706
- type: precision_at_5
value: 10.972999999999999
- type: recall_at_1
value: 19.588
- type: recall_at_10
value: 57.703
- type: recall_at_100
value: 82.194
- type: recall_at_1000
value: 92.623
- type: recall_at_3
value: 38.012
- type: recall_at_5
value: 46.847
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.038
- type: map_at_10
value: 81.572
- type: map_at_100
value: 82.25200000000001
- type: map_at_1000
value: 82.27600000000001
- type: map_at_3
value: 78.618
- type: map_at_5
value: 80.449
- type: mrr_at_1
value: 78.31
- type: mrr_at_10
value: 84.98
- type: mrr_at_100
value: 85.122
- type: mrr_at_1000
value: 85.124
- type: mrr_at_3
value: 83.852
- type: mrr_at_5
value: 84.6
- type: ndcg_at_1
value: 78.31
- type: ndcg_at_10
value: 85.693
- type: ndcg_at_100
value: 87.191
- type: ndcg_at_1000
value: 87.386
- type: ndcg_at_3
value: 82.585
- type: ndcg_at_5
value: 84.255
- type: precision_at_1
value: 78.31
- type: precision_at_10
value: 12.986
- type: precision_at_100
value: 1.505
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.007
- type: precision_at_5
value: 23.735999999999997
- type: recall_at_1
value: 68.038
- type: recall_at_10
value: 93.598
- type: recall_at_100
value: 98.869
- type: recall_at_1000
value: 99.86500000000001
- type: recall_at_3
value: 84.628
- type: recall_at_5
value: 89.316
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 37.948231664922865
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 49.90597913763894
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.753
- type: map_at_10
value: 8.915
- type: map_at_100
value: 10.374
- type: map_at_1000
value: 10.612
- type: map_at_3
value: 6.577
- type: map_at_5
value: 7.8
- type: mrr_at_1
value: 18.4
- type: mrr_at_10
value: 27.325
- type: mrr_at_100
value: 28.419
- type: mrr_at_1000
value: 28.494000000000003
- type: mrr_at_3
value: 24.349999999999998
- type: mrr_at_5
value: 26.205000000000002
- type: ndcg_at_1
value: 18.4
- type: ndcg_at_10
value: 15.293000000000001
- type: ndcg_at_100
value: 21.592
- type: ndcg_at_1000
value: 26.473000000000003
- type: ndcg_at_3
value: 14.748
- type: ndcg_at_5
value: 12.98
- type: precision_at_1
value: 18.4
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.693
- type: precision_at_1000
value: 0.28800000000000003
- type: precision_at_3
value: 13.700000000000001
- type: precision_at_5
value: 11.379999999999999
- type: recall_at_1
value: 3.753
- type: recall_at_10
value: 15.806999999999999
- type: recall_at_100
value: 34.37
- type: recall_at_1000
value: 58.463
- type: recall_at_3
value: 8.338
- type: recall_at_5
value: 11.538
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.58843987639705
- type: cos_sim_spearman
value: 76.33071660715956
- type: euclidean_pearson
value: 72.8029921002978
- type: euclidean_spearman
value: 69.34534284782808
- type: manhattan_pearson
value: 72.49781034973653
- type: manhattan_spearman
value: 69.24754112621694
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.31673079903189
- type: cos_sim_spearman
value: 74.27699263517789
- type: euclidean_pearson
value: 69.4008910999579
- type: euclidean_spearman
value: 59.0716984643048
- type: manhattan_pearson
value: 68.87342686919199
- type: manhattan_spearman
value: 58.904612865335025
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.59122302327788
- type: cos_sim_spearman
value: 78.55383586979005
- type: euclidean_pearson
value: 68.18338642204289
- type: euclidean_spearman
value: 68.95092864180276
- type: manhattan_pearson
value: 68.08807059822706
- type: manhattan_spearman
value: 68.86135938270193
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 78.51766841424501
- type: cos_sim_spearman
value: 73.84318001499558
- type: euclidean_pearson
value: 67.2007138855177
- type: euclidean_spearman
value: 63.98672842723766
- type: manhattan_pearson
value: 67.17773810895949
- type: manhattan_spearman
value: 64.07359154832962
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 82.73438541570299
- type: cos_sim_spearman
value: 83.71357922283677
- type: euclidean_pearson
value: 57.50131347498546
- type: euclidean_spearman
value: 57.73623619252132
- type: manhattan_pearson
value: 58.082992079000725
- type: manhattan_spearman
value: 58.42728201167522
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 78.14794654172421
- type: cos_sim_spearman
value: 80.025736165043
- type: euclidean_pearson
value: 65.87773913985473
- type: euclidean_spearman
value: 66.69337751784794
- type: manhattan_pearson
value: 66.01039761004415
- type: manhattan_spearman
value: 66.89215027952318
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.10554507136152
- type: cos_sim_spearman
value: 87.4898082140765
- type: euclidean_pearson
value: 72.19391114541367
- type: euclidean_spearman
value: 70.36647944993783
- type: manhattan_pearson
value: 72.18680758133698
- type: manhattan_spearman
value: 70.3871215447305
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.54868111501618
- type: cos_sim_spearman
value: 64.25173617448473
- type: euclidean_pearson
value: 39.116088900637116
- type: euclidean_spearman
value: 53.300772929884
- type: manhattan_pearson
value: 38.3844195287959
- type: manhattan_spearman
value: 52.846675312001246
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 80.04396610550214
- type: cos_sim_spearman
value: 79.19504854997832
- type: euclidean_pearson
value: 66.3284657637072
- type: euclidean_spearman
value: 63.69531796729492
- type: manhattan_pearson
value: 66.82324081038026
- type: manhattan_spearman
value: 64.18254512904923
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 74.16264051781705
- type: mrr
value: 91.80864796060874
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.983000000000004
- type: map_at_10
value: 47.858000000000004
- type: map_at_100
value: 48.695
- type: map_at_1000
value: 48.752
- type: map_at_3
value: 45.444
- type: map_at_5
value: 46.906
- type: mrr_at_1
value: 41.333
- type: mrr_at_10
value: 49.935
- type: mrr_at_100
value: 50.51
- type: mrr_at_1000
value: 50.55500000000001
- type: mrr_at_3
value: 47.833
- type: mrr_at_5
value: 49.117
- type: ndcg_at_1
value: 41.333
- type: ndcg_at_10
value: 52.398999999999994
- type: ndcg_at_100
value: 56.196
- type: ndcg_at_1000
value: 57.838
- type: ndcg_at_3
value: 47.987
- type: ndcg_at_5
value: 50.356
- type: precision_at_1
value: 41.333
- type: precision_at_10
value: 7.167
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 19.0
- type: precision_at_5
value: 12.8
- type: recall_at_1
value: 38.983000000000004
- type: recall_at_10
value: 64.183
- type: recall_at_100
value: 82.02199999999999
- type: recall_at_1000
value: 95.167
- type: recall_at_3
value: 52.383
- type: recall_at_5
value: 58.411
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8019801980198
- type: cos_sim_ap
value: 94.9287554635848
- type: cos_sim_f1
value: 89.83739837398375
- type: cos_sim_precision
value: 91.32231404958677
- type: cos_sim_recall
value: 88.4
- type: dot_accuracy
value: 99.23762376237623
- type: dot_ap
value: 55.22534191245801
- type: dot_f1
value: 54.054054054054056
- type: dot_precision
value: 55.15088449531738
- type: dot_recall
value: 53.0
- type: euclidean_accuracy
value: 99.6108910891089
- type: euclidean_ap
value: 82.5195111329438
- type: euclidean_f1
value: 78.2847718526663
- type: euclidean_precision
value: 86.93528693528694
- type: euclidean_recall
value: 71.2
- type: manhattan_accuracy
value: 99.5970297029703
- type: manhattan_ap
value: 81.96876777875492
- type: manhattan_f1
value: 77.33773377337734
- type: manhattan_precision
value: 85.94132029339853
- type: manhattan_recall
value: 70.3
- type: max_accuracy
value: 99.8019801980198
- type: max_ap
value: 94.9287554635848
- type: max_f1
value: 89.83739837398375
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 46.34997003954114
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.462336020554893
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 47.1757817459526
- type: mrr
value: 47.941057104660054
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.56106249068471
- type: cos_sim_spearman
value: 31.24613190558528
- type: dot_pearson
value: 20.486610035794257
- type: dot_spearman
value: 23.115667545894546
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.182
- type: map_at_10
value: 1.155
- type: map_at_100
value: 5.118
- type: map_at_1000
value: 11.827
- type: map_at_3
value: 0.482
- type: map_at_5
value: 0.712
- type: mrr_at_1
value: 70.0
- type: mrr_at_10
value: 79.483
- type: mrr_at_100
value: 79.637
- type: mrr_at_1000
value: 79.637
- type: mrr_at_3
value: 77.667
- type: mrr_at_5
value: 78.567
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 52.303
- type: ndcg_at_100
value: 37.361
- type: ndcg_at_1000
value: 32.84
- type: ndcg_at_3
value: 58.274
- type: ndcg_at_5
value: 55.601
- type: precision_at_1
value: 70.0
- type: precision_at_10
value: 55.60000000000001
- type: precision_at_100
value: 37.96
- type: precision_at_1000
value: 14.738000000000001
- type: precision_at_3
value: 62.666999999999994
- type: precision_at_5
value: 60.0
- type: recall_at_1
value: 0.182
- type: recall_at_10
value: 1.4120000000000001
- type: recall_at_100
value: 8.533
- type: recall_at_1000
value: 30.572
- type: recall_at_3
value: 0.5309999999999999
- type: recall_at_5
value: 0.814
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.385
- type: map_at_10
value: 7.185999999999999
- type: map_at_100
value: 11.642
- type: map_at_1000
value: 12.953000000000001
- type: map_at_3
value: 3.496
- type: map_at_5
value: 4.82
- type: mrr_at_1
value: 16.326999999999998
- type: mrr_at_10
value: 29.461
- type: mrr_at_100
value: 31.436999999999998
- type: mrr_at_1000
value: 31.436999999999998
- type: mrr_at_3
value: 24.490000000000002
- type: mrr_at_5
value: 27.857
- type: ndcg_at_1
value: 14.285999999999998
- type: ndcg_at_10
value: 16.672
- type: ndcg_at_100
value: 28.691
- type: ndcg_at_1000
value: 39.817
- type: ndcg_at_3
value: 15.277
- type: ndcg_at_5
value: 15.823
- type: precision_at_1
value: 16.326999999999998
- type: precision_at_10
value: 15.509999999999998
- type: precision_at_100
value: 6.49
- type: precision_at_1000
value: 1.4080000000000001
- type: precision_at_3
value: 16.326999999999998
- type: precision_at_5
value: 16.735
- type: recall_at_1
value: 1.385
- type: recall_at_10
value: 12.586
- type: recall_at_100
value: 40.765
- type: recall_at_1000
value: 75.198
- type: recall_at_3
value: 4.326
- type: recall_at_5
value: 7.074999999999999
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 59.4402
- type: ap
value: 10.16922814263879
- type: f1
value: 45.374485104940476
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 54.25863044708545
- type: f1
value: 54.20154252609619
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 34.3883169293051
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 81.76670441676104
- type: cos_sim_ap
value: 59.29878710961347
- type: cos_sim_f1
value: 57.33284971587474
- type: cos_sim_precision
value: 52.9122963624191
- type: cos_sim_recall
value: 62.559366754617415
- type: dot_accuracy
value: 77.52279907015557
- type: dot_ap
value: 34.17588904643467
- type: dot_f1
value: 41.063567529494634
- type: dot_precision
value: 30.813953488372093
- type: dot_recall
value: 61.53034300791557
- type: euclidean_accuracy
value: 80.61631996185254
- type: euclidean_ap
value: 54.00362361479352
- type: euclidean_f1
value: 53.99111751290361
- type: euclidean_precision
value: 49.52653600528518
- type: euclidean_recall
value: 59.340369393139845
- type: manhattan_accuracy
value: 80.65208320915539
- type: manhattan_ap
value: 54.18329507159467
- type: manhattan_f1
value: 53.85550960836779
- type: manhattan_precision
value: 49.954873646209386
- type: manhattan_recall
value: 58.41688654353562
- type: max_accuracy
value: 81.76670441676104
- type: max_ap
value: 59.29878710961347
- type: max_f1
value: 57.33284971587474
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.99433383785463
- type: cos_sim_ap
value: 83.43513915159009
- type: cos_sim_f1
value: 76.3906784964842
- type: cos_sim_precision
value: 73.19223985890653
- type: cos_sim_recall
value: 79.88142901139513
- type: dot_accuracy
value: 81.96142352621571
- type: dot_ap
value: 67.78764755689359
- type: dot_f1
value: 64.42823356983445
- type: dot_precision
value: 56.77801913931779
- type: dot_recall
value: 74.46104096088698
- type: euclidean_accuracy
value: 81.9478402607987
- type: euclidean_ap
value: 67.13958457373279
- type: euclidean_f1
value: 60.45118343195266
- type: euclidean_precision
value: 58.1625391403359
- type: euclidean_recall
value: 62.92731752386819
- type: manhattan_accuracy
value: 82.01769705437188
- type: manhattan_ap
value: 67.24709477497046
- type: manhattan_f1
value: 60.4103846436714
- type: manhattan_precision
value: 57.82063916654935
- type: manhattan_recall
value: 63.24299353249153
- type: max_accuracy
value: 87.99433383785463
- type: max_ap
value: 83.43513915159009
- type: max_f1
value: 76.3906784964842
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>, <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b>
</p>
## Intended Usage & Model Info
`jina-embedding-s-en-v1` is a text embedding model trained on Jina AI's Linnaeus-Clean dataset.
This dataset consists of 380 million sentence pairs, including query-document pairs.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
The Linnaeus-Full dataset, from which the Linnaeus-Clean dataset is derived, originally contained 1.6 billion sentence pairs.
The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more.
With a compact size of just 35 million parameters,
the model enables lightning-fast inference while still delivering impressive performance.
Additionally, we provide the following options:
- [`jina-embedding-t-en-v1`](https://huggingface.co/jinaai/jina-embedding-t-en-v1): 14 million parameters.
- [`jina-embedding-s-en-v1`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters **(you are here)**.
- [`jina-embedding-b-en-v1`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters.
- [`jina-embedding-l-en-v1`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters.
- `jina-embedding-1b-en-v1`: 1.2 billion parameters, 10 times the size of bert-base (coming soon).
- `jina-embedding-6b-en-v1`: 6 billion parameters, 30 times the size of bert-base (coming soon).
## Data & Parameters
Please check out our [technical report](https://arxiv.org/abs/2307.11224).
## Metrics
We compared the model against `all-minilm-l6-v2` and `all-mpnet-base-v2` from SBERT (sentence-transformers), and `text-embedding-ada-002` from OpenAI:
|Name|param |dimension|
|------------------------------|-----|------|
|all-minilm-l6-v2|23m |384|
|all-mpnet-base-v2 |110m |768|
|ada-embedding-002|Unknown/OpenAI API |1536|
|jina-embedding-t-en-v1|14m |312|
|jina-embedding-s-en-v1|35m |512|
|jina-embedding-b-en-v1|110m |768|
|jina-embedding-l-en-v1|330m |1024|
|Name|STS12|STS13|STS14|STS15|STS16|STS17|TRECCOVID|Quora|SciFact|
|------------------------------|-----|-----|-----|-----|-----|-----|--------|-----|-----|
|all-minilm-l6-v2|0.724|0.806|0.756|0.854|0.79 |0.876|0.473 |0.876|0.645 |
|all-mpnet-base-v2|0.726|**0.835**|0.78 |0.857|0.8 |**0.906**|0.513 |0.875|0.656 |
|ada-embedding-002|0.698|0.833|0.761|0.861|**0.86** |0.903|**0.685** |0.876|**0.726** |
|jina-embedding-t-en-v1|0.717|0.773|0.731|0.829|0.777|0.860|0.482 |0.840|0.522 |
|jina-embedding-s-en-v1|0.743|0.786|0.738|0.837|0.80|0.875|0.523 |0.857|0.524 |
|jina-embedding-b-en-v1|**0.751**|0.809|0.761|0.856|0.812|0.890|0.606 |0.876|0.594 |
|jina-embedding-l-en-v1|0.745|0.832|**0.781**|**0.869**|0.837|0.902|0.573 |**0.881**|0.598 |
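The STS columns above report MTEB's Spearman correlation between the model's cosine similarities and human judgments (the `cos_sim_spearman` entries in the metadata). As a reference, Spearman's ρ is simply the Pearson correlation computed on ranks — a generic, dependency-free sketch, not tied to any of the models above:

```python
def rankdata(xs):
    # assign 1-based ranks, averaging ranks over ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    # Pearson correlation of the two rank vectors
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # → 1.0
print(spearman([1, 2, 3, 4], [40, 30, 20, 10]))  # → -1.0
```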
## Usage
Use with Jina AI Finetuner:
```python
# pip install finetuner
import finetuner

model = finetuner.build_model('jinaai/jina-embedding-s-en-v1')
embeddings = finetuner.encode(
    model=model,
    data=['how is the weather today', 'What is the current weather like today?']
)
print(finetuner.cos_sim(embeddings[0], embeddings[1]))
```
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['how is the weather today', 'What is the current weather like today?']
model = SentenceTransformer('jinaai/jina-embedding-s-en-v1')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
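The `cos_sim` utilities above compute the standard cosine similarity — the dot product of the two embedding vectors divided by the product of their norms. A dependency-free sketch of the same formula:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# identical directions score 1.0, orthogonal directions score 0.0
print(round(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]), 6))  # → 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 6))            # → 0.0
```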
## Fine-tuning
Please consider using [Finetuner](https://github.com/jina-ai/finetuner) to fine-tune this model on your own data.
## Plans
1. The development of `jina-embedding-s-en-v2` is currently underway with two main objectives: improving performance and increasing the maximum sequence length.
2. We are currently working on a bilingual embedding model that combines English with a second language; the upcoming models will be called `jina-embedding-s/b/l-de-v1`.
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```latex
@misc{günther2023jina,
title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
year={2023},
eprint={2307.11224},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 67,461 | […] |
Yntec/RadiantVibes | 2023-08-28T10:30:39.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Fantasy",
"Artwork",
"Landscape",
"Hivemind111",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/RadiantVibes | 2 | 1,747 | diffusers | 2023-08-28T10:01:38 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Fantasy
- Artwork
- Landscape
- Hivemind111
inference: true
---
# Radiant Vibes
An FP16, no-EMA version of this model.
Sample images and prompt:


Overwatch pretty cute girl grabbing beef tacos made out of burritos. by ilya kuvshinov, krenz cushart, greg rutkowski, trending on artstation. glossy materials, sharp highlights, amazing textured brush strokes, accurate shape, clear details, cinematic soft volumetric studio lighting, with backlight, vfx, hdr
Original Page:
https://civitai.com/models/4509?modelVersionId=38663 | 932 | […] |
savasy/bert-base-turkish-sentiment-cased | 2023-06-22T14:42:55.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"tr",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | savasy | null | null | savasy/bert-base-turkish-sentiment-cased | 20 | 1,746 | transformers | 2022-03-02T23:29:05 | ---
language: tr
---
# Bert-base Turkish Sentiment Model
https://huggingface.co/savasy/bert-base-turkish-sentiment-cased
This model is used for sentiment analysis and is based on BERTurk for the Turkish language: https://huggingface.co/dbmdz/bert-base-turkish-cased
## Dataset
The dataset is taken from the studies [[2]](#paper-2) and [[3]](#paper-3), and merged.
* The study [[2]](#paper-2) gathered movie and product reviews. The products are books, DVDs, electronics, and kitchen items.
The movie dataset is taken from a cinema Web page ([Beyazperde](www.beyazperde.com)) with
5331 positive and 5331 negative sentences. Reviews on the Web page are rated on a scale
from 0 to 5 by the users who wrote them. The study considered a review's sentiment
positive if its rating is greater than or equal to 4, and negative if it is less than or
equal to 2. The authors also built a Turkish product review dataset from an online
retailer's Web page, constructing a benchmark dataset of reviews for several product
categories (books, DVDs, etc.). Likewise, reviews are rated in the range from 1 to 5,
and the majority of reviews are rated 5. Each category has 700 positive and 700 negative
reviews, with an average rating of 2.27 for negative reviews and 4.5 for positive
reviews. This dataset is also used by the study [[1]](#paper-1).
* The study [[3]](#paper-3) collected a tweet dataset. They proposed a new approach for automatically classifying the sentiment of microblog messages, based on robust feature representation and fusion.
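The rating-to-label rule from study [2] above (ratings ≥ 4 are positive, ratings ≤ 2 are negative, and the middle rating is discarded) can be sketched as a small helper — hypothetical code for illustration, not part of the released dataset tooling:

```python
def rating_to_label(rating):
    """Map a 0-5 user rating to a sentiment label per the rule described above."""
    if rating >= 4:
        return "positive"
    if rating <= 2:
        return "negative"
    return None  # reviews rated 3 are excluded from the dataset

print(rating_to_label(5))  # → positive
print(rating_to_label(2))  # → negative
```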
*Merged Dataset*
| *size* | *data* |
|--------|----|
| 8000 |dev.tsv|
| 8262 |test.tsv|
| 32000 |train.tsv|
| *48262* |*total*|
### The dataset is used by following papers
<a id="paper-1">[1]</a> Yildirim, Savaş. (2020). Comparing Deep Neural Networks to Traditional Models for Sentiment Analysis in Turkish Language. 10.1007/978-981-15-1216-2_12.
<a id="paper-2">[2]</a> Demirtas, Erkin and Mykola Pechenizkiy. 2013. Cross-lingual polarity detection with machine translation. In Proceedings of the Second International Workshop on Issues of Sentiment
Discovery and Opinion Mining (WISDOM ’13)
<a id="paper-3">[3]</a> Hayran, A., Sert, M. (2017), "Sentiment Analysis on Microblog Data based on Word Embedding and Fusion Techniques", IEEE 25th Signal Processing and Communications Applications Conference (SIU 2017), Belek, Turkey
## Training
```shell
export GLUE_DIR="./sst-2-newall"
export TASK_NAME=SST-2
python3 run_glue.py \
--model_type bert \
--model_name_or_path dbmdz/bert-base-turkish-uncased\
--task_name "SST-2" \
--do_train \
--do_eval \
--data_dir "./sst-2-newall" \
--max_seq_length 128 \
--per_gpu_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir "./model"
```
## Results
> 05/10/2020 17:00:43 - INFO - transformers.trainer - \*\*\*\*\* Running Evaluation \*\*\*\*\*
> 05/10/2020 17:00:43 - INFO - transformers.trainer - Num examples = 7999
> 05/10/2020 17:00:43 - INFO - transformers.trainer - Batch size = 8
> Evaluation: 100% 1000/1000 [00:34<00:00, 29.04it/s]
> 05/10/2020 17:01:17 - INFO - \_\_main__ - \*\*\*\*\* Eval results sst-2 \*\*\*\*\*
> 05/10/2020 17:01:17 - INFO - \_\_main__ - acc = 0.9539942492811602
> 05/10/2020 17:01:17 - INFO - \_\_main__ - loss = 0.16348013816401363
Accuracy is about **95.4%**
## Code Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model = AutoModelForSequenceClassification.from_pretrained("savasy/bert-base-turkish-sentiment-cased")
tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-sentiment-cased")
sa= pipeline("sentiment-analysis", tokenizer=tokenizer, model=model)
p = sa("bu telefon modelleri çok kaliteli , her parçası çok özel bence")
print(p)
# [{'label': 'LABEL_1', 'score': 0.9871089}]
print(p[0]['label'] == 'LABEL_1')
# True
p = sa("Film çok kötü ve çok sahteydi")
print(p)
# [{'label': 'LABEL_0', 'score': 0.9975505}]
print(p[0]['label'] == 'LABEL_1')
# False
```
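The pipeline returns generic label ids rather than readable class names. A small mapping, assumed from the example outputs above (where `LABEL_1` corresponds to a positive review and `LABEL_0` to a negative one), makes the predictions human-readable:

```python
# Assumed mapping based on the example outputs above: LABEL_1 -> positive.
label_map = {"LABEL_0": "negative", "LABEL_1": "positive"}

# A sample pipeline output, as shown in the examples above.
prediction = {"label": "LABEL_1", "score": 0.9871089}
readable = label_map[prediction["label"]]
print(readable)  # positive
```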
## Test
### Data
Suppose your file has many lines, each containing a comment followed by a label (1 or 0) at the end, tab-separated:
> comment1 ... \t label
> comment2 ... \t label
> ...
### Code
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model = AutoModelForSequenceClassification.from_pretrained("savasy/bert-base-turkish-sentiment-cased")
tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-sentiment-cased")
sa = pipeline("sentiment-analysis", tokenizer=tokenizer, model=model)
input_file = "/path/to/your/file/yourfile.tsv"
i, crr = 0, 0
for line in open(input_file):
    lines = line.strip().split("\t")
    if len(lines) == 2:
        i = i + 1
        if i % 100 == 0:
            print(i)
        pred = sa(lines[0])
        pred = pred[0]["label"].split("_")[1]
        if pred == lines[1]:
            crr = crr + 1
print(crr, i, crr / i)
```
| 5,000 | [
[
-0.039337158203125,
-0.048126220703125,
0.0008287429809570312,
0.016357421875,
-0.03997802734375,
0.0009646415710449219,
-0.02099609375,
0.0002105236053466797,
0.021453857421875,
0.01354217529296875,
-0.042144775390625,
-0.059356689453125,
-0.054443359375,
-... |
google/bigbird-roberta-large | 2021-06-02T14:49:29.000Z | [
"transformers",
"pytorch",
"jax",
"big_bird",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:cc_news",
"arxiv:2007.14062",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | google | null | null | google/bigbird-roberta-large | 19 | 1,745 | transformers | 2022-03-02T23:29:05 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- cc_news
---
# BigBird large model
BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.
It is pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).
Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA results on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.
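As a rough back-of-the-envelope illustration (not the exact BigBird kernel; the window, random, and global block counts below are illustrative assumptions), the cost gap between full attention and block sparse attention can be sketched as:

```python
def full_attention_pairs(seq_len: int) -> int:
    # Full self-attention scores every token against every token: O(n^2).
    return seq_len * seq_len

def block_sparse_attention_pairs(seq_len: int, block_size: int = 64,
                                 window_blocks: int = 3, random_blocks: int = 3,
                                 global_blocks: int = 2) -> int:
    # Each query block attends to a bounded number of key blocks
    # (sliding window + random + global), so the cost grows linearly in n.
    num_blocks = seq_len // block_size
    attended = min(num_blocks, window_blocks + random_blocks + global_blocks)
    return num_blocks * attended * block_size * block_size

n = 4096
print(full_attention_pairs(n))          # 16777216
print(block_sparse_attention_pairs(n))  # 2097152, roughly 8x fewer
```

Under these toy settings the sparse variant computes about 8x fewer attention scores at length 4096, and the gap widens as the sequence grows.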
## How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BigBirdModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-large")
# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large")
# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large", attention_type="original_full")
# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large", block_size=16, num_random_blocks=2)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training Data
This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same SentencePiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).
## Training Procedure
Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model was trained to predict the masked tokens.
The model was warm-started from RoBERTa's checkpoint.
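A minimal sketch of the 15% masking step described above (illustrative only; real MLM preprocessing also applies the 80/10/10 replacement split and skips special tokens, and the mask id here is an arbitrary placeholder):

```python
import random

def mask_tokens(token_ids, mask_id, mask_prob=0.15, seed=0):
    # Select ~15% of positions, replace them with the mask id, and keep
    # the original ids as labels; -100 marks positions ignored by the loss.
    rng = random.Random(seed)
    masked, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            masked[i], labels[i] = mask_id, tok
    return masked, labels

masked, labels = mask_tokens(list(range(100, 200)), mask_id=0)
print(sum(l != -100 for l in labels))  # around 15 of 100 positions get masked
```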
## BibTeX entry and citation info
```tex
@misc{zaheer2021big,
title={Big Bird: Transformers for Longer Sequences},
author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
year={2021},
eprint={2007.14062},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
| 2,855 | [
[
-0.0309600830078125,
-0.051025390625,
0.0124969482421875,
0.0216064453125,
-0.0063629150390625,
-0.018096923828125,
-0.0341796875,
-0.044464111328125,
0.0209197998046875,
0.0218658447265625,
-0.048553466796875,
-0.0124969482421875,
-0.05908203125,
0.01326751... |
sail-rvc/Central_Cee__RVC_-_1000_Epochs_ | 2023-07-14T07:20:16.000Z | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | sail-rvc | null | null | sail-rvc/Central_Cee__RVC_-_1000_Epochs_ | 0 | 1,745 | transformers | 2023-07-14T07:19:57 |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Central_Cee__RVC_-_1000_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:20:16
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
| 399 | [
[
-0.033660888671875,
-0.037841796875,
0.03192138671875,
0.009857177734375,
-0.0266265869140625,
-0.0164031982421875,
-0.000492095947265625,
0.0012712478637695312,
0.013031005859375,
0.07098388671875,
-0.045135498046875,
-0.052520751953125,
-0.025665283203125,
... |
digiplay/AI-infinity-V1-fp16 | 2023-08-04T18:12:02.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/AI-infinity-V1-fp16 | 6 | 1,743 | diffusers | 2023-08-03T13:31:17 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/121253/ai-infinity-realistic-better-hands
DEMO image generated by huggingface's API :

Original Author's DEMO image :

 | 722 | [
[
-0.048126220703125,
-0.04254150390625,
0.036651611328125,
0.0020904541015625,
-0.033966064453125,
0.0030536651611328125,
0.0297088623046875,
-0.038543701171875,
0.04132080078125,
0.0236968994140625,
-0.0653076171875,
-0.034515380859375,
-0.04461669921875,
-0... |
gbellamy/lora-trained-xl-colab_2 | 2023-09-30T17:18:45.000Z | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"license:openrail++",
"has_space",
"region:us"
] | text-to-image | gbellamy | null | null | gbellamy/lora-trained-xl-colab_2 | 2 | 1,743 | diffusers | 2023-09-30T14:36:01 |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of suezzeus dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - gbellamy/lora-trained-xl-colab_2
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of suezzeus dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
gmb note: used 21 1024x1024 images
| 674 | [
[
-0.0279083251953125,
-0.028350830078125,
0.023712158203125,
0.0173797607421875,
-0.028350830078125,
0.0067291259765625,
0.020111083984375,
-0.016510009765625,
0.06500244140625,
0.0257568359375,
-0.035400390625,
-0.0269012451171875,
-0.04022216796875,
-0.0145... |
irfansk/my-pet-dog | 2023-10-18T07:48:21.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | irfansk | null | null | irfansk/my-pet-dog | 0 | 1,743 | diffusers | 2023-10-18T07:44:07 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by irfansk following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

| 385 | [
[
-0.05950927734375,
-0.01654052734375,
0.0163421630859375,
0.01091766357421875,
-0.0085906982421875,
0.034942626953125,
0.0227203369140625,
-0.0401611328125,
0.044921875,
0.02728271484375,
-0.0435791015625,
-0.0131988525390625,
-0.01145172119140625,
0.0061035... |
Falah/female | 2023-10-23T09:49:16.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | Falah | null | null | Falah/female | 0 | 1,742 | diffusers | 2023-10-23T09:44:45 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### female Dreambooth model trained by Falah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 493 | [
[
-0.0172119140625,
-0.0614013671875,
0.02801513671875,
0.0291595458984375,
-0.0168304443359375,
0.0287017822265625,
0.038330078125,
-0.01107025146484375,
0.04052734375,
0.00927734375,
-0.028289794921875,
-0.0238037109375,
-0.040771484375,
-0.00893402099609375... |
akreal/tiny-random-gpt2 | 2021-08-18T15:07:44.000Z | [
"transformers",
"pytorch",
"tf",
"gpt2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | akreal | null | null | akreal/tiny-random-gpt2 | 0 | 1,741 | transformers | 2022-03-02T23:29:05 | This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-gpt2
Changes: use old format for `pytorch_model.bin`.
| 129 | [
[
-0.00637054443359375,
-0.0733642578125,
0.006755828857421875,
0.0247802734375,
-0.0275421142578125,
-0.0275115966796875,
0.00222015380859375,
-0.0142974853515625,
0.0192718505859375,
0.021484375,
-0.039154052734375,
-0.00478363037109375,
-0.0164031982421875,
... |
vinai/vinai-translate-vi2en | 2022-07-06T07:19:15.000Z | [
"transformers",
"pytorch",
"tf",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | vinai | null | null | vinai/vinai-translate-vi2en | 3 | 1,739 | transformers | 2022-07-01T03:29:14 | # A Vietnamese-English Neural Machine Translation System
Our pre-trained VinAI Translate models `vinai/vinai-translate-vi2en` and `vinai/vinai-translate-en2vi` are state-of-the-art text translation models for Vietnamese-to-English and English-to-Vietnamese, respectively. The general architecture and experimental results of VinAI Translate can be found in [our paper](https://openreview.net/forum?id=CRg-RaxKnai):
@inproceedings{vinaitranslate,
title = {{A Vietnamese-English Neural Machine Translation System}},
author = {Thien Hai Nguyen and Tuan-Duy H. Nguyen and Duy Phung and Duy Tran-Cong Nguyen and Hieu Minh Tran and Manh Luong and Tin Duy Vo and Hung Hai Bui and Dinh Phung and Dat Quoc Nguyen},
booktitle = {Proceedings of the 23rd Annual Conference of the International Speech Communication Association: Show and Tell (INTERSPEECH)},
year = {2022}
}
Please **CITE** our paper whenever the pre-trained models or the system are used to help produce published results or incorporated into other software.
For further information or requests, please go to [VinAI Translate's homepage](https://github.com/VinAIResearch/VinAI_Translate)! | 1,188 | [
[
0.012176513671875,
-0.037322998046875,
0.03997802734375,
0.0282745361328125,
-0.03936767578125,
-0.027587890625,
-0.0170440673828125,
-0.0215911865234375,
0.0029010772705078125,
0.04107666015625,
-0.01247406005859375,
-0.0305938720703125,
-0.05267333984375,
... |
timm/efficientvit_m5.r224_in1k | 2023-08-18T23:22:16.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2305.07027",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/efficientvit_m5.r224_in1k | 0 | 1,739 | timm | 2023-08-18T23:21:57 | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for efficientvit_m5.r224_in1k
An EfficientViT (MSRA) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 12.5
- GMACs: 0.5
- Activations (M): 2.4
- Image size: 224 x 224
- **Papers:**
- EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention: https://arxiv.org/abs/2305.07027
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/microsoft/Cream/tree/main/EfficientViT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_m5.r224_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_m5.r224_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 192, 14, 14])
# torch.Size([1, 288, 7, 7])
# torch.Size([1, 384, 4, 4])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_m5.r224_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 4, 4) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{liu2023efficientvit,
title = {EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention},
author = {Liu, Xinyu and Peng, Houwen and Zheng, Ningxin and Yang, Yuqing and Hu, Han and Yuan, Yixuan},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023},
}
```
| 3,765 | [
[
-0.0310211181640625,
-0.03961181640625,
0.00038170814514160156,
0.012969970703125,
-0.0225982666015625,
-0.032379150390625,
-0.01995849609375,
-0.0190277099609375,
0.01018524169921875,
0.0224609375,
-0.034759521484375,
-0.0457763671875,
-0.048553466796875,
-... |
akreal/tiny-random-xlnet | 2021-08-18T15:08:21.000Z | [
"transformers",
"pytorch",
"tf",
"xlnet",
"endpoints_compatible",
"region:us"
] | null | akreal | null | null | akreal/tiny-random-xlnet | 0 | 1,738 | transformers | 2022-03-02T23:29:05 | This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-xlnet
Changes: use old format for `pytorch_model.bin`.
| 130 | [
[
-0.0104827880859375,
-0.060333251953125,
-0.0032672882080078125,
0.0204925537109375,
-0.01395416259765625,
-0.023284912109375,
0.003711700439453125,
-0.012115478515625,
0.044921875,
0.044769287109375,
-0.036102294921875,
-0.00812530517578125,
-0.0098114013671875... |
IIC/dpr-spanish-passage_encoder-allqa-base | 2022-04-02T15:05:07.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"sentence similarity",
"passage retrieval",
"es",
"dataset:squad_es",
"dataset:PlanTL-GOB-ES/SQAC",
"dataset:IIC/bioasq22_es",
"arxiv:2004.04906",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | IIC | null | null | IIC/dpr-spanish-passage_encoder-allqa-base | 2 | 1,738 | transformers | 2022-03-27T15:23:12 | ---
language:
- es
tags:
- sentence similarity
- passage retrieval
datasets:
- squad_es
- PlanTL-GOB-ES/SQAC
- IIC/bioasq22_es
metrics:
- eval_loss: 0.010779764448327261
- eval_accuracy: 0.9982682224158297
- eval_f1: 0.9446059155411182
- average_rank: 0.11728500598392888
model-index:
- name: dpr-spanish-passage_encoder-allqa-base
results:
  - task:
      type: text similarity
      name: text similarity
    dataset:
      type: squad_es
      name: squad_es
      args: es
metrics:
- type: loss
value: 0.010779764448327261
name: eval_loss
- type: accuracy
value: 0.9982682224158297
name: accuracy
- type: f1
value: 0.9446059155411182
name: f1
- type: avgrank
value: 0.11728500598392888
name: avgrank
---
[Dense Passage Retrieval](https://arxiv.org/abs/2004.04906) (DPR) is a set of tools for performing state-of-the-art open-domain question answering. It was initially developed by Facebook and there is an [official repository](https://github.com/facebookresearch/DPR). DPR is intended to retrieve the relevant documents to answer a given question, and is composed of two models, one for encoding passages and another for encoding questions. This concrete model is the one used for encoding passages.
With this and the [question encoder model](https://huggingface.co/avacaondata/dpr-spanish-question_encoder-allqa-base) we introduce the best passage retrievers in Spanish up to date (to the best of our knowledge), improving over the [previous model we developed](https://huggingface.co/IIC/dpr-spanish-question_encoder-squades-base), by training it for longer and with more data.
Regarding its use, this model should be used to vectorize the passages in the document database of a question answering system; an incoming question, encoded with the [question encoder](https://huggingface.co/avacaondata/dpr-spanish-question_encoder-allqa-base), is then compared against those passage encodings to find the most similar documents, which should then be used for either extracting the answer or generating it.
For training the model, we used a collection of Question Answering datasets in Spanish:
- the Spanish version of SQUAD, [SQUAD-ES](https://huggingface.co/datasets/squad_es)
- [SQAC- Spanish Question Answering Corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC)
- [BioAsq22-ES](https://huggingface.co/datasets/IIC/bioasq22_es), which we translated using automatic translation with Transformers.
With this complete dataset we created positive and negative examples for the model (For more information look at [the paper](https://arxiv.org/abs/2004.04906) to understand the training process for DPR). We trained for 25 epochs with the same configuration as the paper. The [previous DPR model](https://huggingface.co/IIC/dpr-spanish-passage_encoder-squades-base) was trained for only 3 epochs with about 60% of the data.
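A toy sketch of DPR's training objective with in-batch negatives, where each question's gold passage competes against the other passages in the batch (the 2-d embeddings below are made-up stand-ins for real encoder outputs):

```python
import math

# Made-up 2-d embeddings standing in for encoder outputs; in DPR training,
# the other passages in the batch serve as negatives for each question.
questions = [[1.0, 0.0], [0.0, 1.0]]
passages = [[0.9, 0.1], [0.2, 0.8]]   # passages[i] is the gold match for questions[i]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Negative log-likelihood of the gold passage under a softmax over all
# in-batch passage scores, averaged over the batch.
loss = 0.0
for i, q in enumerate(questions):
    scores = [dot(q, p) for p in passages]
    log_z = math.log(sum(math.exp(s) for s in scores))
    loss += log_z - scores[i]
loss /= len(questions)
print(round(loss, 4))
```

Training pushes gold question-passage pairs to higher dot products than the in-batch negatives, driving this loss toward zero.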
Example of use:
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
model_str = "IIC/dpr-spanish-passage_encoder-allqa-base"
tokenizer = DPRContextEncoderTokenizer.from_pretrained(model_str)
model = DPRContextEncoder.from_pretrained(model_str)
input_ids = tokenizer("Usain Bolt ganó varias medallas de oro en las Olimpiadas del año 2012", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
```
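At retrieval time, ranking reduces to comparing stored passage vectors against the question vector by dot product. A minimal sketch with made-up low-dimensional vectors standing in for real `pooler_output` embeddings:

```python
# Made-up stand-ins for encoder outputs (real DPR embeddings are 768-d tensors).
question_vec = [0.2, 0.9, 0.1]
passage_vecs = {
    "olympics": [0.1, 0.8, 0.0],   # topically close to the question
    "cooking":  [0.9, 0.0, 0.3],   # unrelated passage
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Rank passages by similarity to the question, highest first.
ranked = sorted(passage_vecs, key=lambda k: dot(question_vec, passage_vecs[k]),
                reverse=True)
print(ranked[0])  # the topically close passage wins
```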
The full metrics of this model on the evaluation split of SQUADES are:
```
eval_loss: 0.010779764448327261
eval_acc: 0.9982682224158297
eval_f1: 0.9446059155411182
eval_acc_and_f1: 0.9714370689784739
eval_average_rank: 0.11728500598392888
```
And the classification report:
```
               precision    recall  f1-score   support

hard_negative     0.9991    0.9991    0.9991   1104999
     positive     0.9446    0.9446    0.9446     17547

     accuracy                         0.9983   1122546
    macro avg     0.9719    0.9719    0.9719   1122546
 weighted avg     0.9983    0.9983    0.9983   1122546
```
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. | 4,600 | [
[
-0.03607177734375,
-0.044647216796875,
0.0267333984375,
0.0302734375,
-0.012420654296875,
0.01334381103515625,
0.0026607513427734375,
-0.0274810791015625,
0.0080718994140625,
0.0190887451171875,
-0.046875,
-0.03411865234375,
-0.04534912109375,
0.027740478515... |
linhvu/decapoda-research-llama-7b-hf | 2023-05-30T03:20:00.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | text-generation | linhvu | null | null | linhvu/decapoda-research-llama-7b-hf | 2 | 1,738 | transformers | 2023-05-30T03:18:15 | ---
license: other
duplicated_from: decapoda-research/llama-7b-hf
---
LLaMA-7B converted to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details.
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=6>Model hyperparameters</th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learning rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning Tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
projecte-aina/aguila-7b | 2023-10-31T15:33:31.000Z | [
"transformers",
"pytorch",
"safetensors",
"RefinedWebModel",
"text-generation",
"aguila",
"falcon",
"spanish",
"catalan",
"custom_code",
"en",
"es",
"ca",
"model-index",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | projecte-aina | null | null | projecte-aina/aguila-7b | 27 | 1,738 | transformers | 2023-07-05T13:29:04 | ---
language:
- en
- es
- ca
license:
- apache-2.0
tags:
- aguila
- falcon
- spanish
- catalan
metrics:
- ppl
model-index:
- name: aguila_7b
results:
- task:
name: Causal Language Modeling
type: text-generation
metrics:
- name: Perplexity
type: ppl
value: 8.59
pipeline_tag: text-generation
widget:
- text: |-
Respon a la pregunta següent.
Pregunta: "Quina és la capital de Suècia?"
Resposta: "La capital de Suècia és Estocolm."
----
Respon a la pregunta següent.
Pregunta: "Quina beguda es consumeix als matins per despertar-se?"
Resposta: "La majoria de gent consumeix cafè per despertar-se."
----
Respon a la pregunta següent.
Pregunta: "Explica com funciona un motor de combustió"
Resposta:
example_title: Pregunta-Resposta
- text: |-
Extrae las entidades nombradas del siguiente texto:
Texto: "Me llamo Wolfgang y vivo en Berlin"
Entidades: Wolfgang:PER, Berlin:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center"
Entidades: parc güell:LOC, barcelona supercomputing center:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Maria y Miguel no tienen ningún problema contigo"
Entidades: Maria:PER, Miguel:PER
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Damián se cortó el pelo"
Entidades: Damián:PER
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo"
Entidades: Pablo:PER, Barcelona:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Carlos comparte piso con Marc"
Entidades:
example_title: Entidades-Nombradas
---
# Ǎguila-7B
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Language adaptation](#language-adaptation)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact](#contact)
- [Copyright](#copyright)
- [License](#license)
- [Funding](#funding)
- [Disclaimer](#disclaimer)
</details>
## Model description
**Ǎguila-7B** is a transformer-based causal language model for Catalan, Spanish, and English.
It is based on the [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) model and has been trained on a 26B token
trilingual corpus collected from publicly available corpora and crawlers.
More information is available in this Medium post: [Introducing Ǎguila, a new open-source LLM for Spanish and Catalan](https://medium.com/@mpamies247/introducing-a%CC%8Cguila-a-new-open-source-llm-for-spanish-and-catalan-ee1ebc70bc79).
## Intended uses and limitations
The **Ǎguila-7B** model is ready to use only for causal language modeling, i.e. text-generation tasks;
for downstream tasks, it is intended to be fine-tuned.
## How to use
Here is how to use this model:
```python
import torch
from transformers import pipeline, AutoTokenizer
input_text = "El mercat del barri és fantàstic, hi pots trobar"
model_id = "projecte-aina/aguila-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
generation = generator(
input_text,
do_sample=True,
top_k=10,
eos_token_id=tokenizer.eos_token_id,
)
print(f"Result: {generation[0]['generated_text']}")
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques
on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Language adaptation
We adapted the original [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) model to Spanish and Catalan by swapping the tokenizer and adjusting the embedding layer.
The adaptation procedure is explained in [this blog post](https://medium.com/@mpamies247/ee1ebc70bc79).
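The embedding-layer adjustment can be sketched as follows. This is a simplified illustration of the idea, not the project's actual code: rows for tokens shared between the old and new vocabularies keep their pretrained vectors, while rows for tokens new to the model are freshly initialized (with `transformers`, the resize itself is typically done via `model.resize_token_embeddings`).

```python
import random

def adapt_embeddings(old_emb, old_vocab, new_vocab, dim, seed=0):
    """Build an embedding table for a new tokenizer's vocabulary.

    Tokens present in both vocabularies keep their pretrained vectors;
    tokens new to the model get small random initializations.
    """
    rng = random.Random(seed)
    new_emb = []
    for token in sorted(new_vocab, key=new_vocab.get):
        old_id = old_vocab.get(token)
        if old_id is not None:
            new_emb.append(list(old_emb[old_id]))  # reuse pretrained row
        else:
            new_emb.append([rng.gauss(0.0, 0.02) for _ in range(dim)])
    return new_emb

# Hypothetical 4-dimensional embeddings for a toy two-token vocabulary:
old_vocab = {"hello": 0, "world": 1}
old_emb = [[0.0, 1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0]]
new_vocab = {"hola": 0, "world": 1, "món": 2}  # new trilingual vocabulary
new_emb = adapt_embeddings(old_emb, old_vocab, new_vocab, dim=4)
```

The shared token `"world"` keeps its pretrained vector, while `"hola"` and `"món"` start from small random values that are trained during the continued pre-training.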
## Training
### Training data
The training corpus consists of 26B tokens of several corpora gathered from web crawlings and public domain data.
| Dataset | Language | Words (per-epoch) | Epochs |
|---------------------|----------|--------------------|--------------|
| Wikipedia | en | 2169.97M | 1.428144485 |
| C4_es | es | 53709.80M | 0.1049686196 |
| Biomedical | es | 455.03M | 0.7140722425 |
| Legal | es | 995.70M | 0.7140722425 |
| Wikipedia | es | 693.60M | 1.428144485 |
| Gutenberg | es | 53.18M | 0.7140722425 |
| C4_ca | ca | 2826.00M | 2.142216727 |
| Biomedical | ca | 11.80M | 1.428144485 |
| RacoCatalà Noticias | ca | 17.16M | 2.142216727 |
| RacoCatalà Forums | ca | 333.73M | 2.142216727 |
| CaWaC | ca | 57.79M | 2.142216727 |
| Wikipedia | ca | 228.01M | 3.570361212 |
| Vilaweb | ca | 50.34M | 2.142216727 |
The dataset has the following language distribution:
|Language|Percentage|
|--------|----------|
| En | 16.84% |
| Es | 41.38% |
| Ca | 41.79% |
Note: A small amount of English data was kept to avoid catastrophic forgetting.
## Training procedure
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) with a vocabulary size of 50,257 tokens.
After training a new tokenizer and adapting [falcon-7b](https://huggingface.co/tiiuae/falcon-7b)'s embedding layer, the model was
further pre-trained in three target languages: Catalan, Spanish and English.
The training lasted a total of 320 hours on 8 NVIDIA H100 GPUs with 80 GB of memory each.
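The core BPE merge loop is simple to sketch. The toy implementation below works at the character level and omits the byte-level handling and any normalization; the actual tokenizer was trained as a byte-level BPE with a 50,257-token vocabulary.

```python
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its merged symbol."""
    old = " ".join(pair)
    new = "".join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

def learn_bpe(words, num_merges):
    """Learn BPE merges from a word list (toy version of the algorithm)."""
    vocab = Counter(" ".join(w) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges

merges = learn_bpe(["lower", "lowest", "newer", "wider"], num_merges=3)
```

Each iteration merges the most frequent adjacent pair; the learned merge list is what the trained tokenizer later applies to segment new text.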
### Training hyperparameters
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- train_batch_size: 1
- eval_batch_size: 1
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam
- betas: (0.9,0.999)
- epsilon: 1e-08
- learning_rate: 5e-05
- lr_scheduler_type: linear
- num_epochs: 1.0
### Framework versions
- Pytorch 2.0.0
- Transformers 4.30.2
- Datasets 2.13.1
- Tokenizers 0.13.3
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <langtech@bsc.es>.
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by:
- The [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
- The [Spanish State Secretariat for Digitalization and Artificial Intelligence](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the [Plan de Impulso de las Tecnologías del Lenguaje](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center)
be liable for any results arising from the use made by third parties.
</details>
mokshu3242/my-pet-mouse | 2023-10-14T11:23:52.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | mokshu3242 | null | null | mokshu3242/my-pet-mouse | 0 | 1,737 | diffusers | 2023-10-14T11:00:47 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### mok: My-Pet-Mouse Dreambooth model trained by mokshu3242 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VCETV127
Sample pictures of this concept:


chcaa/dfm-encoder-large-v1 | 2023-06-21T21:35:54.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"large",
"danish",
"mlm",
"da",
"arxiv:1706.03762",
"arxiv:1810.04805",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | chcaa | null | null | chcaa/dfm-encoder-large-v1 | 4 | 1,735 | transformers | 2023-01-04T10:06:06 | ---
license: cc-by-4.0
metrics:
- accuracy
model-index:
- name: dfm-encoder-large-v1
results:
- task:
name: Masked Language Modeling
type: fill-mask
datasets:
- netarkivet_text_v1
- danews_v1
- hopetwitter_v1
- DDSC/dagw_reddit_filtered_v1.0.0
metrics:
- name: Accuracy
type: accuracy
value: 0.7328012831797821
language:
- da
tags:
- bert
- pytorch
- large
- danish
- mlm
---
# dfm-encoder-large-v1
This model is trained as a part of the Danish Foundation Models project.
## Training procedure
This model is a fine-tuned version of [NbAiLab/nb-bert-large](https://huggingface.co/NbAiLab/nb-bert-large) on the dcc_v1.1.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3175
- Accuracy: 0.7328
<details>
<summary> Training Hyperparameters </summary>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- training_steps: 100000
- mixed_precision_training: Native AMP
</details>
<details>
<summary> Training Results </summary>
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 1.4239 | 0.02 | 2000 | 0.6481 | 1.9361 |
| 1.299 | 0.04 | 4000 | 0.6646 | 1.8073 |
| 1.2008 | 0.06 | 6000 | 0.6766 | 1.7281 |
| 1.193 | 0.08 | 8000 | 0.6770 | 1.6885 |
| 1.138 | 0.1 | 10000 | 0.6803 | 1.6729 |
| 1.1401 | 0.12 | 12000 | 0.6729 | 1.7227 |
| 4.1932 | 0.14 | 14000 | 0.3016 | 4.5455 |
| 2.3732 | 0.16 | 16000 | 0.5607 | 2.3964 |
| 1.2114 | 0.18 | 18000 | 0.6667 | 1.7638 |
| 1.1482 | 0.2 | 20000 | 0.6576 | 1.7839 |
| 1.0815 | 0.22 | 22000 | 0.6862 | 1.6308 |
| 1.085 | 0.24 | 24000 | 0.6837 | 1.6383 |
| 1.0788 | 0.26 | 26000 | 0.6812 | 1.6585 |
| 1.0389 | 0.28 | 28000 | 0.6861 | 1.5927 |
| 1.0283 | 0.3 | 30000 | 0.6937 | 1.5779 |
| 1.0145 | 0.32 | 32000 | 0.6967 | 1.5439 |
| 1.0023 | 0.34 | 34000 | 0.6980 | 1.5237 |
| 0.9976 | 0.36 | 36000 | 0.6962 | 1.5692 |
| 1.019 | 0.38 | 38000 | 0.6970 | 1.5460 |
| 1.0137 | 0.4 | 40000 | 0.6967 | 1.5564 |
| 1.0067 | 0.42 | 42000 | 0.7008 | 1.5176 |
| 0.992 | 0.44 | 44000 | 0.7060 | 1.4806 |
| 0.9796 | 0.46 | 46000 | 0.7026 | 1.5085 |
| 0.976 | 0.48 | 48000 | 0.7092 | 1.4705 |
| 0.9571 | 0.5 | 50000 | 0.7052 | 1.4895 |
| 0.9723 | 0.52 | 52000 | 0.7135 | 1.4516 |
| 0.9581 | 0.54 | 54000 | 0.7145 | 1.4343 |
| 0.9511 | 0.56 | 56000 | 0.7124 | 1.4334 |
| 0.9608 | 0.58 | 58000 | 0.7151 | 1.4268 |
| 0.9588 | 0.6 | 60000 | 0.7127 | 1.4471 |
| 0.9473 | 0.62 | 62000 | 0.7202 | 1.4037 |
| 0.9266 | 0.64 | 64000 | 0.7158 | 1.4225 |
| 0.925 | 0.66 | 66000 | 0.7208 | 1.3940 |
| 0.9242 | 0.68 | 68000 | 0.7189 | 1.4090 |
| 0.9141 | 0.7 | 70000 | 0.7229 | 1.3831 |
| 0.8884        | 0.72  | 72000  | 0.7233   | 1.3738          |
| 0.9145        | 0.74  | 74000  | 0.7275   | 1.3478          |
| 0.8741        | 0.76  | 76000  | 0.7255   | 1.3691          |
| 0.8752        | 0.78  | 78000  | 0.7276   | 1.3530          |
| 0.8634        | 0.8   | 80000  | 0.7272   | 1.3428          |
| 0.8882        | 0.82  | 82000  | 0.7270   | 1.3490          |
| 0.8872        | 0.84  | 84000  | 0.7296   | 1.3458          |
| 0.892         | 0.86  | 86000  | 0.7279   | 1.3382          |
| 0.9002        | 0.88  | 88000  | 0.7341   | 1.3091          |
| 0.8805        | 0.9   | 90000  | 0.7310   | 1.3209          |
| 0.8944        | 0.92  | 92000  | 0.7332   | 1.3133          |
| 0.8605        | 0.94  | 94000  | 0.7311   | 1.3404          |
| 0.879         | 0.96  | 96000  | 0.7356   | 1.2890          |
| 0.871         | 0.98  | 98000  | 0.7352   | 1.2954          |
| 0.8676        | 1.0   | 100000 | 0.7369   | 1.2935          |
</details>
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.5.3.dev0
- Tokenizers 0.12.1
# Model Card
Following [1], the following constitutes a model card for this model.
---
*Organization developing the Model*: The Danish Foundation Models project
*Model Creation Date*: June 2022
*Model Type*: Transformer encoder model [2]; BERT [3]
*Feedback on the Model*: For feedback on the model please use the [community forum](https://huggingface.co/chcaa/dfm-bert-base-v1/discussions).
*Training logs and performance metrics*: Check out this Weight and biases [Dashboard](https://wandb.ai/chcaa/danish-foundation-models/reports/dfm-bert-base-v1--VmlldzoyODkwMzc2).
## Intended Uses
*Primary Intended Uses*:
The primary intended use case of this model is the reproduction and validation of dataset quality. The intended use cases for future iterations of this model are the application in industry and research for Danish natural language tasks.
*Primary Intended Users*:
Future iterations of the model are intended for NLP practitioners dealing with Danish text documents.
*Out-of-Scope Uses*:
Use of the model for profiling in a way which is inconsiderate of the potential harm it might cause, such as racial profiling.
## Factors
*Card prompts - Relevant Factors*:
Relevant factors include which language is used. Our model is trained on a Danish
text corpus and is intended for validating the quality of the training data.
*Card prompts - Evaluation Factors*:
Future iterations of this model should include a validation of biases pertaining to gender, race, and religious and social groups.
## Metrics
*Performance Metrics*:
Our model is evaluated on the following performance metrics:
- Pseudo perplexity, following [4], across eight distinct domains, including Danish dialects, books, legal, social media (Reddit, Twitter), spontaneous speech, news and Wikipedia.
- The Danish subsection of ScandEval [5].
To see the performance metrics, check out this Weight and biases [Dashboard](https://wandb.ai/chcaa/danish-foundation-models/reports/dfm-bert-base-v1--VmlldzoyODkwMzc2).
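Pseudo-perplexity [4] scores a sequence with one masked prediction per position instead of the left-to-right factorization used for causal models. The sketch below shows the formula with a toy uniform scorer standing in for the masked language model; with a real model, `masked_prob` would return the MLM's probability of the true token at the masked position.

```python
import math

def pseudo_perplexity(tokens, masked_prob):
    """Pseudo-perplexity of a sequence (Salazar et al., 2020).

    `masked_prob(tokens, i)` must return the model's probability of
    tokens[i] when position i is masked.
    """
    pll = sum(math.log(masked_prob(tokens, i)) for i in range(len(tokens)))
    return math.exp(-pll / len(tokens))

def uniform(tokens, i):
    """Toy stand-in: uniform probability over a 10-word vocabulary."""
    return 1 / 10

print(pseudo_perplexity(["en", "lille", "dansk", "tekst"], uniform))  # ≈ 10
```

Under the uniform stand-in, the pseudo-perplexity equals the vocabulary size, mirroring how ordinary perplexity behaves for a causal model.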
*Decision Threshold*:
N/A
*Approaches to Uncertainty and Variability*:
Due to the cost of training, the model is only pre-trained once, but the ScandEval benchmark fine-tunes it ten times to obtain a reasonable estimate of model performance.
## Evaluation Data
*Datasets*:
The ScandEval's Danish benchmark includes:
- Named entity recognition on DaNE [7,8].
- Part-of-speech tagging and dependency parsing on DDT [8].
- Sentiment classification on AngryTweets [9], TwitterSent [9], Europarl [9], LCC [10]
- Hate speech classification on DKHate [11].
*Motivation*:
The ScandEval benchmark is the most comprehensive benchmark for Danish. Pseudo perplexity was analysed to examine the model's ability to model certain language domains.
## Training Data
For our training data, we sample from HopeTwitter, DaNews, [DAGW](DDSC/dagw_reddit_filtered_v1.0.0) and Netarkivet Text (NAT) with the probabilites; 0.10, 0.10, 0.10, 0.70. For more information on the training and datasets, see the respective datasheets on the Danish foundation models [GitHub page](https://github.com/centre-for-humanities-computing/danish-foundation-models).
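The per-corpus sampling can be sketched as a weighted choice over sources; this is an illustration rather than the project's training code (with the `datasets` library, `interleave_datasets` with a `probabilities` argument achieves a similar effect).

```python
import random

def sample_documents(sources, probabilities, n, seed=42):
    """Draw n documents, choosing the source corpus of each draw
    according to the given per-corpus probabilities."""
    rng = random.Random(seed)
    names = list(sources)
    weights = [probabilities[name] for name in names]
    batch = []
    for _ in range(n):
        name = rng.choices(names, weights=weights, k=1)[0]
        batch.append(rng.choice(sources[name]))
    return batch

# Toy stand-ins for the four corpora:
sources = {
    "HopeTwitter": ["tweet-1", "tweet-2"],
    "DaNews": ["article-1"],
    "DAGW": ["doc-1"],
    "NAT": ["page-1", "page-2", "page-3"],
}
probabilities = {"HopeTwitter": 0.10, "DaNews": 0.10, "DAGW": 0.10, "NAT": 0.70}
batch = sample_documents(sources, probabilities, n=1000)
```

Roughly 70% of the drawn documents come from NAT, matching its sampling probability.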
*Pre-processing*:
Input documents are tokenized using the tokenizer of the Danish BERT by BotXO [12], which uses a BPE with a vocabulary size of ~30,000 and NFKC normalization.
## Ethical Considerations
*Data*: The data is sourced from news, DAGW, Twitter, and Netarkivet Text (NAT) and might thus contain hate speech, sexually explicit content, and otherwise harmful content.
*Mitigations*: We considered removing sexually explicit content by filtering web domains using a DNS filter or Google Safe Search. However, examining the filtered domains, we found that they also included news media pertaining to a specific demographic (e.g., Dagens.dk) and educational sites pertaining to sexual education. We also examined the use of word-based filters, but found that they might influence certain demographic groups disproportionately.
*Risk and Harms*: As Netarkivet Text covers such a wide array of the Danish internet, it undoubtedly contains personal information. To reduce the risk of the model memorizing this information, we have deduplicated the data.
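Exact-duplicate removal can be sketched by hashing a normalized form of each document; this is only an illustration, as the actual pipeline may use near-duplicate detection rather than exact hashing.

```python
import hashlib

def deduplicate(documents):
    """Drop documents whose normalized text has been seen before."""
    seen = set()
    unique = []
    for doc in documents:
        # Lowercase and collapse whitespace so trivial variants collide.
        normalized = " ".join(doc.lower().split())
        digest = hashlib.sha1(normalized.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Mit navn er Jens.", "mit  navn er jens.", "Helt andet indhold."]
unique_docs = deduplicate(docs)
```

Hashing a normalized form keeps memory per document constant, which matters at the scale of a web crawl like NAT.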
# References:
- [1] Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596
- [2] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need. ArXiv:1706.03762 [Cs]. http://arxiv.org/abs/1706.03762
- [3] Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
- [4] Salazar, J., Liang, D., Nguyen, T. Q., & Kirchhoff, K. (2020). Masked Language Model Scoring. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2699–2712. https://doi.org/10.18653/v1/2020.acl-main.240
- [5] Nielsen, D. S. (2021). ScandEval: Evaluation of language models on mono- or multilingual Scandinavian language tasks. GitHub. https://github.com/saattrupdan/ScandEval
- [7] Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A named entity resource for danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604.
- [8] Kromann, M. T. (2003). The Danish Dependency Treebank and the DTAG Treebank Tool. https://research.cbs.dk/en/publications/the-danish-dependency-treebank-and-the-dtag-treebank-tool
- [9] Alexandrainst/danlp. (2022). Alexandra Institute. https://github.com/alexandrainst/danlp/blob/a1e9fa70fc5a3ae7ff78877062da3a8a8da80758/docs/docs/datasets.md (Original work published 2019)
- [10] Nielsen, F. Å. (2022). Lcc-sentiment. https://github.com/fnielsen/lcc-sentiment (Original work published 2016)
- [11] Sigurbergsson, G. I., & Derczynski, L. (2020). Offensive Language and Hate Speech Detection for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 3498–3508. https://aclanthology.org/2020.lrec-1.430
- [12] Møllerhøj, J. D. (2019, December 5). Danish BERT model: BotXO has trained the most advanced BERT model. BotXO. https://www.botxo.ai/blog/danish-bert-model/
showlab/show-1-base | 2023-10-12T03:56:59.000Z | [
"diffusers",
"text-to-video",
"arxiv:2309.15818",
"license:cc-by-nc-4.0",
"has_space",
"diffusers:TextToVideoIFPipeline",
"region:us"
] | text-to-video | showlab | null | null | showlab/show-1-base | 8 | 1,735 | diffusers | 2023-10-10T16:56:09 | ---
license: cc-by-nc-4.0
tags:
- text-to-video
---
# show-1-base
Pixel-based VDMs can generate motion accurately aligned with the textual prompt but typically demand expensive computational costs in terms of time and GPU memory, especially when generating high-resolution videos. Latent-based VDMs are more resource-efficient because they work in a reduced-dimension latent space. But it is challenging for such small latent space (e.g., 64×40 for 256×160 videos) to cover rich yet necessary visual semantic details as described by the textual prompt.
To marry the strength and alleviate the weakness of pixel-based and latent-based VDMs, we introduce **Show-1**, an efficient text-to-video model that generates videos of not only decent video-text alignment but also high visual quality.

## Model Details
This is the base model of Show-1 that generates videos with 8 keyframes at a resolution of 64x40. The model is finetuned from [DeepFloyd/IF-I-L-v1.0](https://huggingface.co/DeepFloyd/IF-I-L-v1.0) on the [WebVid-10M](https://maxbain.com/webvid-dataset/) and [InternVid](https://huggingface.co/datasets/OpenGVLab/InternVid) dataset.
- **Developed by:** [Show Lab, National University of Singapore](https://sites.google.com/view/showlab/home?authuser=0)
- **Model type:** pixel- and latent-based cascaded text-to-video diffusion model
- **Cascade stage:** base (keyframe generation)
- **Finetuned from model:** [DeepFloyd/IF-I-L-v1.0](https://huggingface.co/DeepFloyd/IF-I-L-v1.0)
- **License:** Creative Commons Attribution Non Commercial 4.0
- **Resources for more information:** [GitHub](https://github.com/showlab/Show-1), [Website](https://showlab.github.io/Show-1/), [arXiv](https://arxiv.org/abs/2309.15818)
## Usage
Clone the GitHub repository and install the requirements:
```bash
git clone https://github.com/showlab/Show-1.git
pip install -r requirements.txt
```
Run the following command to generate a video from a text prompt. By default, this will automatically download all the model weights from Hugging Face.
```bash
python run_inference.py
```
You can also download the weights manually and change the `pretrained_model_path` in `run_inference.py` to run the inference.
```bash
git lfs install
# base
git clone https://huggingface.co/showlab/show-1-base
# interp
git clone https://huggingface.co/showlab/show-1-interpolation
# sr1
git clone https://huggingface.co/showlab/show-1-sr1
# sr2
git clone https://huggingface.co/showlab/show-1-sr2
```
## Citation
If you make use of our work, please cite our paper.
```bibtex
@misc{zhang2023show1,
title={Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation},
author={David Junhao Zhang and Jay Zhangjie Wu and Jia-Wei Liu and Rui Zhao and Lingmin Ran and Yuchao Gu and Difei Gao and Mike Zheng Shou},
year={2023},
eprint={2309.15818},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Contact
This model card is maintained by [David Junhao Zhang](https://junhaozhang98.github.io/) and [Jay Zhangjie Wu](https://jayzjwu.github.io/). For any questions, please feel free to contact us or open an issue in the repository.
valhalla/emoji-diffusion | 2023-05-16T09:29:09.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | valhalla | null | null | valhalla/emoji-diffusion | 61 | 1,734 | diffusers | 2022-11-17T11:41:09 | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
widget:
- text: "a unicorn Llama emoji"
example_title: Llama Emoji
- text: "emoji pokemon"
example_title: Pokemon Emoji
- text: "snowy montain emoji"
example_title: snowy montain emoji
- text: "a snail shaped harp emoji"
example_title: Snail-shaped harp Emoji
license: bigscience-bloom-rail-1.0
---
# stable diffusion finetuned on emoji dataset
emoji-diffusion is a stable diffusion model fine-tuned on the [russian-emoji dataset](https://www.kaggle.com/datasets/shonenkov/russian-emoji) to generate emoji images.
Below are some samples generated using the model.
<img src="https://huggingface.co/valhalla/emoji-diffusion/resolve/main/emoji.png">
## Usage
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
**To get the best result use the text "emoji" at beginning or end of the prompt.**
```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
pipe = StableDiffusionPipeline.from_pretrained(
"valhalla/emoji-diffusion",
torch_dtype=torch.float16,
).to("cuda")
euler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = euler
prompt = "a unicorn lama emoji"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("lama_emoji.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384 | 2023-03-31T22:15:57.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:laion-2b",
"arxiv:2210.08402",
"arxiv:2201.03545",
"arxiv:2103.00020",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384 | 1 | 1,734 | timm | 2023-02-07T05:56:54 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
---
# Model card for convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384
A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-1k in `timm` by Ross Wightman.
Please see related OpenCLIP model cards for more details on pretrain:
* https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 200.1
- GMACs: 101.1
- Activations (M): 126.7
- Image size: 384 x 384
- **Papers:**
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 192, 96, 96])
# torch.Size([1, 384, 48, 48])
# torch.Size([1, 768, 24, 24])
# torch.Size([1, 1536, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
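A pooled embedding is just a feature vector, so a common downstream use is comparing two images by cosine similarity. The sketch below uses short plain-Python lists in place of real `(1, num_features)` model outputs, purely to illustrate the computation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-ins for two image embeddings (real ones have num_features entries).
emb_a = [0.2, -1.3, 0.7, 0.0]
emb_b = [0.1, -1.1, 0.9, 0.3]

print(cosine_similarity(emb_a, emb_a))  # an embedding vs itself scores 1.0 (up to float rounding)
print(cosine_similarity(emb_a, emb_b))
```

With real model outputs, pass the rows of the `(batch_size, num_features)` tensor to the same function.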
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers are from eager-mode PyTorch 1.13 on an RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
| 18,539 |
stablediffusionapi/portrait-realistic-sdxl | 2023-10-02T03:47:27.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/portrait-realistic-sdxl | 0 | 1,734 | diffusers | 2023-10-02T03:45:39 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Portrait-realistic-SDXL API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and change **model_id** to "portrait-realistic-sdxl".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/portrait-realistic-sdxl)
Model link: [View model](https://stablediffusionapi.com/models/portrait-realistic-sdxl)
Credits: [View credits](https://civitai.com/?query=Portrait-realistic-SDXL)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "portrait-realistic-sdxl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | 2,529 |
timm/efficientnet_es_pruned.in1k | 2023-04-27T21:12:15.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.02838",
"arxiv:1905.11946",
"arxiv:2002.08258",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/efficientnet_es_pruned.in1k | 0 | 1,733 | timm | 2022-12-12T23:58:15 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientnet_es_pruned.in1k
An EfficientNet-EdgeTPU image classification model, knapsack-pruned from existing weights.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.4
- GMACs: 1.8
- Activations (M): 8.7
- Image size: 224 x 224
- **Papers:**
- Accelerator-aware Neural Network Design using AutoML: https://arxiv.org/abs/2003.02838
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- Knapsack Pruning with Inner Distillation: https://arxiv.org/abs/2002.08258
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientnet_es_pruned.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientnet_es_pruned.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 24, 112, 112])
# torch.Size([1, 32, 56, 56])
# torch.Size([1, 48, 28, 28])
# torch.Size([1, 144, 14, 14])
# torch.Size([1, 192, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientnet_es_pruned.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gupta2020accelerator,
title={Accelerator-aware neural network design using automl},
author={Gupta, Suyog and Akin, Berkin},
journal={arXiv preprint arXiv:2003.02838},
year={2020}
}
```
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@article{aflalo2020knapsack,
title={Knapsack pruning with inner distillation},
author={Aflalo, Yonathan and Noy, Asaf and Lin, Ming and Friedman, Itamar and Zelnik, Lihi},
journal={arXiv preprint arXiv:2002.08258},
year={2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,584 |
akreal/tiny-random-mpnet | 2021-08-18T15:08:05.000Z | [
"transformers",
"pytorch",
"tf",
"mpnet",
"endpoints_compatible",
"region:us"
] | null | akreal | null | null | akreal/tiny-random-mpnet | 0 | 1,732 | transformers | 2022-03-02T23:29:05 | This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-mpnet
Changes: use old format for `pytorch_model.bin`.
| 130 |
sanchit-gandhi/whisper-medium-fleurs-lang-id | 2023-09-11T13:25:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:xtreme_s",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | audio-classification | sanchit-gandhi | null | null | sanchit-gandhi/whisper-medium-fleurs-lang-id | 5 | 1,731 | transformers | 2023-02-23T13:37:22 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- accuracy
base_model: openai/whisper-medium
model-index:
- name: whisper-medium-fleurs-lang-id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium FLEURS Language Identification
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the [FLEURS subset](https://huggingface.co/datasets/google/xtreme_s#language-identification---fleurs-langid) of the [google/xtreme_s](https://huggingface.co/datasets/google/xtreme_s) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8413
- Accuracy: 0.8805
To reproduce this run, execute the command in [`run.sh`](https://huggingface.co/sanchit-gandhi/whisper-medium-fleurs-lang-id/blob/main/run.sh).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 0
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
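The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` means the learning rate ramps from 0 up to 3e-05 over the first 10% of optimization steps, then decays linearly back to 0. A small sketch of that schedule (an approximation of Hugging Face's `get_linear_schedule_with_warmup` behavior, not the exact Trainer code):

```python
def linear_lr(step, total_steps, peak_lr=3e-5, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 25482  # total optimization steps for this 3-epoch run
print(linear_lr(0, total))                 # 0.0 at the start
print(linear_lr(int(total * 0.1), total))  # peak (~3e-05) at the end of warmup
print(linear_lr(total, total))             # 0.0 at the end of training
```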
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0152 | 1.0 | 8494 | 0.9087 | 0.8431 |
| 0.0003 | 2.0 | 16988 | 1.0059 | 0.8460 |
| 0.0 | 3.0 | 25482 | 0.8413 | 0.8805 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
| 2,016 |
MCG-NJU/videomae-huge-finetuned-kinetics | 2023-04-22T11:32:55.000Z | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"vision",
"arxiv:2203.12602",
"arxiv:2111.06377",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | video-classification | MCG-NJU | null | null | MCG-NJU/videomae-huge-finetuned-kinetics | 0 | 1,730 | transformers | 2023-04-16T11:08:12 | ---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---
# VideoMAE (huge-sized model, fine-tuned on Kinetics-400)
VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).
Disclaimer: The team releasing VideoMAE did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
VideoMAE is an extension of [Masked Autoencoders (MAE)](https://arxiv.org/abs/2111.06377) to video. The architecture of the model is very similar to that of a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values for masked patches.
Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and fixed sine/cosine position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of videos that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled videos for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire video.
## Intended uses & limitations
You can use the raw model for video classification into one of the 400 possible Kinetics-400 labels.
### How to use
Here is how to use this model to classify a video:
```python
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-huge-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-huge-finetuned-kinetics")
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/videomae.html#).
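`argmax` reports only the single most likely class; to inspect the top-5 Kinetics predictions instead, softmax the logits and sort. Since running the real model requires downloading the pretrained weights, the sketch below applies the same post-processing to stand-in logits:

```python
import math
import random

random.seed(0)
num_classes = 400  # Kinetics-400
logits = [random.gauss(0.0, 1.0) for _ in range(num_classes)]  # stand-in for outputs.logits[0]

# numerically stable softmax
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

top5 = sorted(range(num_classes), key=lambda i: probs[i], reverse=True)[:5]
# with the real model: [model.config.id2label[i] for i in top5]
print(top5)
```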
## Training data
(to do, feel free to open a PR)
## Training procedure
### Preprocessing
(to do, feel free to open a PR)
### Pretraining
(to do, feel free to open a PR)
## Evaluation results
This model obtains a top-1 accuracy of 86.6 and a top-5 accuracy of 97.1 on the test set of Kinetics-400.
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2203.12602,
doi = {10.48550/ARXIV.2203.12602},
url = {https://arxiv.org/abs/2203.12602},
author = {Tong, Zhan and Song, Yibing and Wang, Jue and Wang, Limin},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 3,584 |
pavani8/my-pet-dog | 2023-10-18T07:54:37.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | pavani8 | null | null | pavani8/my-pet-dog | 0 | 1,729 | diffusers | 2023-10-18T07:49:06 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by pavani8 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
.jpg)
| 388 |
gustavorayo/ryo-takemasa | 2023-10-21T15:07:27.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | gustavorayo | null | null | gustavorayo/ryo-takemasa | 0 | 1,728 | diffusers | 2023-10-21T15:02:51 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ryo-takemasa Dreambooth model trained by gustavorayo with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 505 |
craigdsouza/my-uig-racecar | 2023-10-28T15:26:36.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | craigdsouza | null | null | craigdsouza/my-uig-racecar | 1 | 1,728 | diffusers | 2023-10-28T15:20:11 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-UIG-racecar Dreambooth model trained by craigdsouza following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SFIT-75
Sample pictures of this concept:
.png)
.png)
.png)
.png)
| 708 |
hfl/chinese-electra-180g-small-ex-discriminator | 2021-03-03T01:25:29.000Z | [
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | hfl | null | null | hfl/chinese-electra-180g-small-ex-discriminator | 5 | 1,727 | transformers | 2022-03-02T23:29:05 | ---
language:
- zh
license: "apache-2.0"
---
# This model is trained on 180G of data; we recommend using it rather than the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and competitive performance relative to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.
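ELECTRA's efficiency comes from its replaced-token-detection objective: a small generator corrupts some input tokens, and the discriminator (the model released here) labels every token as original (0) or replaced (1). A toy illustration of that labeling, not the real training code:

```python
# A sentence, and the same sentence after a hypothetical generator
# swapped two tokens (toy example, not real generator output).
original  = ["我", "喜", "欢", "自", "然", "语", "言"]
corrupted = ["我", "讨", "厌", "自", "然", "语", "言"]

# Discriminator target: 1 where the token was replaced, 0 where it is original.
labels = [int(o != c) for o, c in zip(original, corrupted)]
print(labels)  # [0, 1, 1, 0, 0, 0, 0]
```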
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` | 1,964 | [
[
-0.0238037109375,
-0.046875,
0.021759033203125,
0.006954193115234375,
-0.0063934326171875,
-0.0173187255859375,
-0.03515625,
-0.0550537109375,
0.0301361083984375,
0.0347900390625,
-0.0247955322265625,
-0.0174713134765625,
-0.0148162841796875,
0.0098495483398... |
johnslegers/epic-diffusion-v1.1 | 2023-01-21T06:08:01.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | johnslegers | null | null | johnslegers/epic-diffusion-v1.1 | 44 | 1,727 | diffusers | 2023-01-21T01:27:22 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
---
[![Example][1]][1]
## Why Epic Diffusion
Epîc Diffusion is a general-purpose model based on Stable Diffusion 1.x, intended to replace the official SD releases
as your default model. It is focused on providing high-quality output in a wide range of different styles, with support
for NSFW content.
Epîc Diffusion 1.1 is a heavily calibrated merge of SD 1.4, SD 1.5, Analog Diffusion, Wavy Diffusion, Redshift Diffusion,
Openjourney Diffusion, Samdoesarts Ultramerge, Elldreth's Dream, postapocalypse, Inkpunk Diffusion, Ghibli Diffusion, Mo Di Diffusion,
Archer Diffusion, Classic Animation Diffusion, Arcane Diffusion, Van Gogh Diffusion, 3DKX, HASDX, Flexible Diffusion, Cinematic Diffusion,
Shady Art, dvMJv4, dvAuto & mj-v4-look + some dreambooth trained models of my own, blended and reblended multiple times until I got
the quality & consistency I was looking for.
Epic Diffusion is also [available on CivitAI](https://civitai.com/models/3855/epic-diffusion).
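The card itself shows no loading code, so here is a minimal 🧨 Diffusers sketch using the standard `StableDiffusionPipeline` API. The prompt passed in is up to you; the sampler settings below mirror the example table's common values (20 steps, CFG 7) but are otherwise assumptions.

```python
def generate(prompt: str, seed: int = 0):
    """Render one 512x512 image with Epic Diffusion 1.1 and save it."""
    # Imports kept inside the function so the sketch loads without
    # torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "johnslegers/epic-diffusion-v1.1",
        torch_dtype=torch.float16,  # use torch.float32 when running on CPU
    )
    pipe = pipe.to("cuda")

    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(
        prompt,
        num_inference_steps=20,  # matches the example table's "Steps: 20"
        guidance_scale=7,        # matches the example table's "CFG scale: 7"
        generator=generator,
    ).images[0]
    image.save("epic.png")
    return image
```

Fixing the seed via `torch.Generator` is what makes the example-table outputs reproducible for a given prompt and sampler.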
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
<a href="https://www.buymeacoffee.com/johnslegers" target="_blank">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 45px !important;width: 162px !important;" >
</a>
## Example prompts
<table>
<tr style="border: 1px solid;background:#e5e7eb">
<th style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Prompt
</th>
<th style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Parameters
</th>
<th style="vertical-align:top;padding:.5714286em!important;border: 1px solid;min-width:270px">
Output
</th>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
scarlett johansson, in the style of Wes Anderson, highly detailed, unreal engine, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2263657329<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/O4jXU.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
sansa angeline jolie gessica chastain mummy, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha and william - adolphe bouguereau
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1310341382<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/JScKL.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Pokimane, Feminine, Mercy, Perfect Sexy Symmetrical Face, Detailed Pupils, Pensive Smirk, Look at Viewer, Leaf Armor, Ilya Kuvshinov, Gil Elvgren, Mucha. Intricate, Octane Render, 4KUHD, Centered, Oil Painting, Bokeh, Rim Lighting.
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>4142902194<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/rLqHN.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Mature babe,artgerm Style, gerald brom, atey ghailan, mike mignola, short cut off shirt knot, wide hips, showing off, exposing herself vulnerable, blushing, exited, confident, demanding, joyful, trending on artstation, double split complementary colors, intricate details, highly detailed,
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3954688283<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/eufe5.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
planet base, windows, night, ground level, no man's sky, digital art, highly detailed, intricate, sharp focus, Trending on Artstation HQ, deviantart, unreal engine 5, 4K UHD image
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>895811336<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/XbfYV.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
berchtesgaden, hyperdetailed, detailed faces, artgerm, wolfenstein, portal 2, Leartes Studios, assassin's creed, alphonse mucha, bouguereau, edmund blair leighton, greg kadel, dynamic lighting, delicate, unreal engine, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1172925287<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/HMZVA.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
princess, detailed portrait, hyperdetailed, detailed faces, irakli nadar, magali villeneuve, Assassin's Creed, Tim Hildebrandt, Ilya Kuvshinov, artgem, greg kadel, dynamic lighting, delicate, unreal engine, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2096567313<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/EqPBr.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
a Photorealistic dramatic hyperrealistic bright blue eyes, African American elegant girl, black hair, white veil,by WLOP,Artgerm,Greg Rutkowski,Alphonse Mucha, Beautiful dynamic dramatic bright sunset lighting,shadows,cinematic atmosphere,Artstation,concept design art,Octane render,8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2999946689<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/1nn2e.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
cutest girl in the world outside, (detailed portrait), in the style of fernanda suarez and simon stalenhag and Ilya Kuvshinov and Wlop and Artgerm and Chie Yoshii and Greg Rutkowski and Waking Life, trending on artstation, featured on pixiv, dynamic lighting, highly detailed, ambient lighting, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2249388004<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/MfLZS.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
military academy, (detailed portrait), steampunk, in the style of arcane and fernanda suarez and dishonored and bioshock and simon stalenhag and Ilya Kuvshinov and Wlop and Artgerm, trending on artstation, featured on pixiv, dynamic lighting, highly detailed, ambient lighting, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3877530043<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/BvA3s.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
beautiful female assassin wearing cyberpunk clothing, respirator, cybernetic respirator, (detailed portrait), cell shaded, 4 k, vivid colours, photorealistic concept art by wlop, ilya kuvshinov, artgerm, krenz cushart, greg rutkowski, pixiv. cinematic dramatic atmosphere, sharp focus, volumetric lighting, cinematic lighting, studio quality
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3388890157<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/KUm9A.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
cemetary, pen and ink, in the style of gustave dore highly detailed, octane render, 8k, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>568457114<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/90mH1.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
dubai, hyperdetailed, detailed faces, artgem, irakli nadar, mass effect, Tim Hildebrandt, Ilya Kuvshinov, liam wong, greg rutkowski, greg kadel, dynamic lighting, delicate, unreal engine, octane render, 8k, centered, symmetry, painted, intricate, volumetric lighting, beautiful, rich deep colors masterpiece, sharp focus, ultra detailed, in the style of dan mumford and marc simonetti, astrophotography
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>4262868463<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/7TjmX.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Little cute forest fluffy chibi cuteness overload, sunny magical background, ultra precious details, intricate details, volumetric lighting, photo realistic, lifelike, photography, digital art, 8k, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski, sharp focus, emitting diodes, smoke, artillery, sparks, racks, system unit, motherboard, by pascal blanche rutkowski repin artstation hyperrealism painting concept art of detailed character design matte painting, 4 k resolution blade runner
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3849507891<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/skddc.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
15 year old schoolgirl with short straight hair, blue eyes, cute, friendly, round face, cottagecore, intricate, enlightened, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2276800560<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/L0kVH.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
extreme wide shot a futuristic containment building in a rainforest valley with a city in the distance, national geographic, hyper realistic, 4 k, harsh light
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3260458902<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/p66dH.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
portrait of a middle - eastern female cleric with straight black hair wearing blue and yellow vestments casting fireball, fantasy, highly detailed, digital painting, artstation, concept art, character art, art by greg rutkowski and tyler jacobson and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1379894453<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/FBZuT.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
aSnowshoe Siamese Cat as the doomslayer, realistic scifi cyberpunk power armor robot, closeup portrait art by donato giancola and greg rutkowski, vintage retro scifi, realistic face, digital art, trending on artstation, symmetry
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2122325442<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/ZjX2f.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Beautiful boy by René Magritte
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1753689226<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/bgvsg.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
portrait of a dark god, copper wires, visible scars and nerves, intricate, headshot, highly detailed, digital painting, artstation, concept art, sharp focus, cinematic lighting, illustration, art by artgerm and greg rutkowski, alphonse mocha, cgsociety, Olivia
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3355776798<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/8yx4N.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
knight warrior helmet skyrim mask elder scrolls v nordic armor bethesda adam adamowicz illustration character design concept, unreal 5, daz, hyperrealistic, octane render, cosplay, rpg portrait, dynamic lighting, intricate detail, harvest fall vibrancy, cinematic volume inner glowing aura global illumination ray tracing hdr
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1938574287<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/dY65d.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
berserker portrait, d&d style, fantasy, photorealistic, highly detailed, artstation, smooth, sharp focus, art by michael whelan, artgerm, greg rutkowski and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>156077154<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/76jz5.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
symmetry product render poster vivid colors classical proportion car, glowing fog intricate, elegant, highly detailed, digital painting, art station, concept art, smooth, sharp focus, illustration,
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>4294525772<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/f4jll.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Futuristic Vintage Medium Shot 1920's Poster with Cyberpunk, ovni, tron biker with helmet bike, black in color, with a cyberpunk city background, futuristic lighting, cinematic lighting, cozy lighting, 8k, cinematic poster vintage 1800s
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1229558409<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/6N6kr.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
beautiful, young woman, cybernetic, cyberpunk, detailed gorgeous face, flowing hair, vaporwave aesthetic, synthwave , digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>264509871<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/IDgVX.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
strong warrior princess| centered| key visual| intricate| highly detailed| breathtaking beauty| precise lineart| vibrant| comprehensive cinematic| Carne Griffiths| Conrad Roset
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/oTVxB.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
portrait of a rugged 19th century man with mutton chops in a jacket, victorian, concept art, detailed face, fantasy, close up face, highly detailed, cinematic lighting, digital art painting by greg rutkowski
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/vKamr.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
side profile of cyberpunk body with cyborg skull | cyberpunk | styled in Art Nouveau | insanely detailed | embellishments | high definition | concept art | digital art | vibrant
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/fkxPX.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
a cute little matte low poly isometric cherry blossom forest island, pink waterfalls, mist, lat lighting, soft shadows, trending on artstation, 3d render, monument valley, fez video game,
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/M2PAq.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
high resolution concept art of an apartment living room overlooking a large futuristic city with floor to ceiling windows and mid century modern furniture cinematic lighting cgsociety
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>850995814<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/F6GMQ.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
hyperrealistic full length portrait of gorgeous watson from apex legends | blonde | detailed gorgeous face!! | full body!! | armor | intricate | elegant | realistic | hyperrealistic | cinematic | character design | concept art | highly detailed | illustration | digital art | digital painting | depth of field | illustrated by tim brown lee
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3002798343<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/nDe6M.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Chibi spiderman, high redolution, 3D rendering, octane rendering, modern Disney style
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3232863832<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/ixo6D.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
photo of the most beautiful artwork in the world featuring soft lustrous, industrial mechanic real world, fantastic location, working environment, rugged harsh situation worker, full body 8k unity render, action shot, skin pores, detailed intricate iris, very dark lighting, heavy shadows, detailed, detailed face, (vibrant, photo realistic, realistic, dramatic, dark, sharp focus, 8k), (weathered greasy dirty damaged old worn technician worker outfit:1.1), (intricate:1.1), (highly detailed:1.1), digital painting, octane render, artstation, concept art, smooth, sharp focus, illustration, art by artgerm, (loish:0.23), wlop ilya kuvshinov., (global illumination, studio light, volumetric light)<br><br>
<b>Negative prompt:</b> Asian, black and white, close up, cartoon, 3d, denim, (disfigured), (deformed), (poorly drawn), (extra limbs), blurry, boring, sketch, lackluster, signature, letters, watermark, low res , horrific , mutated , artifacts , bad art , gross , b&w , poor quality , low quality , cropped
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>30<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>10<br>
<b>Seed:</b><br>169686802<br>
<b>Size:</b><br>512x640
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/1vx2U.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
photo of the most beautiful artwork in the world featuring soft lustrous, industrial mechanic real world, fantastic location, working environment, rugged harsh situation worker, full body 8k unity render, action shot, skin pores, detailed intricate iris, very dark lighting, heavy shadows, detailed, detailed face, (vibrant, photo realistic, realistic, dramatic, dark, sharp focus, 8k), (weathered greasy dirty damaged old worn technician worker outfit:1.1), (intricate:1.1), (highly detailed:1.1), digital painting, octane render, artstation, concept art, smooth, sharp focus, illustration, art by artgerm, (loish:0.23), wlop ilya kuvshinov., (global illumination, studio light, volumetric light)<br><br>
<b>Negative prompt:</b> Asian, black and white, close up, cartoon, 3d, denim, (disfigured), (deformed), (poorly drawn), (extra limbs), blurry, boring, sketch, lackluster, signature, letters, watermark, low res , horrific , mutated , artifacts , bad art , gross , b&w , poor quality , low quality , cropped
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>30<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>10<br>
<b>Seed:</b><br>169686796<br>
<b>Size:</b><br>512x640<br>
<b>Denoising strength:</b><br>0.7<br>
<b>Hires upscale:</b><br>2<br>
<b>Hires upscaler:</b><br>Latent
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.imgur.com/AC1xKup.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
dark and gloomy full body 8k unity render, female teen cyborg, Blue yonder hair, wearing broken battle armor, at cluttered and messy shack , action shot, tattered torn shirt, porcelain cracked skin, skin pores, detailed intricate iris, very dark lighting, heavy shadows, detailed, detailed face, (vibrant, photo realistic, realistic, dramatic, dark, sharp focus, 8k)<br><br>
<b>Negative prompt:</b> nude, Asian, black and white, close up, cartoon, 3d, denim, (disfigured), (deformed), (poorly drawn), (extra limbs), blurry, boring, sketch, lackluster, signature, letters, watermark, low res , horrific , mutated , artifacts , bad art , gross , b&w , poor quality , low quality , cropped
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>26<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7.5<br>
<b>Seed:</b><br>2388736888<br>
<b>Size:</b><br>768x1024
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/0AcN7.jpg">
</td>
</tr>
</table>
[1]: https://i.stack.imgur.com/p9mFM.jpg | 33,819 | [
[
-0.047515869140625,
-0.0592041015625,
0.0272674560546875,
0.02398681640625,
-0.0107574462890625,
0.0106048583984375,
0.0186767578125,
-0.038909912109375,
0.04913330078125,
0.0233917236328125,
-0.047760009765625,
-0.059051513671875,
-0.04241943359375,
0.01033... |
Shivraj1/my-pet-parrot | 2023-10-24T08:46:40.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | Shivraj1 | null | null | Shivraj1/my-pet-parrot | 0 | 1,727 | diffusers | 2023-10-24T08:42:02 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Parrot Dreambooth model trained by Shivraj1 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: NSU-288
Sample pictures of this concept:

| 450 | [
[
-0.05120849609375,
-0.03497314453125,
0.00951385498046875,
0.01325225830078125,
-0.01374053955078125,
0.0196990966796875,
0.02984619140625,
-0.025604248046875,
0.047149658203125,
0.0284576416015625,
-0.0268402099609375,
0.0061187744140625,
-0.025390625,
0.01... |
sshleifer/distilbart-xsum-1-1 | 2021-06-14T07:53:57.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | summarization | sshleifer | null | null | sshleifer/distilbart-xsum-1-1 | 0 | 1,725 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
- xsum
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---
### Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
### Metrics for DistilBART models
| Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 |
| distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 |
| distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 |
| distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 |
| bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 |
| distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 |
| bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 |
| distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 |
| distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
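The Speedup column is simply the baseline's inference time divided by each model's. A quick sanity check over the xsum rows, using bart-large-xsum (229 ms) as the baseline:

```python
# Reproduce the Speedup column from the "Inference Time (MS)" values above.
baseline_ms = 229  # bart-large-xsum

inference_ms = {
    "distilbart-xsum-12-1": 90,
    "distilbart-xsum-6-6": 132,
    "distilbart-xsum-12-3": 106,
    "distilbart-xsum-9-6": 136,
}

speedup = {name: round(baseline_ms / ms, 2) for name, ms in inference_ms.items()}
print(speedup)
```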
| 1,705 | [
[
-0.04412841796875,
-0.023468017578125,
0.0386962890625,
0.026702880859375,
-0.0132598876953125,
0.015167236328125,
0.01352691650390625,
-0.0012273788452148438,
0.0157012939453125,
0.028900146484375,
-0.06292724609375,
-0.039337158203125,
-0.0546875,
-0.01164... |
Yntec/GalenaVAE | 2023-08-04T02:38:57.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"schneed",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/GalenaVAE | 2 | 1,725 | diffusers | 2023-08-04T02:04:35 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- schneed
---
# Galena Blend 1.0 VAE
This is Galena Blend 1.0 with the Color 101 VAE baked in.
Original pages:
https://civitai.com/models/16300?modelVersionId=19249
| 324 | [
[
0.0020580291748046875,
0.00865936279296875,
0.0426025390625,
0.042877197265625,
0.000255584716796875,
-0.0106353759765625,
0.0567626953125,
0.028564453125,
0.0648193359375,
0.051422119140625,
-0.04779052734375,
-0.0163116455078125,
-0.014678955078125,
-0.029... |
Salesforce/mixqg-3b | 2021-10-18T16:19:00.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:2110.08175",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | Salesforce | null | null | Salesforce/mixqg-3b | 8 | 1,722 | transformers | 2022-03-02T23:29:04 | ---
language: en
widget:
- text: Robert Boyle \\n In the late 17th century, Robert Boyle proved that air is necessary for combustion.
---
# MixQG (3b-sized model)
MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository.
### How to use
Using Huggingface pipeline abstraction:
```python
from transformers import pipeline
nlp = pipeline("text2text-generation", model='Salesforce/mixqg-3b', tokenizer='Salesforce/mixqg-3b')
CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion."
ANSWER = "Robert Boyle"
def format_inputs(context: str, answer: str):
return f"{answer} \\n {context}"
text = format_inputs(CONTEXT, ANSWER)
nlp(text)
# should output [{'generated_text': 'Who proved that air is necessary for combustion?'}]
```
Using the pre-trained model directly:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-3b')
model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-3b')
CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion."
ANSWER = "Robert Boyle"
def format_inputs(context: str, answer: str):
return f"{answer} \\n {context}"
text = format_inputs(CONTEXT, ANSWER)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=32, num_beams=4)
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(output)
# should output "Who proved that air is necessary for combustion?"
```
### Citation
```bibtex
@misc{murakhovska2021mixqg,
title={MixQG: Neural Question Generation with Mixed Answer Types},
author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong},
year={2021},
eprint={2110.08175},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 2,144 | [
[
-0.0335693359375,
-0.05224609375,
0.0116729736328125,
0.0206298828125,
-0.0004448890686035156,
-0.01160430908203125,
0.00597381591796875,
-0.007167816162109375,
0.009796142578125,
0.01885986328125,
-0.048095703125,
-0.01708984375,
-0.02886962890625,
0.000041... |
Salesforce/codet5-base-multi-sum | 2022-10-18T14:18:03.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"codet5",
"dataset:code_search_net",
"arxiv:2109.00859",
"arxiv:1909.09436",
"arxiv:1907.11692",
"arxiv:2002.08155",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
... | text2text-generation | Salesforce | null | null | Salesforce/codet5-base-multi-sum | 24 | 1,721 | transformers | 2022-03-02T23:29:04 | ---
license: bsd-3-clause
tags:
- codet5
datasets:
- code_search_net
inference: true
---
# CodeT5-base for Code Summarization
[CodeT5-base](https://huggingface.co/Salesforce/codet5-base) model fine-tuned on CodeSearchNet data in a multi-lingual training setting (
Ruby/JavaScript/Go/Python/Java/PHP) for code summarization. It was introduced in this EMNLP 2021
paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859)
by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi. Please check out more
at [this repository](https://github.com/salesforce/CodeT5).
## How to use
Here is how to use this model:
```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration
if __name__ == '__main__':
tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base-multi-sum')
model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base-multi-sum')
text = """def svg_to_image(string, size=None):
if isinstance(string, unicode):
string = string.encode('utf-8')
renderer = QtSvg.QSvgRenderer(QtCore.QByteArray(string))
if not renderer.isValid():
raise ValueError('Invalid SVG data.')
if size is None:
size = renderer.defaultSize()
image = QtGui.QImage(size, QtGui.QImage.Format_ARGB32)
painter = QtGui.QPainter(image)
renderer.render(painter)
return image"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
# this prints: "Convert a SVG string to a QImage."
```
## Fine-tuning data
We employ the filtered version of CodeSearchNet data [[Husain et al., 2019](https://arxiv.org/abs/1909.09436)]
from [CodeXGLUE](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text) benchmark for fine-tuning on
code summarization. The data is tokenized with our pre-trained code-specific BPE (Byte-Pair Encoding) tokenizer. One can
prepare text (or code) for the model using RobertaTokenizer with the vocab files from [codet5-base](https://huggingface.co/Salesforce/codet5-base).
### Data statistic
| Programming Language | Training | Dev | Test |
| :------------------- | :------: | :----: | :----: |
| Python | 251,820 | 13,914 | 14,918 |
| PHP | 241,241 | 12,982 | 14,014 |
| Go | 167,288 | 7,325 | 8,122 |
| Java | 164,923 | 5,183 | 10,955 |
| JavaScript | 58,025 | 3,885 | 3,291 |
| Ruby | 24,927 | 1,400 | 1,261 |
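Summing the splits above, the fine-tuning corpus comes to roughly 908K training, 45K dev, and 53K test functions in total:

```python
# Totals across the six languages in the data statistic table above:
# (train, dev, test) function counts per language.
splits = {
    "Python": (251_820, 13_914, 14_918),
    "PHP": (241_241, 12_982, 14_014),
    "Go": (167_288, 7_325, 8_122),
    "Java": (164_923, 5_183, 10_955),
    "JavaScript": (58_025, 3_885, 3_291),
    "Ruby": (24_927, 1_400, 1_261),
}

train_total = sum(train for train, _, _ in splits.values())
dev_total = sum(dev for _, dev, _ in splits.values())
test_total = sum(test for _, _, test in splits.values())
print(train_total, dev_total, test_total)
```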
## Training procedure
We fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in a multi-task learning setting. We employ balanced sampling to avoid biasing towards high-resource tasks. Please refer to the [paper](https://arxiv.org/abs/2109.00859) for more details.
## Evaluation results
Unlike the paper, which selects a different best checkpoint for each programming language (PL), here we employ one checkpoint for all PLs. In addition, we remove the task-control prefix that specifies the PL during training and inference. The results on the test set are shown below:
| Model | Ruby | Javascript | Go | Python | Java | PHP | Overall |
| ----------- | :-------: | :--------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| Seq2Seq | 9.64 | 10.21 | 13.98 | 15.93 | 15.09 | 21.08 | 14.32 |
| Transformer | 11.18 | 11.59 | 16.38 | 15.81 | 16.26 | 22.12 | 15.56 |
| [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 11.17 | 11.90 | 17.72 | 18.14 | 16.47 | 24.02 | 16.57 |
| [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 12.16 | 14.90 | 18.07 | 19.06 | 17.65 | 25.16 | 17.83 |
| [PLBART](https://aclanthology.org/2021.naacl-main.211.pdf) | 14.11 |15.56 | 18.91 | 19.30 | 18.45 | 23.58 | 18.32 |
| [CodeT5-small](https://arxiv.org/abs/2109.00859) |14.87 | 15.32 | 19.25 | 20.04 | 19.92 | 25.46 | 19.14 |
| [CodeT5-base](https://arxiv.org/abs/2109.00859) | **15.24** | 16.16 | 19.56 | 20.01 | **20.31** | 26.03 | 19.55 |
| [CodeT5-base-multi-sum](https://arxiv.org/abs/2109.00859) | **15.24** | **16.18** | **19.95** | **20.42** | 20.26 | **26.10** | **19.69** |
## Citation
```bibtex
@inproceedings{
wang2021codet5,
title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation},
author={Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021},
year={2021},
}
``` | 4,920 | [
[
-0.026336669921875,
-0.02435302734375,
0.0006666183471679688,
0.01123809814453125,
-0.01247406005859375,
0.00959014892578125,
-0.030029296875,
-0.0186920166015625,
-0.0037860870361328125,
0.020782470703125,
-0.032928466796875,
-0.05902099609375,
-0.0354614257812... |
speechbrain/emotion-recognition-wav2vec2-IEMOCAP | 2023-07-23T02:21:35.000Z | [
"speechbrain",
"audio-classification",
"Emotion",
"Recognition",
"wav2vec2",
"pytorch",
"en",
"dataset:iemocap",
"arxiv:2106.04624",
"license:apache-2.0",
"has_space",
"region:us"
] | audio-classification | speechbrain | null | null | speechbrain/emotion-recognition-wav2vec2-IEMOCAP | 59 | 1,721 | speechbrain | 2022-03-02T23:29:05 | ---
language: "en"
thumbnail:
tags:
- audio-classification
- speechbrain
- Emotion
- Recognition
- wav2vec2
- pytorch
license: "apache-2.0"
datasets:
- iemocap
metrics:
- Accuracy
inference: false
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Emotion Recognition with wav2vec2 base on IEMOCAP
This repository provides all the necessary tools to perform emotion recognition with a fine-tuned wav2vec2 (base) model using SpeechBrain.
It is trained on IEMOCAP training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on IEMOCAP test set is:
| Release | Accuracy(%) |
|:-------------:|:--------------:|
| 19-10-21 | 78.7 (Avg: 75.3) |
## Pipeline description
This system is composed of a wav2vec2 model (a combination of convolutional and residual blocks). The embeddings are extracted using attentive statistical pooling, and the system is trained with Additive Margin Softmax loss. Classification is performed using cosine distance between embeddings.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed.
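The mono-channel selection mentioned above is essentially a per-frame average of the channels. A minimal, dependency-free sketch of the idea (SpeechBrain does the real work on tensors via torchaudio; this helper is ours, for illustration only):

```python
def to_mono(frames):
    """Downmix multi-channel audio to mono by averaging each frame's samples."""
    return [sum(frame) / len(frame) for frame in frames]

# A toy stereo signal as (left, right) sample pairs in [-1, 1].
stereo = [(0.2, 0.4), (0.5, 0.5), (-0.2, 0.0)]
mono = to_mono(stereo)
print(mono)
```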
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Emotion recognition
An external module, `custom_interface.py`, defines the predictor class used by this HF repo. We use the `foreign_class` function from `speechbrain.pretrained.interfaces`, which allows you to load your custom model.
```python
from speechbrain.pretrained.interfaces import foreign_class
classifier = foreign_class(source="speechbrain/emotion-recognition-wav2vec2-IEMOCAP", pymodule_file="custom_interface.py", classname="CustomEncoderWav2vec2Classifier")
out_prob, score, index, text_lab = classifier.classify_file("speechbrain/emotion-recognition-wav2vec2-IEMOCAP/anger.wav")
print(text_lab)
```
The `classify_file` call returns the output probabilities, the score, the predicted class index, and the label name.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/IEMOCAP/emotion_recognition
python train_with_wav2vec2.py hparams/train_with_wav2vec2.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/15dKQetLuAhSyg4sNOtbSDnuxFdEeU4zQ?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
| 4,172 | [
[
-0.02923583984375,
-0.037628173828125,
0.0046539306640625,
0.0109100341796875,
-0.0102386474609375,
-0.021270751953125,
-0.0250244140625,
-0.04266357421875,
0.022064208984375,
-0.00811004638671875,
-0.046234130859375,
-0.0511474609375,
-0.049163818359375,
-0... |
facebook/mask2former-swin-base-IN21k-ade-semantic | 2023-01-25T11:42:15.000Z | [
"transformers",
"pytorch",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | facebook | null | null | facebook/mask2former-swin-base-IN21k-ade-semantic | 2 | 1,721 | transformers | 2023-01-05T12:23:45 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (base-IN21k version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
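Conceptually, `post_process_semantic_segmentation` fuses the two outputs: each query's mask is weighted by its class probabilities, and the per-pixel argmax over classes yields the label map. A framework-free toy sketch of that fusion (the function name and simplifications are ours; the real implementation also handles the null class, activations, and resizing):

```python
def semantic_map(class_probs, mask_probs):
    """class_probs: [Q][C] per-query class probabilities,
    mask_probs: [Q][H][W] per-query mask probabilities -> [H][W] label map."""
    num_queries, num_classes = len(class_probs), len(class_probs[0])
    height, width = len(mask_probs[0]), len(mask_probs[0][0])
    labels = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            best_class, best_score = 0, float("-inf")
            for c in range(num_classes):
                # per-pixel score of class c: masks weighted by class probability
                score = sum(class_probs[q][c] * mask_probs[q][y][x]
                            for q in range(num_queries))
                if score > best_score:
                    best_class, best_score = c, score
            labels[y][x] = best_class
    return labels

# Two queries, two classes, a 1x2 "image": query 0 votes class 0 on the left
# pixel, query 1 votes class 1 on the right pixel.
class_probs = [[0.9, 0.1], [0.1, 0.9]]
mask_probs = [[[1.0, 0.0]], [[0.0, 1.0]]]
print(semantic_map(class_probs, mask_probs))  # [[0, 1]]
```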
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | 3,172 | [
[
-0.04205322265625,
-0.050933837890625,
0.022705078125,
0.017791748046875,
-0.0193634033203125,
-0.021759033203125,
0.01033782958984375,
-0.060821533203125,
0.01279449462890625,
0.046661376953125,
-0.059326171875,
-0.03216552734375,
-0.0643310546875,
-0.02630... |
ai-forever/FRED-T5-1.7B | 2023-11-03T12:50:00.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"arxiv:2309.10931",
"arxiv:2205.05131",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | ai-forever | null | null | ai-forever/FRED-T5-1.7B | 55 | 1,721 | transformers | 2023-01-20T12:43:26 | ---
language:
- ru
license: apache-2.0
---
# FRED-T5 1.7B (Full-scale Russian Enhanced Denoisers T5)
The model architecture design, pretraining, and evaluation are documented in our preprint: [**A Family of Pretrained Transformer Language Models for Russian**](https://arxiv.org/abs/2309.10931).
The model was trained by [SberDevices](https://sberdevices.ru/).
The architecture is based on T5. It has 24 layers and a hidden size of 1536; more details are in config.json.
The model was trained on a mixture of 7 denoisers, similar to UL2 (https://arxiv.org/abs/2205.05131), with several differences.
It was trained on a Russian-language corpus (300 GB); the dataset is the same as for the ruT5 models.
It uses a BBPE tokenizer (50,257 tokens plus 107 special tokens). Prefix tokens: '\<LM\>', '\<SC1\>', ..., '\<SC6\>'.
For the first half of training, the model was trained on a small part of the full dataset (1%, 3 GB) and without task prefixes.
For RSG, we trained as described in the T5 paper: first we trained a multitask model on all tasks, then we took the best checkpoint for each task and trained it further.
The RSG submission is available at https://russiansuperglue.com/login/submit_info/1936.
Total training time was around 45 days on 112 A100 GPUs.
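The `<SC…>` denoiser tasks follow T5-style span corruption: a span of the input is replaced by a sentinel token, and the target reproduces the masked span after the same sentinel. A rough, single-span illustration (this helper is ours, not FRED-T5's actual noising code, which mixes 7 denoiser configurations):

```python
def corrupt_span(tokens, start, length, sentinel="<extra_id_0>"):
    """Mask tokens[start:start+length] with a sentinel; the target
    reproduces the masked span after the same sentinel."""
    source = tokens[:start] + [sentinel] + tokens[start + length:]
    target = [sentinel] + tokens[start:start + length] + ["</s>"]
    return source, target

tokens = "the quick brown fox jumps".split()
source, target = corrupt_span(tokens, 1, 2)
print(source)  # ['the', '<extra_id_0>', 'fox', 'jumps']
print(target)  # ['<extra_id_0>', 'quick', 'brown', '</s>']
```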
## Usage (HuggingFace Models Repository)
```python
import torch
from transformers import GPT2Tokenizer, T5ForConditionalGeneration
tokenizer = GPT2Tokenizer.from_pretrained('ai-forever/FRED-T5-1.7B',eos_token='</s>')
model = T5ForConditionalGeneration.from_pretrained('ai-forever/FRED-T5-1.7B')
device='cuda'
model.to(device)
#Prefix <LM>
lm_text='<LM>Принялся Кутузов рассказывать свою историю как он сюда попал. Началось'
input_ids=torch.tensor([tokenizer.encode(lm_text)]).to(device)
outputs=model.generate(input_ids,eos_token_id=tokenizer.eos_token_id,early_stopping=True)
print(tokenizer.decode(outputs[0][1:]))
# print result: с того, что он был в армии, служил в артиллерии</s>.
#Prefix <SC1>
lm_text='<SC1>Принялся Кутузов рассказывать свою историю <extra_id_0>. Началось с того, что он был в армии, служил в артиллерии.'
input_ids=torch.tensor([tokenizer.encode(lm_text)]).to(device)
outputs=model.generate(input_ids,eos_token_id=tokenizer.eos_token_id,early_stopping=True)
print(tokenizer.decode(outputs[0][1:]))
#print result: '<extra_id_0>, как он воевал</s>'
# Prefix <SC5>
lm_text='<SC5>Принялся Кутузов рассказывать свою историю <extra_id_0>. Началось с того, что он был в армии, служил в артиллерии.'
input_ids=torch.tensor([tokenizer.encode(lm_text)]).to(device)
outputs=model.generate(input_ids,eos_token_id=tokenizer.eos_token_id,early_stopping=True)
tokenizer.decode(outputs[0][1:])
#print result: '<extra_id_0>, как он стал генералом</s>'
```
# Authors
+ NLP core team RnD [Telegram channel](https://t.me/nlpcoreteam):
+ Dmitry Zmitrovich
+ Andrei Kalmykov
+ Vitaly Kadulin
+ Mikhail Novikov
+ Alexey Khoroshilov
[Salute AI Community](https://t.me/SaluteTechGroup).
# Cite us
```
@misc{zmitrovich2023family,
title={A Family of Pretrained Transformer Language Models for Russian},
author={Dmitry Zmitrovich and Alexander Abramov and Andrey Kalmykov and Maria Tikhonova and Ekaterina Taktasheva and Danil Astafurov and Mark Baushenko and Artem Snegirev and Tatiana Shavrina and Sergey Markov and Vladislav Mikhailov and Alena Fenogenova},
year={2023},
eprint={2309.10931},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 3,385 | [
[
-0.024627685546875,
-0.03887939453125,
0.00969696044921875,
0.0122528076171875,
-0.03021240234375,
0.005306243896484375,
-0.02081298828125,
-0.0243377685546875,
0.003063201904296875,
-0.01318359375,
-0.0433349609375,
-0.0361328125,
-0.053955078125,
0.0115661... |
Namala/nxt | 2023-10-16T15:14:47.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | Namala | null | null | Namala/nxt | 1 | 1,721 | diffusers | 2023-10-16T15:10:47 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### nxt Dreambooth model trained by Namala following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SREC-AP-550
Sample pictures of this concept:

| 364 | [
[
-0.041900634765625,
-0.01763916015625,
0.032012939453125,
0.00557708740234375,
-0.0125579833984375,
0.04058837890625,
0.04486083984375,
-0.0279083251953125,
0.0439453125,
0.028045654296875,
-0.05694580078125,
-0.026458740234375,
-0.0260467529296875,
0.008560... |
timm/mobilevit_xs.cvnets_in1k | 2023-04-24T22:23:24.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.02178",
"license:other",
"region:us"
] | image-classification | timm | null | null | timm/mobilevit_xs.cvnets_in1k | 0 | 1,716 | timm | 2023-04-24T22:23:14 | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for mobilevit_xs.cvnets_in1k
A MobileViT image classification model. Trained on ImageNet-1k by paper authors.
See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 2.3
- GMACs: 1.1
- Activations (M): 16.3
- Image size: 256 x 256
- **Papers:**
- MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer: https://arxiv.org/abs/2110.02178
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilevit_xs.cvnets_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevit_xs.cvnets_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 128, 128])
# torch.Size([1, 48, 64, 64])
# torch.Size([1, 64, 32, 32])
# torch.Size([1, 80, 16, 16])
# torch.Size([1, 384, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevit_xs.cvnets_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{mehta2022mobilevit,
title={MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author={Sachin Mehta and Mohammad Rastegari},
booktitle={International Conference on Learning Representations},
year={2022}
}
```
| 3,757 | [
[
-0.0299072265625,
-0.0176849365234375,
-0.006717681884765625,
0.00940704345703125,
-0.033538818359375,
-0.0236358642578125,
-0.0050201416015625,
-0.01496124267578125,
0.0254058837890625,
0.0290069580078125,
-0.036285400390625,
-0.05621337890625,
-0.0448608398437... |
botryan96/GeoBERT | 2022-12-15T09:20:54.000Z | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | botryan96 | null | null | botryan96/GeoBERT | 3 | 1,715 | transformers | 2022-11-08T09:18:19 | ---
tags:
- generated_from_keras_callback
model-index:
- name: GeoBERT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GeoBERT
GeoBERT is a NER model that was fine-tuned from SciBERT on the Geoscientific Corpus dataset.
The model was trained on the Labeled Geoscientific Corpus dataset (~1 million sentences).
## Intended uses
The NER model aims to identify four main semantic types or categories related to Geosciences:
1. GeoPetro for any entities that belong to all terms in Geosciences
2. GeoMeth for all tools or methods associated with Geosciences
3. GeoLoc to identify geological locations
4. GeoTime for identifying the geological time scale entities
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
## Model performances (metric: seqeval)
entity|precision|recall|f1
-|-|-|-
GeoLoc |0.9727|0.9591|0.9658
GeoMeth |0.9433|0.9447|0.9445
GeoPetro|0.9767|0.9745|0.9756
GeoTime |0.9695|0.9666|0.9680
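Two quick consistency checks on the table above: each F1 is (to rounding) the harmonic mean of its precision and recall, and the macro-averaged F1 across the four entity types comes to about 0.963:

```python
# Per-entity metrics from the table above: (precision, recall, f1).
scores = {
    "GeoLoc": (0.9727, 0.9591, 0.9658),
    "GeoMeth": (0.9433, 0.9447, 0.9445),
    "GeoPetro": (0.9767, 0.9745, 0.9756),
    "GeoTime": (0.9695, 0.9666, 0.9680),
}

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

macro_f1 = sum(f for _, _, f in scores.values()) / len(scores)
print(macro_f1)
```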
## How to use GeoBERT with HuggingFace
##### Load GeoBERT and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("botryan96/GeoBERT")
model = AutoModelForTokenClassification.from_pretrained("botryan96/GeoBERT")
#Define the pipeline
from transformers import pipeline
ner_machine = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
#Define the sentence
sentence = 'In North America, the water storage in the seepage face model is higher than the base case because positive pore pressure is requisite for drainage through a seepage face boundary condition. The result from the resistivity data supports the notion, especially in the northern part of the Sandstone Sediment formation. The active formation of America has a big potential for Oil and Gas based on the seismic section, has been activated since the Paleozoic'
#Deploy the NER Machine
ner_machine(sentence)
``` | 2,613 | [
[
-0.03448486328125,
-0.05914306640625,
0.042694091796875,
-0.0012731552124023438,
-0.020294189453125,
-0.016571044921875,
-0.00826263427734375,
-0.0079345703125,
0.028045654296875,
0.0255126953125,
-0.0285797119140625,
-0.06280517578125,
-0.065673828125,
-0.0... |
fcakyon/yolov5s-v7.0 | 2022-12-20T09:51:11.000Z | [
"transformers",
"object-detection",
"computer-vision",
"vision",
"yolo",
"yolov5",
"dataset:detection-datasets/coco",
"license:gpl-3.0",
"region:us"
] | object-detection | fcakyon | null | null | fcakyon/yolov5s-v7.0 | 8 | 1,715 | transformers | 2022-12-13T21:26:21 | ---
license: gpl-3.0
inference: false
tags:
- object-detection
- computer-vision
- vision
- yolo
- yolov5
datasets:
- detection-datasets/coco
---
### How to use
- Install yolov5:
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('fcakyon/yolov5s-v7.0')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img)
# inference with larger input size
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
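`predictions[:, :4]` above stores corner-format boxes. As a minimal sketch of the coordinate convention (plain Python, no torch required), converting a corner box to YOLO's center format:

```python
def xyxy_to_xywh(box):
    """Convert an (x1, y1, x2, y2) corner box to (cx, cy, w, h) center format."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

# Example: a 10x20 box anchored at the origin
center_box = xyxy_to_xywh((0, 0, 10, 20))
```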
- Finetune the model on your custom dataset:
```bash
yolov5 train --img 640 --batch 16 --weights fcakyon/yolov5s-v7.0 --epochs 10 --device cuda:0
``` | 1,336 | [
[
-0.05303955078125,
-0.0302886962890625,
0.03350830078125,
-0.02471923828125,
-0.0308685302734375,
-0.0246429443359375,
0.0157012939453125,
-0.0287628173828125,
0.00855255126953125,
0.0292816162109375,
-0.04168701171875,
-0.050445556640625,
-0.036956787109375,
... |
uer/roberta-base-finetuned-dianping-chinese | 2023-10-17T15:19:16.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"zh",
"arxiv:1909.05658",
"arxiv:2212.06385",
"arxiv:1708.02657",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | uer | null | null | uer/roberta-base-finetuned-dianping-chinese | 24 | 1,714 | transformers | 2022-03-02T23:29:05 | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is a set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The models can also be fine-tuned with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with over one billion parameters and extends it to a multimodal pre-training framework.
You can download the 5 Chinese RoBERTa-Base classification models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| Dataset | Link |
| :-----------: | :-------------------------------------------------------: |
| **JD full** | [**roberta-base-finetuned-jd-full-chinese**][jd_full] |
| **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][jd_binary] |
| **Dianping** | [**roberta-base-finetuned-dianping-chinese**][dianping] |
| **Ifeng** | [**roberta-base-finetuned-ifeng-chinese**][ifeng] |
| **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][chinanews] |
## How to use
You can use this model directly with a pipeline for text classification (take the case of roberta-base-finetuned-chinanews-chinese):
```python
>>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("北京上个月召开了两会")
[{'label': 'mainland China politics', 'score': 0.7211663722991943}]
```
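The `score` in the pipeline output is the softmax probability of the predicted label over the model's output logits. A minimal, self-contained sketch of that final step (the logits here are made up, not from the model):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # hypothetical logits for a 3-label head
```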
## Training data
5 Chinese text classification datasets are used. JD full, JD binary, and Dianping datasets consist of user reviews of different sentiment polarities. Ifeng and Chinanews consist of first paragraphs of news articles of different topic classes. They are collected by [Glyph](https://github.com/zhangxiangxiao/glyph) project and more details are discussed in the corresponding [paper](https://arxiv.org/abs/1708.02657).
## Training procedure
Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on development set is achieved. We use the same hyper-parameters on different models.
Taking roberta-base-finetuned-chinanews-chinese as an example:
```
python3 finetune/run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path datasets/glyph/chinanews/train.tsv \
--dev_path datasets/glyph/chinanews/dev.tsv \
--output_model_path models/chinanews_classifier_model.bin \
--learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{zhang2017encoding,
title={Which encoding is the best for text classification in chinese, english, japanese and korean?},
author={Zhang, Xiang and LeCun, Yann},
journal={arXiv preprint arXiv:1708.02657},
year={2017}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
  year={2023}
}
```
[jd_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese
[jd_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese
[dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese
[ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese
[chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese | 5,561 | [
[
-0.0206756591796875,
-0.0347900390625,
0.016357421875,
0.0274658203125,
-0.0264739990234375,
-0.0255889892578125,
-0.03839111328125,
-0.034454345703125,
-0.0008087158203125,
0.0230865478515625,
-0.034820556640625,
-0.045440673828125,
-0.04119873046875,
0.006... |
TARUNBHATT/flan-t5-small-finetuned-squad | 2023-07-26T08:25:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | question-answering | TARUNBHATT | null | null | TARUNBHATT/flan-t5-small-finetuned-squad | 0 | 1,712 | transformers | 2023-07-26T05:29:38 | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: flan-t5-small-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-finetuned-squad
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
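With the linear scheduler, the learning rate decays from 2e-05 to 0 over the single epoch of 8321 optimization steps. A minimal sketch, assuming zero warmup steps (not the Transformers scheduler itself):

```python
def linear_lr(step, total_steps=8321, base_lr=2e-05):
    """Linear decay from base_lr to 0 over training; assumes no warmup."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```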
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6998 | 1.0 | 8321 | 1.4937 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
| 1,342 | [
[
-0.033905029296875,
-0.041351318359375,
0.01149749755859375,
0.0100860595703125,
-0.020904541015625,
-0.01922607421875,
-0.0140380859375,
-0.0247650146484375,
0.003997802734375,
0.021087646484375,
-0.070068359375,
-0.033660888671875,
-0.039886474609375,
0.00... |
Yntec/CitrineDreamMix | 2023-09-17T12:42:27.000Z | [
"diffusers",
"anime",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/CitrineDreamMix | 2 | 1,712 | diffusers | 2023-09-17T11:33:13 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Citrine Dream Mix
Original page: https://civitai.com/models/18116?modelVersionId=21839
Samples and prompt:


Anime fine details portrait of joyful cute little girl sleep school class room, bokeh. anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of hayao miyazaki. Wikipedia. hugging. OIL PAINTING. DOCTOR with short hair in coat BEAUTIFUL girl eyes. she has pigtails | 829 | [
[
-0.0297698974609375,
-0.050872802734375,
0.0104827880859375,
0.03759765625,
-0.0278472900390625,
0.010406494140625,
-0.01386260986328125,
-0.050384521484375,
0.07293701171875,
0.052734375,
-0.04803466796875,
-0.040374755859375,
-0.055572509765625,
-0.0169219... |
haoranxu/ALMA-7B-Pretrain | 2023-10-27T05:10:58.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2309.11674",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | haoranxu | null | null | haoranxu/ALMA-7B-Pretrain | 1 | 1,710 | transformers | 2023-09-17T17:42:40 | ---
license: mit
---
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release four translation models presented in the paper:
- **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
Model checkpoints are released at huggingface:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models for translation purposes.**
A quick start to use our best system (ALMA-13B-LoRA) for translation. An example of translating "我爱机器翻译。" into English:
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
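The prompt template above generalizes to other translation directions. A small helper that reproduces it (illustrative only, not part of the ALMA repository):

```python
def build_prompt(src_lang, tgt_lang, text):
    # Mirrors the template used in the quick-start snippet above.
    return f"Translate this from {src_lang} to {tgt_lang}:\n{src_lang}: {text}\n{tgt_lang}:"

prompt = build_prompt("Chinese", "English", "我爱机器翻译。")
```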
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA) | 3,636 | [
[
-0.0205841064453125,
-0.036468505859375,
0.01340484619140625,
0.029266357421875,
-0.03753662109375,
-0.004199981689453125,
-0.00818634033203125,
-0.038421630859375,
0.02252197265625,
0.034027099609375,
-0.04541015625,
-0.0584716796875,
-0.05133056640625,
0.0... |
Intel/neural-chat-7b-v1-1 | 2023-09-08T01:05:30.000Z | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | Intel | null | null | Intel/neural-chat-7b-v1-1 | 4 | 1,708 | transformers | 2023-07-06T05:20:07 | ---
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is fine-tuned for chat based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) with **max_seq_length=2048** on various open-source datasets. For details of the datasets used, please refer to [Intel/neural-chat-dataset-v1-1](https://huggingface.co/datasets/Intel/neural-chat-dataset-v1-1).
## Model date
Neural-chat-7b-v1.1 was trained between June and July 2023.
## Evaluation
We use the same evaluation metrics as [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) which uses [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/master), a unified framework to test generative language models on a large number of different evaluation tasks.
| Model | Average ⬆️| ARC (25-s) ⬆️ | HellaSwag (10-s) ⬆️ | MMLU (5-s) ⬆️| TruthfulQA (MC) (0-s) ⬆️ |
| --- | --- | --- | --- | --- | --- |
|[mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)| 47.4 | 47.61 | 77.56 | 31 | 33.43 |
| [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | **49.95** | 46.5 | 75.55 | 37.60 | 40.17 |
| **Ours** | **51.41** | 50.09 | 76.69 | 38.79 | 40.07 |
### Bias evaluation
Following the blog [evaluating-llm-bias](https://huggingface.co/blog/evaluating-llm-bias), we randomly selected 10,000 samples from [allenai/real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) to evaluate toxicity bias in language models.
| Model | Toxicity Ratio ↓ |
| --- | --- |
|[mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)| 0.027 |
| **Ours** | 0.0264 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 3.0
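The total train batch size of 64 follows directly from the per-device batch size, the device count, and the gradient accumulation steps:

```python
train_batch_size = 2        # per device
num_devices = 4             # multi-GPU
gradient_accumulation = 8   # gradient_accumulation_steps

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation
```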
## Inference with transformers
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'Intel/neural-chat-7b-v1-1',
trust_remote_code=True
)
```
## Inference with INT8
Follow the instructions [link](https://github.com/intel/intel-extension-for-transformers/tree/main/examples/huggingface/pytorch/text-generation/quantization) to install the necessary dependencies. Use the below command to quantize the model using Intel Neural Compressor [link](https://github.com/intel/neural-compressor) and accelerate the inference.
```shell
python run_generation.py \
--model Intel/neural-chat-7b-v1-1 \
--quantize \
--sq \
--alpha 0.95 \
--ipex
```
### Examples
- code generation

- summarization

- trip

## Ethical Considerations and Limitations
neural-chat-7b-v1-1 can produce factually incorrect output, and should not be relied on to produce factually accurate information. neural-chat-7b-v1-1 was trained on various instruction/chat datasets based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of neural-chat-7b-v1-1, developers should perform safety testing.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Organizations developing the model
The NeuralChat team with members from Intel/SATG/AIA/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.
## Useful links
* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
* Intel Extension for PyTorch [link](https://github.com/intel/intel-extension-for-pytorch)
| 4,449 | [
[
-0.0157318115234375,
-0.06640625,
0.0101165771484375,
0.036163330078125,
-0.0162811279296875,
-0.0194091796875,
-0.03204345703125,
-0.028289794921875,
0.01114654541015625,
0.0114898681640625,
-0.0494384765625,
-0.044097900390625,
-0.048583984375,
-0.01541137... |
charannamani/my-pet-xyg | 2023-10-17T14:12:25.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | charannamani | null | null | charannamani/my-pet-xyg | 0 | 1,707 | diffusers | 2023-10-09T08:34:29 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-xyg Dreambooth model trained by charannamani following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: KMEC 572
Sample pictures of this concept:

| 397 | [
[
-0.048736572265625,
-0.0098114013671875,
0.028564453125,
-0.00437164306640625,
-0.0164642333984375,
0.0396728515625,
0.028961181640625,
-0.0333251953125,
0.05242919921875,
0.037445068359375,
-0.05072021484375,
-0.030548095703125,
-0.01953125,
0.0041007995605... |
sail-rvc/BritneySpears2333333 | 2023-07-14T07:19:39.000Z | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | sail-rvc | null | null | sail-rvc/BritneySpears2333333 | 0 | 1,706 | transformers | 2023-07-14T07:19:20 |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# BritneySpears2333333
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:19:39
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
| 388 | [
[
-0.031646728515625,
-0.0209197998046875,
0.022003173828125,
0.006687164306640625,
-0.03131103515625,
0.00887298583984375,
0.01485443115234375,
0.00356292724609375,
0.0225830078125,
0.06494140625,
-0.053802490234375,
-0.049591064453125,
-0.038543701171875,
-0... |
digiplay/OnlyReal-Black-Mix | 2023-08-16T21:23:32.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/OnlyReal-Black-Mix | 1 | 1,706 | diffusers | 2023-07-27T09:40:00 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/115449/onlyreal-black-mix
Sample image generated by huggingface's API :

Original Author's DEMO images :
,%20detail%20face,%20%20(a%20girl%20standing,%20rending%20on%20cgsociety,%20black%20shadows,%20streaming,%20new%20york%20backdro.jpeg) | 697 | [
[
-0.042022705078125,
-0.038177490234375,
0.021575927734375,
0.04498291015625,
-0.037628173828125,
0.00567626953125,
0.0186004638671875,
-0.03948974609375,
0.07647705078125,
0.0345458984375,
-0.07940673828125,
-0.044891357421875,
-0.038726806640625,
-0.0083084... |
DucHaiten/DucHaitenAnime | 2023-02-06T02:34:46.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | DucHaiten | null | null | DucHaiten/DucHaitenAnime | 19 | 1,705 | diffusers | 2023-01-30T13:06:25 | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
inference: true
---
DucHaitenAnime_v4.0: In this version I added a little 3D and a little realism, improved the hands (though not by much), and improved the colors, because I don't like to use a VAE.
All sample images were generated with text-to-image only; they were not edited or post-processed with other software.
https://civitai.com/models/6634
please support me by becoming a patron:
https://www.patreon.com/duchaitenreal











| 2,042 | [
[
-0.05413818359375,
-0.01288604736328125,
0.0457763671875,
0.01462554931640625,
-0.035003662109375,
0.01141357421875,
0.00823974609375,
-0.03607177734375,
0.060302734375,
0.07318115234375,
-0.049560546875,
-0.038330078125,
-0.037322998046875,
0.01139831542968... |