modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 โ | author stringlengths 2 34 โ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 โ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
digiplay/Realisian_v5 | 2023-07-12T12:47:08.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/Realisian_v5 | 6 | 1,538 | diffusers | 2023-07-12T12:11:07 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/47130?modelVersionId=115942
Sample images I made



Original Author's DEMO image :
 | 832 | [
[
-0.047210693359375,
-0.0163116455078125,
0.03094482421875,
0.02264404296875,
-0.0264739990234375,
-0.0145263671875,
0.01971435546875,
-0.0118255615234375,
0.05059814453125,
0.030120849609375,
-0.05010986328125,
-0.03875732421875,
-0.0234222412109375,
-0.0032... |
facebook/data2vec-audio-large | 2022-04-18T16:29:14.000Z | [
"transformers",
"pytorch",
"data2vec-audio",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | facebook | null | null | facebook/data2vec-audio-large | 1 | 1,537 | transformers | 2022-04-02T15:59:46 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Large
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. | 2,385 | [
[
-0.0112457275390625,
-0.059112548828125,
0.01116180419921875,
-0.0002422332763671875,
-0.01212310791015625,
-0.01666259765625,
-0.0133819580078125,
-0.03765869140625,
-0.0037097930908203125,
0.0260009765625,
-0.04766845703125,
-0.036712646484375,
-0.035675048828... |
digiplay/Realisian_v1 | 2023-07-08T22:59:08.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/Realisian_v1 | 2 | 1,537 | diffusers | 2023-07-08T15:08:17 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/47130?modelVersionId=51711
Sample image I made :


| 504 | [
[
-0.050933837890625,
-0.01474761962890625,
0.025787353515625,
0.034912109375,
-0.03564453125,
-0.0142669677734375,
0.02093505859375,
-0.00989532470703125,
0.0634765625,
0.02752685546875,
-0.04827880859375,
-0.037628173828125,
-0.0179901123046875,
0.0000364184... |
microsoft/resnet-101 | 2022-07-01T17:33:19.000Z | [
"transformers",
"pytorch",
"tf",
"resnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | microsoft | null | null | microsoft/resnet-101 | 5 | 1,536 | transformers | 2022-03-16T15:43:40 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# ResNet-101 v1.5
ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al.
Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ResNet (Residual Network) is a convolutional neural network that democratized the concepts of residual learning and skip connections. This enables to train much deeper models.
This is ResNet v1.5, which differs from the original model: in the bottleneck blocks which require downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution. This difference makes ResNet50 v1.5 slightly more accurate (\~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec) according to [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch).

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ResNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-101")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-101")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/resnet).
### BibTeX entry and citation info
```bibtex
@inproceedings{he2016deep,
title={Deep residual learning for image recognition},
author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={770--778},
year={2016}
}
```
| 2,665 | [
[
-0.04766845703125,
-0.01541900634765625,
-0.015869140625,
-0.006916046142578125,
-0.0207672119140625,
-0.0138702392578125,
-0.003971099853515625,
-0.0535888671875,
0.026123046875,
0.0325927734375,
-0.045806884765625,
-0.01947021484375,
-0.04315185546875,
0.0... |
Helsinki-NLP/opus-mt-gl-en | 2023-08-16T11:38:00.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-gl-en | 0 | 1,533 | transformers | 2022-03-02T23:29:04 | ---
language:
- gl
- en
tags:
- translation
license: apache-2.0
---
### glg-eng
* source group: Galician
* target group: English
* OPUS readme: [glg-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.eng | 44.4 | 0.628 |
### System Info:
- hf_name: glg-eng
- source_languages: glg
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'en']
- src_constituents: {'glg'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: eng
- short_pair: gl-en
- chrF2_score: 0.628
- bleu: 44.4
- brevity_penalty: 0.975
- ref_len: 8365.0
- src_name: Galician
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: en
- prefer_old: False
- long_pair: glg-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 2,058 | [
[
-0.0254058837890625,
-0.04400634765625,
0.0265350341796875,
0.0301971435546875,
-0.035125732421875,
-0.01372528076171875,
-0.0276031494140625,
-0.0380859375,
0.02008056640625,
0.017181396484375,
-0.0474853515625,
-0.06378173828125,
-0.04302978515625,
0.02140... |
timm/beit_large_patch16_512.in22k_ft_in22k_in1k | 2023-05-08T23:32:40.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2106.08254",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/beit_large_patch16_512.in22k_ft_in22k_in1k | 0 | 1,531 | timm | 2022-12-23T02:31:40 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for beit_large_patch16_512.in22k_ft_in22k_in1k
A BEiT image classification model. Trained on ImageNet-22k with self-supervised masked image modelling (MIM) using a DALL-E dVAE as visual tokenizer. Fine-tuned on ImageNet-22k and then ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 305.7
- GMACs: 362.2
- Activations (M): 656.4
- Image size: 512 x 512
- **Papers:**
- BEiT: BERT Pre-Training of Image Transformers: https://arxiv.org/abs/2106.08254
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beit_large_patch16_512.in22k_ft_in22k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'beit_large_patch16_512.in22k_ft_in22k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1025, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{bao2021beit,
title={Beit: Bert pre-training of image transformers},
author={Bao, Hangbo and Dong, Li and Piao, Songhao and Wei, Furu},
journal={arXiv preprint arXiv:2106.08254},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,742 | [
[
-0.04388427734375,
-0.0261383056640625,
0.0011777877807617188,
0.01255035400390625,
-0.03143310546875,
-0.018524169921875,
-0.0183868408203125,
-0.03955078125,
0.018096923828125,
0.0274200439453125,
-0.042388916015625,
-0.04827880859375,
-0.056488037109375,
... |
JosephusCheung/Pwen-VL-Chat-20_30 | 2023-10-10T05:50:25.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"en",
"zh",
"license:gpl-3.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | JosephusCheung | null | null | JosephusCheung/Pwen-VL-Chat-20_30 | 0 | 1,529 | transformers | 2023-10-05T12:36:16 | ---
license: gpl-3.0
language:
- en
- zh
tags:
- llama
- llama2
- qwen
---
WIP (20/30), recalibrated and fine-tuned on (852/1278)M SFT tokens, etwa (7\~11/10\~17) GPU days on Nvidia A100.
| 188 | [
[
-0.04803466796875,
-0.056640625,
0.028533935546875,
0.032012939453125,
-0.0307464599609375,
-0.002048492431640625,
0.0163726806640625,
-0.03271484375,
0.0229339599609375,
0.0007843971252441406,
-0.042266845703125,
-0.006137847900390625,
-0.0286712646484375,
... |
openmmlab/upernet-swin-base | 2023-05-03T20:51:22.000Z | [
"transformers",
"pytorch",
"safetensors",
"upernet",
"vision",
"image-segmentation",
"en",
"arxiv:1807.10221",
"arxiv:2103.14030",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | openmmlab | null | null | openmmlab/upernet-swin-base | 1 | 1,528 | transformers | 2023-01-13T14:34:17 | ---
language: en
license: mit
tags:
- vision
- image-segmentation
model_name: openmmlab/upernet-swin-base
---
# UperNet, Swin Transformer base-sized backbone
UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al.
Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030).
Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM).
Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for
fine-tuned versions (with various backbones) on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
| 1,630 | [
[
-0.032012939453125,
-0.00836944580078125,
0.0178985595703125,
0.0389404296875,
-0.016815185546875,
-0.0157318115234375,
0.0191650390625,
-0.041259765625,
0.0224456787109375,
0.053466796875,
-0.05950927734375,
-0.04052734375,
-0.0306396484375,
-0.020935058593... |
snunlp/KR-Medium | 2021-11-22T06:19:42.000Z | [
"transformers",
"pytorch",
"jax",
"bert",
"ko",
"endpoints_compatible",
"region:us"
] | null | snunlp | null | null | snunlp/KR-Medium | 5 | 1,527 | transformers | 2022-03-02T23:29:05 | ---
language:
- ko
---
# KR-BERT-MEDIUM
A pretrained Korean-specific BERT model developed by Computational Linguistics Lab at Seoul National University.
It is based on our character-level [KR-BERT](https://github.com/snunlp/KR-BERT) model which utilize WordPiece tokenizer.
Here, the model name has a suffix 'MEDIUM' since its training data grew from KR-BERT's original dataset. We have another additional model, KR-BERT-EXPANDED with more extensive training data expanded from those of KR-BERT-MEDIUM, so the suffix 'MEDIUM' is used.
<br>
### Vocab, Parameters and Data
| | Mulitlingual BERT<br>(Google) | KorBERT<br>(ETRI) | KoBERT<br>(SKT) | KR-BERT character | KR-BERT-MEDIUM |
| -------------: | ---------------------------------------------: | ---------------------: | ----------------------------------: | -------------------------------------: | -------------------------------------: |
| vocab size | 119,547 | 30,797 | 8,002 | 16,424 | 20,000 |
| parameter size | 167,356,416 | 109,973,391 | 92,186,880 | 99,265,066 | 102,015,010 |
| data size | -<br>(The Wikipedia data<br>for 104 languages) | 23GB<br>4.7B morphemes | -<br>(25M sentences,<br>233M words) | 2.47GB<br>20M sentences,<br>233M words | 12.37GB<br>91M sentences,<br>1.17B words |
<br>
The training data for this model is expanded from those of KR-BERT, texts from Korean Wikipedia, and news articles, by addition of legal texts crawled from the National Law Information Center and [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). This data expansion is to collect texts from more various domains than those of KR-BERT. The total data size is about 12.37GB, consisting of 91M and 1.17B words.
The user-generated comment dataset is expected to have similar stylistic properties to the task datasets of NSMC and HSD. Such text includes abbreviations, coinages, emoticons, spacing errors, and typos. Therefore, we added the dataset containing such on-line properties to our existing formal data such as news articles and Wikipedia texts to compose the training data for KR-BERT-MEDIUM. Accordingly, KR-BERT-MEDIUM reported better results in sentiment analysis than other models, and the performances improved with the model of the more massive, more various training data.
This modelโs vocabulary size is 20,000, whose tokens are trained based on the expanded training data using the WordPiece tokenizer.
KR-BERT-MEDIUM is trained for 2M steps with the maxlen of 128, training batch size of 64, and learning rate of 1e-4, taking 22 hours to train the model using a Google Cloud TPU v3-8.
### Models
#### TensorFlow
* BERT tokenizer, character-based model ([download](https://drive.google.com/file/d/1OWXGqr2Z2PWD6ST3MsFmcjM8c2mr8PkE/view?usp=sharing))
#### PyTorch
* You can import it from Transformers!
```sh
# pytorch, transformers
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("snunlp/KR-Medium", do_lower_case=False)
model = AutoModel.from_pretrained("snunlp/KR-Medium")
```
### Requirements
- transformers == 4.0.0
- tensorflow < 2.0
## Downstream tasks
* Movie Review Classification on Naver Sentiment Movie Corpus [(NSMC)](https://github.com/e9t/nsmc)
* Hate Speech Detection [(Moon et al., 2020)](https://github.com/kocohub/korean-hate-speech)
#### tensorflow
* After downloading our pre-trained models, put them in a `models` directory.
* Set the output directory (for fine-tuning)
* Select task name: `NSMC` for Movie Review Classification, and `HATE` for Hate Speech Detection
```sh
# tensorflow
python3 run_classifier.py \
--task_name={NSMC, HATE} \
--do_train=true \
--do_eval=true \
--do_predict=true \
--do_lower_case=False\
--max_seq_length=128 \
--train_batch_size=128 \
--learning_rate=5e-05 \
--num_train_epochs=5.0 \
--output_dir={output_dir}
```
<br>
### Performances
TensorFlow, test set performances
| | multilingual BERT | KorBERT<br>character | KR-BERT<br>character<br>WordPiece | KR-BERT-MEDIUM |
|:-----:|-------------------:|----------------:|----------------------------:|-----------------------------------------:|
| NSMC (Acc) | 86.82 | 89.81 | 89.74 | 90.29 |
| Hate Speech (F1) | 52.03 | 54.33 | 54.53 | 57.91 |
<br>
## Contacts
nlp.snu@gmail.com
| 4,745 | [
[
-0.040771484375,
-0.049468994140625,
0.0204925537109375,
0.0201416015625,
-0.035858154296875,
0.006595611572265625,
-0.039642333984375,
-0.0216522216796875,
0.024383544921875,
0.02252197265625,
-0.0400390625,
-0.041168212890625,
-0.05743408203125,
-0.0001369... |
timm/swin_large_patch4_window12_384.ms_in22k_ft_in1k | 2023-03-18T04:12:24.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2103.14030",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/swin_large_patch4_window12_384.ms_in22k_ft_in1k | 0 | 1,526 | timm | 2023-03-18T04:11:30 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for swin_large_patch4_window12_384.ms_in22k_ft_in1k
A Swin Transformer image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 196.7
- GMACs: 104.1
- Activations (M): 202.2
- Image size: 384 x 384
- **Papers:**
- Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('swin_large_patch4_window12_384.ms_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_large_patch4_window12_384.ms_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for swin_base_patch4_window7_224 (NHWC output)
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 1024])
# e.g. for swinv2_cr_small_ns_224 (NCHW output)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_large_patch4_window12_384.ms_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled (ie.e a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{liu2021Swin,
title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,535 | [
[
-0.0328369140625,
-0.03399658203125,
-0.003345489501953125,
0.012451171875,
-0.0229644775390625,
-0.02947998046875,
-0.017608642578125,
-0.0382080078125,
0.004917144775390625,
0.0279083251953125,
-0.04510498046875,
-0.04931640625,
-0.0455322265625,
-0.014617... |
timm/resnest26d.gluon_in1k | 2023-04-23T23:35:18.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2004.08955",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/resnest26d.gluon_in1k | 0 | 1,526 | timm | 2023-04-23T23:35:07 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for resnest26d.gluon_in1k
A ResNeSt (ResNet based architecture with Split Attention) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 17.1
- GMACs: 3.6
- Activations (M): 10.0
- Image size: 224 x 224
- **Papers:**
- ResNeSt: Split-Attention Networks: https://arxiv.org/abs/2004.08955
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/zhanghang1989/ResNeSt
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnest26d.gluon_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnest26d.gluon_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnest26d.gluon_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
journal={arXiv preprint arXiv:2004.08955},
year={2020}
}
```
| 3,757 | [
[
-0.03533935546875,
-0.036163330078125,
0.0095672607421875,
0.007541656494140625,
-0.02593994140625,
-0.0190277099609375,
-0.022735595703125,
-0.0258331298828125,
0.0306854248046875,
0.0277862548828125,
-0.048553466796875,
-0.04608154296875,
-0.05316162109375,
... |
nerijs/lego-minifig-xl | 2023-08-13T20:43:31.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:apache-2.0",
"has_space",
"region:us"
] | text-to-image | nerijs | null | null | nerijs/lego-minifig-xl | 19 | 1,524 | diffusers | 2023-08-13T20:36:32 | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: lego minifig
widget:
- text: lego minifig of a samurai
---
# LEGO Minifig XL
## Consider supporting further research on [Patreon](https://www.patreon.com/user?u=29466374) or [Twitter](https://twitter.com/nerijs)

### Tips:
- Prompt with "lego minifig of a $SUBJECT"-
- Works best at 1024x1024, if you go higher than that will be non-standard size minifigs
- Best used at 0.8 strength
- You can use it for lego items or animals, just remove the "minifig" from the prompt
### Limitations
- Tends to add items to the minifigs, will be fixed on v2 | 836 | [
[
-0.058685302734375,
-0.0360107421875,
0.020721435546875,
0.0335693359375,
-0.029327392578125,
-0.0052337646484375,
-0.00984954833984375,
-0.032623291015625,
0.04888916015625,
0.01392364501953125,
-0.06201171875,
-0.00618743896484375,
-0.032318115234375,
0.02... |
Yntec/WesternAnimation | 2023-08-12T19:41:57.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Western Animation Diffusion",
"Lykon",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/WesternAnimation | 1 | 1,523 | diffusers | 2023-07-18T01:34:22 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Western Animation Diffusion
- Lykon
---
# Western Animation Diffusion
Model by Lykon
Original page:
https://civitai.com/models/86546/western-animation-diffusion | 335 | [
[
-0.016693115234375,
-0.038543701171875,
0.04534912109375,
0.0247039794921875,
-0.01058197021484375,
-0.024505615234375,
0.0199127197265625,
-0.029815673828125,
0.053619384765625,
0.036865234375,
-0.0723876953125,
-0.0321044921875,
-0.01439666748046875,
-0.04... |
MU-NLPC/whisper-small-audio-captioning | 2023-05-15T21:48:24.000Z | [
"transformers",
"pytorch",
"whisper",
"en",
"dataset:AudioSet",
"dataset:AudioCaps",
"dataset:Clotho-v2.1",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | MU-NLPC | null | null | MU-NLPC/whisper-small-audio-captioning | 1 | 1,522 | transformers | 2023-05-15T17:48:16 | ---
datasets:
- AudioSet
- AudioCaps
- Clotho-v2.1
metrics:
- SPICE
- CIDEr
- SPIDEr
- METEOR
- SacreBLEU
model-index:
- name: whisper-small-audio-captioning
results:
- task:
type: audio-captioning
name: Audio Captioning
dataset:
type: clotho-v2.1
name: Clotho
split: evaluation
metrics:
- type: SPICE
value: 0.1234
- type: CIDEr
value: 0.4142
- type: SPIDEr
value: 0.2687
- type: METEOR
value: 0.3781
- type: SacreBLEU
value: 15.76
license: cc-by-nc-4.0
language:
- en
---
# Model Card for Whisper Audio Captioning
A transformer encoder-decoder model for automatic audio captioning. As opposed to speech-to-text, captioning describes the content of audio clips, such as prominent sounds or environmental noises. This task has numerous practical applications, e.g., for providing access to audio information for people with hearing impairments or improving the searchability of audio content.
- **Model type:** Whisper encoder-decoder transformer
- **Language(s) (NLP):** en
- **License:** cc-by-4.0
- **Parent Model:** openai/whisper-
- **Resources for more information:**
- [GitHub Repo](https://github.com/prompteus/audio-captioning)
- [Technical Report](TODO)
## Usage
The model expects an audio clip (up to 30s) to the encoder as an input and information about caption style as forced prefix to the decoder.
Minimal example:
```python
# Load model
checkpoint = "MU-NLPC/whisper-small-audio-captioning"
model = WhisperForAudioCaptioning.from_pretrained(checkpoint)
tokenizer = transformers.WhisperTokenizer.from_pretrained(checkpoint, language="en", task="transcribe")
feature_extractor = transformers.WhisperFeatureExtractor.from_pretrained(checkpoint)
# Load and preprocess audio
input_file = "..."
audio, sampling_rate = librosa.load(input_file, sr=feature_extractor.sampling_rate)
features = feature_extractor(audio, sampling_rate=sampling_rate, return_tensors="pt").input_features
# Prepare caption style
style_prefix = "clotho > caption: "
style_prefix_tokens = tokenizer("", text_target=style_prefix, return_tensors="pt", add_special_tokens=False).labels
# Generate caption
model.eval()
outputs = model.generate(
inputs=features.to(model.device),
forced_ac_decoder_ids=style_prefix_tokens,
max_length=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
Example output:
*clotho > caption: Rain is pouring down and thunder is rumbling in the background.*
The style prefix influences the style of the caption. Model knows 3 styles: `audioset > keywords: `, `audiocaps > caption: `, and `clotho > caption: `. It was finetuned on Clotho and that is the indended "default" style.
WhisperTokenizer must be initialized with `language="en"` and `task="transcribe"`.
Our model class `WhisperForAudioCaptioning` can be found in our git repository or here on the HuggingFace Hub in the model repository. The class overrides default Whisper `generate` method to support forcing decoder prefix.
## Training details
The model was initialized by original speech-to-text `openai/whisper-small` weights. Then, it was pretrained on a mix of (1) subset of AudioSet with synthetic labels, (2) AudioCaps captioning dataset and (3) Clotho v2.1 captioning dataset. Finally, it was finetuned on Clotho v2.1 to focus the model on the specific style of the captions. For each traning input, the model was informed about the source of the data, so it can mimic the caption style in all 3 styles.
During pretraining, the ratio of samples in each batch was approximately 12:3:1 (AudioSet:AudioCaps:Clotho). The pretraining took 19800 steps with batch size 32 and learning rate 2e-5. Finetuning was done on Clotho only, and the model was trained for 1500 steps with batch size 32 and learning rate 4e-6. All layers except *fc1* layers were frozen during finetuning.
For more information about the training regime, see the [technical report](TODO).
## Evaluation details
Metrics reported in the metadata were computed on Clotho v2.1 test split with captions generated using a beam search with 5 beams.
| | whisper-tiny | whisper-small | whisper-large-v2 |
|----------------------|--------------|---------------|------------------|
| SacreBLEU | 13.77 | 15.76 | 16.50 |
| METEOR | 0.3452 | 0.3781 | 0.3782 |
| CIDEr | 0.3404 | 0.4142 | 0.4331 |
| SPICE | 0.1077 | 0.1234 | 0.1257 |
| SPIDEr | 0.2240 | 0.2687 | 0.2794 |
## Limitations
The captions generated by the model can be misleading or not truthful, even if they appear convincing. The hallucination occurs especially in domains that were not present in the finetuning data.
While the original speech-to-text checkpoints by OpenAI were trained on multilingual data, our training contains only English captions, and therefore is not expected for the model to support other languages.
## Licence
The model weights are published under non-commercial license CC BY-NC 4.0 as the model was finetuned on a dataset for non-commercial use.
## Contact
If you'd like to chat about this, please get in touch with is via email at kadlcik`<at>`mail.muni.cz or ahajek`<at>`mail.muni.cz.
| 5,397 | [
[
-0.023529052734375,
-0.03594970703125,
0.007564544677734375,
0.02392578125,
-0.031524658203125,
-0.005832672119140625,
-0.01149749755859375,
-0.036102294921875,
0.007427215576171875,
0.04083251953125,
-0.06317138671875,
-0.039215087890625,
-0.054046630859375,
... |
nherve/flaubert-oral-ft | 2022-04-04T10:27:14.000Z | [
"transformers",
"pytorch",
"bert",
"language-model",
"flaubert",
"french",
"flaubert-base",
"uncased",
"asr",
"speech",
"oral",
"natural language understanding",
"NLU",
"spoken language understanding",
"SLU",
"understanding",
"fr",
"license:mit",
"endpoints_compatible",
"region... | null | nherve | null | null | nherve/flaubert-oral-ft | 1 | 1,521 | transformers | 2022-03-23T12:33:05 | ---
language: fr
license: mit
tags:
- bert
- language-model
- flaubert
- french
- flaubert-base
- uncased
- asr
- speech
- oral
- natural language understanding
- NLU
- spoken language understanding
- SLU
- understanding
---
# FlauBERT-Oral models: Using ASR-Generated Text for Spoken Language Modeling
**FlauBERT-Oral** are French BERT models trained on a very large amount of automatically transcribed speech from 350,000 hours of diverse French TV shows. They were trained with the [**FlauBERT software**](https://github.com/getalp/Flaubert) using the same parameters as the [flaubert-base-uncased](https://huggingface.co/flaubert/flaubert_base_uncased) model (12 layers, 12 attention heads, 768 dims, 137M parameters, uncased).
## Available FlauBERT-Oral models
- `flaubert-oral-asr` : trained from scratch on ASR data, keeping the BPE tokenizer and vocabulary of flaubert-base-uncased
- `flaubert-oral-asr_nb` : trained from scratch on ASR data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-mixed` : trained from scratch on a mixed corpus of ASR and text data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-ft` : fine-tuning of flaubert-base-uncased for a few epochs on ASR data
## Usage for sequence classification
```python
flaubert_tokenizer = FlaubertTokenizer.from_pretrained("nherve/flaubert-oral-asr")
flaubert_classif = FlaubertForSequenceClassification.from_pretrained("nherve/flaubert-oral-asr", num_labels=14)
flaubert_classif.sequence_summary.summary_type = 'mean'
# Then, train your model
```
## References
If you use FlauBERT-Oral models for your scientific publication, or if you find the resources in this repository useful, please cite the following papers:
```
@InProceedings{herve2022flaubertoral,
author = {Herv\'{e}, Nicolas and Pelloin, Valentin and Favre, Benoit and Dary, Franck and Laurent, Antoine and Meignier, Sylvain and Besacier, Laurent},
title = {Using ASR-Generated Text for Spoken Language Modeling},
booktitle = {Proceedings of "Challenges & Perspectives in Creating Large Language Models" ACL 2022 Workshop},
month = {May},
year = {2022}
}
```
| 2,210 | [
[
-0.0178985595703125,
-0.07427978515625,
0.015899658203125,
0.00894927978515625,
0.0055389404296875,
-0.01203155517578125,
-0.033660888671875,
-0.024078369140625,
0.0114593505859375,
0.039825439453125,
-0.0226898193359375,
-0.02008056640625,
-0.032806396484375,
... |
SmilingWolf/wd-v1-4-moat-tagger-v2 | 2023-05-20T07:12:07.000Z | [
"keras",
"onnx",
"arxiv:2210.01820",
"license:apache-2.0",
"has_space",
"region:us"
] | null | SmilingWolf | null | null | SmilingWolf/wd-v1-4-moat-tagger-v2 | 34 | 1,520 | keras | 2023-05-20T06:21:45 | ---
license: apache-2.0
---
# WD 1.4 MOAT Tagger V2
Supports ratings, characters and general tags.
Trained using https://github.com/SmilingWolf/SW-CV-ModelZoo.
TPUs used for training kindly provided by the [TRC program](https://sites.research.google/trc/about/).
## Dataset
Last image id: 5944504
Trained on Danbooru images with IDs modulo 0000-0899.
Validated on images with IDs modulo 0950-0999.
Images with less than 10 general tags were filtered out.
Tags with less than 600 images were filtered out.
## Validation results
`P=R: threshold = 0.3771, F1 = 0.6911`
## Paper
[`MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models`](https://arxiv.org/abs/2210.01820)
## Final words
Subject to change and updates.
Downstream users are encouraged to use tagged releases rather than relying on the head of the repo.
| 856 | [
[
-0.056884765625,
-0.0166473388671875,
-0.007183074951171875,
0.0149688720703125,
-0.03704833984375,
-0.023468017578125,
0.0184173583984375,
-0.049560546875,
0.01259613037109375,
0.0297088623046875,
-0.035797119140625,
-0.06634521484375,
-0.04022216796875,
-0... |
ai4bharat/indictrans2-en-indic-1B | 2023-09-12T12:35:08.000Z | [
"transformers",
"pytorch",
"IndicTrans",
"text2text-generation",
"indictrans2",
"translation",
"ai4bharat",
"multilingual",
"custom_code",
"as",
"bn",
"brx",
"doi",
"en",
"gom",
"gu",
"hi",
"kn",
"ks",
"kas",
"mai",
"ml",
"mr",
"mni",
"mnb",
"ne",
"or",
"pa",
... | translation | ai4bharat | null | null | ai4bharat/indictrans2-en-indic-1B | 4 | 1,519 | transformers | 2023-09-09T13:02:59 | ---
language:
- as
- bn
- brx
- doi
- en
- gom
- gu
- hi
- kn
- ks
- kas
- mai
- ml
- mr
- mni
- mnb
- ne
- or
- pa
- sa
- sat
- sd
- snd
- ta
- te
- ur
language_details: >-
asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr,
hin_Deva, kan_Knda, kas_Arab, kas_Deva, mai_Deva, mal_Mlym, mar_Deva,
mni_Beng, mni_Mtei, npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck,
snd_Arab, snd_Deva, tam_Taml, tel_Telu, urd_Arab
tags:
- indictrans2
- translation
- ai4bharat
- multilingual
license: mit
datasets:
- flores-200
- IN22-Gen
- IN22-Conv
metrics:
- bleu
- chrf
- chrf++
- comet
inference: false
---
# IndicTrans2
This is the model card of IndicTrans2 En-Indic 1.1B variant.
Here are the [metrics](https://drive.google.com/drive/folders/1lOOdaU0VdRSBgJEsNav5zC7wwLBis9NI?usp=sharing) for the particular checkpoint.
Please refer to `Appendix D: Model Card` of the [preprint](https://arxiv.org/abs/2305.16307) for further details on model training, intended use, data, metrics, limitations and recommendations.
### Usage Instructions
Please refer to the [github repository](https://github.com/AI4Bharat/IndicTrans2/tree/main/huggingface_inference) for a detail description on how to use HF compatible IndicTrans2 models for inference.
### Citation
If you consider using our work then please cite using:
```
@article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
}
```
| 1,865 | [
[
-0.0128326416015625,
-0.0272064208984375,
0.01067352294921875,
0.040618896484375,
-0.03643798828125,
-0.004718780517578125,
0.0014009475708007812,
-0.04119873046875,
0.0191192626953125,
0.0235137939453125,
-0.052154541015625,
-0.028839111328125,
-0.0430603027343... |
42dot/42dot_LLM-PLM-1.3B | 2023-09-26T04:09:34.000Z | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"causal-lm",
"42dot_llm",
"en",
"ko",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | 42dot | null | null | 42dot/42dot_LLM-PLM-1.3B | 12 | 1,518 | transformers | 2023-09-04T05:54:07 | ---
language:
- en
- ko
pipeline_tag: text-generation
tags:
- pytorch
- llama
- causal-lm
- 42dot_llm
license: cc-by-nc-4.0
---
# 42dot_LLM-PLM-1.3B
**42dot LLM-PLM** is a pre-trained language model (PLM) developed by [**42dot**](https://42dot.ai/) and is a part of **42dot LLM** (large language model). 42dot LLM-PLM is pre-trained using Korean and English text corpus and can be used as a foundation language model for several Korean and English natural language tasks. This repository contains a 1.3B-parameter version of the model.
## Model Description
### Hyperparameters
42dot LLM-PLM is built upon a Transformer decoder architecture similar to the [LLaMA 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) and its hyperparameters are listed below.
| Params | Layers | Attention heads | Hidden size | FFN size |
| -- | -- | -- | -- | -- |
| 1.3B | 24 | 32 | 2,048 | 5,632 |
### Pre-training
Pre-training took about 49K GPU hours (NVIDIA A100). Related settings are listed below.
| Params | Global batch size\* | Initial learning rate | Train iter.\* | Max length\* | Weight decay |
| -- | -- | -- | -- | -- | -- |
| 1.3B | 4.0M | 4E-4 | 1.4T | 4,096 | 0.1 |
(\* unit: tokens)
### Pre-training datasets
We used a set of publicly available text corpus, including:
- Korean: including [Jikji project](http://jikji.duckdns.org/), [mC4-ko](https://huggingface.co/datasets/mc4), [LBox Open](https://github.com/lbox-kr/lbox-open), [KLUE](https://huggingface.co/datasets/klue), [Wikipedia (Korean)](https://ko.wikipedia.org/) and so on.
- English: including [The Pile](https://github.com/EleutherAI/the-pile), [RedPajama](https://github.com/togethercomputer/RedPajama-Data), [C4](https://huggingface.co/datasets/c4) and so on.
### Tokenizer
The tokenizer is based on the Byte-level BPE algorithm. We trained its vocabulary from scratch using a subset of the pre-training corpus. For constructing a subset, 10M and 10M documents are sampled from Korean and English corpus respectively. The resulting vocabulary sizes about 50K.
### Zero-shot evaluations
We evaluate 42dot LLM-PLM on a variety of academic benchmarks both in Korean and English. All the results are obtained using [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and models released on the Hugging Face Hub.
#### Korean (KOBEST)
<figure align="center">
<img src="https://huggingface.co/42dot/42dot_LLM-PLM-1.3B/resolve/main/asset/42dot_LLM_PLM_KO_score_background.png"/>
</figure>
|Tasks / Macro-F1|[KoGPT2](https://github.com/SKT-AI/KoGPT2) <br>1.2B|[Polyglot-Ko](https://github.com/EleutherAI/polyglot) <br>1.3B|[XGLM](https://huggingface.co/facebook/xglm-1.7B) <br>1.7B|[PolyLM](https://huggingface.co/DAMO-NLP-MT/polylm-1.7b) <br>1.7B|42dot LLM-PLM <br>1.3B|
|--------------|-----------|----------------|---------|-----------|------------------------|
|boolq |0.337 |0.355 |**0.502** |0.334 |0.369 |
|copa |0.67 |**0.721** |0.616 |0.513 |0.704 |
|hellaswag |0.404 |0.401 |0.374 |0.321 |**0.431** |
|sentineg |0.606 |0.679 |0.46 |0.382 |**0.69** |
|**average** |0.504 |0.539 |0.488 |0.388 |**0.549** |
#### English
<figure align="center">
<img src="https://huggingface.co/42dot/42dot_LLM-PLM-1.3B/resolve/main/asset/42dot_LLM_EN_score_white_background.png"/>
</figure>
| Tasks / Metric | MPT <br>1B | OPT <br>1.3B | XGLM <br>1.7B | PolyLM <br>1.7B | 42dot LLM-PLM <br>1.3B |
| ---------------------- | ------ | -------- | --------- | ----------- | ------------------------ |
| anli_r1/acc | 0.309 | **0.341** | 0.334 | 0.336 | 0.325 |
| anli_r2/acc | 0.334 | 0.339 | 0.331 | 0.314 | **0.34** |
| anli_r3/acc | 0.33 | 0.336 | 0.333 | **0.339** | 0.333 |
| arc_challenge/acc | 0.268 | 0.234 | 0.21 | 0.198 | **0.288** |
| arc_challenge/acc_norm | 0.291 | 0.295 | 0.243 | 0.256 | **0.317** |
| arc_easy/acc | 0.608 | 0.571 | 0.537 | 0.461 | **0.628** |
| arc_easy/acc_norm | 0.555 | 0.51 | 0.479 | 0.404 | **0.564** |
| boolq/acc | 0.517 | 0.578 | 0.585 | 0.617 | **0.624** |
| hellaswag/acc | 0.415 | 0.415 | 0.362 | 0.322 | **0.422** |
| hellaswag/acc_norm | 0.532 | 0.537 | 0.458 | 0.372 | **0.544** |
| openbookqa/acc | **0.238** | 0.234 | 0.17 | 0.166 | 0.222 |
| openbookqa/acc_norm | 0.334 | 0.334 | 0.298 | 0.334 | **0.34** |
| piqa/acc | 0.714 | 0.718 | 0.697 | 0.667 | **0.725** |
| piqa/acc_norm | 0.72 | 0.724 | 0.703 | 0.649 | **0.727** |
| record/f1 | 0.84 | **0.857** | 0.775 | 0.681 | 0.848 |
| record/em | 0.832 | **0.849** | 0.769 | 0.674 | 0.839 |
| rte/acc | 0.541 | 0.523 | **0.559** | 0.513 | 0.542 |
| truthfulqa_mc/mc1 | 0.224 | 0.237 | 0.215 | **0.251** | 0.236 |
| truthfulqa_mc/mc2 | 0.387 | 0.386 | 0.373 | **0.428** | 0.387 |
| wic/acc | 0.498 | **0.509** | 0.503 | 0.5 | 0.502 |
| winogrande/acc | 0.574 | **0.595** | 0.55 | 0.519 | 0.583 |
| **average** | 0.479 | 0.482 | 0.452 | 0.429 | **0.492** |
## Limitations and Ethical Considerations
42dot LLM-PLM shares a number of well-known limitations of other large language models (LLMs). For example, it may generate false and misinformative content since 42dot LLM-PLM is also subject to [hallucination](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)). In addition, 42dot LLM-PLM may generate toxic, harmful, and biased content due to the use of web-available training data. We strongly suggest that 42dot LLM-PLM users should be aware of those limitations and take necessary steps to mitigate those issues.
## Disclaimer
The contents generated by 42dot LLM series ("42dot LLM") do not necessarily reflect the views or opinions of 42dot Inc. ("42dot"). 42dot disclaims any and all liability to any part for any direct, indirect, implied, punitive, special, incidental, or other consequential damages arising from any use of the 42dot LLM and its generated contents.
## License
The 42dot LLM-PLM is licensed under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0).
## Citation
```
@misc{42dot2023llm,
title={42dot LLM: A Series of Large Language Model by 42dot},
author={42dot Inc.},
year={2023},
url = {https://github.com/42dot/42dot_LLM},
version = {1.0.0},
}
```
| 7,365 | [
[
-0.058013916015625,
-0.046478271484375,
0.025360107421875,
0.0166778564453125,
-0.0235443115234375,
0.006927490234375,
-0.01947021484375,
-0.0272216796875,
0.04229736328125,
0.0221405029296875,
-0.0408935546875,
-0.05108642578125,
-0.0513916015625,
-0.000792... |
facebook/mask2former-swin-large-mapillary-vistas-panoptic | 2023-09-07T15:31:16.000Z | [
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | image-segmentation | facebook | null | null | facebook/mask2former-swin-large-mapillary-vistas-panoptic | 2 | 1,517 | transformers | 2023-01-05T00:48:59 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on Mapillary Vistas panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance an efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Mapillary Vistas panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-mapillary-vistas-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-mapillary-vistas-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | 3,241 | [
[
-0.043212890625,
-0.04107666015625,
0.0159759521484375,
0.0287628173828125,
-0.0169525146484375,
-0.0193023681640625,
0.011962890625,
-0.056365966796875,
0.0170745849609375,
0.05023193359375,
-0.051971435546875,
-0.022918701171875,
-0.060943603515625,
-0.026... |
cyberagent/open-calm-3b | 2023-05-18T01:11:50.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:mc4",
"license:cc-by-sa-4.0",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | cyberagent | null | null | cyberagent/open-calm-3b | 15 | 1,517 | transformers | 2023-05-15T07:14:36 | ---
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- mc4
language:
- ja
tags:
- japanese
- causal-lm
inference: false
---
# OpenCALM-3B
## Model Description
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-3b", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-3b")
inputs = tokenizer("AIใซใใฃใฆ็ง้ใฎๆฎใใใฏใ", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=64,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
|Model|Params|Layers|Dim|Heads|Dev ppl|
|:---:|:---: |:---:|:---:|:---:|:---:|
|[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7|
|[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8|
|[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3|
|[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3|
|[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7|
|[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2|
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **Model type**: Transformer-based Language Model
* **Language**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc.
* Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/
* Example (ja): ๆฌใขใใซใฏใๆ ชๅผไผ็คพใตใคใใผใจใผใธใงใณใใซใใOpenCALM-XXใใใกใคใณใใฅใผใใณใฐใใใใฎใงใใๅ
ใฎใขใใซใฏCC BY-SA 4.0ใฉใคใปใณในใฎใใจใงๅ
ฌ้ใใใฆใใใๆฌใขใใซใๅใใCC BY-SA 4.0ใฉใคใปใณในใงๅ
ฌ้ใใพใใ่ฉณใใใฏใใกใใใ่ฆงใใ ใใ: https://creativecommons.org/licenses/by-sa/4.0/
## Training Dataset
* Wikipedia (ja)
* Common Crawl (ja)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```bibtext
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
``` | 3,366 | [
[
-0.029022216796875,
-0.055328369140625,
0.0219879150390625,
0.01029205322265625,
-0.010589599609375,
-0.0231781005859375,
-0.02740478515625,
-0.034576416015625,
0.01264190673828125,
0.038604736328125,
-0.035888671875,
-0.05682373046875,
-0.0338134765625,
0.0... |
allamand/dogbooth | 2023-10-25T18:02:25.000Z | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | allamand | null | null | allamand/dogbooth | 0 | 1,517 | diffusers | 2023-10-13T16:57:21 |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - allamand/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
| 548 | [
[
-0.01557159423828125,
-0.03472900390625,
0.0286865234375,
0.004863739013671875,
-0.0298004150390625,
0.01513671875,
0.029693603515625,
-0.0227813720703125,
0.047210693359375,
0.0270843505859375,
-0.03265380859375,
-0.02606201171875,
-0.0450439453125,
-0.0142... |
Rajkumar7093626243/my-pet-dog | 2023-10-18T11:13:07.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Rajkumar7093626243 | null | null | Rajkumar7093626243/my-pet-dog | 0 | 1,516 | diffusers | 2023-10-18T11:08:53 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Rajkumar7093626243 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
.jpg)
.jpg)
| 517 | [
[
-0.06298828125,
-0.014892578125,
0.0264739990234375,
0.0131072998046875,
-0.01200103759765625,
0.028533935546875,
0.028472900390625,
-0.0287933349609375,
0.04534912109375,
0.0233917236328125,
-0.04034423828125,
-0.0198974609375,
-0.0182647705078125,
-0.00094... |
PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T | 2023-11-05T03:48:35.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | PY007 | null | null | PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T | 34 | 1,516 | transformers | 2023-11-04T04:30:39 | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs ๐๐. The training has started on 2023-09-01.
<div align="center">
<img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is an intermediate checkpoint with 715K steps and 1.49T tokens. **We suggest you not use this directly for inference.**
#### How to use
You will need the transformers>=4.31
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs ๐๐. The training has started on 2023-09-01.',
do_sample=True,
top_k=10,
num_return_sequences=1,
repetition_penalty=1.5,
eos_token_id=tokenizer.eos_token_id,
max_length=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40| 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80| 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11|
| TinyLlama-1.1B-intermediate-step-240k-503b| 503B | 49.56 |31.40 |55.80 |26.54 |48.32 |56.91 |69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.49T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 | | 2,983 | [
[
-0.0202789306640625,
-0.040435791015625,
0.0328369140625,
0.0179901123046875,
-0.031585693359375,
-0.004192352294921875,
-0.00923919677734375,
-0.0219879150390625,
0.02862548828125,
0.01085662841796875,
-0.053619384765625,
-0.039093017578125,
-0.03759765625,
... |
EIStakovskii/xlm_roberta_base_multilingual_toxicity_classifier_plus | 2023-05-02T10:28:52.000Z | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"multilingual",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | EIStakovskii | null | null | EIStakovskii/xlm_roberta_base_multilingual_toxicity_classifier_plus | 1 | 1,515 | transformers | 2022-10-25T06:40:35 | ---
language: multilingual # <-- my language
widget:
- text: "J'aime ta coiffure"
- text: "Va te faire foutre"
- text: "Quel mauvais temps, n'est-ce pas ?"
- text: "J'espรจre que tu vas mourir, connard !"
- text: "j'aime beaucoup ta veste"
- text: "Guten morgen, meine Liebe"
- text: "Ich scheiร drauf."
- text: "Ich liebe dich"
- text: "Ich hab die Schnauze voll von diesen Irren."
- text: "Ich wรผnsche Ihnen einen schรถnen Tag!"
- text: "ะกัะบะฐ ััะฟะฐั"
- text: "ะะฐะบะฐั ะฟัะตะบัะฐัะฝะฐั ะฟะพะณะพะดะฐ!"
- text: "ะฏ ะฝะตะฝะฐะฒะธะถั ัะตะฑั ะบะพะทัะป!"
- text: "ะฅะปะตะฑ ะฒัะตะผั ะณะพะปะพะฒะฐ"
- text: "ะะพั ะถะต ัะฑะปัะดะพะบ...."
- text: "Go fuck yoursefl, asshole"
- text: "I don't really like this idea"
- text: "Look at this dickhead tho"
- text: "Usually, she is more open about that"
- text: "Why you have to always fuck everything up????"
- text: "I like this car"
license: other
---
This model was trained for multilingual toxicity labeling. Label_1 means TOXIC, Label_0 means NOT TOXIC.
The model was fine-tuned based off the xlm_roberta_base model for 4 languages: EN, RU, FR, DE
The validation accuracy is 92%.
The model was finetuned on the total sum of 100933k sentences. The train data for English and Russian came from https://github.com/s-nlp/multilingual_detox, French data comprised the translated to French data from https://github.com/s-nlp/multilingual_detox as well as all the French data from the Jigsaw dataset, the German data was similarly composed using translations and semi-manual data collection techniquies, in particular for offensive words and phrases were crawled the dict.cc dictionary (https://www.dict.cc/) and the Reverso Context (https://context.reverso.net/translation/). | 1,707 | [
[
0.00394439697265625,
-0.042999267578125,
0.0245819091796875,
0.0225372314453125,
-0.007198333740234375,
-0.016693115234375,
-0.00884246826171875,
-0.03692626953125,
-0.0067901611328125,
0.040130615234375,
-0.034912109375,
-0.07745361328125,
-0.046905517578125,
... |
darkstorm2150/Protogen_x5.3_Official_Release | 2023-01-27T17:43:57.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"protogen",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | darkstorm2150 | null | null | darkstorm2150/Protogen_x5.3_Official_Release | 104 | 1,514 | diffusers | 2023-01-05T04:31:50 | ---
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- protogen
inference: true
license: creativeml-openrail-m
---
<center><img src="https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release/resolve/main/Protogen%20x5.3-512.png" style="height:400px; border-radius: 7%; border: 10px solid #663380; padding-top:0px;" span title="Protogen x5.3 Raw Output"></center>
<center><h1>Protogen x5.3 (Photorealism) Official Release</h1></center>
<center><p><em>Research Model by <a href="https://instagram.com/officialvictorespinoza">darkstorm2150</a></em></p></center>
</div>
## Table of contents
* [General info](#general-info)
* [Granular Adaptive Learning](#granular-adaptive-learning)
* [Setup](#setup)
* [Space](#space)
* [CompVis](#compvis)
* [Diffusers](#๐งจ-diffusers)
* [Checkpoint Merging Data Reference](#checkpoint-merging-data-reference)
* [License](#license)
## General info
Protogen x5.3 - One Step Closer to Reality by [darkstorm2150](https://instagram.com/officialvictorespinoza)
Protogen was warm-started with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and continued fine-tuned from [darkstorm2150/Protogen_x3.4_Official_Release](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release)
Robodiffusion has been removed and 10% Dreamlike-PhotoReal V.2 added, the result is better sampling at 768px to 1024px of humans and surroundings, The results are immediate!!!
Also this bad boy comes with a license, so do please read it, thank you!
* Model control
Now its recommended that you add nude, naked to your negative prompts, its a horny model, well 10% but still....cant be too careful!
As for realism, you can use this template
modelshoot style, (extremely detailed 8k wallpaper),a medium shot photo of a (what you want here), Intricate, High Detail, dramatic
It should also be very "dreambooth-able", being able to generate high fidelity faces with a little amount of steps (see [dreambooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)).
## Granular Adaptive Learning
Granular adaptive learning is a machine learning technique that focuses on adjusting the learning process at a fine-grained level, rather than making global adjustments to the model. This approach allows the model to adapt to specific patterns or features in the data, rather than making assumptions based on general trends.
Granular adaptive learning can be achieved through techniques such as active learning, which allows the model to select the data it wants to learn from, or through the use of reinforcement learning, where the model receives feedback on its performance and adapts based on that feedback. It can also be achieved through techniques such as online learning where the model adjust itself as it receives more data.
Granular adaptive learning is often used in situations where the data is highly diverse or non-stationary and where the model needs to adapt quickly to changing patterns. This is often the case in dynamic environments such as robotics, financial markets, and natural language processing.
## Setup
To run this model, download the model.ckpt and install it in your "stable-diffusion-webui\models\Stable-diffusion" directory
## Space
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI:
[](https://huggingface.co/spaces/darkstorm2150/Stable-Diffusion-Protogen-webui)
### CompVis
## CKPT
[Download ProtoGen x5.3.ckpt (4.27GB)](https://huggingface.co/darkstorm2150/Protogen_v5.3_Official_Release/blob/main/ProtoGen_X5.3.ckpt)
[Download ProtoGen x5.3-pruned-fp16.ckpt (1.89GB)](https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release/resolve/main/ProtoGen_X5.3-pruned-fp16.ckpt)
## Safetensors
[Download ProtoGen x5.3.safetensors (4.27GB)](https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release/resolve/main/ProtoGen_X5.3.safetensors)
[Download ProtoGen x5.3-pruned-fp16.safetensors (1.89GB)](https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release/resolve/main/ProtoGen_X5.3-pruned-fp16.safetensors)
### ๐งจ Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
import torch
prompt = (
"modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, "
"english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, medieval era, "
"photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, "
"trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski"
)
model_id = "darkstorm2150/Protogen_v5.3_Official_Release"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("./result.jpg")
```
## PENDING DATA FOR MERGE, RPGv2 not accounted..
## Checkpoint Merging Data Reference
<style>
.myTable {
border-collapse:collapse;
}
.myTable th {
background-color:#663380;
color:white;
}
.myTable td, .myTable th {
padding:5px;
border:1px solid #663380;
}
</style>
<table class="myTable">
<tr>
<th>Models</th>
<th>Protogen v2.2 (Anime)</th>
<th>Protogen x3.4 (Photo)</th>
<th>Protogen x5.3 (Photo)</th>
<th>Protogen x5.8 (Sci-fi/Anime)</th>
<th>Protogen x5.9 (Dragon)</th>
<th>Protogen x7.4 (Eclipse)</th>
<th>Protogen x8.0 (Nova)</th>
<th>Protogen x8.6 (Infinity)</th>
</tr>
<tr>
<td>seek_art_mega v1</td>
<td>52.50%</td>
<td>42.76%</td>
<td>42.63%</td>
<td></td>
<td></td>
<td></td>
<td>25.21%</td>
<td>14.83%</td>
</tr>
<tr>
<td>modelshoot v1</td>
<td>30.00%</td>
<td>24.44%</td>
<td>24.37%</td>
<td>2.56%</td>
<td>2.05%</td>
<td>3.48%</td>
<td>22.91%</td>
<td>13.48%</td>
</tr>
<tr>
<td>elldreth v1</td>
<td>12.64%</td>
<td>10.30%</td>
<td>10.23%</td>
<td></td>
<td></td>
<td></td>
<td>6.06%</td>
<td>3.57%</td>
</tr>
<tr>
<td>photoreal v2</td>
<td></td>
<td></td>
<td>10.00%</td>
<td>48.64%</td>
<td>38.91%</td>
<td>66.33%</td>
<td>20.49%</td>
<td>12.06%</td>
</tr>
<tr>
<td>analogdiffusion v1</td>
<td></td>
<td>4.75%</td>
<td>4.50%</td>
<td></td>
<td></td>
<td></td>
<td>1.75%</td>
<td>1.03%</td>
</tr>
<tr>
<td>openjourney v2</td>
<td></td>
<td>4.51%</td>
<td>4.28%</td>
<td></td>
<td></td>
<td>4.75%</td>
<td>2.26%</td>
<td>1.33%</td>
</tr>
<tr>
<td>hassan1.4</td>
<td>2.63%</td>
<td>2.14%</td>
<td>2.13%</td>
<td></td>
<td></td>
<td></td>
<td>1.26%</td>
<td>0.74%</td>
</tr>
<tr>
<td>f222</td>
<td>2.23%</td>
<td>1.82%</td>
<td>1.81%</td>
<td></td>
<td></td>
<td></td>
<td>1.07%</td>
<td>0.63%</td>
</tr>
<tr>
<td>hasdx</td>
<td></td>
<td></td>
<td></td>
<td>20.00%</td>
<td>16.00%</td>
<td>4.07%</td>
<td>5.01%</td>
<td>2.95%</td>
</tr>
<tr>
<td>moistmix</td>
<td></td>
<td></td>
<td></td>
<td>16.00%</td>
<td>12.80%</td>
<td>3.86%</td>
<td>4.08%</td>
<td>2.40%</td>
</tr>
<tr>
<td>roboDiffusion v1</td>
<td></td>
<td>4.29%</td>
<td></td>
<td>12.80%</td>
<td>10.24%</td>
<td>3.67%</td>
<td>4.41%</td>
<td>2.60%</td>
</tr>
<tr>
<td>RPG v3</td>
<td></td>
<td>5.00%</td>
<td></td>
<td></td>
<td>20.00%</td>
<td>4.29%</td>
<td>4.29%</td>
<td>2.52%</td>
</tr>
<tr>
<td>anything&everything</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>4.51%</td>
<td>0.56%</td>
<td>0.33%</td>
</tr>
<tr>
<td>dreamlikediff v1</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>5.0%</td>
<td>0.63%</td>
<td>0.37%</td>
</tr>
<tr>
<td>sci-fidiff v1</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>3.10%</td>
</tr>
<tr>
<td>synthwavepunk v2</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>3.26%</td>
</tr>
<tr>
<td>mashupv2</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>11.51%</td>
</tr>
<tr>
<td>dreamshaper 252</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>4.04%</td>
</tr>
<tr>
<td>comicdiff v2</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>4.25%</td>
</tr>
<tr>
<td>artEros</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>15.00%</td>
</tr>
</table>
## License
By downloading you agree to the terms of these licenses
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">CreativeML Open RAIL-M</a>
<a href="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/LICENSE.md">Dreamlike License</a>
<a href="https://huggingface.co/coreco/seek.art_MEGA/blob/main/LICENSE.txt">Seek Art Mega License</a> | 9,233 | [
[
-0.0487060546875,
-0.048980712890625,
0.0175628662109375,
0.034393310546875,
-0.0111846923828125,
0.0052642822265625,
0.01178741455078125,
-0.035308837890625,
0.027984619140625,
0.00754547119140625,
-0.0452880859375,
-0.0343017578125,
-0.03564453125,
-0.0038... |
digiplay/m3u | 2023-11-01T14:40:17.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/m3u | 3 | 1,514 | diffusers | 2023-07-02T11:36:28 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
***in test in test in test in test in test in test in test***
*Sample images & prompt example :*
8k UHD RAW,photorealistic ,beautiful _your keywords_ ,tiny smile ,close-up ,masterpiece ,realistic ,ultra detailed,




girl ,19y.o, silver white wavy hair, in front ,looking at viewer ,angel wings, (sky),ultra-detailed ,8k,very detailed ,light and shadow ,detailed paint , realistic,

| 1,232 | [
[
-0.042633056640625,
-0.0650634765625,
0.0108489990234375,
0.01151275634765625,
-0.0134429931640625,
0.01241302490234375,
0.0218505859375,
-0.05035400390625,
0.0280914306640625,
0.03814697265625,
-0.0465087890625,
-0.0244903564453125,
-0.022491455078125,
0.01... |
BridgeTower/bridgetower-base-itm-mlm | 2023-01-27T02:12:53.000Z | [
"transformers",
"pytorch",
"bridgetower",
"en",
"dataset:conceptual_captions",
"dataset:sbu_captions",
"dataset:visual_genome",
"dataset:mscoco_captions",
"arxiv:2206.08657",
"arxiv:1504.00325",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | BridgeTower | null | null | BridgeTower/bridgetower-base-itm-mlm | 2 | 1,512 | transformers | 2022-12-08T00:36:43 | ---
language: en
tags:
- bridgetower
license: mit
datasets:
- conceptual_captions
- sbu_captions
- visual_genome
- mscoco_captions
---
# BridgeTower base-itm-mlm model
The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
The model was pretrained on English language using masked language modeling (MLM) and image text matching (ITM)objectives. It was introduced in
[this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in
[this repository](https://github.com/microsoft/BridgeTower).
BridgeTower got accepted to [AAAI'23](https://aaai.org/Conferences/AAAI-23/).
## Model description
The abstract from the paper is the following:
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
## Intended uses & limitations(TODO)
### How to use
Here is how to use this model to perform image and text matching:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs.logits[0,1].item()
```
Here is how to use this model to perform masked language modeling:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
#.a cat looking out of the window.
```
### Limitations and bias
TODO
## Training data
The BridgeTower model was pretrained on four public image-caption datasets:
- [Conceptual Captions(CC)](https://ai.google.com/research/ConceptualCaptions/),
- [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/),
- [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf),
- [Visual Genome](https://visualgenome.org/)
The total number of unique images in the combined data is 4M.
## Training procedure
### Preprocessing
TODO
### Pretraining
The model was pre-trained for 100k steps on 8 NVIDIA A100 GPUs with a batch size of 4096.
The optimizer used was AdamW with a learning rate of 1e-5. No data augmentation was used except for center-crop. The image resolution in pre-training is set to 288 x 288.
## Evaluation results
Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks.
### BibTeX entry and citation info
```bibtex
@article{xu2022bridge,
title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning},
author={Xu, Xiao and Wu, Chenfei and Rosenman, Shachar and Lal, Vasudev and Che, Wanxiang and Duan, Nan},
journal={arXiv preprint arXiv:2206.08657},
year={2022}
}
```
| 5,202 | [
[
-0.0211181640625,
-0.045196533203125,
0.0104217529296875,
0.018951416015625,
-0.0328369140625,
-0.0037403106689453125,
-0.020751953125,
-0.034576416015625,
0.005680084228515625,
0.04986572265625,
-0.033172607421875,
-0.0406494140625,
-0.0526123046875,
0.0122... |
Helsinki-NLP/opus-mt-tc-big-en-tr | 2023-08-16T12:10:49.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"en",
"tr",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-tc-big-en-tr | 6 | 1,511 | transformers | 2022-04-13T15:11:47 | ---
language:
- en
- tr
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-tr
results:
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: flores101-devtest
type: flores_101
args: eng tur devtest
metrics:
- name: BLEU
type: bleu
value: 31.4
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newsdev2016
type: newsdev2016
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 21.9
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 42.3
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newstest2016
type: wmt-2016-news
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 23.4
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newstest2017
type: wmt-2017-news
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 25.4
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newstest2018
type: wmt-2018-news
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 22.6
---
# opus-mt-tc-big-en-tr
Neural machine translation model for translating from English (en) to Turkish (tr).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT โ Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge โ Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): tur
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT eng-tur README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"I know Tom didn't want to eat that.",
"On Sundays, we would get up early and go fishing."
]
model_name = "pytorch-models/opus-mt-tc-big-en-tr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Tom'un bunu yemek istemediฤini biliyorum.
# Pazar gรผnleri erkenden kalkฤฑp balฤฑk tutmaya giderdik.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-tr")
print(pipe("I know Tom didn't want to eat that."))
# expected output: Tom'un bunu yemek istemediฤini biliyorum.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-tur | tatoeba-test-v2021-08-07 | 0.68726 | 42.3 | 13907 | 84364 |
| eng-tur | flores101-devtest | 0.62829 | 31.4 | 1012 | 20253 |
| eng-tur | newsdev2016 | 0.58947 | 21.9 | 1001 | 15958 |
| eng-tur | newstest2016 | 0.57624 | 23.4 | 3000 | 50782 |
| eng-tur | newstest2017 | 0.58858 | 25.4 | 3007 | 51977 |
| eng-tur | newstest2018 | 0.57848 | 22.6 | 3000 | 53731 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unionโs Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unionโs Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:11:39 EEST 2022
* port machine: LM0-400-22516.local
| 7,138 | [
[
-0.0261077880859375,
-0.04449462890625,
0.016448974609375,
0.01873779296875,
-0.0364990234375,
-0.0203857421875,
-0.038604736328125,
-0.02264404296875,
0.0135498046875,
0.0290374755859375,
-0.031402587890625,
-0.050079345703125,
-0.0447998046875,
0.028244018... |
nlpodyssey/bert-multilingual-uncased-geo-countries-headlines | 2023-04-13T21:40:29.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en... | text-classification | nlpodyssey | null | null | nlpodyssey/bert-multilingual-uncased-geo-countries-headlines | 4 | 1,511 | transformers | 2022-09-24T11:17:01 | ---
license: apache-2.0
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lm
- mk
- mg
- ms
- ml
- mr
- min
- ne
- new
- nb
- nn
- oc
- fa
- pms
- pl
- pt
- pa
- ro
- ru
- sco
- sr
- hr
- scn
- sk
- sl
- aze
- es
- su
- sw
- sv
- tl
- tg
- ta
- tt
- te
- tr
- uk
- ud
- uz
- vi
- vo
- war
- cy
- fry
- pnb
- yo
tags:
- text-classification
---
# bert-multilingual-uncased-geo-countries-headlines
This a bert-base-multilingual-uncased model fine-tuned to perform geographic classification of news headlines.
It predicts the ISO 3166-1 alpha-3 country codes.
### Authors
The [NLP Odyssey](https://github.com/nlpodyssey/) Authors (Matteo Grella, Marco Nicola) | 920 | [
[
-0.03826904296875,
-0.03564453125,
0.04034423828125,
0.050079345703125,
-0.02423095703125,
0.00669097900390625,
-0.0252838134765625,
-0.040618896484375,
0.03558349609375,
0.042388916015625,
-0.050537109375,
-0.06964111328125,
-0.046661376953125,
0.0042877197... |
deepfile/embedder-100p | 2023-09-25T14:04:29.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | deepfile | null | null | deepfile/embedder-100p | 0 | 1,511 | transformers | 2023-07-24T11:02:34 | ---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: embedder-100p
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 67.05970149253731
- type: ap
value: 30.376473854922846
- type: f1
value: 61.30474831792133
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 70.40857500000001
- type: ap
value: 64.61611594622543
- type: f1
value: 70.28136292034776
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 33.214
- type: f1
value: 33.123322451005755
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.311999999999998
- type: map_at_10
value: 42.760999999999996
- type: map_at_100
value: 43.691
- type: map_at_1000
value: 43.698
- type: map_at_3
value: 37.091
- type: map_at_5
value: 40.398
- type: mrr_at_1
value: 28.165000000000003
- type: mrr_at_10
value: 43.05
- type: mrr_at_100
value: 43.994
- type: mrr_at_1000
value: 44.0
- type: mrr_at_3
value: 37.376
- type: mrr_at_5
value: 40.665
- type: ndcg_at_1
value: 27.311999999999998
- type: ndcg_at_10
value: 52.035
- type: ndcg_at_100
value: 55.891000000000005
- type: ndcg_at_1000
value: 56.043
- type: ndcg_at_3
value: 40.38
- type: ndcg_at_5
value: 46.364
- type: precision_at_1
value: 27.311999999999998
- type: precision_at_10
value: 8.193
- type: precision_at_100
value: 0.985
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.643
- type: precision_at_5
value: 12.902
- type: recall_at_1
value: 27.311999999999998
- type: recall_at_10
value: 81.935
- type: recall_at_100
value: 98.506
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 49.929
- type: recall_at_5
value: 64.509
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 42.899186071418946
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.44851270109027
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.05081337796836
- type: mrr
value: 73.87218045112782
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 80.06755261269532
- type: cos_sim_spearman
value: 75.31798123153732
- type: euclidean_pearson
value: 77.70454789166935
- type: euclidean_spearman
value: 74.07578425253767
- type: manhattan_pearson
value: 77.18021593857006
- type: manhattan_spearman
value: 74.10590542079663
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.73051948051948
- type: f1
value: 82.61992011434658
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.236246179832975
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.75182197424716
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.016999999999996
- type: map_at_10
value: 39.519999999999996
- type: map_at_100
value: 40.987
- type: map_at_1000
value: 41.124
- type: map_at_3
value: 36.120000000000005
- type: map_at_5
value: 38.071
- type: mrr_at_1
value: 35.05
- type: mrr_at_10
value: 45.589
- type: mrr_at_100
value: 46.322
- type: mrr_at_1000
value: 46.366
- type: mrr_at_3
value: 43.108999999999995
- type: mrr_at_5
value: 44.754
- type: ndcg_at_1
value: 35.05
- type: ndcg_at_10
value: 46.119
- type: ndcg_at_100
value: 51.512
- type: ndcg_at_1000
value: 53.471000000000004
- type: ndcg_at_3
value: 41.3
- type: ndcg_at_5
value: 43.657000000000004
- type: precision_at_1
value: 35.05
- type: precision_at_10
value: 9.156
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.201
- type: precision_at_3
value: 20.552999999999997
- type: precision_at_5
value: 14.793000000000001
- type: recall_at_1
value: 28.016999999999996
- type: recall_at_10
value: 58.4
- type: recall_at_100
value: 81.67699999999999
- type: recall_at_1000
value: 94.119
- type: recall_at_3
value: 44.293
- type: recall_at_5
value: 51.056000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.46
- type: map_at_10
value: 33.194
- type: map_at_100
value: 34.367999999999995
- type: map_at_1000
value: 34.514
- type: map_at_3
value: 30.134
- type: map_at_5
value: 31.796999999999997
- type: mrr_at_1
value: 29.744999999999997
- type: mrr_at_10
value: 38.213
- type: mrr_at_100
value: 38.942
- type: mrr_at_1000
value: 38.993
- type: mrr_at_3
value: 35.435
- type: mrr_at_5
value: 37.053000000000004
- type: ndcg_at_1
value: 29.744999999999997
- type: ndcg_at_10
value: 38.868
- type: ndcg_at_100
value: 43.562
- type: ndcg_at_1000
value: 46.036
- type: ndcg_at_3
value: 33.93
- type: ndcg_at_5
value: 36.175000000000004
- type: precision_at_1
value: 29.744999999999997
- type: precision_at_10
value: 7.605
- type: precision_at_100
value: 1.291
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 16.582
- type: precision_at_5
value: 12.051
- type: recall_at_1
value: 23.46
- type: recall_at_10
value: 50.080000000000005
- type: recall_at_100
value: 70.161
- type: recall_at_1000
value: 86.009
- type: recall_at_3
value: 36.229
- type: recall_at_5
value: 42.055
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.515
- type: map_at_10
value: 47.028999999999996
- type: map_at_100
value: 48.104
- type: map_at_1000
value: 48.171
- type: map_at_3
value: 44.224000000000004
- type: map_at_5
value: 45.795
- type: mrr_at_1
value: 40.627
- type: mrr_at_10
value: 50.251000000000005
- type: mrr_at_100
value: 51.001
- type: mrr_at_1000
value: 51.035
- type: mrr_at_3
value: 48.046
- type: mrr_at_5
value: 49.262
- type: ndcg_at_1
value: 40.627
- type: ndcg_at_10
value: 52.5
- type: ndcg_at_100
value: 56.967999999999996
- type: ndcg_at_1000
value: 58.414
- type: ndcg_at_3
value: 47.725
- type: ndcg_at_5
value: 49.932
- type: precision_at_1
value: 40.627
- type: precision_at_10
value: 8.464
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 21.526
- type: precision_at_5
value: 14.545
- type: recall_at_1
value: 35.515
- type: recall_at_10
value: 65.436
- type: recall_at_100
value: 85.06
- type: recall_at_1000
value: 95.50999999999999
- type: recall_at_3
value: 52.339
- type: recall_at_5
value: 57.894999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.75
- type: map_at_10
value: 27.639999999999997
- type: map_at_100
value: 28.612
- type: map_at_1000
value: 28.716
- type: map_at_3
value: 25.186999999999998
- type: map_at_5
value: 26.558999999999997
- type: mrr_at_1
value: 21.582
- type: mrr_at_10
value: 29.637999999999998
- type: mrr_at_100
value: 30.514000000000003
- type: mrr_at_1000
value: 30.592999999999996
- type: mrr_at_3
value: 27.326
- type: mrr_at_5
value: 28.58
- type: ndcg_at_1
value: 21.582
- type: ndcg_at_10
value: 32.301
- type: ndcg_at_100
value: 37.217
- type: ndcg_at_1000
value: 39.951
- type: ndcg_at_3
value: 27.483999999999998
- type: ndcg_at_5
value: 29.754
- type: precision_at_1
value: 21.582
- type: precision_at_10
value: 5.175
- type: precision_at_100
value: 0.803
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 11.940000000000001
- type: precision_at_5
value: 8.52
- type: recall_at_1
value: 19.75
- type: recall_at_10
value: 44.783
- type: recall_at_100
value: 67.673
- type: recall_at_1000
value: 88.676
- type: recall_at_3
value: 31.740000000000002
- type: recall_at_5
value: 37.128
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.791
- type: map_at_10
value: 18.782
- type: map_at_100
value: 19.939
- type: map_at_1000
value: 20.083000000000002
- type: map_at_3
value: 16.564
- type: map_at_5
value: 17.592
- type: mrr_at_1
value: 15.174000000000001
- type: mrr_at_10
value: 22.448999999999998
- type: mrr_at_100
value: 23.430999999999997
- type: mrr_at_1000
value: 23.521
- type: mrr_at_3
value: 20.025000000000002
- type: mrr_at_5
value: 21.238
- type: ndcg_at_1
value: 15.174000000000001
- type: ndcg_at_10
value: 23.411
- type: ndcg_at_100
value: 29.365999999999996
- type: ndcg_at_1000
value: 32.893
- type: ndcg_at_3
value: 18.999
- type: ndcg_at_5
value: 20.721
- type: precision_at_1
value: 15.174000000000001
- type: precision_at_10
value: 4.714
- type: precision_at_100
value: 0.903
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 9.494
- type: precision_at_5
value: 6.94
- type: recall_at_1
value: 11.791
- type: recall_at_10
value: 33.986
- type: recall_at_100
value: 60.833999999999996
- type: recall_at_1000
value: 86.291
- type: recall_at_3
value: 21.983
- type: recall_at_5
value: 26.313
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.041999999999998
- type: map_at_10
value: 35.61
- type: map_at_100
value: 37.002
- type: map_at_1000
value: 37.120999999999995
- type: map_at_3
value: 31.982
- type: map_at_5
value: 34.007
- type: mrr_at_1
value: 30.895
- type: mrr_at_10
value: 41.095
- type: mrr_at_100
value: 41.983
- type: mrr_at_1000
value: 42.031
- type: mrr_at_3
value: 38.114
- type: mrr_at_5
value: 39.798
- type: ndcg_at_1
value: 30.895
- type: ndcg_at_10
value: 42.138999999999996
- type: ndcg_at_100
value: 47.741
- type: ndcg_at_1000
value: 49.931
- type: ndcg_at_3
value: 36.179
- type: ndcg_at_5
value: 38.998
- type: precision_at_1
value: 30.895
- type: precision_at_10
value: 8.065
- type: precision_at_100
value: 1.274
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 17.645
- type: precision_at_5
value: 12.955
- type: recall_at_1
value: 25.041999999999998
- type: recall_at_10
value: 56.169999999999995
- type: recall_at_100
value: 79.3
- type: recall_at_1000
value: 93.618
- type: recall_at_3
value: 39.359
- type: recall_at_5
value: 46.650000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.854
- type: map_at_10
value: 32.088
- type: map_at_100
value: 33.511
- type: map_at_1000
value: 33.629999999999995
- type: map_at_3
value: 29.079
- type: map_at_5
value: 30.663
- type: mrr_at_1
value: 29.110000000000003
- type: mrr_at_10
value: 36.902
- type: mrr_at_100
value: 37.927
- type: mrr_at_1000
value: 37.99
- type: mrr_at_3
value: 34.285
- type: mrr_at_5
value: 35.757
- type: ndcg_at_1
value: 29.110000000000003
- type: ndcg_at_10
value: 37.429
- type: ndcg_at_100
value: 43.59
- type: ndcg_at_1000
value: 46.207
- type: ndcg_at_3
value: 32.394
- type: ndcg_at_5
value: 34.562
- type: precision_at_1
value: 29.110000000000003
- type: precision_at_10
value: 6.895
- type: precision_at_100
value: 1.176
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 15.107000000000001
- type: precision_at_5
value: 10.982
- type: recall_at_1
value: 23.854
- type: recall_at_10
value: 48.589
- type: recall_at_100
value: 74.78
- type: recall_at_1000
value: 92.836
- type: recall_at_3
value: 34.489
- type: recall_at_5
value: 40.182
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.159999999999997
- type: map_at_10
value: 29.421333333333337
- type: map_at_100
value: 30.61058333333333
- type: map_at_1000
value: 30.742416666666667
- type: map_at_3
value: 26.745833333333337
- type: map_at_5
value: 28.20291666666667
- type: mrr_at_1
value: 25.308249999999997
- type: mrr_at_10
value: 33.21275
- type: mrr_at_100
value: 34.09341666666666
- type: mrr_at_1000
value: 34.163000000000004
- type: mrr_at_3
value: 30.81675
- type: mrr_at_5
value: 32.16816666666667
- type: ndcg_at_1
value: 25.308249999999997
- type: ndcg_at_10
value: 34.46208333333333
- type: ndcg_at_100
value: 39.77183333333334
- type: ndcg_at_1000
value: 42.461916666666674
- type: ndcg_at_3
value: 29.797916666666662
- type: ndcg_at_5
value: 31.935166666666664
- type: precision_at_1
value: 25.308249999999997
- type: precision_at_10
value: 6.260916666666666
- type: precision_at_100
value: 1.0716666666666665
- type: precision_at_1000
value: 0.15025000000000002
- type: precision_at_3
value: 13.926916666666667
- type: precision_at_5
value: 10.043916666666664
- type: recall_at_1
value: 21.159999999999997
- type: recall_at_10
value: 45.61408333333334
- type: recall_at_100
value: 69.26583333333332
- type: recall_at_1000
value: 88.22541666666667
- type: recall_at_3
value: 32.67691666666666
- type: recall_at_5
value: 38.12716666666667
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.293
- type: map_at_10
value: 25.316
- type: map_at_100
value: 26.211000000000002
- type: map_at_1000
value: 26.316
- type: map_at_3
value: 23.200000000000003
- type: map_at_5
value: 24.538
- type: mrr_at_1
value: 21.471999999999998
- type: mrr_at_10
value: 27.583000000000002
- type: mrr_at_100
value: 28.371000000000002
- type: mrr_at_1000
value: 28.455000000000002
- type: mrr_at_3
value: 25.613000000000003
- type: mrr_at_5
value: 26.863
- type: ndcg_at_1
value: 21.471999999999998
- type: ndcg_at_10
value: 28.925
- type: ndcg_at_100
value: 33.489000000000004
- type: ndcg_at_1000
value: 36.313
- type: ndcg_at_3
value: 25.003999999999998
- type: ndcg_at_5
value: 27.232
- type: precision_at_1
value: 21.471999999999998
- type: precision_at_10
value: 4.693
- type: precision_at_100
value: 0.762
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 10.838000000000001
- type: precision_at_5
value: 7.945
- type: recall_at_1
value: 19.293
- type: recall_at_10
value: 37.63
- type: recall_at_100
value: 58.818000000000005
- type: recall_at_1000
value: 80.026
- type: recall_at_3
value: 27.389000000000003
- type: recall_at_5
value: 32.71
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.087
- type: map_at_10
value: 17.777
- type: map_at_100
value: 18.837
- type: map_at_1000
value: 18.973000000000003
- type: map_at_3
value: 15.956999999999999
- type: map_at_5
value: 16.902
- type: mrr_at_1
value: 14.763000000000002
- type: mrr_at_10
value: 20.8
- type: mrr_at_100
value: 21.757
- type: mrr_at_1000
value: 21.85
- type: mrr_at_3
value: 18.989
- type: mrr_at_5
value: 19.905
- type: ndcg_at_1
value: 14.763000000000002
- type: ndcg_at_10
value: 21.512999999999998
- type: ndcg_at_100
value: 26.822000000000003
- type: ndcg_at_1000
value: 30.270999999999997
- type: ndcg_at_3
value: 18.16
- type: ndcg_at_5
value: 19.573999999999998
- type: precision_at_1
value: 14.763000000000002
- type: precision_at_10
value: 4.043
- type: precision_at_100
value: 0.7979999999999999
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 8.741
- type: precision_at_5
value: 6.325
- type: recall_at_1
value: 12.087
- type: recall_at_10
value: 29.805
- type: recall_at_100
value: 53.787
- type: recall_at_1000
value: 78.884
- type: recall_at_3
value: 20.497
- type: recall_at_5
value: 24.148
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.099
- type: map_at_10
value: 29.487999999999996
- type: map_at_100
value: 30.553
- type: map_at_1000
value: 30.669999999999998
- type: map_at_3
value: 27.250000000000004
- type: map_at_5
value: 28.416000000000004
- type: mrr_at_1
value: 26.026
- type: mrr_at_10
value: 33.238
- type: mrr_at_100
value: 34.114
- type: mrr_at_1000
value: 34.188
- type: mrr_at_3
value: 31.157
- type: mrr_at_5
value: 32.262
- type: ndcg_at_1
value: 26.026
- type: ndcg_at_10
value: 34.036
- type: ndcg_at_100
value: 39.443
- type: ndcg_at_1000
value: 42.181999999999995
- type: ndcg_at_3
value: 29.942
- type: ndcg_at_5
value: 31.682
- type: precision_at_1
value: 26.026
- type: precision_at_10
value: 5.7090000000000005
- type: precision_at_100
value: 0.9560000000000001
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 13.495
- type: precision_at_5
value: 9.366
- type: recall_at_1
value: 22.099
- type: recall_at_10
value: 44.098
- type: recall_at_100
value: 68.726
- type: recall_at_1000
value: 87.992
- type: recall_at_3
value: 32.902
- type: recall_at_5
value: 37.389
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.195
- type: map_at_10
value: 27.298000000000002
- type: map_at_100
value: 28.875
- type: map_at_1000
value: 29.152
- type: map_at_3
value: 24.595
- type: map_at_5
value: 25.926
- type: mrr_at_1
value: 23.913
- type: mrr_at_10
value: 31.696999999999996
- type: mrr_at_100
value: 32.728
- type: mrr_at_1000
value: 32.808
- type: mrr_at_3
value: 29.249000000000002
- type: mrr_at_5
value: 30.623
- type: ndcg_at_1
value: 23.913
- type: ndcg_at_10
value: 32.745999999999995
- type: ndcg_at_100
value: 38.663
- type: ndcg_at_1000
value: 41.984
- type: ndcg_at_3
value: 28.272000000000002
- type: ndcg_at_5
value: 30.184
- type: precision_at_1
value: 23.913
- type: precision_at_10
value: 6.601
- type: precision_at_100
value: 1.462
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 13.439
- type: precision_at_5
value: 10.079
- type: recall_at_1
value: 19.195
- type: recall_at_10
value: 42.933
- type: recall_at_100
value: 69.762
- type: recall_at_1000
value: 91.57
- type: recall_at_3
value: 30.302
- type: recall_at_5
value: 35.17
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.816999999999998
- type: map_at_10
value: 19.314
- type: map_at_100
value: 20.328
- type: map_at_1000
value: 20.439
- type: map_at_3
value: 16.658
- type: map_at_5
value: 18.169
- type: mrr_at_1
value: 15.342
- type: mrr_at_10
value: 21.098
- type: mrr_at_100
value: 22.031
- type: mrr_at_1000
value: 22.126
- type: mrr_at_3
value: 18.453
- type: mrr_at_5
value: 19.923
- type: ndcg_at_1
value: 15.342
- type: ndcg_at_10
value: 23.558
- type: ndcg_at_100
value: 28.889
- type: ndcg_at_1000
value: 31.89
- type: ndcg_at_3
value: 18.186
- type: ndcg_at_5
value: 20.751
- type: precision_at_1
value: 15.342
- type: precision_at_10
value: 4.011
- type: precision_at_100
value: 0.749
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 7.763000000000001
- type: precision_at_5
value: 6.026
- type: recall_at_1
value: 13.816999999999998
- type: recall_at_10
value: 35.459
- type: recall_at_100
value: 60.612
- type: recall_at_1000
value: 83.174
- type: recall_at_3
value: 20.601
- type: recall_at_5
value: 26.83
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.770999999999999
- type: map_at_10
value: 14.948
- type: map_at_100
value: 16.668
- type: map_at_1000
value: 16.865
- type: map_at_3
value: 12.264
- type: map_at_5
value: 13.623
- type: mrr_at_1
value: 18.502
- type: mrr_at_10
value: 28.782000000000004
- type: mrr_at_100
value: 29.875
- type: mrr_at_1000
value: 29.929
- type: mrr_at_3
value: 25.147000000000002
- type: mrr_at_5
value: 27.322000000000003
- type: ndcg_at_1
value: 18.502
- type: ndcg_at_10
value: 21.815
- type: ndcg_at_100
value: 29.174
- type: ndcg_at_1000
value: 32.946999999999996
- type: ndcg_at_3
value: 16.833000000000002
- type: ndcg_at_5
value: 18.792
- type: precision_at_1
value: 18.502
- type: precision_at_10
value: 7.016
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 12.421
- type: precision_at_5
value: 10.15
- type: recall_at_1
value: 8.770999999999999
- type: recall_at_10
value: 27.542
- type: recall_at_100
value: 53.481
- type: recall_at_1000
value: 74.67399999999999
- type: recall_at_3
value: 15.986
- type: recall_at_5
value: 20.669
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.0249999999999995
- type: map_at_10
value: 11.924
- type: map_at_100
value: 15.801000000000002
- type: map_at_1000
value: 16.878999999999998
- type: map_at_3
value: 9.031
- type: map_at_5
value: 10.181
- type: mrr_at_1
value: 48.0
- type: mrr_at_10
value: 56.928
- type: mrr_at_100
value: 57.619
- type: mrr_at_1000
value: 57.646
- type: mrr_at_3
value: 55.25
- type: mrr_at_5
value: 55.974999999999994
- type: ndcg_at_1
value: 36.875
- type: ndcg_at_10
value: 26.508
- type: ndcg_at_100
value: 29.692
- type: ndcg_at_1000
value: 36.658
- type: ndcg_at_3
value: 30.764000000000003
- type: ndcg_at_5
value: 28.049000000000003
- type: precision_at_1
value: 48.0
- type: precision_at_10
value: 21.175
- type: precision_at_100
value: 6.535
- type: precision_at_1000
value: 1.6230000000000002
- type: precision_at_3
value: 34.75
- type: precision_at_5
value: 27.700000000000003
- type: recall_at_1
value: 6.0249999999999995
- type: recall_at_10
value: 16.454
- type: recall_at_100
value: 35.026
- type: recall_at_1000
value: 58.031
- type: recall_at_3
value: 10.058
- type: recall_at_5
value: 12.145999999999999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 43.470000000000006
- type: f1
value: 39.27142511079909
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.468
- type: map_at_10
value: 49.652
- type: map_at_100
value: 50.314
- type: map_at_1000
value: 50.346999999999994
- type: map_at_3
value: 46.592
- type: map_at_5
value: 48.553000000000004
- type: mrr_at_1
value: 40.384
- type: mrr_at_10
value: 53.03099999999999
- type: mrr_at_100
value: 53.629000000000005
- type: mrr_at_1000
value: 53.65299999999999
- type: mrr_at_3
value: 49.967
- type: mrr_at_5
value: 51.951
- type: ndcg_at_1
value: 40.384
- type: ndcg_at_10
value: 56.318
- type: ndcg_at_100
value: 59.43000000000001
- type: ndcg_at_1000
value: 60.266
- type: ndcg_at_3
value: 50.341
- type: ndcg_at_5
value: 53.756
- type: precision_at_1
value: 40.384
- type: precision_at_10
value: 8.062999999999999
- type: precision_at_100
value: 0.972
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 20.897
- type: precision_at_5
value: 14.374
- type: recall_at_1
value: 37.468
- type: recall_at_10
value: 73.68900000000001
- type: recall_at_100
value: 87.844
- type: recall_at_1000
value: 94.098
- type: recall_at_3
value: 57.768
- type: recall_at_5
value: 65.979
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.071
- type: map_at_10
value: 23.455000000000002
- type: map_at_100
value: 25.358999999999998
- type: map_at_1000
value: 25.55
- type: map_at_3
value: 20.164
- type: map_at_5
value: 21.654999999999998
- type: mrr_at_1
value: 28.395
- type: mrr_at_10
value: 37.21
- type: mrr_at_100
value: 38.086999999999996
- type: mrr_at_1000
value: 38.145
- type: mrr_at_3
value: 34.336
- type: mrr_at_5
value: 35.795
- type: ndcg_at_1
value: 28.395
- type: ndcg_at_10
value: 30.595
- type: ndcg_at_100
value: 37.885000000000005
- type: ndcg_at_1000
value: 41.55
- type: ndcg_at_3
value: 26.858999999999998
- type: ndcg_at_5
value: 27.528999999999996
- type: precision_at_1
value: 28.395
- type: precision_at_10
value: 8.92
- type: precision_at_100
value: 1.6389999999999998
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 18.004
- type: precision_at_5
value: 13.302
- type: recall_at_1
value: 14.071
- type: recall_at_10
value: 37.635000000000005
- type: recall_at_100
value: 65.18599999999999
- type: recall_at_1000
value: 87.58399999999999
- type: recall_at_3
value: 24.490000000000002
- type: recall_at_5
value: 28.621999999999996
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.659
- type: map_at_10
value: 33.622
- type: map_at_100
value: 34.488
- type: map_at_1000
value: 34.58
- type: map_at_3
value: 31.317
- type: map_at_5
value: 32.689
- type: mrr_at_1
value: 49.318
- type: mrr_at_10
value: 57.028999999999996
- type: mrr_at_100
value: 57.567
- type: mrr_at_1000
value: 57.603
- type: mrr_at_3
value: 55.152
- type: mrr_at_5
value: 56.289
- type: ndcg_at_1
value: 49.318
- type: ndcg_at_10
value: 42.091
- type: ndcg_at_100
value: 45.812999999999995
- type: ndcg_at_1000
value: 47.902
- type: ndcg_at_3
value: 38.012
- type: ndcg_at_5
value: 40.160000000000004
- type: precision_at_1
value: 49.318
- type: precision_at_10
value: 8.921
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.147
- type: precision_at_3
value: 23.655
- type: precision_at_5
value: 15.897
- type: recall_at_1
value: 24.659
- type: recall_at_10
value: 44.605
- type: recall_at_100
value: 59.453
- type: recall_at_1000
value: 73.40299999999999
- type: recall_at_3
value: 35.483
- type: recall_at_5
value: 39.743
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 67.2992
- type: ap
value: 61.82215741645874
- type: f1
value: 67.04790333380426
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 13.635
- type: map_at_10
value: 22.412000000000003
- type: map_at_100
value: 23.622
- type: map_at_1000
value: 23.707
- type: map_at_3
value: 19.368
- type: map_at_5
value: 21.095
- type: mrr_at_1
value: 14.04
- type: mrr_at_10
value: 22.858
- type: mrr_at_100
value: 24.049
- type: mrr_at_1000
value: 24.127000000000002
- type: mrr_at_3
value: 19.852
- type: mrr_at_5
value: 21.552
- type: ndcg_at_1
value: 14.04
- type: ndcg_at_10
value: 27.676000000000002
- type: ndcg_at_100
value: 33.917
- type: ndcg_at_1000
value: 36.217
- type: ndcg_at_3
value: 21.432000000000002
- type: ndcg_at_5
value: 24.519
- type: precision_at_1
value: 14.04
- type: precision_at_10
value: 4.585999999999999
- type: precision_at_100
value: 0.776
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 9.298
- type: precision_at_5
value: 7.135
- type: recall_at_1
value: 13.635
- type: recall_at_10
value: 44.015
- type: recall_at_100
value: 73.756
- type: recall_at_1000
value: 91.743
- type: recall_at_3
value: 26.941
- type: recall_at_5
value: 34.378
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.81714546283631
- type: f1
value: 91.67516531750526
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.69904240766073
- type: f1
value: 57.9559746458099
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.76866173503699
- type: f1
value: 69.95643410077002
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.85137861466038
- type: f1
value: 77.66496420028315
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.646200212660744
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.57381797665868
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.54815546178676
- type: mrr
value: 31.40311212966208
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.005
- type: map_at_10
value: 8.125
- type: map_at_100
value: 11.439
- type: map_at_1000
value: 12.908
- type: map_at_3
value: 5.299
- type: map_at_5
value: 6.654
- type: mrr_at_1
value: 33.745999999999995
- type: mrr_at_10
value: 43.513000000000005
- type: mrr_at_100
value: 44.330999999999996
- type: mrr_at_1000
value: 44.388
- type: mrr_at_3
value: 41.28
- type: mrr_at_5
value: 42.766
- type: ndcg_at_1
value: 31.889
- type: ndcg_at_10
value: 26.432
- type: ndcg_at_100
value: 26.191
- type: ndcg_at_1000
value: 35.413
- type: ndcg_at_3
value: 29.625
- type: ndcg_at_5
value: 28.588
- type: precision_at_1
value: 33.745999999999995
- type: precision_at_10
value: 21.146
- type: precision_at_100
value: 7.736999999999999
- type: precision_at_1000
value: 2.08
- type: precision_at_3
value: 29.102
- type: precision_at_5
value: 26.316
- type: recall_at_1
value: 3.005
- type: recall_at_10
value: 12.29
- type: recall_at_100
value: 30.06
- type: recall_at_1000
value: 63.148
- type: recall_at_3
value: 6.587
- type: recall_at_5
value: 9.095
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.839000000000002
- type: map_at_10
value: 31.424999999999997
- type: map_at_100
value: 32.641999999999996
- type: map_at_1000
value: 32.704
- type: map_at_3
value: 27.742
- type: map_at_5
value: 29.854999999999997
- type: mrr_at_1
value: 22.451
- type: mrr_at_10
value: 33.632
- type: mrr_at_100
value: 34.653
- type: mrr_at_1000
value: 34.699000000000005
- type: mrr_at_3
value: 30.427
- type: mrr_at_5
value: 32.263
- type: ndcg_at_1
value: 22.422
- type: ndcg_at_10
value: 37.929
- type: ndcg_at_100
value: 43.667
- type: ndcg_at_1000
value: 45.231
- type: ndcg_at_3
value: 30.814999999999998
- type: ndcg_at_5
value: 34.379
- type: precision_at_1
value: 22.422
- type: precision_at_10
value: 6.59
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.301
- type: precision_at_5
value: 10.626
- type: recall_at_1
value: 19.839000000000002
- type: recall_at_10
value: 55.769999999999996
- type: recall_at_100
value: 81.733
- type: recall_at_1000
value: 93.559
- type: recall_at_3
value: 37.078
- type: recall_at_5
value: 45.318999999999996
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.534
- type: map_at_10
value: 81.449
- type: map_at_100
value: 82.15400000000001
- type: map_at_1000
value: 82.173
- type: map_at_3
value: 78.412
- type: map_at_5
value: 80.268
- type: mrr_at_1
value: 77.77
- type: mrr_at_10
value: 84.60499999999999
- type: mrr_at_100
value: 84.765
- type: mrr_at_1000
value: 84.76700000000001
- type: mrr_at_3
value: 83.493
- type: mrr_at_5
value: 84.221
- type: ndcg_at_1
value: 77.79
- type: ndcg_at_10
value: 85.555
- type: ndcg_at_100
value: 87.105
- type: ndcg_at_1000
value: 87.261
- type: ndcg_at_3
value: 82.401
- type: ndcg_at_5
value: 84.071
- type: precision_at_1
value: 77.79
- type: precision_at_10
value: 13.104
- type: precision_at_100
value: 1.5190000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.157000000000004
- type: precision_at_5
value: 23.86
- type: recall_at_1
value: 67.534
- type: recall_at_10
value: 93.573
- type: recall_at_100
value: 99.10799999999999
- type: recall_at_1000
value: 99.911
- type: recall_at_3
value: 84.575
- type: recall_at_5
value: 89.251
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 50.622402916164575
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 54.43689895218044
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.723
- type: map_at_10
value: 9.524000000000001
- type: map_at_100
value: 11.407
- type: map_at_1000
value: 11.721
- type: map_at_3
value: 6.678000000000001
- type: map_at_5
value: 7.881
- type: mrr_at_1
value: 18.2
- type: mrr_at_10
value: 28.349999999999998
- type: mrr_at_100
value: 29.528
- type: mrr_at_1000
value: 29.601
- type: mrr_at_3
value: 25.15
- type: mrr_at_5
value: 26.765
- type: ndcg_at_1
value: 18.2
- type: ndcg_at_10
value: 16.603
- type: ndcg_at_100
value: 24.331
- type: ndcg_at_1000
value: 30.086000000000002
- type: ndcg_at_3
value: 15.151
- type: ndcg_at_5
value: 13.199
- type: precision_at_1
value: 18.2
- type: precision_at_10
value: 8.86
- type: precision_at_100
value: 2.012
- type: precision_at_1000
value: 0.33999999999999997
- type: precision_at_3
value: 14.2
- type: precision_at_5
value: 11.559999999999999
- type: recall_at_1
value: 3.723
- type: recall_at_10
value: 17.965
- type: recall_at_100
value: 40.803
- type: recall_at_1000
value: 69.053
- type: recall_at_3
value: 8.633000000000001
- type: recall_at_5
value: 11.722000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.92797679109452
- type: cos_sim_spearman
value: 80.91205372065706
- type: euclidean_pearson
value: 83.1339233055303
- type: euclidean_spearman
value: 80.80406858672507
- type: manhattan_pearson
value: 83.023350668501
- type: manhattan_spearman
value: 80.79924041758802
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.40179876416202
- type: cos_sim_spearman
value: 76.97735281189986
- type: euclidean_pearson
value: 81.78242131839902
- type: euclidean_spearman
value: 75.2853626575815
- type: manhattan_pearson
value: 81.38214640501
- type: manhattan_spearman
value: 74.96725680962342
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.38943723638555
- type: cos_sim_spearman
value: 82.62953855483207
- type: euclidean_pearson
value: 82.4417464172415
- type: euclidean_spearman
value: 82.8241086805702
- type: manhattan_pearson
value: 82.05925934320744
- type: manhattan_spearman
value: 82.44019953304266
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.56920959786761
- type: cos_sim_spearman
value: 77.83933203825715
- type: euclidean_pearson
value: 81.34174603327101
- type: euclidean_spearman
value: 78.05064087128034
- type: manhattan_pearson
value: 81.1754246859513
- type: manhattan_spearman
value: 77.8965324094323
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.70673290528633
- type: cos_sim_spearman
value: 85.918072169933
- type: euclidean_pearson
value: 85.49668339564212
- type: euclidean_spearman
value: 86.07562791847965
- type: manhattan_pearson
value: 85.46112200749786
- type: manhattan_spearman
value: 86.06360174588102
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 78.57362584144626
- type: cos_sim_spearman
value: 80.68461073524229
- type: euclidean_pearson
value: 81.86974700030184
- type: euclidean_spearman
value: 81.9556672243023
- type: manhattan_pearson
value: 81.58501319903948
- type: manhattan_spearman
value: 81.65934304491222
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.0517739143147
- type: cos_sim_spearman
value: 88.99264497015508
- type: euclidean_pearson
value: 88.60143851830212
- type: euclidean_spearman
value: 88.417049574577
- type: manhattan_pearson
value: 88.71275731832226
- type: manhattan_spearman
value: 88.62174073802386
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.92377536840165
- type: cos_sim_spearman
value: 68.25861908141049
- type: euclidean_pearson
value: 67.74046365058068
- type: euclidean_spearman
value: 67.74440638624723
- type: manhattan_pearson
value: 67.72314553247108
- type: manhattan_spearman
value: 67.58993746063668
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.01280212650944
- type: cos_sim_spearman
value: 84.2021805427655
- type: euclidean_pearson
value: 85.2593711183253
- type: euclidean_spearman
value: 84.7692260813728
- type: manhattan_pearson
value: 85.20370142077513
- type: manhattan_spearman
value: 84.68261435873887
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.8274674627466
- type: mrr
value: 93.2766625168586
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 44.917
- type: map_at_10
value: 54.809
- type: map_at_100
value: 55.544000000000004
- type: map_at_1000
value: 55.584999999999994
- type: map_at_3
value: 51.274
- type: map_at_5
value: 53.42
- type: mrr_at_1
value: 47.0
- type: mrr_at_10
value: 56.00000000000001
- type: mrr_at_100
value: 56.611
- type: mrr_at_1000
value: 56.647000000000006
- type: mrr_at_3
value: 53.166999999999994
- type: mrr_at_5
value: 54.883
- type: ndcg_at_1
value: 47.0
- type: ndcg_at_10
value: 59.948
- type: ndcg_at_100
value: 63.214999999999996
- type: ndcg_at_1000
value: 64.331
- type: ndcg_at_3
value: 53.690000000000005
- type: ndcg_at_5
value: 56.99999999999999
- type: precision_at_1
value: 47.0
- type: precision_at_10
value: 8.433
- type: precision_at_100
value: 1.0170000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 21.0
- type: precision_at_5
value: 14.667
- type: recall_at_1
value: 44.917
- type: recall_at_10
value: 74.483
- type: recall_at_100
value: 89.1
- type: recall_at_1000
value: 98.0
- type: recall_at_3
value: 58.15
- type: recall_at_5
value: 66.033
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.66534653465347
- type: cos_sim_ap
value: 90.67883265196161
- type: cos_sim_f1
value: 82.81327389796928
- type: cos_sim_precision
value: 82.04121687929342
- type: cos_sim_recall
value: 83.6
- type: dot_accuracy
value: 99.6009900990099
- type: dot_ap
value: 85.37859415933599
- type: dot_f1
value: 79.68285431119922
- type: dot_precision
value: 78.97838899803537
- type: dot_recall
value: 80.4
- type: euclidean_accuracy
value: 99.66435643564357
- type: euclidean_ap
value: 90.28983244955695
- type: euclidean_f1
value: 82.47925817471938
- type: euclidean_precision
value: 80.55290753098188
- type: euclidean_recall
value: 84.5
- type: manhattan_accuracy
value: 99.65247524752475
- type: manhattan_ap
value: 89.75455076116366
- type: manhattan_f1
value: 81.63682864450128
- type: manhattan_precision
value: 83.56020942408377
- type: manhattan_recall
value: 79.80000000000001
- type: max_accuracy
value: 99.66534653465347
- type: max_ap
value: 90.67883265196161
- type: max_f1
value: 82.81327389796928
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 54.25773656414605
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.52034918177213
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 47.10460797458404
- type: mrr
value: 47.67126358119005
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.159
- type: map_at_10
value: 0.9979999999999999
- type: map_at_100
value: 5.806
- type: map_at_1000
value: 16.575
- type: map_at_3
value: 0.391
- type: map_at_5
value: 0.596
- type: mrr_at_1
value: 56.00000000000001
- type: mrr_at_10
value: 68.7
- type: mrr_at_100
value: 68.892
- type: mrr_at_1000
value: 68.892
- type: mrr_at_3
value: 65.667
- type: mrr_at_5
value: 68.367
- type: ndcg_at_1
value: 51.0
- type: ndcg_at_10
value: 45.1
- type: ndcg_at_100
value: 36.834
- type: ndcg_at_1000
value: 39.329
- type: ndcg_at_3
value: 49.458
- type: ndcg_at_5
value: 48.177
- type: precision_at_1
value: 56.00000000000001
- type: precision_at_10
value: 47.8
- type: precision_at_100
value: 38.6
- type: precision_at_1000
value: 18.285999999999998
- type: precision_at_3
value: 54.0
- type: precision_at_5
value: 52.400000000000006
- type: recall_at_1
value: 0.159
- type: recall_at_10
value: 1.2510000000000001
- type: recall_at_100
value: 9.237
- type: recall_at_1000
value: 38.984
- type: recall_at_3
value: 0.44
- type: recall_at_5
value: 0.7080000000000001
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.6660000000000001
- type: map_at_10
value: 7.444000000000001
- type: map_at_100
value: 12.078
- type: map_at_1000
value: 13.716999999999999
- type: map_at_3
value: 4.06
- type: map_at_5
value: 5.172000000000001
- type: mrr_at_1
value: 20.408
- type: mrr_at_10
value: 33.547
- type: mrr_at_100
value: 35.281
- type: mrr_at_1000
value: 35.289
- type: mrr_at_3
value: 29.252
- type: mrr_at_5
value: 31.19
- type: ndcg_at_1
value: 18.367
- type: ndcg_at_10
value: 18.848000000000003
- type: ndcg_at_100
value: 29.938
- type: ndcg_at_1000
value: 42.792
- type: ndcg_at_3
value: 20.005
- type: ndcg_at_5
value: 18.617
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 17.143
- type: precision_at_100
value: 6.571000000000001
- type: precision_at_1000
value: 1.492
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 18.776
- type: recall_at_1
value: 1.6660000000000001
- type: recall_at_10
value: 12.736
- type: recall_at_100
value: 41.485
- type: recall_at_1000
value: 80.301
- type: recall_at_3
value: 5.137
- type: recall_at_5
value: 7.317
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 67.481
- type: ap
value: 12.474830532963725
- type: f1
value: 51.720124230716834
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 55.62252405206565
- type: f1
value: 55.87133173318741
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 45.695133575997474
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.16284198605233
- type: cos_sim_ap
value: 67.77133994574282
- type: cos_sim_f1
value: 63.007767732076914
- type: cos_sim_precision
value: 60.89096726556732
- type: cos_sim_recall
value: 65.27704485488127
- type: dot_accuracy
value: 80.60439887941826
- type: dot_ap
value: 55.17278808505333
- type: dot_f1
value: 55.023250784038055
- type: dot_precision
value: 46.619021440351844
- type: dot_recall
value: 67.12401055408971
- type: euclidean_accuracy
value: 84.75889610776659
- type: euclidean_ap
value: 69.33925609880741
- type: euclidean_f1
value: 64.72887151929653
- type: euclidean_precision
value: 60.254661209640744
- type: euclidean_recall
value: 69.92084432717678
- type: manhattan_accuracy
value: 84.84234368480658
- type: manhattan_ap
value: 69.50780726475959
- type: manhattan_f1
value: 64.78766430738119
- type: manhattan_precision
value: 62.17855409995148
- type: manhattan_recall
value: 67.62532981530343
- type: max_accuracy
value: 84.84234368480658
- type: max_ap
value: 69.50780726475959
- type: max_f1
value: 64.78766430738119
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.46198626149726
- type: cos_sim_ap
value: 84.64911720373662
- type: cos_sim_f1
value: 77.18601251827143
- type: cos_sim_precision
value: 75.19900679179142
- type: cos_sim_recall
value: 79.28087465352634
- type: dot_accuracy
value: 86.79512554818179
- type: dot_ap
value: 80.43213280609042
- type: dot_f1
value: 74.18943791589976
- type: dot_precision
value: 68.65828092243187
- type: dot_recall
value: 80.68986757006468
- type: euclidean_accuracy
value: 88.2368921488726
- type: euclidean_ap
value: 84.2791000321804
- type: euclidean_f1
value: 76.62216238453198
- type: euclidean_precision
value: 74.49640026179914
- type: euclidean_recall
value: 78.87280566676932
- type: manhattan_accuracy
value: 88.29122521054062
- type: manhattan_ap
value: 84.25495067571485
- type: manhattan_f1
value: 76.60077590984667
- type: manhattan_precision
value: 73.63784897350287
- type: manhattan_recall
value: 79.81213427779488
- type: max_accuracy
value: 88.46198626149726
- type: max_ap
value: 84.64911720373662
- type: max_f1
value: 77.18601251827143
---
# embedder-100p
This is a ms-marco bi-encoder from sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is trained on more than 20GiB of german text. It used the knowledge distillation to be a bi-language embedding model (English and German).
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('embedder-100p')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('embedder-100p')
model = AutoModel.from_pretrained('embedder-100p')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
(soon, still undern evaluation)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 231230 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 7e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 5000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
@[bayang](https://huggingface.co/bayang)
<!--- Describe where people can find more information --> | 65,869 | [
[
-0.022857666015625,
-0.058074951171875,
0.019012451171875,
0.0252532958984375,
-0.0171661376953125,
-0.020965576171875,
-0.0164794921875,
0.00502777099609375,
0.01374053955078125,
0.0233917236328125,
-0.0469970703125,
-0.040313720703125,
-0.055206298828125,
... |
anton-l/wav2vec2-large-xlsr-53-romanian | 2021-07-05T20:20:21.000Z | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ro",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | anton-l | null | null | anton-l/wav2vec2-large-xlsr-53-romanian | 2 | 1,509 | transformers | 2022-03-02T23:29:05 | ---
language: ro
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Romanian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ro
type: common_voice
args: ro
metrics:
- name: Test WER
type: wer
value: 24.84
---
# Wav2Vec2-Large-XLSR-53-Romanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ro", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romanian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ro.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ro/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ro/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 24.84 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
| 4,152 | [
[
-0.0257110595703125,
-0.04754638671875,
0.0011262893676757812,
0.0218353271484375,
-0.01580810546875,
-0.00814056396484375,
-0.04217529296875,
-0.024261474609375,
0.01343536376953125,
0.026580810546875,
-0.049957275390625,
-0.051849365234375,
-0.03436279296875,
... |
Gourishreeka/my-pet-cat | 2023-11-05T17:55:05.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Gourishreeka | null | null | Gourishreeka/my-pet-cat | 0 | 1,509 | diffusers | 2023-11-05T17:51:53 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by Gourishreeka following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: -MRCEW-312
Sample pictures of this concept:
.jpg)
| 397 | [
[
-0.04730224609375,
-0.02191162109375,
0.0266265869140625,
0.0018472671508789062,
-0.0171051025390625,
0.040283203125,
0.022308349609375,
-0.0333251953125,
0.06536865234375,
0.039276123046875,
-0.0469970703125,
-0.00791168212890625,
-0.0055389404296875,
0.012... |
deepseek-ai/deepseek-coder-6.7b-instruct | 2023-11-05T16:22:22.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | deepseek-ai | null | null | deepseek-ai/deepseek-coder-6.7b-instruct | 39 | 1,508 | transformers | 2023-10-29T11:01:36 | ---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[๐ Homepage]</a> | <a href="https://coder.deepseek.com/">[๐ค Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(ๅพฎไฟก)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on project-level code corpus by employing a window size of 16K and a extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch fon 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-6.7b-instruct is a 6.7B parameter model initialized from deepseek-coder-6.7b-base and fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here give some examples of how to use our model.
#### Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
# 32021 is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [agi_code@deepseek.com](mailto:agi_code@deepseek.com).
| 3,474 | [
[
-0.0228424072265625,
-0.04730224609375,
0.013275146484375,
0.02593994140625,
-0.0215301513671875,
0.00965118408203125,
-0.0164642333984375,
-0.04498291015625,
-0.0029010772705078125,
0.01099395751953125,
-0.035858154296875,
-0.042572021484375,
-0.04937744140625,... |
timm/mobilenetv2_110d.ra_in1k | 2023-04-27T21:14:17.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1801.04381",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/mobilenetv2_110d.ra_in1k | 0 | 1,507 | timm | 2022-12-13T00:00:34 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mobilenetv2_110d.ra_in1k
A MobileNet-v2 image classification model. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 4.5
- GMACs: 0.4
- Activations (M): 8.7
- Image size: 224 x 224
- **Papers:**
- MobileNetV2: Inverted Residuals and Linear Bottlenecks: https://arxiv.org/abs/1801.04381
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilenetv2_110d.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilenetv2_110d.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 32, 28, 28])
# torch.Size([1, 104, 14, 14])
# torch.Size([1, 352, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilenetv2_110d.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{sandler2018mobilenetv2,
title={Mobilenetv2: Inverted residuals and linear bottlenecks},
author={Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={4510--4520},
year={2018}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
| 4,753 | [
[
-0.0274200439453125,
-0.0230255126953125,
-0.0126800537109375,
0.0022220611572265625,
-0.0265350341796875,
-0.0265350341796875,
-0.0062103271484375,
-0.0285491943359375,
0.0222015380859375,
0.03582763671875,
-0.03143310546875,
-0.042083740234375,
-0.046356201171... |
JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat | 2023-09-25T22:38:03.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"en",
"zh",
"license:gpl-3.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | JosephusCheung | null | null | JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat | 28 | 1,506 | transformers | 2023-08-30T18:57:53 | ---
language:
- en
- zh
tags:
- llama
- llama2
- qwen
license: gpl-3.0
---
This is the LLaMAfied replica of [Qwen/Qwen-VL-Chat](https://huggingface.co/Qwen/Qwen-VL-Chat) (Original Version before 25.09.2023), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
You can use LlamaForCausalLM for model inference, which is the same as LLaMA/LLaMA-2 models (using GPT2Tokenizer converted from the original tiktoken, by [vonjack](https://huggingface.co/vonjack)).
The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.
Up until now, the model has undergone numerical alignment of weights and preliminary reinforcement learning in order to align with the original model. Some errors and outdated knowledge have been addressed through model editing methods. This model remains completely equivalent to the original version, without having any dedicated supervised finetuning on downstream tasks or other extensive conversation datasets.
PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) | 1,081 | [
[
0.00787353515625,
-0.0640869140625,
0.0176239013671875,
0.022186279296875,
-0.0184783935546875,
0.0006508827209472656,
0.000039696693420410156,
-0.042877197265625,
0.034576416015625,
0.052398681640625,
-0.066162109375,
-0.020263671875,
-0.026031494140625,
-0... |
1aurent/vit_base_patch16_224.owkin_pancancer | 2023-11-03T16:15:18.000Z | [
"timm",
"pytorch",
"safetensors",
"feature-extraction",
"image-classification",
"biology",
"cancer",
"owkin",
"histology",
"dataset:owkin/camelyon16-features",
"dataset:owkin/nct-crc-he",
"license:other",
"model-index",
"co2_eq_emissions",
"region:us"
] | feature-extraction | 1aurent | null | null | 1aurent/vit_base_patch16_224.owkin_pancancer | 2 | 1,506 | timm | 2023-10-22T22:56:17 | ---
tags:
- feature-extraction
- image-classification
- timm
- biology
- cancer
- owkin
- histology
library_name: timm
model-index:
- name: owkin_pancancer
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: Camelyon16[Meta]
type: image-classification
metrics:
- type: accuracy
value: 94.5 ยฑ 4.4
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-BRCA[Hist]
type: image-classification
metrics:
- type: accuracy
value: 96.2 ยฑ 3.3
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-BRCA[HRD]
type: image-classification
metrics:
- type: accuracy
value: 79.3 ยฑ 2.4
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-BRCA[Mol]
type: image-classification
metrics:
- type: accuracy
value: 81.7 ยฑ 1.6
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-BRCA[OS]
type: image-classification
metrics:
- type: accuracy
value: 64.7 ยฑ 5.7
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-CRC[MSI]
type: image-classification
metrics:
- type: accuracy
value: 91.0 ยฑ 2.2
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-COAD[OS]
type: image-classification
metrics:
- type: accuracy
value: 63.4 ยฑ 7.4
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-NSCLC[CType]
type: image-classification
metrics:
- type: accuracy
value: 97.7 ยฑ 1.3
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-LUAD[OS]
type: image-classification
metrics:
- type: accuracy
value: 53.8 ยฑ 4.5
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-LUSC[OS]
type: image-classification
metrics:
- type: accuracy
value: 62.2 ยฑ 2.9
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-OV[HRD]
type: image-classification
metrics:
- type: accuracy
value: 74.2 ยฑ 8.6
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-RCC[CType]
type: image-classification
metrics:
- type: accuracy
value: 99.5 ยฑ 0.2
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-STAD[MSI]
type: image-classification
metrics:
- type: accuracy
value: 89.9 ยฑ 3.9
name: ROC AUC
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: TCGA-PAAD[OS]
type: image-classification
metrics:
- type: accuracy
value: 59.2 ยฑ 4.1
name: ROC AUC
verified: false
widget:
- src: https://github.com/owkin/HistoSSLscaling/raw/main/assets/example.tif
example_title: pancancer tile
co2_eq_emissions:
emissions: 14590
source: https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2
training_type: pre-training
geographical_location: Jean Zay cluster, France (~40 gCOโeq/kWh)
hardware_used: 32 V100 32Gb GPUs, 1216 GPU hours
license: other
license_name: owkin-non-commercial
license_link: https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
pipeline_tag: feature-extraction
inference: false
datasets:
- owkin/camelyon16-features
- owkin/nct-crc-he
metrics:
- roc_auc
---
# Model card for vit_base_patch16_224.owkin_pancancer
A Vision Transformer (ViT) image classification model. \
Trained by Owkin on 40 million pan-cancer histology tiles from TCGA-COAD.
A version using the transformers library is also available here: https://huggingface.co/owkin/phikon

## Model Details
- **Model Type:** Feature backbone
- **Developed by**: Owkin
- **Funded by**: Owkin and IDRIS
- **Model Stats:**
- Params: 85.8M (base)
- Image size: 224 x 224 x 3
- Patch size: 16 x 16 x 3
- **Pre-training:**
- Dataset: Pancancer40M, created from [TCGA-COAD](https://portal.gdc.cancer.gov/repository?facetTab=cases&filters=%7B%22content%22%3A%5B%7B%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-COAD%22%5D%7D%2C%22op%22%3A%22in%22%7D%2C%7B%22content%22%3A%7B%22field%22%3A%22files.experimental_strategy%22%2C%22value%22%3A%5B%22Diagnostic%20Slide%22%5D%7D%2C%22op%22%3A%22in%22%7D%5D%2C%22op%22%3A%22and%22%7D&searchTableTab=cases)
- Framework: [iBOT](https://github.com/bytedance/ibot), self-supervised, masked image modeling, self-distillation
- **Papers:**
- [Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling](https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2)
- **Original:** https://github.com/owkin/HistoSSLscaling
- **License:** [Owkin non-commercial license](https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt)
## Model Usage
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
# get example histology image
img = Image.open(
urlopen(
"https://github.com/owkin/HistoSSLscaling/raw/main/assets/example.tif"
)
)
# load model from the hub
model = timm.create_model(
model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer",
pretrained=True,
).eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
data = transforms(img).unsqueeze(0) # input is a (batch_size, num_channels, img_size, img_size) shaped tensor
output = model(data) # output is a (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@article{Filiot2023.07.21.23292757,
author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
elocation-id = {2023.07.21.23292757},
year = {2023},
doi = {10.1101/2023.07.21.23292757},
publisher = {Cold Spring Harbor Laboratory Press},
url = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
journal = {medRxiv}
}
``` | 7,319 | [
[
-0.033447265625,
-0.01934814453125,
0.032135009765625,
-0.00807952880859375,
-0.0205078125,
-0.01032257080078125,
-0.0007538795471191406,
-0.01702880859375,
0.0258636474609375,
0.043731689453125,
-0.029693603515625,
-0.04815673828125,
-0.042694091796875,
0.0... |
akdeniz27/bert-base-turkish-cased-ner | 2023-03-18T23:08:33.000Z | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"token-classification",
"tr",
"doi:10.57967/hf/0949",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | akdeniz27 | null | null | akdeniz27/bert-base-turkish-cased-ner | 7 | 1,500 | transformers | 2022-03-02T23:29:05 | ---
language: tr
widget:
- text: "Mustafa Kemal Atatรผrk 19 Mayฤฑs 1919'da Samsun'a รงฤฑktฤฑ."
---
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned model of "dbmdz/bert-base-turkish-cased"
using a reviewed version of well known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/bert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("your text here")
```
Pls refer "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9933935699477056
* f1: 0.9592969472710453
* precision: 0.9543530277931161
* recall: 0.9642923563325274
Evaluation results with the test sets proposed in ["Kรผรงรผk, D., Kรผรงรผk, D., Arฤฑcฤฑ, N. 2016. Tรผrkรงe Varlฤฑk ฤฐsmi Tanฤฑma iรงin bir Veri Kรผmesi ("A Named Entity Recognition Dataset for Turkish"). IEEE Sinyal ฤฐลleme, ฤฐletiลim ve Uygulamalarฤฑ Kurultayฤฑ. Zonguldak, Tรผrkiye."](https://ieeexplore.ieee.org/document/7495744) paper.
* Test Set Acc. Prec. Rec. F1-Score
* 20010000 0.9946 0.9871 0.9463 0.9662
* 20020000 0.9928 0.9134 0.9206 0.9170
* 20030000 0.9942 0.9814 0.9186 0.9489
* 20040000 0.9943 0.9660 0.9522 0.9590
* 20050000 0.9971 0.9539 0.9932 0.9732
* 20060000 0.9993 0.9942 0.9942 0.9942
* 20070000 0.9970 0.9806 0.9439 0.9619
* 20080000 0.9988 0.9821 0.9649 0.9735
* 20090000 0.9977 0.9891 0.9479 0.9681
* 20100000 0.9961 0.9684 0.9293 0.9485
* Overall 0.9961 0.9720 0.9516 0.9617 | 2,023 | [
[
-0.059112548828125,
-0.039642333984375,
0.0181884765625,
0.00833892822265625,
-0.031463623046875,
-0.019866943359375,
-0.004749298095703125,
-0.022552490234375,
0.0184173583984375,
0.0254364013671875,
-0.0272979736328125,
-0.049591064453125,
-0.051422119140625,
... |
Yntec/3DKX2 | 2023-10-11T05:33:11.000Z | [
"diffusers",
"General",
"3D",
"Cartoon",
"unvailai",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/3DKX2 | 0 | 1,496 | diffusers | 2023-10-11T03:55:41 | ---
license: other
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- 3D
- Cartoon
- unvailai
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
inference: false
---
Warning: This model is horny! Add "nude, naked" to the negative prompt if want to avoid NSFW.
# 3DKX 2
Check the license at the original page: https://huggingface.co/unvailai/3DKX_V2
Sample and prompt:

Photo of a standing figure of a cute five years old girl in front of a pc computer monitor in an old dirty soviet apartment by and mark brooks, vladimir volegov, rich deep colors. beksinski painting, from a movie by david cronenberg. masterpiece. photographed with leica summilux - m 2 4 mm lens, iso 1 0 0, f / | 845 | [
[
-0.028106689453125,
-0.04876708984375,
0.021514892578125,
0.038848876953125,
-0.055328369140625,
-0.0228424072265625,
0.047882080078125,
-0.053680419921875,
0.0063934326171875,
0.055908203125,
-0.046539306640625,
-0.0313720703125,
-0.049102783203125,
-0.0034... |
ccdv/lsg-bart-base-4096 | 2023-08-31T21:27:53.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"long context",
"fill-mask",
"custom_code",
"en",
"arxiv:2210.15497",
"arxiv:1910.13461",
"autotrain_compatible",
"region:us"
] | fill-mask | ccdv | null | null | ccdv/lsg-bart-base-4096 | 2 | 1,495 | transformers | 2022-03-02T23:29:05 | ---
tags:
- summarization
- bart
- long context
language:
- en
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.23.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
This model is adapted from [BART-base](https://huggingface.co/facebook/bart-base) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...).
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python:
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python:
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
attention_probs_dropout_prob=0.0
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
* weighted average pooling using the BOS token
* Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
* Additional parameters:
* None
* `sparsity_type="norm"`, select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="pooling"`, use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
## Tasks
Seq2Seq example for summarization:
```python:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
padding="max_length", # Optional but recommended
truncation=True # Optional but recommended
)
output = model(**token_ids)
```
Classification example:
```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-base-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**BART**
```
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 6,265 | [
[
-0.044036865234375,
-0.05596923828125,
0.0029449462890625,
0.022369384765625,
-0.032073974609375,
-0.01366424560546875,
-0.0217132568359375,
-0.0210418701171875,
0.016754150390625,
0.02032470703125,
-0.040557861328125,
-0.037567138671875,
-0.048431396484375,
... |
deepset/bert-base-uncased-squad2 | 2023-03-24T14:15:37.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | deepset | null | null | deepset/bert-base-uncased-squad2 | 2 | 1,495 | transformers | 2022-03-02T23:29:05 | ---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/bert-base-uncased-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 75.6529
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY2YmQ0ZDFjMjRlZWRiZWQ2YWQ4MTM0ODkyYTQ0NmYwMzBlNWViZWQ0ODFhMGJmMmY4ZGYwOTQyMDAyZGNjYyIsInZlcnNpb24iOjF9.UyqonQTsCB0BW86LfPy17kLt3a4r3wMeh04MDam5t_UhElp6N02YpiKOqcb1ethNHjAR0WGyxrcV3TI4d-wFAQ
- type: f1
value: 78.6191
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWRkZWVjMDU2YTcxYWVkZTU1YmUzY2FkNWI5NDJkM2YwMjFmMmE0Njc3MjI5N2Q0NDdhZDNkZWNjMWE5YTRmZiIsInZlcnNpb24iOjF9.ol0Zacd9ZryXazXjgVssGFYG4s5FzbhGGaj1ZEDLVN2ziyzx23bo4GH9PSuGTFxRK2BO5_dxvDupLRqJOF59Bg
---
# bert-base-uncased for QA
## Overview
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Performance
```
"exact": 73.67977764676156
"f1": 77.87647139308865
```
## Authors
- Timo Mรถller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) | 2,621 | [
[
-0.0260467529296875,
-0.040283203125,
0.03228759765625,
0.0208892822265625,
-0.01149749755859375,
0.01092529296875,
-0.02679443359375,
-0.030975341796875,
0.01442718505859375,
0.0304107666015625,
-0.051025390625,
-0.06268310546875,
-0.0248565673828125,
-0.01... |
etri-xainlp/llama2-ko-13b-instruct | 2023-10-06T09:50:41.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | etri-xainlp | null | null | etri-xainlp/llama2-ko-13b-instruct | 0 | 1,495 | transformers | 2023-10-06T04:43:16 | ---
license: apache-2.0
---
# llama2-ko-13b-instruct
This model is a fine-tuned version of [meta-llama/Llama-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an instruction-following dataset(650k). | 212 | [
[
-0.01885986328125,
-0.040191650390625,
0.0250244140625,
0.0333251953125,
-0.046234130859375,
0.00971221923828125,
0.039337158203125,
-0.0276336669921875,
0.03948974609375,
0.0587158203125,
-0.07598876953125,
-0.032501220703125,
-0.033843994140625,
0.00099849... |
almanach/camembert-bio-base | 2023-09-26T13:51:54.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"biomedical",
"clinical",
"life sciences",
"fr",
"dataset:rntc/biomed-fr",
"arxiv:2306.15550",
"doi:10.57967/hf/0586",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | almanach | null | null | almanach/camembert-bio-base | 8 | 1,494 | transformers | 2023-02-23T15:03:44 | ---
license: mit
language:
- fr
pipeline_tag: fill-mask
tags:
- biomedical
- clinical
- life sciences
datasets:
- rntc/biomed-fr
widget:
- text: >-
Les mรฉdicaments <mask> typiques sont largement utilisรฉs dans le traitement
de premiรจre intention des patients schizophrรจnes.
library_name: transformers
---
<a href=https://camembert-bio-model.fr/>
<img width="300px" src="https://www.camembert-bio-model.fr/authors/camembert-bio/avatar_hu793b92579abd63a955d3004af578ed96_116953_270x270_fill_lanczos_center_3.png">
</a>
# CamemBERT-bio : a Tasty French Language Model Better for your Health
CamemBERT-bio is a state-of-the-art french biomedical language model built using continual-pretraining from [camembert-base](https://huggingface.co/camembert-base).
It was trained on a french public biomedical corpus of 413M words containing scientific documents, drug leaflets and clinical cases extrated from theses and articles.
It shows 2.54 points of F1 score improvement on average on 5 different biomedical named entity recognition tasks compared to [camembert-base](https://huggingface.co/camembert-base).
## Absract
Clinical data in hospitals are increasingly accessible for research through clinical data warehouses, however these documents are unstructured. It is therefore necessary to extract information from medical
reports to conduct clinical studies. Transfer learning with BERT-like models such as CamemBERT
has allowed major advances, especially for named entity recognition. However, these models are
trained for plain language and are less efficient on biomedical data. This is why we propose a new
french public biomedical dataset on which we have continued the pre-training of CamemBERT. Thus,
we introduce a first version of CamemBERT-bio, a specialized public model for the french biomedical
domain that shows 2.54 points of F1 score improvement on average on different biomedical named
entity recognition tasks.
- **Developed by:** [Rian Touchent](https://rian-t.github.io), [Eric Villemonte de La Clergerie](http://pauillac.inria.fr/~clerger/)
- **Logo by:** [Alix Chaguรฉ](https://alix-tz.github.io)
- **License:** MIT
<!-- ### Model Sources [optional] -->
<!-- Provide the basic links for the model. -->
<!-- - **Website:** camembert-bio-model.fr -->
<!-- - **Paper [optional]:** [More Information Needed] -->
<!-- - **Demo [optional]:** [More Information Needed] -->
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
| **Corpus** | **Details** | **Size** |
|------------|--------------------------------------------------------------------|------------|
| ISTEX | diverse scientific literature indexed on ISTEX | 276 M |
| CLEAR | drug leaflets | 73 M |
| E3C | various documents from journals, drug leaflets, and clinical cases | 64 M |
| Total | | 413 M |
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We used continual-pretraining from [camembert-base](https://huggingface.co/camembert-base).
We trained the model using the Masked Language Modeling (MLM) objective with Whole Word Masking for 50k steps during 39 hours
with 2 Tesla V100.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Fine-tuning
For fine-tuning, we utilized Optuna to select the hyperparameters.
The learning rate was set to 5e-5, with a warmup ratio of 0.224 and a batch size of 16.
The fine-tuning process was carried out for 2000 steps.
For prediction, a simple linear layer was added on top of the model.
Notably, none of the CamemBERT layers were frozen during the fine-tuning process.
### Scoring
To evaluate the performance of the model, we used the seqeval tool in strict mode with the IOB2 scheme.
For each evaluation, the best fine-tuned model on the validation set was selected to calculate the final score on the test set.
To ensure reliability, we averaged over 10 evaluations with different seeds.
### Results
| Style | Dataset | Score | CamemBERT | CamemBERT-bio |
| :----------- | :------ | :---- | :---------------: | :-------------------: |
| Clinical | CAS1 | F1 | 70\.50 ~~ยฑ~~ 1.75 | **73\.03 ~~ยฑ~~ 1.29** |
| | | P | 70\.12 ~~ยฑ~~ 1.93 | **71\.71 ~~ยฑ~~ 1.61** |
| | | R | 70\.89 ~~ยฑ~~ 1.78 | **74\.42 ~~ยฑ~~ 1.49** |
| | CAS2 | F1 | 79\.02 ~~ยฑ~~ 0.92 | **81\.66 ~~ยฑ~~ 0.59** |
| | | P | 77\.3 ~~ยฑ~~ 1.36 | **80\.96 ~~ยฑ~~ 0.91** |
| | | R | 80\.83 ~~ยฑ~~ 0.96 | **82\.37 ~~ยฑ~~ 0.69** |
| | E3C | F1 | 67\.63 ~~ยฑ~~ 1.45 | **69\.85 ~~ยฑ~~ 1.58** |
| | | P | 78\.19 ~~ยฑ~~ 0.72 | **79\.11 ~~ยฑ~~ 0.42** |
| | | R | 59\.61 ~~ยฑ~~ 2.25 | **62\.56 ~~ยฑ~~ 2.50** |
| Drug leaflets | EMEA | F1 | 74\.14 ~~ยฑ~~ 1.95 | **76\.71 ~~ยฑ~~ 1.50** |
| | | P | 74\.62 ~~ยฑ~~ 1.97 | **76\.92 ~~ยฑ~~ 1.96** |
| | | R | 73\.68 ~~ยฑ~~ 2.22 | **76\.52 ~~ยฑ~~ 1.62** |
| Scientific | MEDLINE | F1 | 65\.73 ~~ยฑ~~ 0.40 | **68\.47 ~~ยฑ~~ 0.54** |
| | | P | 64\.94 ~~ยฑ~~ 0.82 | **67\.77 ~~ยฑ~~ 0.88** |
| | | R | 66\.56 ~~ยฑ~~ 0.56 | **69\.21 ~~ยฑ~~ 1.32** |
## Environmental Impact estimation
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** 2 x Tesla V100
- **Hours used:** 39 hours
- **Provider:** INRIA clusters
- **Compute Region:** Paris, France
- **Carbon Emitted:** 0.84 kg CO2 eq.
<!-- ## Citation [optional] -->
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
<!-- **BibTeX:** -->
## Citation information
```bibtex
@misc{touchent2023camembertbio,
title={CamemBERT-bio: a Tasty French Language Model Better for your Health},
author={Rian Touchent and Laurent Romary and Eric de la Clergerie},
year={2023},
eprint={2306.15550},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{touchent:hal-04130187,
TITLE = {{CamemBERT-bio : Un mod{\`e}le de langue fran{\c c}ais savoureux et meilleur pour la sant{\'e}}},
AUTHOR = {Touchent, Rian and Romary, Laurent and De La Clergerie, Eric},
URL = {https://hal.science/hal-04130187},
BOOKTITLE = {{18e Conf{\'e}rence en Recherche d'Information et Applications \\ 16e Rencontres Jeunes Chercheurs en RI \\ 30e Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles \\ 25e Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues}},
ADDRESS = {Paris, France},
EDITOR = {Servan, Christophe and Vilnat, Anne},
PUBLISHER = {{ATALA}},
PAGES = {323-334},
YEAR = {2023},
KEYWORDS = {comptes rendus m{\'e}dicaux ; TAL clinique ; CamemBERT ; extraction d'information ; biom{\'e}dical ; reconnaissance d'entit{\'e}s nomm{\'e}es},
HAL_ID = {hal-04130187},
HAL_VERSION = {v1},
}
```
<!-- [More Information Needed] -->
<!-- **APA:** -->
<!-- [More Information Needed] --> | 7,685 | [
[
-0.02203369140625,
-0.04925537109375,
0.038909912109375,
0.00760650634765625,
-0.021514892578125,
0.003589630126953125,
-0.01078033447265625,
-0.03778076171875,
0.040435791015625,
0.036407470703125,
-0.031890869140625,
-0.06591796875,
-0.040618896484375,
0.0... |
Yntec/Dreamscape | 2023-09-01T17:48:15.000Z | [
"diffusers",
"fantasy",
"art",
"realistic",
"artistic",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Lykon",
"DarkAgent",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/Dreamscape | 0 | 1,491 | diffusers | 2023-09-01T12:31:01 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- fantasy
- art
- realistic
- artistic
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Lykon
- DarkAgent
inference: true
---
A mix of NeverEndingDream 1.22 and Dreamscapes & Dragonfire 2 to bring the best of both worlds!
Comparison:

(Click for larger)
Sample and prompt:

magazine Pretty CUTE LITTLE female. Paint bucket pouring paint in air on top of planet earth vector logo style. Ponytail By angra mainyu. michael germash, The lost souvenir by andreas rocha, jean deville, yakuza and very beautiful, mystical pinup. Beautiful detailed by KEY VISUAL. beautiful face, 4k dslr photo, Favela fungus cathedral coaster hive, palace in
Original Pages:
https://civitai.com/models/50294/dreamscapes-and-dragonfire-new-v20-semi-realism-fantasy-model
https://huggingface.co/Lykon/NeverEnding-Dream
# Recipe
-Add Difference 1.0-
Primary model:
NeverEndingDream 1.22
Secondary model:
NeverEndingDream 1.22
Tertiary model:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output Model:
NeverEndingDreamEssence
-Weighted Sum 0.70-
Primary model:
NeverEndingDreamEssence
Secondary model:
Dreamscapes & Dragonfire 2
Output Model:
Dreamscape | 1,585 | [
[
-0.02398681640625,
-0.031646728515625,
0.016754150390625,
0.04864501953125,
-0.02679443359375,
0.01055908203125,
0.01259613037109375,
-0.06494140625,
0.06512451171875,
0.04913330078125,
-0.0838623046875,
-0.0325927734375,
-0.026611328125,
-0.0058746337890625... |
DiogoXP/pxogoid2 | 2023-07-27T10:32:29.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | DiogoXP | null | null | DiogoXP/pxogoid2 | 0 | 1,490 | diffusers | 2023-07-27T10:19:40 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### PXogoid2 Dreambooth model trained by DiogoXP with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 497 | [
[
-0.0167999267578125,
-0.0452880859375,
0.040557861328125,
0.043243408203125,
-0.018402099609375,
0.0294952392578125,
0.020782470703125,
-0.0178985595703125,
0.040771484375,
-0.0032444000244140625,
-0.009490966796875,
-0.01543426513671875,
-0.04107666015625,
... |
daekeun-ml/Llama-2-ko-instruct-13B | 2023-10-30T01:57:07.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"instruct",
"instruction",
"ko",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | daekeun-ml | null | null | daekeun-ml/Llama-2-ko-instruct-13B | 5 | 1,489 | transformers | 2023-10-29T16:13:27 | ---
language:
- ko
tags:
- llama-2
- instruct
- instruction
pipeline_tag: text-generation
license: llama2
---
# Llama-2-ko-instruct-13B
### Model Details
- Base Model: [LLaMA-2-koen-13B](https://huggingface.co/beomi/llama-2-koen-13b)
### Datasets
- Added some English to Korean translation data based on the KOpen-platypus and KoAlpaca datasets. Translations utilized AWS blog content that I translated myself.
- Extracted only sentences longer than 100 characters and removed similar sentences with KoSimCSE (daekeun-ml/KoSimCSE-supervised-kobigbird-roberta-large)
- Created category-specific prompts that encourage AI to answer despite hallucination for future RLHF (Reinforcement Learning From Human Feedback) or DPO (Direct Preference Optimization) tuning.
### License
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
This model was created as a personal experiment, unrelated to the organization I work for. | 1,001 | [
[
-0.0260467529296875,
-0.058502197265625,
0.05242919921875,
0.043914794921875,
-0.04547119140625,
-0.006183624267578125,
-0.003936767578125,
-0.0293426513671875,
0.0274810791015625,
0.06048583984375,
-0.0726318359375,
-0.036956787109375,
-0.044464111328125,
0... |
unicamp-dl/ptt5-base-portuguese-vocab | 2021-03-24T22:16:54.000Z | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"license:mit",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | unicamp-dl | null | null | unicamp-dl/ptt5-base-portuguese-vocab | 19 | 1,488 | transformers | 2022-03-02T23:29:05 | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em portuguรชs"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained in the BrWac corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
@article{ptt5_2020,
title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
journal={arXiv preprint arXiv:2008.09144},
year={2020}
}
| 2,739 | [
[
-0.029998779296875,
-0.0274810791015625,
0.01297760009765625,
0.02703857421875,
-0.0472412109375,
0.0165252685546875,
-0.0250701904296875,
-0.039215087890625,
0.00795745849609375,
0.0235748291015625,
-0.031494140625,
-0.066162109375,
-0.056396484375,
0.02488... |
DopeorNope/COLA3-7B | 2023-10-19T15:29:44.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"ko",
"dataset:DopeorNope/combined",
"arxiv:2307.09288",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | DopeorNope | null | null | DopeorNope/COLA3-7B | 1 | 1,488 | transformers | 2023-10-03T04:15:32 | ---
language:
- en
- ko
datasets:
- DopeorNope/combined
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**(์ฃผ)๋ฏธ๋์ด๊ทธ๋ฃน์ฌ๋๊ณผ์ฒ๊ณผ (์ฃผ)๋ง์ปค์ LLM ์ฐ๊ตฌ ์ปจ์์์์์ ๊ฐ๋ฐ๋ ๋ชจ๋ธ์
๋๋ค**
**The license is `cc-by-nc-sa-4.0`.**
# **COLA3-7B : Lamm2 7B ๋ฒ ์ด์ค ๋ชจ๋ธ์ IA3๋ฐฉ์์ผ๋ก Fine tuningํ ๋ชจ๋ธ**
** IA3๋ฐฉ์์ ๋ํ ๋ํ
์ผ ์ ๋ณด: [K(G)OAT](https://github.com/Marker-Inc-Korea/K-G-OAT)**
## Model Details
**Model Developers** Seungyoo-Lee (DopeorNope)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KO-Platypus2-7B-ex is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model**
[kyujinpy/KO-Platypus2-7B-ex](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex)
**Training Dataset**
[Eng_Kor_COT_combined](https://huggingface.co/datasets/DopeorNope/Eng_Kor_COT_combined) was used for finetuning.
I used A5000 GPU 24GB x2 desktop for training.
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned varient's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
``` | 2,611 | [
[
-0.02178955078125,
-0.0655517578125,
0.0150604248046875,
0.032684326171875,
-0.036346435546875,
-0.0021533966064453125,
-0.0206146240234375,
-0.054534912109375,
0.008148193359375,
0.03216552734375,
-0.0303955078125,
-0.0367431640625,
-0.05694580078125,
-0.00... |
razent/SciFive-base-Pubmed_PMC | 2023-08-30T11:18:24.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"token-classification",
"text-classification",
"question-answering",
"text-generation",
"en",
"dataset:pubmed",
"dataset:pmc/open_access",
"arxiv:2106.03598",
"autotrain_compatible",
"endpoints_compa... | text-classification | razent | null | null | razent/SciFive-base-Pubmed_PMC | 4 | 1,487 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pubmed
- pmc/open_access
---
# SciFive Pubmed+PMC Base
## Introduction
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grรฉgoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
โ
tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-base-Pubmed_PMC")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-base-Pubmed_PMC")
โ
sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = sentence + "</s>"
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=256,
early_stopping=True
)
for output in outputs:
line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(line)
``` | 1,388 | [
[
0.0030384063720703125,
-0.0210418701171875,
0.0276336669921875,
0.028167724609375,
-0.0253753662109375,
-0.00489044189453125,
-0.00250244140625,
-0.00757598876953125,
0.0021495819091796875,
0.00823974609375,
-0.035797119140625,
-0.0230865478515625,
-0.05078125,
... |
nota-ai/bk-sdm-tiny | 2023-08-19T12:15:47.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:ChristophSchuhmann/improved_aesthetics_6.5plus",
"arxiv:2305.15798",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | nota-ai | null | null | nota-ai/bk-sdm-tiny | 14 | 1,485 | diffusers | 2023-07-12T10:53:04 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- ChristophSchuhmann/improved_aesthetics_6.5plus
library_name: diffusers
pipeline_tag: text-to-image
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to
use them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# BK-SDM Model Card
Block-removed Knowledge-distilled Stable Diffusion Model (BK-SDM) is an architecturally compressed SDM for efficient general-purpose text-to-image synthesis. This model is bulit with (i) removing several residual and attention blocks from the U-Net of [Stable Diffusion v1.4]( https://huggingface.co/CompVis/stable-diffusion-v1-4) and (ii) distillation pretraining on only 0.22M LAION pairs (fewer than 0.1% of the full training set). Despite being trained with very limited resources, our compact model can imitate the original SDM by benefiting from transferred knowledge.
- **Resources for more information**: [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM), [Demo]( https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion).
## Examples with ๐ค[Diffusers library](https://github.com/huggingface/diffusers).
An inference code with the default PNDM scheduler and 50 denoising steps is as follows.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-tiny", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a tropical bird sitting on a branch of a tree"
image = pipe(prompt).images[0]
image.save("example.png")
```
The following code is also runnable, because we compressed the U-Net of [Stable Diffusion v1.4]( https://huggingface.co/CompVis/stable-diffusion-v1-4) while keeping the other parts (i.e., Text Encoder and Image Decoder) unchanged:
```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe.unet = UNet2DConditionModel.from_pretrained("nota-ai/bk-sdm-tiny", subfolder="unet", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a tropical bird sitting on a branch of a tree"
image = pipe(prompt).images[0]
image.save("example.png")
```
## Compression Method
### U-Net Architecture
Certain residual and attention blocks were eliminated from the U-Net of SDM-v1.4:
- 1.04B-param [SDM-v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) (0.86B-param U-Net): the original source model.
- 0.76B-param [**BK-SDM-Base**](https://huggingface.co/nota-ai/bk-sdm-base) (0.58B-param U-Net): obtained with โ fewer blocks in outer stages.
- 0.66B-param [**BK-SDM-Small**](https://huggingface.co/nota-ai/bk-sdm-small) (0.49B-param U-Net): obtained with โ and โก mid-stage removal.
- 0.50B-param [**BK-SDM-Tiny**](https://huggingface.co/nota-ai/bk-sdm-tiny) (0.33B-param U-Net): obtained with โ , โก, and โข further inner-stage removal.
<center>
<img alt="U-Net architectures" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_arch.png" width="100%">
</center>
### Distillation Pretraining
The compact U-Net was trained to mimic the behavior of the original U-Net. We leveraged feature-level and output-level distillation, along with the denoising task loss.
<center>
<img alt="KD-based pretraining" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_kd_bksdm.png" width="100%">
</center>
<br/>
- **Training Data**: 212,776 image-text pairs (i.e., 0.22M pairs) from [LAION-Aesthetics V2 6.5+](https://laion.ai/blog/laion-aesthetics/).
- **Hardware:** A single NVIDIA A100 80GB GPU
- **Gradient Accumulations**: 4
- **Batch:** 256 (=4ร64)
- **Optimizer:** AdamW
- **Learning Rate:** a constant learning rate of 5e-5 for 50K-iteration pretraining
## Experimental Results
The following table shows the zero-shot results on 30K samples from the MS-COCO validation split. After generating 512ร512 images with the PNDM scheduler and 25 denoising steps, we downsampled them to 256ร256 for evaluating generation scores. Our models were drawn at the 50K-th training iteration.
| Model | FIDโ | ISโ | CLIP Scoreโ<br>(ViT-g/14) | # Params,<br>U-Net | # Params,<br>Whole SDM |
|---|:---:|:---:|:---:|:---:|:---:|
| [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) | 13.05 | 36.76 | 0.2958 | 0.86B | 1.04B |
| [BK-SDM-Base](https://huggingface.co/nota-ai/bk-sdm-base) (Ours) | 15.76 | 33.79 | 0.2878 | 0.58B | 0.76B |
| [BK-SDM-Small](https://huggingface.co/nota-ai/bk-sdm-small) (Ours) | 16.98 | 31.68 | 0.2677 | 0.49B | 0.66B |
| [BK-SDM-Tiny](https://huggingface.co/nota-ai/bk-sdm-tiny) (Ours) | 17.12 | 30.09 | 0.2653 | 0.33B | 0.50B |
<br/>
The following figure depicts synthesized images with some MS-COCO captions.
<center>
<img alt="Visual results" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_results.png" width="100%">
</center>
<br/>
# Uses
_Note: This section is taken from the [Stable Diffusion v1 model card]( https://huggingface.co/CompVis/stable-diffusion-v1-4) (which was based on the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini)) and applies in the same way to BK-SDMs_.
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to โA red cube on top of a blue sphereโ
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
# Acknowledgments
- We express our gratitude to [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) for generously providing the Azure credits used during pretraining.
- We deeply appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/).
- Special thanks to the contributors to [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), and [Gradio](https://www.gradio.app/) for their valuable support.
# Citation
```bibtex
@article{kim2023architectural,
title={On Architectural Compression of Text-to-Image Diffusion Models},
author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
journal={arXiv preprint arXiv:2305.15798},
year={2023},
url={https://arxiv.org/abs/2305.15798}
}
```
```bibtex
@article{Kim_2023_ICMLW,
title={BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation},
author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
journal={ICML Workshop on Efficient Systems for Foundation Models (ES-FoMo)},
year={2023},
url={https://openreview.net/forum?id=bOVydU0XKC}
}
```
*This model card was written by Bo-Kyeong Kim and is based on the [Stable Diffusion v1 model card]( https://huggingface.co/CompVis/stable-diffusion-v1-4).* | 11,924 | [
[
-0.04290771484375,
-0.043731689453125,
0.01207733154296875,
0.019439697265625,
-0.03533935546875,
-0.0122222900390625,
-0.003314971923828125,
-0.02197265625,
0.02203369140625,
0.033905029296875,
-0.033599853515625,
-0.036346435546875,
-0.046417236328125,
0.0... |
timm/resnetrs50.tf_in1k | 2023-04-05T18:45:44.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2103.07579",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/resnetrs50.tf_in1k | 0 | 1,482 | timm | 2023-04-05T18:45:06 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
---
# Model card for resnetrs50.tf_in1k
A ResNetRS-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k by paper authors in Tensorflow.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 35.7
- GMACs: 2.3
- Activations (M): 6.2
- Image size: train = 160 x 160, test = 224 x 224
- **Papers:**
- Revisiting ResNets: Improved Training and Scaling Strategies: https://arxiv.org/abs/2103.07579
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/resnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetrs50.tf_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetrs50.tf_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 80, 80])
# torch.Size([1, 256, 40, 40])
# torch.Size([1, 512, 20, 20])
# torch.Size([1, 1024, 10, 10])
# torch.Size([1, 2048, 5, 5])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetrs50.tf_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 5, 5) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@article{bello2021revisiting,
title={Revisiting ResNets: Improved Training and Scaling Strategies},
author={Irwan Bello and William Fedus and Xianzhi Du and Ekin D. Cubuk and Aravind Srinivas and Tsung-Yi Lin and Jonathon Shlens and Barret Zoph},
journal={arXiv preprint arXiv:2103.07579},
year={2021}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 38,363 | [
[
-0.06536865234375,
-0.0153961181640625,
0.0018167495727539062,
0.0292205810546875,
-0.031463623046875,
-0.00722503662109375,
-0.0098724365234375,
-0.0291290283203125,
0.08612060546875,
0.019805908203125,
-0.048065185546875,
-0.03955078125,
-0.046142578125,
0... |
vihangd/smartyplats-7b-v1 | 2023-10-27T10:44:59.000Z | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | vihangd | null | null | vihangd/smartyplats-7b-v1 | 0 | 1,481 | transformers | 2023-10-20T11:13:16 | ---
license: apache-2.0
---
<p><h1> SmartyPlats-7b </h1></p>
An experimental finetune of Mistrel 7b with QLoRA
<h2> Datasets </h2>
Trained on alpca style datasets
<p><h2> Prompt Template </h2></p>
Uses alpaca style prompt template | 232 | [
[
-0.044403076171875,
-0.02691650390625,
0.0232391357421875,
0.027374267578125,
-0.04913330078125,
-0.0198211669921875,
0.02313232421875,
0.00011771917343139648,
0.0292205810546875,
0.023101806640625,
-0.042144775390625,
-0.0312042236328125,
-0.00759124755859375,
... |
IlyaGusev/ru-word-stress-transformer | 2022-12-31T00:21:39.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | token-classification | IlyaGusev | null | null | IlyaGusev/ru-word-stress-transformer | 2 | 1,480 | transformers | 2022-07-06T18:30:23 | ---
language:
- ru
tags:
- token-classification
license: apache-2.0
inference: false
---
# RuWordStressTransformer
## Model description
Transformer encoder for predicting word stress in Russian.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
model_name = "IlyaGusev/ru-word-stress-transformer"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True,
revision="bae83dd"
)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipe = pipeline(
"token-classification",
model=model,
tokenizer=tokenizer,
device=-1,
aggregation_strategy="none",
ignore_labels=("NO",)
)
text = "ัะตะบะพะปะดะฐ"
print(text)
index = pipe(text)[0]["index"]
print(text[:index] + "'" + text[index:])
```
Colab: [link](https://colab.research.google.com/drive/1I61aDezhxMVZzHQQfpn7Wqn-ydbndO6i) | 932 | [
[
0.0006670951843261719,
-0.02301025390625,
0.001132965087890625,
0.024993896484375,
-0.043487548828125,
-0.01111602783203125,
-0.000431060791015625,
-0.003437042236328125,
0.0116729736328125,
0.00020575523376464844,
-0.05230712890625,
-0.035675048828125,
-0.07391... |
daekeun-ml/Llama-2-ko-DPO-13B | 2023-10-31T13:19:37.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"dpo",
"ko",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | text-generation | daekeun-ml | null | null | daekeun-ml/Llama-2-ko-DPO-13B | 12 | 1,479 | transformers | 2023-10-31T08:44:53 | ---
language:
- ko
tags:
- llama-2
- dpo
pipeline_tag: text-generation
license: llama2
---
# Llama-2-ko-DPO-13B
Based on the changed criteria from Open-AI-LLM leaderboard, the evaluation metric exceeded 50 percent for the first time. I am pretty proud of myself, even though this score will soon fade into the background as I'm simply testing a hypothesis rather than competing, and there are a lot of great models coming out of 7B.
Since my day job is technical support, not R&D, I could not spend a lot of time on it, so I only processed about 1000 samples and tuned them with DPO (Direct Preference Optimization) to reduce hallucination. The infrastructure was the same as before, using AWS g5.12xlarge, and no additional prompts were given.
I think the potential of the base LLM model is enormous, seeing how much hallucination are reduced with very little data and without much effort. When I meet with customers, many of them have difficulty implementing GenAI features. But it does not take much effort to implement them since many template codes/APIs are well done. It is a world where anyone who is willing to process data can easily and quickly create their own quality model.
### Model Details
- Base Model: [Llama-2-ko-instruct-13B](https://huggingface.co/daekeun-ml/Llama-2-ko-instruct-13B)
### Datasets
- 1,000 samples generated by myself
- Sentences generated by Amazon Bedrock Claude-2 were adopted as chosen, and sentences generated by the Llama-2-13B model fine-tuned with SFT were adopted as rejected.
### Benchmark
- This is the first Korean LLM model to exceed the average metric of 50 percent.
- SOTA model as of October 31, 2023 (https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **daekeun-ml/Llama-2-ko-DPO-13B (Ours)** | **51.03** | 47.53 | 58.28 | 43.59 | 51.91 | 53.84 |
| [daekeun-ml/Llama-2-ko-instruct-13B](https://huggingface.co/daekeun-ml/Llama-2-ko-instruct-13B) | 49.52 | 46.5 | 56.9 | 43.76 | 42 | 58.44 |
| [kyujinpy/Korean-OpenOrca-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 48.79 | 43.09 | 54.13 | 40.24 | 45.22 | 61.28 |

### License
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
This model was created as a personal experiment, unrelated to the organization I work for. | 2,591 | [
[
-0.034423828125,
-0.04052734375,
0.046417236328125,
0.039306640625,
-0.04376220703125,
0.0037555694580078125,
0.000005841255187988281,
-0.041259765625,
0.025421142578125,
0.032318115234375,
-0.041015625,
-0.060272216796875,
-0.0533447265625,
0.01141357421875... |
kyujinpy/CoT-llama-2k-7b | 2023-10-19T13:28:07.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KoCoT_2000",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | kyujinpy | null | null | kyujinpy/CoT-llama-2k-7b | 3 | 1,477 | transformers | 2023-09-23T19:02:28 | ---
language:
- ko
datasets:
- kyujinpy/KoCoT_2000
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**(์ฃผ)๋ฏธ๋์ด๊ทธ๋ฃน์ฌ๋๊ณผ์ฒ๊ณผ (์ฃผ)๋ง์ปค์ LLM ์ฐ๊ตฌ ์ปจ์์์์์ ๊ฐ๋ฐ๋ ๋ชจ๋ธ์
๋๋ค**
**The license is `cc-by-nc-sa-4.0`.**
# **CoT-llama2-7B**

**More detail repo(Github): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2)**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
CoT-llama2 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model**
[Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
**Training Dataset**
I use [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000).
Using DeepL, translate about [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
I use A100 GPU 40GB and COLAB, when trianing.
**Training Hyperparameters**
| Hyperparameters | Value |
| --- | --- |
| batch_size | `64` |
| micro_batch_size | `1` |
| Epochs | `15` |
| learning_rate | `1e-5` |
| cutoff_len | `2048` |
| lr_scheduler | `linear` |
| base_model | `beomi/llama-2-ko-7b` |
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot)
> Question Answering (QA)
### COPA (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
| **CoT-llama2-7B(ours)** | 0.7528 | 0.7888 | 0.7998 | 0.8210 |
> Natural Language Inference (NLI; ์์ฐ์ด ์ถ๋ก ํ๊ฐ)
### HellaSwag (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
| **CoT-llama2-7B(ours)** | 0.4543 | 0.4554 | 0.4606 | 0.4579 |
> Question Answering (QA)
### BoolQ (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
| **CoT-llama2-7B(ours)** | 0.5852 | 0.6947 | 0.7059 | 0.7213 |
> Classification
### SentiNeg (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
| **CoT-llama2-7B(ours)** | 0.5045 | 0.8054 | 0.7942 | 0.9446 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/CoT-llama-2k-7b"
CoT-llama = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
CoT-llama_tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
--- | 5,725 | [
[
-0.0478515625,
-0.048736572265625,
0.0182647705078125,
0.035552978515625,
-0.0472412109375,
0.0180206298828125,
-0.007114410400390625,
-0.0430908203125,
0.0638427734375,
0.0039520263671875,
-0.0274505615234375,
-0.0482177734375,
-0.057861328125,
0.0211029052... |
wbbbbb/wav2vec2-large-chinese-zh-cn | 2023-09-11T00:07:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"zh",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | wbbbbb | null | null | wbbbbb/wav2vec2-large-chinese-zh-cn | 33 | 1,476 | transformers | 2022-07-18T06:21:56 | ---
language: zh
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Chinese (zh-CN) by wbbbbb
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-CN
type: common_voice
args: zh-CN
metrics:
- name: Test WER
type: wer
value: 70.47
- name: Test CER
type: cer
value: 12.30
---
# Fine-tuned XLSR-53 large model for speech recognition in Chinese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [ST-CMDS](http://www.openslr.org/38/).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned on RTX3090 for 50h
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("wbbbbb/wav2vec2-large-chinese-zh-cn")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
## Evaluation
The model can be evaluated as follows on the Chinese (zh-CN) test data of Common Voice.
```python
import torch
import re
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import warnings
import os
os.environ["KMP_AFFINITY"] = ""
LANG_ID = "zh-CN"
MODEL_ID = "zh-CN-output-aishell"
DEVICE = "cuda"
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = (
re.sub("([^\u4e00-\u9fa5\u0030-\u0039])", "", batch["sentence"]).lower() + " "
)
return batch
test_dataset = test_dataset.map(
speech_file_to_array_fn,
num_proc=15,
remove_columns=['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'],
)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(
batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True
)
with torch.no_grad():
logits = model(
inputs.input_values.to(DEVICE),
attention_mask=inputs.attention_mask.to(DEVICE),
).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.lower() for x in result["pred_strings"]]
references = [x.lower() for x in result["sentence"]]
print(
f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}"
)
print(f"CER: {cer.compute(predictions=predictions, references=references) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2022-07-18). Note that the table below may show different results from those already reported, this may have been caused due to some specificity of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| wbbbbb/wav2vec2-large-chinese-zh-cn | **70.47%** | **12.30%** |
| jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn | **82.37%** | **19.03%** |
| ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt | 84.01% | 20.95% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-chinese,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {C}hinese},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/wbbbbb/wav2vec2-large-chinese-zh-cn}},
year={2021}
}
``` | 4,690 | [
[
-0.013031005859375,
-0.042633056640625,
0.01488494873046875,
0.01259613037109375,
-0.01520538330078125,
-0.01611328125,
-0.033233642578125,
-0.03753662109375,
0.000995635986328125,
0.02301025390625,
-0.04364013671875,
-0.056427001953125,
-0.03460693359375,
-... |
EleutherAI/pythia-160m-deduped-v0 | 2023-07-10T01:30:40.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",... | text-generation | EleutherAI | null | null | EleutherAI/pythia-160m-deduped-v0 | 7 | 1,476 | transformers | 2022-10-18T02:59:41 | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | โ |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | โ |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | โ |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. โEquivalentโ
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not a in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better โunderstandโ human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most โaccurateโ text. Never rely on
Pythia-160M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 โactualโ steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA โ OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge โ Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | 11,894 | [
[
-0.0254974365234375,
-0.06390380859375,
0.0202484130859375,
0.002666473388671875,
-0.016571044921875,
-0.01165008544921875,
-0.016204833984375,
-0.0333251953125,
0.01508331298828125,
0.0163421630859375,
-0.0238494873046875,
-0.024200439453125,
-0.035552978515625... |
timm/tf_efficientnetv2_b1.in1k | 2023-04-27T21:38:52.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.00298",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/tf_efficientnetv2_b1.in1k | 0 | 1,476 | timm | 2022-12-13T00:14:24 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnetv2_b1.in1k
A EfficientNet-v2 image classification model. Trained on ImageNet-1k in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 8.1
- GMACs: 0.8
- Activations (M): 4.6
- Image size: train = 192 x 192, test = 240 x 240
- **Papers:**
- EfficientNetV2: Smaller Models and Faster Training: https://arxiv.org/abs/2104.00298
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnetv2_b1.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnetv2_b1.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 96, 96])
# torch.Size([1, 32, 48, 48])
# torch.Size([1, 48, 24, 24])
# torch.Size([1, 112, 12, 12])
# torch.Size([1, 192, 6, 6])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnetv2_b1.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 6, 6) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2021efficientnetv2,
title={Efficientnetv2: Smaller models and faster training},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={10096--10106},
year={2021},
organization={PMLR}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 4,065 | [
[
-0.0262298583984375,
-0.034271240234375,
-0.00450897216796875,
0.00656890869140625,
-0.0236053466796875,
-0.031890869140625,
-0.02001953125,
-0.0277099609375,
0.01294708251953125,
0.0284271240234375,
-0.0253753662109375,
-0.048248291015625,
-0.0550537109375,
... |
4i-ai/Llama-2-7b-alpaca-es | 2023-08-23T09:22:50.000Z | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"es",
"dataset:bertin-project/alpaca-spanish",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | text-generation | 4i-ai | null | null | 4i-ai/Llama-2-7b-alpaca-es | 7 | 1,476 | transformers | 2023-08-10T07:51:19 | ---
license: cc-by-nc-4.0
datasets:
- bertin-project/alpaca-spanish
language:
- es
inference: false
---
# Model Card for Model ID
This model is the Llama-2-7b-hf fine-tuned with an adapter on the Spanish Alpaca dataset.
## Model Details
### Model Description
This is a Spanish chat model fine-tuned on a Spanish instruction dataset.
The model expect a prompt containing the instruction, with an option to add an input (see examples below).
- **Developed by:** 4i Intelligent Insights
- **Model type:** Chat model
- **Language(s) (NLP):** Spanish
- **License:** cc-by-nc-4.0 (inhereted from the alpaca-spanish dataset),
- **Finetuned from model :** Llama 2 7B ([license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/))
## Uses
The model is intended to be used directly without the need of further fine-tuning.
## Bias, Risks, and Limitations
This model inherits the bias, risks, and limitations of its base model, Llama 2, and of the dataset used for fine-tuning.
Note that the Spanish Alpaca dataset was obtained by translating the original Alpaca dataset. It contains translation errors that may have negatively impacted the fine-tuning of the model.
## How to Get Started with the Model
Use the code below to get started with the model for inference. The adapter was directly merged into the original Llama 2 model.
The following code sample uses 4-bit quantization, you may load the model without it if you have enough VRAM.
```py
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments, GenerationConfig
import torch
model_name = "4i-ai/Llama-2-7b-alpaca-es"
#Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
def create_and_prepare_model():
compute_dtype = getattr(torch, "float16")
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name, quantization_config=bnb_config, device_map={"": 0}
)
return model
model = create_and_prepare_model()
def generate(instruction, input=None):
#Format the prompt to look like the training data
if input is not None:
prompt = "### Instruction:\n"+instruction+"\n\n### Input:\n"+input+"\n\n### Response:\n"
else :
prompt = "### Instruction:\n"+instruction+"\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(temperature=1.0, top_p=0.75, top_k=40, num_beams=10), #hyperparameters for generation
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=150, #maximum tokens generated, increase if you want longer asnwer (up to 2048 - the length of the prompt), generation "looks" slower for longer response
)
for seq in generation_output.sequences:
output = tokenizer.decode(seq, skip_special_tokens=True)
print(output.split("### Response:")[1].strip())
generate("Hรกblame de la superconductividad.")
print("-----------")
generate("Encuentra la capital de Espaรฑa.")
print("-----------")
generate("Encuentra la capital de Portugal.")
print("-----------")
generate("Organiza los nรบmeros dados en orden ascendente.", "2, 3, 0, 8, 4, 10")
print("-----------")
generate("Compila una lista de 5 estados de EE. UU. ubicados en el Oeste.")
print("-----------")
generate("ยฟCuรกl es el color de una fresa?")
print("-----------")
generate("ยฟCuรกl es el color de la siguiente fruta?", "fresa")
print("-----------")
```
Expected output:
```
La superconductividad es un fenรณmeno fรญsico en el que algunos materiales se convierten en conductores de corriente elรฉctrica a temperaturas muy bajas. Esto significa que la corriente elรฉctrica puede fluir a travรฉs del material sin pรฉrdida de energรญa. La superconductividad fue descubierta por primera vez en 1911 por el fรญsico alemรกn Heike Kamerlingh Onnes, quien descubriรณ que algunos materiales se convierten en conductores de corriente elรฉctrica a temperaturas muy bajas. Desde entonces, la superconductividad se ha utiliz
-----------
La capital de Espaรฑa es Madrid.
-----------
La capital de Portugal es Lisboa.
-----------
2, 3, 4, 8, 10, 0
-----------
California, Oregรณn, Washington, Nevada y Arizona.
-----------
El color de una fresa es rosa.
-----------
El color de la fresa es rojo.
```
## Contact Us
[4i.ai](https://4i.ai/) provides natural language processing solutions with dialog, vision and voice capabilities to deliver real-life multimodal human-machine conversations.
Please contact us at info@4i.ai
| 4,884 | [
[
-0.035919189453125,
-0.053955078125,
0.0162353515625,
0.028564453125,
-0.0215911865234375,
-0.0024585723876953125,
-0.00016582012176513672,
-0.026336669921875,
0.019805908203125,
0.021453857421875,
-0.05218505859375,
-0.03717041015625,
-0.039031982421875,
0.... |
timm/fastvit_t12.apple_in1k | 2023-08-23T20:56:22.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2303.14189",
"license:other",
"region:us"
] | image-classification | timm | null | null | timm/fastvit_t12.apple_in1k | 0 | 1,476 | timm | 2023-08-23T20:56:16 | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for fastvit_t12.apple_in1k
A FastViT image classification model. Trained on ImageNet-1k by paper authors.
Please observe [original license](https://github.com/apple/ml-fastvit/blob/8af5928238cab99c45f64fc3e4e7b1516b8224ba/LICENSE).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 7.6
- GMACs: 1.4
- Activations (M): 12.4
- Image size: 256 x 256
- **Papers:**
- FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization: https://arxiv.org/abs/2303.14189
- **Original:** https://github.com/apple/ml-fastvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('fastvit_t12.apple_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_t12.apple_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 64, 64])
# torch.Size([1, 128, 32, 32])
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 512, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_t12.apple_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{vasufastvit2023,
author = {Pavan Kumar Anasosalu Vasu and James Gabriel and Jeff Zhu and Oncel Tuzel and Anurag Ranjan},
title = {FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year = {2023}
}
```
| 3,665 | [
[
-0.041778564453125,
-0.03692626953125,
0.002170562744140625,
0.0175628662109375,
-0.030853271484375,
-0.0143585205078125,
-0.008270263671875,
-0.0201416015625,
0.025146484375,
0.0264739990234375,
-0.03790283203125,
-0.04620361328125,
-0.05084228515625,
-0.01... |
deepmind/vision-perceiver-fourier | 2023-09-24T08:47:15.000Z | [
"transformers",
"pytorch",
"perceiver",
"image-classification",
"dataset:imagenet",
"arxiv:2107.14795",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | deepmind | null | null | deepmind/vision-perceiver-fourier | 2 | 1,475 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
datasets:
- imagenet
---
# Perceiver IO for vision (fixed Fourier position embeddings)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverImageProcessor, PerceiverForImageClassificationFourier
import requests
from PIL import Image
processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-fourier")
model = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
inputs = processor(image, return_tensors="pt").pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
>>> should print Predicted class: tabby, tabby cat
```
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
### Pretraining
Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 4,987 | [
[
-0.052001953125,
-0.050384521484375,
0.0258636474609375,
0.001979827880859375,
-0.01099395751953125,
-0.038055419921875,
-0.00649261474609375,
-0.059478759765625,
0.01151275634765625,
0.01024627685546875,
-0.0301361083984375,
-0.018218994140625,
-0.0437927246093... |
allenai/led-large-16384 | 2023-01-24T16:27:05.000Z | [
"transformers",
"pytorch",
"tf",
"led",
"text2text-generation",
"en",
"arxiv:2004.05150",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | allenai | null | null | allenai/led-large-16384 | 21 | 1,474 | transformers | 2022-03-02T23:29:05 | ---
language: en
license: apache-2.0
---
## Introduction
[Allenai's Longformer Encoder-Decoder (LED)](https://github.com/allenai/longformer#longformer).
As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan, *led-large-16384* was initialized from [*bart-large*](https://huggingface.co/facebook/bart-large) since both models share the exact same architecture. To be able to process 16K tokens, *bart-large*'s position embedding matrix was simply copied 16 times.
This model is especially interesting for long-range summarization and question answering.
## Fine-tuning for down-stream task
[This notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) shows how *led-large-16384* can effectively be fine-tuned on a downstream task.
| 868 | [
[
-0.031402587890625,
-0.05902099609375,
0.0537109375,
0.0177459716796875,
-0.024871826171875,
0.0136260986328125,
-0.0309906005859375,
-0.034820556640625,
0.02001953125,
0.0289306640625,
-0.0306396484375,
-0.00957489013671875,
-0.041961669921875,
0.0182800292... |
jonatasgrosman/wav2vec2-large-xlsr-53-italian | 2022-12-14T02:05:34.000Z | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"it",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | automatic-speech-recognition | jonatasgrosman | null | null | jonatasgrosman/wav2vec2-large-xlsr-53-italian | 8 | 1,474 | transformers | 2022-03-02T23:29:05 | ---
language: it
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- it
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 Italian by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice it
type: common_voice
args: it
metrics:
- name: Test WER
type: wer
value: 9.41
- name: Test CER
type: cer
value: 2.29
- name: Test WER (+LM)
type: wer
value: 6.91
- name: Test CER (+LM)
type: cer
value: 1.83
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: it
metrics:
- name: Dev WER
type: wer
value: 21.78
- name: Dev CER
type: cer
value: 7.94
- name: Dev WER (+LM)
type: wer
value: 15.82
- name: Dev CER (+LM)
type: cer
value: 6.83
---
# Fine-tuned XLSR-53 large model for speech recognition in Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-italian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "it"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-italian"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| POI LEI MORร. | POI LEI MORร |
| IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI. | IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI |
| "FIN DALL'INIZIO LA SEDE EPISCOPALE ร STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE." | FIN DALL'INIZIO LA SEDE EPISCOPALE ร STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE |
| IL VUOTO ASSOLUTO? | IL VUOTO ASSOLUTO |
| DOPO ALCUNI ANNI, EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI. | DOPO ALCUNI ANNI EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI |
| SALVATION SUE | SALVATION SOO |
| IN QUESTO MODO, DECIO OTTENNE IL POTERE IMPERIALE. | IN QUESTO MODO DECHO OTTENNE IL POTERE IMPERIALE |
| SPARTA NOVARA ACQUISISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA. | PARCANOVARACFILISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA |
| IN SEGUITO, KYGO E SHEAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE. | IN SEGUITO KIGO E SHIAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE |
| ALAN CLARKE | ALAN CLARK |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset mozilla-foundation/common_voice_6_0 --config it --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset speech-recognition-community-v2/dev_data --config it --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-italian,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {I}talian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian}},
year={2021}
}
```
| 5,527 | [
[
-0.032470703125,
-0.039154052734375,
0.01357269287109375,
0.01367950439453125,
-0.02154541015625,
-0.01407623291015625,
-0.0301666259765625,
-0.045623779296875,
0.0186920166015625,
0.01061248779296875,
-0.0455322265625,
-0.041046142578125,
-0.03594970703125,
... |
squirro/albert-base-v2-squad_v2 | 2023-01-31T14:37:20.000Z | [
"transformers",
"pytorch",
"tf",
"onnx",
"albert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | squirro | null | null | squirro/albert-base-v2-squad_v2 | 3 | 1,474 | transformers | 2022-03-07T10:57:20 | ---
license: apache-2.0
language: en
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-squad_v2
results:
- task:
name: Question Answering
type: question-answering
dataset:
type: squad_v2 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: The Stanford Question Answering Dataset
args: en
metrics:
- type: eval_exact
value: 78.8175
- type: eval_f1
value: 81.9984
- type: eval_HasAns_exact
value: 75.3374
- type: eval_HasAns_f1
value: 81.7083
- type: eval_NoAns_exact
value: 82.2876
- type: eval_NoAns_f1
value: 82.2876
---
# albert-base-v2-squad_v2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
## Model description
This model is fine-tuned on the extractive question answering task -- The Stanford Question Answering Dataset -- [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/).
For convenience this model is prepared to be used with the frameworks `PyTorch`, `Tensorflow` and `ONNX`.
## Intended uses & limitations
This model can handle mismatched question-context pairs. Make sure to specify `handle_impossible_answer=True` when using `QuestionAnsweringPipeline`.
__Example usage:__
```python
>>> from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
>>> model = AutoModelForQuestionAnswering.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> tokenizer = AutoTokenizer.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> qa_model = QuestionAnsweringPipeline(model, tokenizer)
>>> qa_model(
>>> question="What's your name?",
>>> context="My name is Clara and I live in Berkeley.",
>>> handle_impossible_answer=True # important!
>>> )
{'score': 0.9027367830276489, 'start': 11, 'end': 16, 'answer': 'Clara'}
```
## Training and evaluation data
Training and evaluation was done on [SQuAD2.0](https://huggingface.co/datasets/squad_v2).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| key | value |
|:-------------------------|--------------:|
| epoch | 3 |
| eval_HasAns_exact | 75.3374 |
| eval_HasAns_f1 | 81.7083 |
| eval_HasAns_total | 5928 |
| eval_NoAns_exact | 82.2876 |
| eval_NoAns_f1 | 82.2876 |
| eval_NoAns_total | 5945 |
| eval_best_exact | 78.8175 |
| eval_best_exact_thresh | 0 |
| eval_best_f1 | 81.9984 |
| eval_best_f1_thresh | 0 |
| eval_exact | 78.8175 |
| eval_f1 | 81.9984 |
| eval_samples | 12171 |
| eval_total | 11873 |
| train_loss | 0.775293 |
| train_runtime | 1402 |
| train_samples | 131958 |
| train_samples_per_second | 282.363 |
| train_steps_per_second | 1.104 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
---
# About Us
<img src="https://squirro.com/wp-content/themes/squirro/img/squirro_logo.svg" alt="Squirro Logo" width="250"/>
Squirro marries data from any source with your intent, and your context to intelligently augment decision-making - right when you need it!
An Insight Engine at its core, Squirro works with global organizations, primarily in financial services, public sector, professional services, and manufacturing, among others. Customers include Bank of England, European Central Bank (ECB), Deutsche Bundesbank, Standard Chartered, Henkel, Armacell, Candriam, and many other world-leading firms.
Founded in 2012, Squirro is currently present in Zรผrich, London, New York, and Singapore. Further information about AI-driven business insights can be found at http://squirro.com.
## Social media profiles:
- Redefining AI Podcast (Spotify): https://open.spotify.com/show/6NPLcv9EyaD2DcNT8v89Kb
- Redefining AI Podcast (Apple Podcasts): https://podcasts.apple.com/us/podcast/redefining-ai/id1613934397
- Squirro LinkedIn: https://www.linkedin.com/company/squirroag
- Squirro Academy LinkedIn: https://www.linkedin.com/showcase/the-squirro-academy
- Twitter: https://twitter.com/Squirro
- Facebook: https://www.facebook.com/squirro
- Instagram: https://www.instagram.com/squirro/ | 4,921 | [
[
-0.026641845703125,
-0.053192138671875,
0.0114593505859375,
0.0202178955078125,
0.01044464111328125,
-0.0015974044799804688,
0.00045371055603027344,
-0.038238525390625,
0.0121002197265625,
0.005279541015625,
-0.057281494140625,
-0.0292816162109375,
-0.033203125,... |
timm/eva02_base_patch14_448.mim_in22k_ft_in1k | 2023-03-31T05:45:05.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2303.11331",
"arxiv:2303.15389",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/eva02_base_patch14_448.mim_in22k_ft_in1k | 1 | 1,474 | timm | 2023-03-31T04:14:40 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for eva02_base_patch14_448.mim_in22k_ft_in1k
An EVA02 image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by paper authors.
EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large).
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 87.1
- GMACs: 107.1
- Activations (M): 259.1
- Image size: 448 x 448
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/Yuxin-CV/EVA-02
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva02_base_patch14_448.mim_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva02_base_patch14_448.mim_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1025, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{EVA-CLIP,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,399 | [
[
-0.044342041015625,
-0.029815673828125,
0.01226806640625,
0.00806427001953125,
-0.0174102783203125,
0.0011272430419921875,
-0.00830078125,
-0.033172607421875,
0.03912353515625,
0.02734375,
-0.034881591796875,
-0.05169677734375,
-0.0433349609375,
0.0063438415... |
AVIIAX/majic6 | 2023-10-28T16:36:15.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | AVIIAX | null | null | AVIIAX/majic6 | 1 | 1,473 | diffusers | 2023-10-28T16:36:15 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/43331?modelVersionId=94640
| 193 | [
[
-0.0229339599609375,
0.029266357421875,
0.039306640625,
0.0309600830078125,
-0.03240966796875,
-0.0178375244140625,
0.038909912109375,
-0.00969696044921875,
0.01708984375,
0.029693603515625,
-0.050750732421875,
0.00043511390686035156,
0.01311492919921875,
-0... |
google/long-t5-tglobal-xl | 2023-01-24T17:11:32.000Z | [
"transformers",
"pytorch",
"jax",
"longt5",
"text2text-generation",
"en",
"arxiv:2112.07916",
"arxiv:1912.08777",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | google | null | null | google/long-t5-tglobal-xl | 20 | 1,472 | transformers | 2022-06-14T08:32:52 | ---
license: apache-2.0
language: en
---
# LongT5 (transient-global attention, XL-sized model)
LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence.
LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.
### How to use
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-xl")
model = LongT5Model.from_pretrained("google/long-t5-tglobal-xl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{guo2021longt5,
title={LongT5: Efficient Text-To-Text Transformer for Long Sequences},
author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei},
journal={arXiv preprint arXiv:2112.07916},
year={2021}
}
``` | 2,380 | [
[
-0.03363037109375,
-0.045440673828125,
0.03533935546875,
0.0294342041015625,
-0.0218505859375,
-0.00849151611328125,
-0.0224151611328125,
-0.052215576171875,
0.006359100341796875,
0.0185089111328125,
-0.0423583984375,
-0.03485107421875,
-0.048583984375,
0.03... |
Yntec/elldrethSVividMix | 2023-09-30T06:51:50.000Z | [
"diffusers",
"General",
"Elldreth",
"Dream",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/elldrethSVividMix | 1 | 1,470 | diffusers | 2023-09-30T05:16:53 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Elldreth
- Dream
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Elldreth's Vivid Mix
fp16 no-ema version of this model. Original page: https://huggingface.co/danbrown/elldreth-vivid-mix
Samples and prompt:


Pretty Cute Little Photorealistic, highly detailed, masterpiece, trending on ArtStation, sitting, Detailed Chibi Eyes, fantasy, beautiful detailed legs, streetwear, gorgeous detailed hair, hat, Magazine ad, iconic, 1943, from the movie, sharp focus. | 835 | [
[
-0.057220458984375,
-0.0814208984375,
0.0052947998046875,
0.0306243896484375,
-0.009124755859375,
0.01265716552734375,
0.019256591796875,
-0.056732177734375,
0.09796142578125,
0.0380859375,
-0.0677490234375,
-0.045867919921875,
-0.01003265380859375,
0.000120... |
bioformers/bioformer-8L-qnli | 2023-08-02T07:49:30.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:1804.07461",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | bioformers | null | null | bioformers/bioformer-8L-qnli | 0 | 1,469 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
language:
- en
---
[bioformer-8L](https://huggingface.co/bioformers/bioformer-8L) fined-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.883397
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark.
(source: https://paperswithcode.com/dataset/qnli)
Original GLUE paper: https://arxiv.org/abs/1804.07461 | 1,597 | [
[
-0.033935546875,
-0.0654296875,
0.03265380859375,
0.00362396240234375,
-0.01322174072265625,
-0.01168060302734375,
-0.004428863525390625,
-0.039642333984375,
0.01192474365234375,
0.0181884765625,
-0.056060791015625,
-0.0190277099609375,
-0.02252197265625,
0.... |
oshizo/sbert-jsnli-luke-japanese-base-lite | 2023-01-10T12:36:12.000Z | [
"sentence-transformers",
"pytorch",
"luke",
"feature-extraction",
"sentence-similarity",
"transformers",
"ja",
"dataset:shunk031/jsnli",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | oshizo | null | null | oshizo/sbert-jsnli-luke-japanese-base-lite | 23 | 1,468 | sentence-transformers | 2023-01-10T11:53:15 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
datasets:
- shunk031/jsnli
language:
- ja
---
# sbert-jsnli-luke-japanese-base-lite
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The base model is [studio-ousia/luke-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite) and was trained 1 epoch with [shunk031/jsnli](https://huggingface.co/datasets/shunk031/jsnli).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('oshizo/sbert-jsnli-luke-japanese-base-lite')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('oshizo/sbert-jsnli-luke-japanese-base-lite')
model = AutoModel.from_pretrained('oshizo/sbert-jsnli-luke-japanese-base-lite')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
The results of the evaluation by JSTS and JSICK are available [here](https://github.com/oshizo/JapaneseEmbeddingEval).
## Training
Training scripts are available in [this repository](https://github.com/oshizo/JapaneseEmbeddingTrain).
This model was trained 1 epoch on Google Colab Pro A100 and took approximately 40 minutes.
| 2,998 | [
[
-0.01415252685546875,
-0.05810546875,
0.022552490234375,
0.00942230224609375,
-0.0269622802734375,
-0.0259857177734375,
-0.0335693359375,
-0.0028247833251953125,
0.0269317626953125,
0.0302886962890625,
-0.049041748046875,
-0.037567138671875,
-0.047515869140625,
... |
MarkrAI/kyujin-Poly-platypus-ko-12.8b | 2023-10-19T13:32:01.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | MarkrAI | null | null | MarkrAI/kyujin-Poly-platypus-ko-12.8b | 2 | 1,468 | transformers | 2023-09-30T13:28:32 | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**(์ฃผ)๋ฏธ๋์ด๊ทธ๋ฃน์ฌ๋๊ณผ์ฒ๊ณผ (์ฃผ)๋ง์ปค์ LLM ์ฐ๊ตฌ ์ปจ์์์์์ ๊ฐ๋ฐ๋ ๋ชจ๋ธ์
๋๋ค**
**The license is `cc-by-nc-sa-4.0`.**
# **Poly-platypus-ko**

**Polyglot-ko + KO-platypus2 = Poly-platypus-ko**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Poly-platypus-ko is an auto-regressive language model based on the polyglot-ko transformer architecture.
**Repo Link**
Github KO-platypus2: [KO-platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus)
Github Poly-platypus-ko: [Poly-platypus-ko](https://github.com/KyujinHan/Poly-platypus-ko)
**Base Model**
[Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)
**Fine-tuning method**
Same as [KO-Platypus2](https://github.com/Marker-Inc-Korea/CoT-llama2).
**Training Dataset**
I use [KOpen-platypus dataset](https://huggingface.co/datasets/kyujinpy/KOpen-platypus).
I use A100 GPU 40GB and COLAB, when trianing.
---
# **Model Bechmark1**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Poly-platypus-ko-12.8b(ours) | 44.95 | 35.15 | 50.39 | 25.58 | 38.74 | 74.88 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 45.41 | 39.08 | 50.86 | 34.60 | 37.94 | 64.55 |
| [42MARU/polyglot-ko-12.8b-instruct](https://huggingface.co/42MARU/polyglot-ko-12.8b-instruct) | 43.89 | 36.35 | 51.59 | 26.38 | 45.16 | 59.98 |
| [FINDA-FIT/llama-p](https://huggingface.co/FINDA-FIT/llama-p) | 43.63 | 39.59 | 50.74 | 33.85 | 38.09 | 55.87 |
> Compare with Top 4 SOTA models. (update: 10/01)
---
# **Model Benchmark2**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot)
> Question Answering (QA)
### COPA (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 0.7517 | 0.7868 | 0.8009 | 0.8239 |
| **Poly-platypus-ko-12.8b(ours)** | 0.7876 | 0.8099 | 0.8008 | 0.8239 |
> Natural Language Inference (NLI; ์์ฐ์ด ์ถ๋ก ํ๊ฐ)
### HellaSwag (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 0.4432 | 0.4382 | 0.4550 | 0.4534 |
| **Poly-platypus-ko-12.8b(ours)** | 0.4838 | 0.4858 | 0.5005 | 0.5062 |
> Question Answering (QA)
### BoolQ (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 0.6142 | 0.6757 | 0.6839 | 0.6878 |
| **Poly-platypus-ko-12.8b(ours)** | 0.4888 | 0.6520 | 0.6568 | 0.6835 |
> Classification
### SentiNeg (F1)
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 0.6127 | 0.7199 | 0.7531 | 0.8381 |
| **Poly-platypus-ko-12.8b(ours)** | 0.8490 | 0.9597 | 0.9723 | 0.9847 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "MarkrAI/kyujin-Poly-platypus-ko-12.8b"
CoT-llama = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
CoT-llama_tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [kyujinpy/KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B)
--- | 6,263 | [
[
-0.05169677734375,
-0.04595947265625,
0.02362060546875,
0.032470703125,
-0.04364013671875,
0.01187896728515625,
-0.00827789306640625,
-0.033935546875,
0.055450439453125,
0.00576019287109375,
-0.025604248046875,
-0.046112060546875,
-0.053131103515625,
0.01734... |
Undi95/Xwin-MLewd-13B-V0.2 | 2023-10-15T17:25:21.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Undi95 | null | null | Undi95/Xwin-MLewd-13B-V0.2 | 30 | 1,468 | transformers | 2023-10-14T21:15:56 | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---

THIS MODEL IS MADE FOR LEWD
SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED
This is MLewd merged with [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2)
<!-- description start -->
## Description
This repo contains fp16 files of Xwin-MLewd-13B-V0.2, very hot and lewd model based on Xwin 0.2 13B.
<!-- description end -->
<!-- description start -->
## Models and loras used
- Undi95/ReMM-S-Light (base/private)
- Undi95/CreativeEngine
- Brouz/Slerpeno
- The-Face-Of-Goonery/Huginn-v3-13b
- zattio770/120-Days-of-LORA-v2-13B
- PygmalionAI/pygmalion-2-13b
- Undi95/StoryTelling
- TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- nRuaif/Kimiko-v2-13B
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Xwin-LM/Xwin-LM-13B-V0.2
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## The secret sauce
```
slices:
- sources:
- model: Xwin-LM/Xwin-LM-13B-V0.2
layer_range: [0, 40]
- model: Undi95/MLewd-v2.4-13B
layer_range: [0, 40]
merge_method: slerp
base_model: Xwin-LM/Xwin-LM-13B-V0.2
parameters:
t:
- filter: lm_head
value: [0.55]
- filter: embed_tokens
value: [0.7]
- filter: self_attn
value: [0.65, 0.35]
- filter: mlp
value: [0.35, 0.65]
- filter: layernorm
value: [0.4, 0.6]
- filter: modelnorm
value: [0.6]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
Special thanks to Sushi and Shena โฅ
If you want to support me, you can [here](https://ko-fi.com/undiai). | 1,954 | [
[
-0.040618896484375,
-0.04742431640625,
0.0282745361328125,
0.0220794677734375,
-0.0193634033203125,
-0.01409912109375,
0.01084136962890625,
-0.04315185546875,
0.034027099609375,
0.056365966796875,
-0.0738525390625,
-0.047149658203125,
-0.05230712890625,
-0.0... |
cardiffnlp/twitter-roberta-base-2021-124m | 2022-10-10T18:42:02.000Z | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | cardiffnlp | null | null | cardiffnlp/twitter-roberta-base-2021-124m | 3 | 1,467 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- timelms
- twitter
license: mit
datasets:
- twitter-api
---
# Twitter 2021 124M (RoBERTa-base)
This is a RoBERTa-base model trained on 123.86M tweets until the end of 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.39613 fully
2) 0.26333 getting
3) 0.18988 not
4) 0.02312 still
5) 0.02099 already
------------------------------
I keep forgetting to bring a <mask>.
1) 0.08356 mask
2) 0.05696 book
3) 0.03505 bag
4) 0.02983 backpack
5) 0.02847 blanket
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.46618 the
2) 0.24042 The
3) 0.03216 End
4) 0.02925 Squid
5) 0.02610 this
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken ๐ฃ",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.98969 The movie was great
2) 0.96102 Just finished reading 'Embeddings in NLP'
3) 0.95565 I just ordered fried chicken ๐ฃ
4) 0.95041 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night ๐"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` | 4,723 | [
[
-0.01953125,
-0.04266357421875,
0.010009765625,
0.0259246826171875,
-0.01529693603515625,
0.00714111328125,
-0.0081787109375,
-0.006683349609375,
0.016265869140625,
0.0005731582641601562,
-0.03729248046875,
-0.045135498046875,
-0.058074951171875,
0.010688781... |
stockmark/stockmark-13b | 2023-10-28T11:19:54.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"japanese",
"llama-2",
"ja",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | stockmark | null | null | stockmark/stockmark-13b | 26 | 1,467 | transformers | 2023-10-21T06:53:06 | ---
license: mit
language:
- ja
library_name: transformers
pipeline_tag: text-generation
tags:
- japanese
- llama-2
---
# stockmark/stockmark-13b
Stockmark-13b is a 13 billion parameter LLM pretrained from scratch based on Japanese corpus of about 220B tokens. This model is developed by [Stockmark Inc.](https://stockmark.co.jp/)
Please see our [blog](https://tech.stockmark.co.jp/blog/202310_stockmark_13b/) for more details.
This project is supported by [AWS LLM development support program](https://aws.amazon.com/jp/local/llm-development-support-program/).
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# For A100 or H100 GPU
model = AutoModelForCausalLM.from_pretrained("stockmark/stockmark-13b", device_map="auto", torch_dtype=torch.bfloat16)
# If you use a T4 or V100 GPU, please load a model in 8 bit with the below code.
# To do so, you need to install `bitsandbytes` via `pip install bitsandbytes`.
# model = AutoModelForCausalLM.from_pretrained("stockmark/stockmark-13b", device_map={"": 0}, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("stockmark/stockmark-13b")
inputs = tokenizer("่ช็ถ่จ่ชๅฆ็ใจใฏ", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=128,
do_sample=True,
temperature=0.7
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Examples:
- LoRA tuning: https://huggingface.co/stockmark/stockmark-13b/blob/main/notebooks/LoRA.ipynb
## Training dataset
We have used Japanese corpus of total of about 220 billion tokens.
|corpus|tokens after preprocessing|
|:---:|:---:|
|Stockmark Web Corpus (This dataset will not be released)|9.1 billion|
|Patent|34.8 billion|
|Wikipedia|1.0 billion|
|CC100|10.9 billion|
|mC4|53.2 billion|
|CommonCrawl (snapshot: 2023-23, 2022-49, 2022-21, 2021-21)|112.9 billion|
## Accelerator and Library
- Accelerator: AWS Trainium
- https://aws.amazon.com/machine-learning/trainium/
- Library for distributed training: neuronx-nemo-megatron
- https://github.com/aws-neuron/neuronx-nemo-megatron
## License
[MIT](https://opensource.org/licenses/MIT)
## Developed by
[Stockmark Inc.](https://stockmark.co.jp/)
## Author
[Takahiro Omi](https://huggingface.co/omitakahiro)
| 2,342 | [
[
-0.039306640625,
-0.044158935546875,
0.018829345703125,
0.01149749755859375,
-0.037841796875,
0.005451202392578125,
-0.01021575927734375,
-0.024383544921875,
0.0139923095703125,
0.022735595703125,
-0.04241943359375,
-0.056640625,
-0.056243896484375,
-0.00112... |
facebook/nllb-moe-54b | 2023-09-04T21:10:50.000Z | [
"transformers",
"pytorch",
"nllb-moe",
"feature-extraction",
"translation",
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"als",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ayr",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"bn",
"bho"... | translation | facebook | null | null | facebook/nllb-moe-54b | 62 | 1,465 | transformers | 2023-03-16T14:12:22 | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
language_details: "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn"
tags:
- translation
license: "cc-by-nc-4.0"
datasets:
- flores-200
metrics:
- bleu
- spbleu
- chrf++
inference: false
---
# NLLB-MoE
This is the model card of NLLB-MoE variant.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algorithm, data and the strategies to handle data imbalances for high and low resource languages that were used to train NLLB-200 is described in the paper.
- Paper or other resource for more information NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv, 2022
- License: CC-BY-NC
- Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. Costa-jussร , James Cross, Onur รelebi,
Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula,
Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews,
Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmรกn, Philipp Koehn, Alexandre Mourachko, Christophe Ropers,
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
## Training:
- The Expert Output Masking is used for training, which consists in droping the full contribution for some tokens. This corresponds to the following scheme:

## Generating with NLLB-MoE
The avalable checkpoints requires around 350GB of storage. Make sure to use `accelerate` if you do not have enough RAM on your machine.
While generating the target text set the `forced_bos_token_id` to the target language id. The following
example shows how to translate English to French using the *facebook/nllb-moe-54b* model.
Note that we're using the BCP-47 code for French `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200)
for the list of all BCP-47 in the Flores 200 dataset.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")
>>> batched_input = [
'We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.',
"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days."
"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes."
"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.",
'Danius said, "Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough."',
"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.",
]
>>> inputs = tokenizer(article, return_tensors="pt", padding = True)
>>> translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"]
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
['"Nous avons maintenant des souris de 4 mois non diabรฉtiques qui รฉtaient diabรฉtiques", a-t-il ajoutรฉ.',
"Le docteur Ehud Ur, professeur de mรฉdecine ร l'universitรฉ Dalhousie, ร Halifax, en Nouvelle-รcosse, et prรฉsident de la division clinique et scientifique de l'Association canadienne du diabรจte, prรฉvient que la recherche n'en est qu'ร ses dรฉbuts.",
"Comme d'autres spรฉcialistes, il est sceptique quant ร la guรฉrison du diabรจte, notant que ces rรฉsultats ne sont pas pertinents pour les personnes atteintes de diabรจte de type 1.",
"Lundi, Sara Danius, secrรฉtaire permanente du Comitรฉ Nobel de littรฉrature ร l'Acadรฉmie suรฉdoise, a annoncรฉ publiquement lors d'une รฉmission de radio sur Sveriges Radio en Suรจde que le comitรฉ, incapable de contacter Bob Dylan directement au sujet du prix Nobel de littรฉrature 2016, avait abandonnรฉ ses efforts pour le joindre.",
"Danius a dรฉclarรฉ: \"Pour le moment, nous ne faisons rien. J'ai appelรฉ et envoyรฉ des courriels ร son plus proche collaborateur et j'ai reรงu des rรฉponses trรจs amicales. Pour l'instant, c'est certainement suffisant\".",
"Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la sociรฉtรฉ avait commencรฉ lorsque sa sonnette n'รฉtait pas audible depuis son magasin dans son garage.",
"Il a construit une sonnette WiFi, il a dit.",
]
```
| 8,380 | [
[
-0.01959228515625,
-0.049530029296875,
0.0179443359375,
0.0260772705078125,
-0.00666046142578125,
0.01120758056640625,
-0.01409149169921875,
-0.038116455078125,
0.047210693359375,
0.0312347412109375,
-0.0303497314453125,
-0.04327392578125,
-0.039825439453125,
... |
thibaud/controlnet-sd21-openposev2-diffusers | 2023-08-14T07:43:52.000Z | [
"diffusers",
"art",
"stable diffusion",
"controlnet",
"en",
"license:other",
"diffusers:ControlNetModel",
"region:us"
] | null | thibaud | null | null | thibaud/controlnet-sd21-openposev2-diffusers | 2 | 1,465 | diffusers | 2023-04-08T18:53:05 | ---
license: other
language:
- en
tags:
- art
- diffusers
- stable diffusion
- controlnet
---
Here's the first version of controlnet for stablediffusion 2.1 for diffusers
Trained on a subset of laion/laion-art
License: refers to the different preprocessor's ones.
### OpenPose v2:

### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
Thanks
- https://huggingface.co/lllyasviel/ControlNet for the implementation and the release of 1.5 models.
- https://huggingface.co/thepowefuldeez for the conversion script to diffusers | 923 | [
[
-0.0124664306640625,
-0.011627197265625,
0.000005125999450683594,
0.044189453125,
-0.034759521484375,
-0.042633056640625,
0.0036869049072265625,
-0.027923583984375,
0.004932403564453125,
0.054962158203125,
-0.02410888671875,
-0.02880859375,
-0.057861328125,
... |
alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli | 2023-05-16T11:12:48.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"mt5",
"text2text-generation",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2010.11934",
"license:apache-2.0",
... | text2text-generation | alan-turing-institute | null | null | alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli | 11 | 1,464 | transformers | 2022-03-02T23:29:05 | ---
language:
- multilingual
- en
- fr
- es
- de
- el
- bg
- ru
- tr
- ar
- vi
- th
- zh
- hi
- sw
- ur
tags:
- pytorch
license: apache-2.0
datasets:
- multi_nli
- xnli
metrics:
- xnli
---
# mt5-large-finetuned-mnli-xtreme-xnli
## Model Description
This model takes a pretrained large [multilingual-t5](https://github.com/google-research/multilingual-t5) (also available from [models](https://huggingface.co/google/mt5-large)) and fine-tunes it on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set. It is intended to be used for zero-shot text classification, inspired by [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli).
## Intended Use
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set, a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- Arabic
- Bulgarian
- Chinese
- English
- French
- German
- Greek
- Hindi
- Russian
- Spanish
- Swahili
- Thai
- Turkish
- Urdu
- Vietnamese
As per recommendations in [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli), for English-only classification, you might want to check out:
- [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli)
- [a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
### Zero-shot example:
The model retains its text-to-text characteristic after fine-tuning. This means that our expected outputs will be text. During fine-tuning, the model learns to respond to the NLI task with a series of single token responses that map to entailment, neutral, or contradiction. The NLI task is indicated with a fixed prefix, "xnli:".
Below is an example, using PyTorch, of the model's use in a similar fashion to the `zero-shot-classification` pipeline. We use the logits from the LM output at the first token to represent confidence.
```python
from torch.nn.functional import softmax
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
sequence_to_classify = "ยฟA quiรฉn vas a votar en 2020?"
candidate_labels = ["Europa", "salud pรบblica", "polรญtica"]
hypothesis_template = "Este ejemplo es {}."
ENTAILS_LABEL = "โ0"
NEUTRAL_LABEL = "โ1"
CONTRADICTS_LABEL = "โ2"
label_inds = tokenizer.convert_tokens_to_ids(
[ENTAILS_LABEL, NEUTRAL_LABEL, CONTRADICTS_LABEL])
def process_nli(premise: str, hypothesis: str):
""" process to required xnli format with task prefix """
return "".join(['xnli: premise: ', premise, ' hypothesis: ', hypothesis])
# construct sequence of premise, hypothesis pairs
pairs = [(sequence_to_classify, hypothesis_template.format(label)) for label in
candidate_labels]
# format for mt5 xnli task
seqs = [process_nli(premise=premise, hypothesis=hypothesis) for
premise, hypothesis in pairs]
print(seqs)
# ['xnli: premise: ยฟA quiรฉn vas a votar en 2020? hypothesis: Este ejemplo es Europa.',
# 'xnli: premise: ยฟA quiรฉn vas a votar en 2020? hypothesis: Este ejemplo es salud pรบblica.',
# 'xnli: premise: ยฟA quiรฉn vas a votar en 2020? hypothesis: Este ejemplo es polรญtica.']
inputs = tokenizer.batch_encode_plus(seqs, return_tensors="pt", padding=True)
out = model.generate(**inputs, output_scores=True, return_dict_in_generate=True,
num_beams=1)
# sanity check that our sequences are expected length (1 + start token + end token = 3)
for i, seq in enumerate(out.sequences):
assert len(
seq) == 3, f"generated sequence {i} not of expected length, 3." \\\\
f" Actual length: {len(seq)}"
# get the scores for our only token of interest
# we'll now treat these like the output logits of a `*ForSequenceClassification` model
scores = out.scores[0]
# scores has a size of the model's vocab.
# However, for this task we have a fixed set of labels
# sanity check that these labels are always the top 3 scoring
for i, sequence_scores in enumerate(scores):
top_scores = sequence_scores.argsort()[-3:]
assert set(top_scores.tolist()) == set(label_inds), \\\\
f"top scoring tokens are not expected for this task." \\\\
f" Expected: {label_inds}. Got: {top_scores.tolist()}."
# cut down scores to our task labels
scores = scores[:, label_inds]
print(scores)
# tensor([[-2.5697, 1.0618, 0.2088],
# [-5.4492, -2.1805, -0.1473],
# [ 2.2973, 3.7595, -0.1769]])
# new indices of entailment and contradiction in scores
entailment_ind = 0
contradiction_ind = 2
# we can show, per item, the entailment vs contradiction probas
entail_vs_contra_scores = scores[:, [entailment_ind, contradiction_ind]]
entail_vs_contra_probas = softmax(entail_vs_contra_scores, dim=1)
print(entail_vs_contra_probas)
# tensor([[0.0585, 0.9415],
# [0.0050, 0.9950],
# [0.9223, 0.0777]])
# or we can show probas similar to `ZeroShotClassificationPipeline`
# this gives a zero-shot classification style output across labels
entail_scores = scores[:, entailment_ind]
entail_probas = softmax(entail_scores, dim=0)
print(entail_probas)
# tensor([7.6341e-03, 4.2873e-04, 9.9194e-01])
print(dict(zip(candidate_labels, entail_probas.tolist())))
# {'Europa': 0.007634134963154793,
# 'salud pรบblica': 0.0004287279152777046,
# 'polรญtica': 0.9919371604919434}
```
Unfortunately, the `generate` function for the TF equivalent model doesn't exactly mirror the PyTorch version so the above code won't directly transfer.
The model is currently not compatible with the existing `zero-shot-classification` pipeline.
## Training
This model was pre-trained on a set of 101 languages in the mC4, as described in [the mt5 paper](https://arxiv.org/abs/2010.11934). It was then fine-tuned on the [mt5_xnli_translate_train](https://github.com/google-research/multilingual-t5/blob/78d102c830d76bd68f27596a97617e2db2bfc887/multilingual_t5/tasks.py#L190) task for 8k steps in a similar manner to that described in the [offical repo](https://github.com/google-research/multilingual-t5#fine-tuning), with guidance from [Stephen Mayhew's notebook](https://github.com/mayhewsw/multilingual-t5/blob/master/notebooks/mt5-xnli.ipynb). The resulting model was then converted to :hugging_face: format.
## Eval results
Accuracy over XNLI test set:
| ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | average |
|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 81.0 | 85.0 | 84.3 | 84.3 | 88.8 | 85.3 | 83.9 | 79.9 | 82.6 | 78.0 | 81.0 | 81.6 | 76.4 | 81.7 | 82.3 | 82.4 |
| 7,028 | [
[
-0.0169219970703125,
-0.03857421875,
0.02288818359375,
0.00453948974609375,
-0.0119171142578125,
-0.00908660888671875,
-0.019683837890625,
-0.021881103515625,
0.0164794921875,
0.02105712890625,
-0.0487060546875,
-0.0592041015625,
-0.04925537109375,
0.0187530... |
42MARU/llama-2-ko-7b-instruct | 2023-09-29T09:38:03.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"instruct",
"instruction",
"ko",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 42MARU | null | null | 42MARU/llama-2-ko-7b-instruct | 3 | 1,463 | transformers | 2023-09-29T09:18:11 | ---
language:
- ko
tags:
- llama-2
- instruct
- instruction
pipeline_tag: text-generation
---
# llama-2-ko-7b-instruct
### Model Details
- Developed by: [42MARU](https://www.42maru.ai/en/)
- Backbone Model: [llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
- Library: [transformers](https://github.com/huggingface/transformers)
### Used Datasets
- Orca-style dataset
- KOpen-platypus
### Prompt Template
```
### User:
{User}
### Assistant:
{Assistant}
```
### Intruduce 42MARU
- At 42Maru we study QA (Question Answering) and are developing advanced search paradigms that help users spend less time searching by understanding natural language and intention thanks to AI and Deep Learning.
- [About Us](https://www.42maru.ai/en/about-us/)
- [Contact Us](https://www.42maru.ai/en/contact/)
### License
[LICENSE.txt](meta-license/LICENSE.txt)
### USE_POLICY
[USE_POLICY.md](meta-license/USE_POLICY.md)
### Responsible Use Guide
[Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf) | 1,012 | [
[
-0.03338623046875,
-0.0300445556640625,
0.028839111328125,
0.0239105224609375,
-0.035064697265625,
0.0096435546875,
0.0283660888671875,
-0.0206451416015625,
0.0236358642578125,
0.053619384765625,
-0.045745849609375,
-0.05401611328125,
-0.0289764404296875,
0.... |
kyujinpy/Korean-OpenOrca-13B | 2023-10-19T13:30:00.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/OpenOrca-KO",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | kyujinpy | null | null | kyujinpy/Korean-OpenOrca-13B | 2 | 1,463 | transformers | 2023-10-08T19:07:11 | ---
language:
- ko
datasets:
- kyujinpy/OpenOrca-KO
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**(์ฃผ)๋ฏธ๋์ด๊ทธ๋ฃน์ฌ๋๊ณผ์ฒ๊ณผ (์ฃผ)๋ง์ปค์ LLM ์ฐ๊ตฌ ์ปจ์์์์์ ๊ฐ๋ฐ๋ ๋ชจ๋ธ์
๋๋ค**
**The license is `cc-by-nc-sa-4.0`.**
# **๐ณKorean-OpenOrca-13B๐ณ**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Repo Link**
Github Korean-OpenOrca: [๐ณKorean-OpenOrca๐ณ](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
I use [OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO).
Using DeepL, translate about [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
I use A100 GPU 40GB and COLAB, when trianing.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Korean-OpenOrca-13B(ours๐ณ) | 47.85 | 43.09 | 54.13 | 40.24 | 45.22 | 56.57 |
| [KoT-Platypus2-13B](https://huggingface.co/kyujinpy/KoT-platypus2-13B) | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 |
| [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
> Compare with Top 4 SOTA models. (update: 10/09)
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Korean-OpenOrca-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- | 2,350 | [
[
-0.041748046875,
-0.050537109375,
0.012603759765625,
0.035552978515625,
-0.028228759765625,
-0.005062103271484375,
-0.021331787109375,
-0.03509521484375,
0.0225982666015625,
0.01529693603515625,
-0.0309295654296875,
-0.06378173828125,
-0.033843994140625,
-0.... |
rinna/japanese-gpt-neox-3.6b-instruction-sft | 2023-06-15T14:30:48.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"lm",
"nlp",
"dataset:Anthropic/hh-rlhf",
"dataset:stanfordnlp/SHP",
"license:mit",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | rinna | null | null | rinna/japanese-gpt-neox-3.6b-instruction-sft | 93 | 1,462 | transformers | 2023-05-17T02:16:28 | ---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
license: mit
datasets:
- Anthropic/hh-rlhf
- stanfordnlp/SHP
inference: false
---
# japanese-gpt-neox-3.6b-instruction-sft

# Overview
This repository provides a Japanese GPT-NeoX model of 3.6 billion parameters. The model is based on [`rinna/japanese-gpt-neox-3.6b`](https://huggingface.co/rinna/japanese-gpt-neox-3.6b) and has been finetuned to serve as an instruction-following conversational agent.
* **Model architecture**
A 36-layer, 2816-hidden-size transformer-based language model.
* **Finetuning**
The finetuning data is the subset of the following datasets and has been translated into Japanese.
* [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf)
* [FLAN Instruction Tuning data](https://github.com/google-research/FLAN)
* [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP)
The data will **not** be released.
* **Model Series**
| Variant | Link |
| :-- | :--|
| 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo |
| 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 |
| 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft |
| 3.6B pretrained | https://huggingface.co/rinna/japanese-gpt-neox-3.6b |
* **Authors**
[Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada)
# I/O Format
A special format has been adopted to construct inputs.
* An input prompt is formatted as a conversation between `ใฆใผใถใผ` and `ใทในใใ `.
* Each input utterance consists of (1) its speaker (`"ใฆใผใถใผ"` or `"ใทในใใ "`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"ไธ็ใงไธ็ช้ซใๅฑฑใฏ๏ผ"`).
* The input prompt should be ended with `"ใทในใใ : "` to acknowledge the model to generate a response.
* Since the model's tokenizer does not recognize `"\n"`, a special newline symbol `"<NL>"` is used instead.
* All the newlines in input and output utterances should be replaced with `"<NL>"`.
* All the utterances in the input prompt should be separated by `"<NL>"`.
Following is an example to construct an input from a conversation.
~~~python
prompt = [
{
"speaker": "ใฆใผใถใผ",
"text": "ๆฅๆฌใฎใใใใใฎ่ฆณๅ
ๅฐใๆใใฆใใ ใใใ"
},
{
"speaker": "ใทในใใ ",
"text": "ใฉใฎๅฐๅใฎ่ฆณๅ
ๅฐใ็ฅใใใใงใใ๏ผ"
},
{
"speaker": "ใฆใผใถใผ",
"text": "ๆธ่ฐทใฎ่ฆณๅ
ๅฐใๆใใฆใใ ใใใ"
}
]
prompt = [
f"{uttr['speaker']}: {uttr['text']}"
for uttr in prompt
]
prompt = "<NL>".join(prompt)
prompt = (
prompt
+ "<NL>"
+ "ใทในใใ : "
)
print(prompt)
# "ใฆใผใถใผ: ๆฅๆฌใฎใใใใใฎ่ฆณๅ
ๅฐใๆใใฆใใ ใใใ<NL>ใทในใใ : ใฉใฎๅฐๅใฎ่ฆณๅ
ๅฐใ็ฅใใใใงใใ๏ผ<NL>ใฆใผใถใผ: ๆธ่ฐทใฎ่ฆณๅ
ๅฐใๆใใฆใใ ใใใ<NL>ใทในใใ : "
~~~
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-sft", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-sft")
if torch.cuda.is_available():
model = model.to("cuda")
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
do_sample=True,
max_new_tokens=128,
temperature=0.7,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):])
output = output.replace("<NL>", "\n")
print(output)
"""ๅใใใพใใใใใใคใใฎใใใใใ็ดนไปใใพใใ
1. ใใๅ
ฌๅใงใใใใๅ
ฌๅใฏใๆฅๆฌใฎ่ฆณๅ
ในใใใใฎ1ใคใจใใฆไบบๆฐใใใใพใใ
2. ในใฏใฉใณใใซไบคๅทฎ็นใงใใๅคใใฎไบบใ
ใ่กใไบคใๅคงใใชไบคๅทฎ็นใงใ่ฆณๅ
ๅฎขใซไบบๆฐใฎในใใใใงใใ
3. 109ใงใใ109ใฏใใทใงใใใณใฐใใจใณใฟใผใใคใกใณใๆฝ่จญใงใใ
4. ้็ๅใงใใ้็ๅใฏใๆฅๆฌใฎๅๆฅญๅฐๅบใงใใๅ้ใงใใ</s>"""
~~~~
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
* The tokenizer has a vocabulary size of 32,000.
* It uses sentencepiece's byte fallback feature to decompose unknown text pieces into UTF-8 byte pieces and to avoid producing `<UNK>` tokens.
* sentencepiece's `--add_dummy_prefix` option was turned off so that a leading whitespace will not be prepended automatically.
~~~
print(tokenizer.tokenize("ๅพ่ผฉใฏ็ซใงใใ"))
# ['ๅพ', '่ผฉ', 'ใฏ', '็ซ', 'ใงใใ']
# instead of ['โ', 'ๅพ', '่ผฉ', 'ใฏ', '็ซ', 'ใงใใ'] as in rinna/japanese-gpt-1b
~~~
* sentencepiece's `--remove_extra_whitespaces` option was turned off so that leading, trailing, and duplicate whitespaces are reserved.
~~~
print(tokenizer.tokenize(" ๅพ่ผฉใฏ ็ซใงใใ "))
# ['โ', 'โ', 'ๅพ', '่ผฉ', 'ใฏ', 'โ', 'โ', '็ซ', 'ใงใใ', 'โ', 'โ', 'โ']
# instead of ['โ', 'ๅพ', '่ผฉ', 'ใฏ', 'โ็ซ', 'ใงใใ'] as in rinna/japanese-gpt-1b
~~~
* Don't forget to set `use_fast=False` to make the above features function correctly.
~~~
good_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)
bad_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b")
print(good_tokenizer.decode(good_tokenizer.encode("แแแแแ แฏแแแ ๅพ่ผฉใฏ ็ซใงใใ ")))
# 'แแแแแ แฏแแแ ๅพ่ผฉใฏ ็ซใงใใ </s>'
print(bad_tokenizer.decode(bad_tokenizer.encode("แแแแแ แฏแแแ ๅพ่ผฉใฏ ็ซใงใใ ")))
# 'แแแแแ [UNK]แแแ ๅพ่ผฉใฏ ็ซใงใใ </s>'
~~~
# Licenese
[The MIT license](https://opensource.org/licenses/MIT)
| 5,511 | [
[
-0.02447509765625,
-0.07623291015625,
0.031494140625,
0.0110931396484375,
-0.027587890625,
-0.0146484375,
-0.01494598388671875,
-0.033294677734375,
0.0305023193359375,
0.0322265625,
-0.048858642578125,
-0.04107666015625,
-0.03448486328125,
0.0221099853515625... |
deepghs/animefull-latest | 2023-07-18T15:34:18.000Z | [
"diffusers",
"text-to-image",
"license:mit",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | deepghs | null | null | deepghs/animefull-latest | 2 | 1,461 | diffusers | 2023-07-18T07:24:09 | ---
license: mit
pipeline_tag: text-to-image
---
The diffusers version of the officially leaked models from NovelAI.
The original version can be found at [deepghs/animefull-latest-ckpt](https://huggingface.co/deepghs/animefull-latest-ckpt).
| 245 | [
[
-0.020599365234375,
-0.03924560546875,
0.022430419921875,
0.0219268798828125,
-0.01410675048828125,
0.00452423095703125,
0.02264404296875,
-0.031951904296875,
0.026885986328125,
0.048187255859375,
-0.051300048828125,
0.013946533203125,
-0.00856781005859375,
... |
THUDM/chatglm3-6b-base | 2023-10-31T10:23:11.000Z | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"arxiv:2103.10360",
"arxiv:2210.02414",
"endpoints_compatible",
"region:us"
] | null | THUDM | null | null | THUDM/chatglm3-6b-base | 33 | 1,461 | transformers | 2023-10-26T09:34:43 | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM3-6B-Base
<p align="center">
๐ป <a href="https://github.com/THUDM/ChatGLM" target="_blank">Github Repo</a> โข ๐ฆ <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> โข ๐ <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> โข ๐ <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
๐ Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-25ti5uohv-A_hs~am_D3Q8XPZMpj7wwQ" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
<p align="center">
๐Experience the larger-scale ChatGLM model at <a href="https://www.chatglm.cn">chatglm.cn</a>
</p>
## ไป็ป (Introduction)
ChatGLM3-6B ๆฏ ChatGLM ็ณปๅๆๆฐไธไปฃ็ๅผๆบๆจกๅ๏ผๅจไฟ็ไบๅไธคไปฃๆจกๅๅฏน่ฏๆต็
ใ้จ็ฝฒ้จๆงไฝ็ญไผๅคไผ็ง็นๆง็ๅบ็กไธ๏ผChatGLM3-6B ๅผๅ
ฅไบๅฆไธ็นๆง๏ผ
1. **ๆดๅผบๅคง็ๅบ็กๆจกๅ๏ผ** ChatGLM3-6B ็ๅบ็กๆจกๅ ChatGLM3-6B-Base ้็จไบๆดๅคๆ ท็่ฎญ็ปๆฐๆฎใๆดๅ
ๅ็่ฎญ็ปๆญฅๆฐๅๆดๅ็็่ฎญ็ป็ญ็ฅใๅจ่ฏญไนใๆฐๅญฆใๆจ็ใไปฃ็ ใ็ฅ่ฏ็ญไธๅ่งๅบฆ็ๆฐๆฎ้ไธๆต่ฏๆพ็คบ๏ผChatGLM3-6B-Base ๅ
ทๆๅจ 10B ไปฅไธ็้ข่ฎญ็ปๆจกๅไธญๆๅผบ็ๆง่ฝใ
2. **ๆดๅฎๆด็ๅ่ฝๆฏๆ๏ผ** ChatGLM3-6B ้็จไบๅ
จๆฐ่ฎพ่ฎก็ [Prompt ๆ ผๅผ](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT.md)๏ผ้คๆญฃๅธธ็ๅค่ฝฎๅฏน่ฏๅคใๅๆถๅ็ๆฏๆ[ๅทฅๅ
ท่ฐ็จ](https://github.com/THUDM/ChatGLM3/blob/main/tool_using/README.md)๏ผFunction Call๏ผใไปฃ็ ๆง่ก๏ผCode Interpreter๏ผๅ Agent ไปปๅก็ญๅคๆๅบๆฏใ
3. **ๆดๅ
จ้ข็ๅผๆบๅบๅ๏ผ** ้คไบๅฏน่ฏๆจกๅ ChatGLM3-6B ๅค๏ผ่ฟๅผๆบไบๅบ็กๆจกๅ ChatGLM-6B-Baseใ้ฟๆๆฌๅฏน่ฏๆจกๅ ChatGLM3-6B-32Kใไปฅไธๆๆๆ้ๅฏนๅญฆๆฏ็ ็ฉถ**ๅฎๅ
จๅผๆพ**๏ผๅจๅกซๅ[้ฎๅท](https://open.bigmodel.cn/mla/form)่ฟ่ก็ป่ฎฐๅ**ไบฆๅ
่ฎธๅ
่ดนๅไธไฝฟ็จ**ใ
ๆฌไปๅบไธบ ChatGLM3-6B ็ๅบ็กๆจกๅ ChatGLM3-6B-Baseใ
ChatGLM3-6B is the latest open-source model in the ChatGLM series. While retaining many excellent features such as smooth dialogue and low deployment threshold from the previous two generations, ChatGLM3-6B introduces the following features:
1. **More Powerful Base Model:** The base model of ChatGLM3-6B, ChatGLM3-6B-Base, employs a more diverse training dataset, more sufficient training steps, and a more reasonable training strategy. Evaluations on datasets such as semantics, mathematics, reasoning, code, knowledge, etc., show that ChatGLM3-6B-Base has the strongest performance among pre-trained models under 10B.
2. **More Comprehensive Function Support:** ChatGLM3-6B adopts a newly designed [Prompt format](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT_en.md), in addition to the normal multi-turn dialogue. It also natively supports [function call](https://github.com/THUDM/ChatGLM3/blob/main/tool_using/README_en.md), code interpreter, and complex scenarios such as agent tasks.
3. **More Comprehensive Open-source Series:** In addition to the dialogue model ChatGLM3-6B, the base model ChatGLM-6B-Base and the long-text dialogue model ChatGLM3-6B-32K are also open-sourced. All the weights are **fully open** for academic research, and after completing the [questionnaire](https://open.bigmodel.cn/mla/form) registration, they are also **allowed for free commercial use**.
This repo is ChatGLM3-6B-Base, the base model of ChatGLM3-6B.
## ่ฝฏไปถไพ่ต (Dependencies)
```shell
pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate
```
## ไปฃ็ ่ฐ็จ (Code Usage)
ไฝไธบๆฒกๆ็ป่ฟไบบ็ฑปๆๅพๅฏน้ฝ็ๆจกๅ๏ผChatGLM3-6B-Base ไธ่ฝ็จไบๅค่ฝฎๅฏน่ฏใไฝๆฏๅฏไปฅ่ฟ่กๆๆฌ็ปญๅใ
As a model that has not been aligned with human intent, ChatGLM3-6B-Base cannot be used for multi-turn conversations. However, text completion is possible.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b-base", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm3-6b-base", trust_remote_code=True).half().cuda()
inputs = tokenizer(["ไปๅคฉๅคฉๆฐ็ไธ้"], return_tensors="pt").to('cuda')
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0].tolist()))
```
ๅ
ณไบๆดๅค็ไฝฟ็จ่ฏดๆ๏ผๅ
ๆฌๅฆไฝ่ฟ่กๅฝไปค่กๅ็ฝ้กต็ๆฌ็ DEMO๏ผไปฅๅไฝฟ็จๆจกๅ้ๅไปฅ่็ๆพๅญ๏ผ่ฏทๅ่ๆไปฌ็ [Github Repo](https://github.com/THUDM/ChatGLM)ใ
For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM).
## ๅ่ฎฎ (License)
ๆฌไปๅบ็ไปฃ็ ไพ็
ง [Apache-2.0](LICENSE) ๅ่ฎฎๅผๆบ๏ผChatGLM3-6B ๆจกๅ็ๆ้็ไฝฟ็จๅ้่ฆ้ตๅพช [Model License](MODEL_LICENSE)ใ
The code in this repository is open-sourced under the [Apache-2.0 license](LICENSE), while the use of the ChatGLM3-6B model weights needs to comply with the [Model License](MODEL_LICENSE).
## ๅผ็จ (Citation)
ๅฆๆไฝ ่งๅพๆไปฌ็ๅทฅไฝๆๅธฎๅฉ็่ฏ๏ผ่ฏท่่ๅผ็จไธๅ่ฎบๆใ
If you find our work helpful, please consider citing the following papers.
```
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
```
| 5,311 | [
[
-0.030792236328125,
-0.0640869140625,
0.017242431640625,
0.0249176025390625,
-0.0154266357421875,
0.00223541259765625,
-0.0290069580078125,
-0.041259765625,
-0.006195068359375,
0.0201568603515625,
-0.03912353515625,
-0.052825927734375,
-0.039520263671875,
-0... |
alexandrainst/da-sentiment-base | 2023-09-20T11:56:22.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | alexandrainst | null | null | alexandrainst/da-sentiment-base | 3 | 1,460 | transformers | 2022-03-02T23:29:04 | ---
language:
- da
license: apache-2.0
widget:
- text: Det er super godt
---
# Model Card for Danish BERT
Danish BERT Tone for sentiment polarity detection
# Model Details
## Model Description
The BERT Tone model detects sentiment polarity (positive, neutral or negative) in Danish texts. It has been finetuned on the pretrained Danish BERT model by BotXO.
- **Developed by:** DaNLP
- **Shared by [Optional]:** Hugging Face
- **Model type:** Text Classification
- **Language(s) (NLP):** Danish (da)
- **License:** cc-by-sa-4.0
- **Related Models:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/certainlyio/nordic_bert)
- [Associated Documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone)
# Uses
## Direct Use
This model can be used for text classification
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets.
## Training Procedure
### Preprocessing
It has been finetuned on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO.
### Speeds, Sizes, Times
More information needed.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed.
### Factors
### Metrics
F1
## Results
More information needed.
# Model Examination
More information needed.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed.
- **Hours used:** More information needed.
- **Cloud Provider:** More information needed.
- **Compute Region:** More information needed.
- **Carbon Emitted:** More information needed.
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed.
## Compute Infrastructure
More information needed.
### Hardware
More information needed.
### Software
More information needed.
# Citation
**BibTeX:**
More information needed.
**APA:**
More information needed.
# Glossary [optional]
More information needed.
# More Information [optional]
More information needed.
# Model Card Authors [optional]
DaNLP in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-sentiment-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-sentiment-base")
```
</details> | 3,957 | [
[
-0.04278564453125,
-0.042572021484375,
0.0249481201171875,
0.026702880859375,
-0.040771484375,
-0.0186767578125,
-0.019805908203125,
-0.031768798828125,
0.01629638671875,
0.03887939453125,
-0.057464599609375,
-0.06036376953125,
-0.04998779296875,
-0.01019287... |
timm/maxvit_rmlp_tiny_rw_256.sw_in1k | 2023-05-11T00:20:10.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.01697",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/maxvit_rmlp_tiny_rw_256.sw_in1k | 0 | 1,460 | timm | 2023-01-20T21:34:39 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for maxvit_rmlp_tiny_rw_256.sw_in1k
A timm specific MaxViT (w/ a MLP Log-CPB (continuous log-coordinate relative position bias motivated by Swin-V2) image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 29.1
- GMACs: 6.8
- Activations (M): 46.9
- Image size: 256 x 256
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_rmlp_tiny_rw_256.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_rmlp_tiny_rw_256.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 64, 64, 64])
# torch.Size([1, 128, 32, 32])
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 512, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_rmlp_tiny_rw_256.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| 22,316 | [
[
-0.052459716796875,
-0.031829833984375,
0.0017747879028320312,
0.0290985107421875,
-0.023956298828125,
-0.0191802978515625,
-0.011474609375,
-0.0253143310546875,
0.053863525390625,
0.01525115966796875,
-0.042755126953125,
-0.046875,
-0.04669189453125,
-0.004... |
stablediffusionapi/realistic-vision-v20-2047 | 2023-07-18T14:13:31.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/realistic-vision-v20-2047 | 2 | 1,456 | diffusers | 2023-06-12T19:52:00 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Realistic Vision V2.0 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "realistic-vision-v20-2047"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/realistic-vision-v20-2047)
Model link: [View model](https://stablediffusionapi.com/models/realistic-vision-v20-2047)
Credits: [View credits](https://civitai.com/?query=Realistic%20Vision%20V2.0)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "realistic-vision-v20-2047",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** | 2,539 | [
[
-0.034088134765625,
-0.054290771484375,
0.04058837890625,
0.0139007568359375,
-0.0401611328125,
0.003467559814453125,
0.02423095703125,
-0.04522705078125,
0.036529541015625,
0.04541015625,
-0.062042236328125,
-0.057098388671875,
-0.0298919677734375,
-0.00638... |
ToolBench/ToolLLaMA-2-7b-v2 | 2023-10-02T16:21:44.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | ToolBench | null | null | ToolBench/ToolLLaMA-2-7b-v2 | 9 | 1,456 | transformers | 2023-09-29T15:53:08 | ---
license: llama2
---
# Model Card for Model ID
This is ToolLLaMA-2-7b version model introduced in [ToolBench](https://github.com/OpenBMB/ToolBench).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **License:** llama2
- **Finetuned from model [optional]:** LLaMA-2-7b-hf
## Uses
Refer to [ToolBench](https://github.com/OpenBMB/ToolBench).
## Training Details
Trained with the new version data in ToolBench.
| 471 | [
[
-0.0140380859375,
-0.0322265625,
0.0285186767578125,
0.0279388427734375,
-0.0557861328125,
0.003330230712890625,
0.04034423828125,
-0.0251312255859375,
0.0145721435546875,
0.048797607421875,
-0.047271728515625,
-0.046173095703125,
-0.03765869140625,
-0.02653... |
microsoft/resnet-152 | 2023-06-26T19:49:50.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"resnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | microsoft | null | null | microsoft/resnet-152 | 4 | 1,453 | transformers | 2022-03-16T14:54:22 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# ResNet-152 v1.5
ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al.
Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ResNet (Residual Network) is a convolutional neural network that democratized the concepts of residual learning and skip connections. This enables to train much deeper models.
This is ResNet v1.5, which differs from the original model: in the bottleneck blocks which require downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution. This difference makes ResNet50 v1.5 slightly more accurate (\~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec) according to [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch).

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ResNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-152")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-152")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/resnet).
### BibTeX entry and citation info
```bibtex
@inproceedings{he2016deep,
title={Deep residual learning for image recognition},
author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={770--778},
year={2016}
}
```
| 2,665 | [
[
-0.046295166015625,
-0.01425933837890625,
-0.016448974609375,
-0.00699615478515625,
-0.0219268798828125,
-0.01403045654296875,
-0.004802703857421875,
-0.055999755859375,
0.025177001953125,
0.031890869140625,
-0.045166015625,
-0.0190277099609375,
-0.043212890625,... |
algiraldohe/lm-ner-linkedin-skills-recognition | 2023-07-07T22:51:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | algiraldohe | null | null | algiraldohe/lm-ner-linkedin-skills-recognition | 13 | 1,453 | transformers | 2023-07-07T21:42:41 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: lm-ner-linkedin-skills-recognition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lm-ner-linkedin-skills-recognition
This model is a fine-tuned version of [algiraldohe/distilbert-base-uncased-linkedin-domain-adaptation](https://huggingface.co/algiraldohe/distilbert-base-uncased-linkedin-domain-adaptation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0307
- Precision: 0.9119
- Recall: 0.9312
- F1: 0.9214
- Accuracy: 0.9912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1301 | 1.0 | 729 | 0.0468 | 0.8786 | 0.8715 | 0.8750 | 0.9863 |
| 0.0432 | 2.0 | 1458 | 0.0345 | 0.8994 | 0.9219 | 0.9105 | 0.9900 |
| 0.0332 | 3.0 | 2187 | 0.0307 | 0.9119 | 0.9312 | 0.9214 | 0.9912 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,780 | [
[
-0.025421142578125,
-0.036041259765625,
0.01174163818359375,
-0.006927490234375,
-0.0124969482421875,
-0.0246734619140625,
0.006153106689453125,
-0.030609130859375,
0.01421356201171875,
0.01116180419921875,
-0.051177978515625,
-0.052947998046875,
-0.049377441406... |
nlp-waseda/roberta-base-japanese-with-auto-jumanpp | 2022-10-21T01:57:40.000Z | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | nlp-waseda | null | null | nlp-waseda/roberta-base-japanese-with-auto-jumanpp | 4 | 1,452 | transformers | 2022-10-15T05:09:36 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "ๆฉ็จฒ็ฐๅคงๅญฆใง่ช็ถ่จ่ชๅฆ็ใ[MASK]ใใใ"
---
# nlp-waseda/roberta-base-japanese-with-auto-jumanpp
## Model description
This is a Japanese RoBERTa base model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")
sentence = 'ๆฉ็จฒ็ฐๅคงๅญฆใง่ช็ถ่จ่ชๅฆ็ใ[MASK]ใใใ'
encoding = tokenizer(sentence, return_tensors='pt')
...
```
You can fine-tune this model on downstream tasks.
## Tokenization
`BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, you may take a long time since `BertJapaneseTokenizer` still does not supoort fast tokenization. You can still do the Juman++ tokenization by your self and use the old model [nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese).
Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took a week using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 1e-4
- per_device_train_batch_size: 256
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 4096
- max_seq_length: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 700000
- warmup_steps: 10000
- mixed_precision_training: Native AMP
## Performance on JGLUE
See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.
| 2,323 | [
[
-0.034454345703125,
-0.0650634765625,
0.01318359375,
0.0171966552734375,
-0.03692626953125,
0.0018606185913085938,
-0.03363037109375,
-0.027008056640625,
0.03515625,
0.04180908203125,
-0.05401611328125,
-0.03436279296875,
-0.0506591796875,
0.0050735473632812... |
Trofish/KULLM-RLHF | 2023-10-01T06:19:28.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"arxiv:2303.16634",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Trofish | null | null | Trofish/KULLM-RLHF | 2 | 1,452 | transformers | 2023-08-28T11:21:32 | 2023 ์ฑ๊ท ๊ด๋ ํ๊ณ์ง์ค ์ฐํํ๋ ฅํ๋ก์ ํธ VAIV
### Github : https://github.com/VAIV-2023/RLHF-Korean-Friendly-LLM
## GPT ๊ธฐ๋ฐ์ ์์ฐ์ค๋ฝ๊ณ (Friendly) ์ค๋ฆฌ์ ์ธ(Harmless) ์ผ์ ๋ํํ ์ฑ๋ด ๋ชจ๋ธ
# ๊ณผ์ ๋ชฉํ
GPT-NEOX ๊ธฐ๋ฐ ์์ฐ์ค๋ฝ๊ณ ์ค๋ฆฌ์ ์ธ ํ๊ตญ์ด ๊ธฐ๋ฐ ์ผ์ ๋ํํ ์ฑ๋ด ๋ชจ๋ธ ๊ตฌํ
- Self-Instruct: GPT4๋ฅผ ์ด์ฉํ ๋ฐ์ดํฐ ์ฆ๊ฐ
- RLHF(Reinforcement Learning from Human Feedback): ์ฌ๋์ ์ ํธ๋๋ฅผ ๋ฐ์ํ ๊ฐํํ์ต
- DeepSpeed: ๋๊ท๋ชจ ๋ถ์ฐ ๋ฅ๋ฌ๋์ ์ํ ์๋ก์ด ๋ฉ๋ชจ๋ฆฌ ์ต์ ํ ๊ธฐ์
# ๊ฐ๋ฐ ๋ด์ฉ
Task 1: ๊ฐํํ์ต ๋จ๊ณ๋ณ ๋ฐ์ดํฐ์
๊ตฌ์ถ
Task 2: SFT ๋ชจ๋ธ Fine-tuning (https://huggingface.co/Trofish/KULLM-SFT-v2)
Task 3: Reward ๋ชจ๋ธ ver1,2,3 ๊ตฌํ
Task 4: RLHF์ DeepSpeedChat์ ํตํ ์ต์ข
๋ชจ๋ธ ๊ตฌํ (https://huggingface.co/Trofish/KULLM-RLHF)
# Task1. ๊ฐํํ์ต ๋จ๊ณ๋ณ ๋ฐ์ดํฐ์
๊ตฌ์ถ


## ๋ฐ์ดํฐ์
์ ์ ์ ๊ณ ๋ ค ์ฌํญ
- **์ผ์ ๋ํ์ ํ์ค ํํ ๋์ฒ ๋ฅ๋ ฅ์ ์ฌ๋ฆฌ๊ธฐ ์ํ ๋ฐ์ดํฐ์
๊ณผ, ํ์ต ์ ์ฑ๋ด ๋ชจ๋ธ์ generalํ task์ ๋ํ ์ฑ๋ฅ์ด ํ๋ฝํ๋ ๊ฒ์ ๋ง๊ธฐ ์ํด์ general task ๋ฐ์ดํฐ์
์ ๊ตฌ์ฑ**
- **๊ตญ๋ฆฝ๊ตญ์ด์ ์ผ์ ๋ํ ๋ฐ์ดํฐ์
:** ์ผ์์ ์ธ ๋ํ์ ๋ํ ์์ฐ์ค๋ฌ์ด ์๋ต์ด ์์ผ๋ฉด์๋, ๋ง์ถค๋ฒ์ด ์ ์ง์ผ์ง๊ณ ์์ด, ๋น๋ฌธ, ์ด์ฑ ๋ฑ์ด ์์ผ๋ฉฐ ์ฃผ์ ๋ณ๋ก ๋ค์ํ ๋ํ๊ฐ ์์
- **AI Hub ํ์ค ํํ ๋ฐ์ดํฐ์
:** ํ์ค, ์ฐจ๋ณ, ์ฑ์ ์ธ ๋ด์ฉ, ํญ๋ ฅ, ๋ฒ์ฃ ๋ฑ ์นดํ
๊ณ ๋ฆฌ๋ณ๋ก ๋ค์ํ ํ์ค ํํ์ด ์์
- **General task ๋ฐ์ดํฐ์
**
- Evol-Instruct ๋ฐ์ดํฐ์
: ๋ค์ํ ๋ถ์ผ์ ๋ํ ๋ณต์กํ๊ณ ๋
ผ๋ฆฌ์ ์ธ prompt์ ๋ต๋ณ์ด ์์
- Self-Instruct ๋ฐ์ดํฐ์
: ์ฌ๋์ด ์ง์ ์์ฑํ ์์ง์ Seed data๋ฅผ ๊ธฐ๋ฐ์ผ๋ก ๋ฐ์ดํฐ ์ฆ๊ฐ
- RLHF ํ๊ตญ์ด ๋ฒ์ญ ๋ฐ์ดํฐ์
: DeepSpeedChat์์ ๊ณต๊ฐํ ๋ฐ์ดํฐ์
์ ํ๊ตญ์ด๋ก ๋ฒ์ญ
# Task2. SFT ๋ชจ๋ธ Fine-tuning
## Baseline Model
[- ๊ณ ๋ ค๋ํ๊ต NLP & AI ์ฐ๊ตฌ์ค๊ณผ HIAI ์ฐ๊ตฌ์๊ฐ ๊ฐ๋ฐํ ํ๊ตญ์ด LLM **"KULLM"** ์ฌ์ฉ](https://github.com/nlpai-lab/KULLM)
## Datasets

## SFT Model Finetuning

* ๋ชจ๋ธํ์ต์๋ Google Colab์์ ์ ๊ณตํ๋ A100 40GB GPU ์ฌ์ฉ
## SFT Model Evaluation


* G-Eval: https://arxiv.org/abs/2303.16634
## Final SFT Model
- https://huggingface.co/Trofish/KULLM-SFT-v2
# Task3-1. Reward Model ver1 ๊ตฌํ
## Baseline Model
- EleutherAI์์ ๊ฐ๋ฐํ ์ด๊ฑฐ๋ ํ๊ตญ์ด ์ธ์ด ๋ชจ๋ธ **Polyglot-Ko** ์ฌ์ฉ
- 1.3b ๋ชจ๋ธ๊ณผ 5.8b ๋ชจ๋ธ์ ๊ฐ๊ฐ ์คํ
## Datasets

- InstructGPT์ ๋ฐ์ดํฐ์
๊ตฌ์ถ ๋ฐฉ๋ฒ
- Reward ๋ชจ๋ธ ํ์ต ๋ฐ์ดํฐ์
์ผ๋ก SFT ํ์ต์ ์ฌ์ฉํ prompt(1,500๊ฐ - ์ผ์๋ํ:ํ์คํํ=2:1)์ ์๋ก์ด prompt(1,000๊ฐ - DeepSpeedChat ๋ฒ์ญ ๋ฐ์ดํฐ์
) ์ฌ์ฉ
- SFT ๋ชจ๋ธ์์ ํ๊ฐ์ prompt๋น K๊ฐ์ Response๋ฅผ ์์ฑํ๊ณ , ์์๋ฅผ Labeling
- ๋ฐ์ดํฐ์
๋ผ๋ฒจ๋ง
- Instruct GPT์ ๊ฒฝ์ฐ ์ฌ๋์ด ์ง์ Labeling์ ํ์ฟ์ง๋ง, ์ผ๊ด๋ ํ๊ฐ์ ์๊ฐ ๋จ์ถ์ ์ํด GPt-4์ G-Eval์ ์ด์ฉ
- SFT์์ ์์ฑํ ๋ Response ์ค G-Eval ํ๊ฐ ์ ์ ํฉ์ด ๋์ ๊ฒ์ Chosen response๋ก ๊ฒฐ์
- ๋ฐ์ดํฐ์
์ ํ๋ณ๋ก G-Eval ํ๊ฐ Prompt์ ์ฐจ์ด๋ฅผ ๋์์
- 
## Reward v1 Model Finetuning
- 
- InstructGPT ๋
ผ๋ฌธ์ ๋ฐ๋ฅด๋ฉด, Reward ๋ชจ๋ธ์ overfitting๋๋ฉด ์ฑ๋ฅ์ด ํฌ๊ฒ ์ ํ๋๋ค๊ณ ํจ --> epoch ์๋ฅผ 1๋ก ์ค์
- batch size๋ learning rate ๋ฑ ๋ค๋ฅธ hyper-parameter๋ ์ฑ๋ฅ์ ํฐ ์ํฅ์ด ์๋ค๊ณ ํจ
- Colab A100 40GB ๊ธฐ์ค ์ด ํ์ต ์๊ฐ 4๋ถ
## Reward v1 Model Evaluation
- 
- Reward Model Template
- **"์๋๋ ์์
์ ์ค๋ช
ํ๋ ๋ช
๋ น์ด์
๋๋ค. ์์ฒญ์ ์ ์ ํ ์๋ฃํ๋ ์๋ต์ ์์ฑํ์ธ์. \n\n ### ๋ช
๋ น์ด:\n{prompt}\n\n ### ์๋ต:\n"**
# Task3-2. Reward Model ver2,3 ๊ตฌํ
## RewardModel ver1 Issues
- ๊ตฌํ๋ Reward ๋ชจ๋ธ์ ์ฑ๋ฅ์ด ์ข์ง ์์ (Accuracy 0.65)
- Reward ๋ชจ๋ธ์ ์ฌ์ฉํ์ฌ Step3 ํ์ต์ ํ์คํํ์ด ์๋๋ฐ๋ ํ์คํํ์ด๋ผ๊ณ ์ธ์ํ๊ณ ๋ต๋ณํ๋ ๋ฌธ์ ๋ฐ์
## Issue ํด๊ฒฐ๋ฐฉ์ (Reward Model ver2,3)
- 
- General Task ๋ต๋ณ์ ๋ํ ํ๊ฐ ์ฑ๋ฅ์ ๋์ด๊ธฐ ์ํด Evol-instruct ๋ฐ์ดํฐ ์ถ๊ฐ
- SFT ๋ชจ๋ธ๋ก ๋ต๋ณ์ 2๊ฐ ์์ฑํ์์ ๋, Chosen, Rejected ๋ต๋ณ์ ์ฐจ์ด๊ฐ ํฌ๊ฒ ์์ด ๋ชจ๋ธ์ด ํ์ต๋์ง ์๋ ํ์์ ๋ฐฉ์งํ๊ธฐ ์ํ์ฌ 2๊ฐ์ ๋ชจ๋ธ **(ChatGPT, SFT)**๋ฅผ ์ฌ์ฉํ์ฌ ๋ต๋ณ์ ์์ฑ
- ํ์คํํ ํ์ต์(Ver2) Step3 ํ์ต ์ดํ์ ๋ต๋ณ์ด ์ด์ํ๊ฒ ์์ฑ๋๋ Issue๊ฐ ์์ด, ํ์คํํ์ ๋ฐ์ดํฐ๋ฅผ ์ ๊ฑฐํ๊ณ ํ์ต(Ver3)
- RM-ver1์ GPT4๊ฐ Chosen, Rejected ๋ ์ด๋ธ๋ง์ ์งํํ์์ง๋ง, Resource ์ด์๋ก ์ธํด ์ผ๋ถ๋ง ์ฌ๋์ด ๋ผ๋ฒจ๋ง ์งํ
- ์ผ์๋ํ, ํ์คํํ ๋ฐ์ดํฐ์
- ChatGPT์ SFT ๋ชจ๋ ์ผ๊ด๋๊ฒ ๋์ ํ๋ฆฌํฐ์ ๋ต๋ณ์ ์์ฑํ์ง ์์, ์ฌ๋์ด ์ง์ ๋ผ๋ฒจ๋ง ์งํ
- RLHF ํ๊ตญ์ด ๋ฒ์ญ, Evol-Instruct ๋ฐ์ดํฐ์
- ChatGPT๊ฐ ์ผ๊ด๋๊ฒ ๋์ ํ๋ฆฌํฐ์ ๋ต๋ณ์ ์์ฑํ์ฌ ChatGPT๋ฅผ Chosen, SFT๋ฅผ Rejected๋ก ๋ผ๋ฒจ๋ง ์ง
## Reward Model ver2,3 Evaluation

# Task4. RLHF์ DeepSpeedChat์ ํตํ ์ต์ข
๋ชจ๋ธ ๊ตฌํ
- Microsoft์์ ๋ง๋ ๋๊ท๋ชจ ๋ถ์ฐ ๋ฅ๋ฌ๋์ ์ํ ์๋ก์ด ๋ฉ๋ชจ๋ฆฌ ์ต์ ํ ๊ธฐ์ (DeepSpeed)์ RLHF Process์ ์ ์ฉํ DeepSpeedChat ์ฌ์ฉ
- Human preference๋ก ํ์ต์ ์ํจ Reward ๋ชจ๋ธ๊ณผ ๊ฐํํ์ต์ ํตํด SFT ๋ชจ๋ธ์ ์ฌ๋์ ์ ํธ๋๋ฅผ ๋ฐ์ํ์ฌ ์์ฐ์ค๋ฝ๊ณ (FRIENDLY), ์ค๋ฆฌ์ ์ธ (HARMLESS)ย ์ฑ๋ด ์์ฑ
## Baseline Models
- Actor Model: KULLM-SFT-V2
- Reward Model: Polyglot-Ko-Reward-V3
## Training Options

## RLHF Training

- ํ์ต ๊ฒฐ๊ณผ, SFT ๋ชจ๋ธ์ ๋ต๋ณ์ ๋ํ ํ๋ฆฌํฐ์ธ Reward๊ฐ ์์นํ๋ ๊ฒ์ ํ์ธ (์ฌ๋์ ์ ํธ๋๊ฐ ๋์ ๋ต๋ณ์ ์์ฑ)
## RLFH Model Evaluation


## Final RLHF Model
- https://huggingface.co/Trofish/KULLM-RLHF
# Contributors ๐
- ๋ฐ์ฑ์ (์ฑ๊ท ๊ด๋ํ๊ต ์ํํธ์จ์ดํ๊ณผ 20ํ๋ฒ, waniboyy@gmail.com)
- ์กํ๋น (์ฑ๊ท ๊ด๋ํ๊ต ์ํํธ์จ์ดํ๊ณผ 20ํ๋ฒ, shbin0519@gmail.com)
- ํ์ ๋ฏผ (์ฑ๊ท ๊ด๋ํ๊ต ์ํํธ์จ์ดํ๊ณผ 21ํ๋ฒ, ymheo1123@gmail.com)
- ํ์ฌ์ (์ฑ๊ท ๊ด๋ํ๊ต ์ํํธ์จ์ดํ๊ณผ 20ํ๋ฒ, ryeowon13@gmail.com)
| 5,673 | [
[
-0.03955078125,
-0.0570068359375,
0.0300750732421875,
0.0309295654296875,
-0.02691650390625,
-0.008392333984375,
-0.0005297660827636719,
-0.0267181396484375,
0.0279388427734375,
0.0141448974609375,
-0.04766845703125,
-0.0296783447265625,
-0.03607177734375,
0... |
EarthnDusk/havoc-and-disorder | 2023-05-29T09:23:40.000Z | [
"diffusers",
"stable diffusion",
"anime",
"finetune",
"text-to-image",
"en",
"dataset:Nerfgun3/bad_prompt",
"dataset:gsdf/EasyNegative",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | EarthnDusk | null | null | EarthnDusk/havoc-and-disorder | 1 | 1,451 | diffusers | 2023-05-28T05:16:55 | ---
license: creativeml-openrail-m
datasets:
- Nerfgun3/bad_prompt
- gsdf/EasyNegative
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable diffusion
- anime
- finetune
---
# Havoc and Disorder
---
EXTREMELY OPINIONATED FLAPPY FLAPPY MOTH THAT LOVES LAMPS LIVES IN THIS MODEL WE SWEAR!
UPDATES ARE AT: https://civitai.com/models/78864/havoc-and-disorder
## THIS MODEL IS ENTIRELY DEDICATED TO THE MOTH QUEEN MAGE HERSELF THE BOMBASTIC MORI
---
Original Finetune based on
https://civitai.com/models/30203?modelVersionId=61088
---
Join our Reddit: https://www.reddit.com/r/earthndusk/
Funding for a HUGE ART PROJECT THIS YEAR: https://www.buymeacoffee.com/duskfallxcrew / any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew
If you got requests, or concerns, We're still looking for beta testers: JOIN THE DISCORD AND DEMAND THINGS OF US: https://discord.gg/Da7s8d3KJ7
Listen to the music that we've made that goes with our art: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38
---
Havoc_and_Disorder Dreambooth model trained by Duskfallcrew with TheLastBen's fast-DreamBooth notebook
THE ANIME MODEL YOU NEVER ASKED FOR -
YES WE ARE SPONSORED BY PIRATE DIFFUSION - LINKS AND BANNERS ARE COMING ON SPONSORED MODELS - YA GOT AN ISSUE? TAKE IT UP WITH THE BOSS
** STICKS TONGUE OUT, RASPBERRY, TWO FINGERS Up, LIFTS LEG, FARTS AND RUNS AWAY
---
| 1,442 | [
[
-0.01593017578125,
-0.042877197265625,
0.01806640625,
0.036956787109375,
-0.0244140625,
-0.0027980804443359375,
0.03375244140625,
-0.04937744140625,
0.0701904296875,
0.033203125,
-0.0489501953125,
-0.0247650146484375,
-0.0189971923828125,
-0.0076408386230468... |
timm/convnext_xlarge.fb_in22k_ft_in1k_384 | 2023-03-31T22:49:52.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convnext_xlarge.fb_in22k_ft_in1k_384 | 0 | 1,450 | timm | 2022-12-13T07:19:21 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for convnext_xlarge.fb_in22k_ft_in1k_384
A ConvNeXt image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 350.2
- GMACs: 179.2
- Activations (M): 169.0
- Image size: 384 x 384
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_xlarge.fb_in22k_ft_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_xlarge.fb_in22k_ft_in1k_384',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 256, 96, 96])
# torch.Size([1, 512, 48, 48])
# torch.Size([1, 1024, 24, 24])
# torch.Size([1, 2048, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_xlarge.fb_in22k_ft_in1k_384',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,744 | [
[
-0.06719970703125,
-0.03216552734375,
-0.003536224365234375,
0.03759765625,
-0.031463623046875,
-0.015869140625,
-0.0130462646484375,
-0.035491943359375,
0.064208984375,
0.0174560546875,
-0.0439453125,
-0.0421142578125,
-0.050689697265625,
-0.002922058105468... |
kazzand/ru-longformer-tiny-16384 | 2023-11-02T12:00:30.000Z | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"ru",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | kazzand | null | null | kazzand/ru-longformer-tiny-16384 | 13 | 1,450 | transformers | 2023-07-12T12:07:43 | ---
language:
- ru
- en
---
This is a tiny Longformer model designed for Russian language. It was initialized from [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) weights and has been modified to support a context length of up to 16384 tokens.
We fine-tuned it on a dataset of Russian books, news, wiki and habr, however it still undrestands English, thanks to the source model. For a detailed information check out our [post](https://habr.com/ru/companies/ru_mts/articles/761116/) on Habr.
Model attributes:
- 12 attention heads
- 3 hidden layers
- 16384 tokens length of context
The model can be used as-is to produce text embeddings or it can be further fine-tuned for a specific downstream task.
Text embeddings can be produced as follows:
```python
# pip install transformers sentencepiece
import torch
from transformers import LongformerModel, LongformerTokenizerFast
model = LongformerModel.from_pretrained('kazzand/ru-longformer-tiny-16384')
tokenizer = LongformerTokenizerFast.from_pretrained('kazzand/ru-longformer-tiny-16384')
def get_cls_embedding(text, model, tokenizer, device='cuda'):
model.to(device)
batch = tokenizer(text, return_tensors='pt')
#set global attention for cls token
global_attention_mask = [
[1 if token_id == tokenizer.cls_token_id else 0 for token_id in input_ids]
for input_ids in batch["input_ids"]
]
#add global attention mask to batch
batch["global_attention_mask"] = torch.tensor(global_attention_mask)
with torch.no_grad():
output = model(**batch.to(device))
return output.last_hidden_state[:,0,:]
``` | 1,655 | [
[
-0.0027637481689453125,
-0.053863525390625,
0.034149169921875,
0.0219879150390625,
-0.0286712646484375,
-0.0165557861328125,
-0.04278564453125,
-0.02618408203125,
0.017974853515625,
0.0250091552734375,
-0.036529541015625,
-0.020172119140625,
-0.0435791015625,
... |
JasperLS/gelectra-base-injection | 2023-05-08T14:33:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | JasperLS | null | null | JasperLS/gelectra-base-injection | 2 | 1,449 | transformers | 2023-05-08T14:32:41 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gelectra-base-injection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gelectra-base-injection
This model is a fine-tuned version of [deepset/gelectra-base](https://huggingface.co/deepset/gelectra-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0940
- Accuracy: 0.9828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 69 | 0.2601 | 0.9397 |
| No log | 2.0 | 138 | 0.0940 | 0.9828 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,399 | [
[
-0.0222625732421875,
-0.048370361328125,
0.0173492431640625,
-0.00799560546875,
-0.01861572265625,
-0.01458740234375,
-0.0040435791015625,
-0.022216796875,
0.0186614990234375,
0.038116455078125,
-0.03314208984375,
-0.06353759765625,
-0.044769287109375,
-0.01... |
TheBloke/PMC_LLAMA-7B-GPTQ | 2023-08-21T10:25:52.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"dataset:allenai/s2orc",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/PMC_LLAMA-7B-GPTQ | 4 | 1,449 | transformers | 2023-06-03T00:46:05 | ---
datasets:
- allenai/s2orc
inference: false
license: other
model_type: llama
tags:
- medical
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chaoyi Wi's PMC_LLAMA 7B 10 epoch GPTQ
These files are GPTQ model files for [Chaoyi Wi's PMC_LLAMA 7B 10 epoch](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/PMC_LLAMA-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch)
## Prompt template: Unknown
```
{prompt}
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 4.52 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 7.01 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_True | 8 | 128 | True | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-64g-actorder_True | 8 | 64 | True | 7.31 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/PMC_LLAMA-7B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/PMC_LLAMA-7B-GPTQ`
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/PMC_LLAMA-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/PMC_LLAMA-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `PMC_LLAMA-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/PMC_LLAMA-7B-GPTQ"
model_basename = "PMC_LLAMA-7B-GPTQ-4bit-128g.no-act.order"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, ้ฟๆ, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieล, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Chaoyi Wi's PMC_LLAMA 7B 10 epoch
This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset.
Notably, different from `chaoyi-wu/PMC_LLAMA_7B`, this model is further trained for 10 epochs.
The model was trained with the following hyperparameters:
* Epochs: **10**
* Batch size: 128
* Cutoff length: 512
* Learning rate: 2e-5
Each epoch we sample 512 tokens per paper for training.
The model can be loaded as follows:
```
import transformers
import torch
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
sentence = 'Hello, doctor'
batch = tokenizer(
sentence,
return_tensors="pt",
add_special_tokens=False
)
with torch.no_grad():
generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ',tokenizer.decode(generated[0]))
```
| 11,699 | [
[
-0.03570556640625,
-0.06622314453125,
0.020172119140625,
0.01482391357421875,
-0.027435302734375,
-0.00911712646484375,
0.006237030029296875,
-0.0254669189453125,
0.0074920654296875,
0.02105712890625,
-0.03887939453125,
-0.035491943359375,
-0.028106689453125,
... |
sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned | 2022-06-15T22:19:18.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | sentence-transformers | null | null | sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned | 4 | 1,448 | sentence-transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned')
model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | 3,864 | [
[
-0.0189971923828125,
-0.056610107421875,
0.019500732421875,
0.033294677734375,
-0.020294189453125,
-0.02423095703125,
-0.02838134765625,
-0.00439453125,
0.01277923583984375,
0.019195556640625,
-0.044342041015625,
-0.0347900390625,
-0.056640625,
0.01597595214... |
Nara-Lab/nallm-bart | 2023-06-30T09:13:16.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | Nara-Lab | null | null | Nara-Lab/nallm-bart | 2 | 1,448 | transformers | 2023-06-28T05:28:44 | ---
license: apache-2.0
language:
- ko
---
NA-LLM(๋๋ฆ)์ ๋๋ผ์ง์์ ๋ณด๊ฐ ๊ฐ๋ฐํ ํ๊ตญ์ด Large Language Model (LLM) ์
๋๋ค.
https://github.com/Nara-Information/NA-LLM | 145 | [
[
-0.0246734619140625,
-0.053680419921875,
0.03973388671875,
0.03131103515625,
-0.02825927734375,
0.00785064697265625,
-0.037017822265625,
-0.00196075439453125,
0.057586669921875,
0.04144287109375,
-0.0190887451171875,
-0.061981201171875,
-0.0189056396484375,
... |
textattack/roberta-base-MRPC | 2021-05-20T22:07:47.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | textattack | null | null | textattack/roberta-base-MRPC | 0 | 1,447 | transformers | 2022-03-02T23:29:05 | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 3e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9117647058823529, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
| 618 | [
[
-0.00730133056640625,
-0.037994384765625,
0.0246429443359375,
0.004032135009765625,
-0.024078369140625,
0.0008907318115234375,
-0.01336669921875,
-0.034698486328125,
-0.009063720703125,
0.02227783203125,
-0.03790283203125,
-0.05291748046875,
-0.05096435546875,
... |
s3nh/artwork-arcane-stable-diffusion | 2023-05-05T11:22:16.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | s3nh | null | null | s3nh/artwork-arcane-stable-diffusion | 15 | 1,447 | diffusers | 2022-11-07T14:59:20 | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
### Arcane based Artwork Diffusion Model
I present you fine tuned model of stable-diffusion-v1-5, which heavily based of
work of great artworks from Arcane.
Use the tokens **_arcane style_** in your prompts for the effect.
Model was trained using the diffusers library, which based on Dreambooth implementation.
Training steps included:
- prior preservation loss
- train-text-encoder fine tuning
### ๐งจ Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
#!pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch
model_id = "s3nh/artwork-arcane-stable-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Rain forest, arcane style"
image = pipe(prompt).images[0]
image.save("./example_output.png")
```
# Gallery
## Rain forest, arcane style


## Car traffic, arcane style


## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| 2,801 | [
[
-0.031768798828125,
-0.048583984375,
0.036590576171875,
0.025970458984375,
-0.020263671875,
-0.0242919921875,
0.0031604766845703125,
-0.035369873046875,
0.0270843505859375,
0.0537109375,
-0.029876708984375,
-0.037506103515625,
-0.05029296875,
-0.011238098144... |
cardiffnlp/twitter-roberta-base-2022-154m | 2023-08-31T03:06:34.000Z | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"timelms",
"twitter",
"en",
"dataset:twitter-api",
"arxiv:2202.03829",
"arxiv:2308.02142",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | cardiffnlp | null | null | cardiffnlp/twitter-roberta-base-2022-154m | 5 | 1,447 | transformers | 2023-03-08T11:15:09 | ---
language: en
tags:
- timelms
- twitter
license: mit
datasets:
- twitter-api
---
# Twitter 2022 154M (RoBERTa-base, 154M - full update)
This is a RoBERTa-base model trained on 154M tweets until the end of December 2022 (from original checkpoint, no incremental updates).
A large model trained on the same data is available [here](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m).
These 154M tweets result from filtering 220M tweets obtained exclusively from the Twitter Academic API, covering every month between 2018-01 and 2022-12.
Filtering and preprocessing details are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.26251 not
2) 0.25460 a
3) 0.12611 in
4) 0.11036 the
5) 0.04210 getting
------------------------------
I keep forgetting to bring a <mask>.
1) 0.09274 charger
2) 0.04727 lighter
3) 0.04469 mask
4) 0.04395 drink
5) 0.03644 camera
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.57683 Squid
2) 0.17419 The
3) 0.04198 the
4) 0.00970 Spring
5) 0.00921 Big
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken ๐ฃ",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99403 The movie was great
2) 0.98006 Just finished reading 'Embeddings in NLP'
3) 0.97314 What time is the next game?
4) 0.92448 I just ordered fried chicken ๐ฃ
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night ๐"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
### BibTeX entry and citation info
Please cite the [reference paper](https://arxiv.org/abs/2308.02142) if you use this model.
```bibtex
@article{loureiro2023tweet,
title={Tweet Insights: A Visualization Platform to Extract Temporal Insights from Twitter},
author={Loureiro, Daniel and Rezaee, Kiamehr and Riahi, Talayeh and Barbieri, Francesco and Neves, Leonardo and Anke, Luis Espinosa and Camacho-Collados, Jose},
journal={arXiv preprint arXiv:2308.02142},
year={2023}
}
``` | 5,565 | [
[
-0.01369476318359375,
-0.04388427734375,
0.0161590576171875,
0.0212860107421875,
-0.0155487060546875,
0.010101318359375,
-0.0072784423828125,
-0.013092041015625,
0.020782470703125,
0.0007457733154296875,
-0.038177490234375,
-0.0458984375,
-0.053497314453125,
... |
openai/imagegpt-medium | 2023-06-12T11:16:08.000Z | [
"transformers",
"pytorch",
"imagegpt",
"vision",
"dataset:imagenet-21k",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | openai | null | null | openai/imagegpt-medium | 4 | 1,446 | transformers | 2022-03-02T23:29:05 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
---
# ImageGPT (medium-sized model)
ImageGPT (iGPT) model pre-trained on ImageNet ILSVRC 2012 (14 million images, 21,843 classes) at resolution 32x32. It was introduced in the paper [Generative Pretraining from Pixels](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf) by Chen et al. and first released in [this repository](https://github.com/openai/image-gpt). See also the official [blog post](https://openai.com/blog/image-gpt/).
Disclaimer: The team releasing ImageGPT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The ImageGPT (iGPT) is a transformer decoder model (GPT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 32x32 pixels.
The goal for the model is simply to predict the next pixel value, given the previous ones.
By pre-training the model, it learns an inner representation of images that can then be used to:
- extract features useful for downstream tasks: one can either use ImageGPT to produce fixed image features, in order to train a linear model (like a sklearn logistic regression model or SVM). This is also referred to as "linear probing".
- perform (un)conditional image generation.
## Intended uses & limitations
You can use the raw model for either feature extractor or (un) conditional image generation. See the [model hub](https://huggingface.co/models?search=openai/imagegpt) to all ImageGPT variants.
### How to use
Here is how to use this model in PyTorch to perform unconditional image generation:
```python
from transformers import ImageGPTImageProcessor, ImageGPTForCausalImageModeling
import torch
import matplotlib.pyplot as plt
import numpy as np
processor = ImageGPTImageProcessor.from_pretrained('openai/imagegpt-medium')
model = ImageGPTForCausalImageModeling.from_pretrained('openai/imagegpt-medium')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# unconditional generation of 8 images
batch_size = 8
context = torch.full((batch_size, 1), model.config.vocab_size - 1) #initialize with SOS token
context = torch.tensor(context).to(device)
output = model.generate(pixel_values=context, max_length=model.config.n_positions + 1, temperature=1.0, do_sample=True, top_k=40)
clusters = processor.clusters
n_px = processor.size
samples = output[:,1:].cpu().detach().numpy()
samples_img = [np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [n_px, n_px, 3]).astype(np.uint8) for s in samples] # convert color cluster tokens back to pixels
f, axes = plt.subplots(1, batch_size, dpi=300)
for img, ax in zip(samples_img, axes):
ax.axis('off')
ax.imshow(img)
```
## Training data
The ImageGPT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
Images are first resized/rescaled to the same resolution (32x32) and normalized across the RGB channels. Next, color-clustering is performed. This means that every pixel is turned into one of 512 possible cluster values. This way, one ends up with a sequence of 32x32 = 1024 pixel values, rather than 32x32x3 = 3072, which is prohibitively large for Transformer-based models.
### Pretraining
Training details can be found in section 3.4 of v2 of the paper.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to the original paper.
### BibTeX entry and citation info
```bibtex
@InProceedings{pmlr-v119-chen20s,
title = {Generative Pretraining From Pixels},
author = {Chen, Mark and Radford, Alec and Child, Rewon and Wu, Jeffrey and Jun, Heewoo and Luan, David and Sutskever, Ilya},
booktitle = {Proceedings of the 37th International Conference on Machine Learning},
pages = {1691--1703},
year = {2020},
editor = {III, Hal Daumรยฉ and Singh, Aarti},
volume = {119},
series = {Proceedings of Machine Learning Research},
month = {13--18 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v119/chen20s/chen20s.pdf},
url = {https://proceedings.mlr.press/v119/chen20s.html
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` | 4,634 | [
[
-0.0462646484375,
-0.0218353271484375,
0.01080322265625,
-0.0005507469177246094,
-0.03094482421875,
-0.0231170654296875,
-0.016082763671875,
-0.0262603759765625,
-0.00319671630859375,
0.0152130126953125,
-0.032135009765625,
-0.03173828125,
-0.04541015625,
-0... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.