modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
shi-labs/oneformer_ade20k_dinat_large | 2023-08-30T00:03:28.000Z | [
"transformers",
"pytorch",
"oneformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2211.06220",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | image-segmentation | shi-labs | null | null | shi-labs/oneformer_ade20k_dinat_large | 4 | 1,369 | transformers | 2022-11-15T20:25:34 | ---
license: mit
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://praeclarumjj3.github.io/files/ade20k.jpeg
example_title: House
- src: https://praeclarumjj3.github.io/files/demo_2.jpg
example_title: Airplane
- src: https://praeclarumjj3.github.io/files/coco.jpeg
example_title: Person
---
# OneFormer
OneFormer model trained on the ADE20k dataset (large-sized version, Dinat backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other versions fine-tuned on different datasets.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/resolve/main/ade20k.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_dinat_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_dinat_large")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
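The post-processing calls above return per-pixel label maps as tensors. A minimal visualization sketch (not part of the original card, assuming `matplotlib` is installed):
```python
import matplotlib.pyplot as plt

# predicted_semantic_map is a (height, width) tensor of ADE20k class ids
plt.imshow(predicted_semantic_map.cpu().numpy())
plt.axis("off")
plt.show()
```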
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| 3,666 | [
[
-0.04962158203125,
-0.057037353515625,
0.0264129638671875,
0.0102081298828125,
-0.022216796875,
-0.046295166015625,
0.02191162109375,
-0.0177154541015625,
0.002803802490234375,
0.0506591796875,
-0.0758056640625,
-0.045684814453125,
-0.047882080078125,
-0.017... |
kabachuha/modelscope-damo-text2video-pruned-weights | 2023-03-23T12:19:20.000Z | [
"open_clip",
"license:cc-by-nc-4.0",
"region:us"
] | null | kabachuha | null | null | kabachuha/modelscope-damo-text2video-pruned-weights | 32 | 1,367 | open_clip | 2023-03-22T08:42:25 | ---
license: cc-by-nc-4.0
---
These are the weights of https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis, converted to fp16 (half precision).
Read the full documentation at https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis/blob/main/README.md
| 263 | [
[
-0.038116455078125,
-0.036956787109375,
0.01369476318359375,
0.051971435546875,
-0.01357269287109375,
-0.01218414306640625,
-0.0006833076477050781,
-0.00852203369140625,
0.0190277099609375,
0.017425537109375,
-0.05364990234375,
-0.033233642578125,
-0.05255126953... |
mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es | 2021-05-20T00:22:53.000Z | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"es",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | mrm8488 | null | null | mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es | 6 | 1,366 | transformers | 2022-03-02T23:29:05 | ---
language: es
thumbnail: https://i.imgur.com/jgBdimh.png
---
# BETO (Spanish BERT) + Spanish SQuAD2.0
This model is provided by the [BETO team](https://github.com/dccuchile/beto) and fine-tuned on [SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve) for the **Q&A** downstream task.
## Details of the language model('dccuchile/bert-base-spanish-wwm-cased')
Language model ([**'dccuchile/bert-base-spanish-wwm-cased'**](https://github.com/dccuchile/beto/blob/master/README.md)):
BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is of size similar to a BERT-Base and was trained with the Whole Word Masking technique. Below you find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models.
## Details of the downstream task (Q&A) - Dataset
[SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve)
| Dataset | # Q&A |
| ---------------------- | ----- |
| SQuAD2.0 Train | 130 K |
| SQuAD2.0-es-v2.0 | 111 K |
| SQuAD2.0 Dev | 12 K |
| SQuAD-es-v2.0-small Dev| 69 K |
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
export SQUAD_DIR=path/to/nl_squad
python transformers/examples/question-answering/run_squad.py \
--model_type bert \
--model_name_or_path dccuchile/bert-base-spanish-wwm-cased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train_nl-v2.0.json \
--predict_file $SQUAD_DIR/dev_nl-v2.0.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/model_output \
--save_steps 5000 \
--threads 4 \
--version_2_with_negative
```
## Results:
| Metric | # Value |
| ---------------------- | ----- |
| **Exact** | **76.5050** |
| **F1** | **86.0781** |
```json
{
"exact": 76.50501430594491,
"f1": 86.07818773108252,
"total": 69202,
"HasAns_exact": 67.93020719738277,
"HasAns_f1": 82.37912207996466,
"HasAns_total": 45850,
"NoAns_exact": 93.34104145255225,
"NoAns_f1": 93.34104145255225,
"NoAns_total": 23352,
"best_exact": 76.51223953064941,
"best_exact_thresh": 0.0,
"best_f1": 86.08541295578848,
"best_f1_thresh": 0.0
}
```
### Model in action (in a Colab Notebook)
<details>
1. Set the context and ask some questions:

2. Run predictions:

</details>
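For quick programmatic use outside of Colab, a minimal `pipeline` sketch (the context and question below are illustrative only, not from the original card):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
)

result = qa(
    question="¿Dónde vive Manuel?",
    context="Manuel Romero vive en Murcia, España y colabora con HuggingFace.",
)
print(result)  # {'answer': ..., 'score': ..., 'start': ..., 'end': ...}
```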
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
| 3,040 | [
[
-0.037078857421875,
-0.054443359375,
0.024078369140625,
0.0281219482421875,
-0.0101165771484375,
0.02398681640625,
-0.0248260498046875,
-0.03570556640625,
0.0213623046875,
0.01409149169921875,
-0.06927490234375,
-0.032806396484375,
-0.042724609375,
-0.002098... |
AstraliteHeart/pony-diffusion | 2023-05-16T09:20:10.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | AstraliteHeart | null | null | AstraliteHeart/pony-diffusion | 57 | 1,366 | diffusers | 2022-10-01T01:16:37 | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: bigscience-bloom-rail-1.0
inference: true
thumbnail: "https://cdn.discordapp.com/attachments/1020199895694589953/1020200601780494386/000001.553325548.png"
---
# pony-diffusion - >nohooves
**[Pony Diffusion V4 is now live!](https://huggingface.co/AstraliteHeart/pony-diffusion-v4)**
pony-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality pony SFW-ish images through fine-tuning.
With special thanks to [Waifu-Diffusion](https://huggingface.co/hakurei/waifu-diffusion) for providing finetuning expertise and [Novel AI](https://novelai.net/) for providing necessary compute.
[](https://colab.research.google.com/drive/10Naa1SiIy0CA7bjk0q1rCcMMza6aWXpy?usp=sharing)
[](https://discord.gg/pYsdjMfu3q)
<img src=https://cdn.discordapp.com/attachments/1020199895694589953/1020200601780494386/000001.553325548.png width=50% height=50%>
<img src=https://cdn.discordapp.com/attachments/1020199895694589953/1020213899175415858/unknown.png width=50% height=50%>
<img src=https://cdn.discordapp.com/attachments/1020199895694589953/1021448446072340520/unknown.png width=50% height=50%>
<img src=https://cdn.discordapp.com/attachments/704226060178292846/1018644965905141840/upscaled_100_pony_made_of_rough_ivy_.webp width=50% height=50%>
[Original PyTorch Model Download Link](https://mega.nz/file/ZT1xEKgC#Xxir5udMmU_mKaRZAbBkF247Yk7DqCr01V0pDzSlYI0)
[Real-ESRGAN Model finetuned on pony faces](https://mega.nz/folder/cPMlxBqT#aPKYrEfgA_lpPexr0UlQ6w)
## Model Description
The model originally used for fine-tuning is an early finetuned checkpoint of [waifu-diffusion](https://huggingface.co/hakurei/waifu-diffusion) on top of [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
This particular checkpoint has been fine-tuned with a learning rate of 5.0e-6 for 4 epochs on approximately 80k pony text-image pairs (using tags from derpibooru) which all have score greater than `500` and belong to categories `safe` or `suggestive`.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
## Example Code
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline, DDIMScheduler
model_id = "AstraliteHeart/pony-diffusion"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(
model_id,
torch_dtype=torch.float16,
revision="fp16",
scheduler=DDIMScheduler(
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
clip_sample=False,
set_alpha_to_one=False,
),
)
pipe = pipe.to(device)
prompt = "pinkie pie anthro portrait wedding dress veil intricate highly detailed digital painting artstation concept art smooth sharp focus illustration Unreal Engine 5 8K"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("cute_poner.png")
```
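Note that the snippet above targets the older diffusers API. On recent diffusers releases the pipeline call returns an object with an `images` attribute instead of a dict, so the equivalent call would be (an assumption, not part of the original card):
```python
# Equivalent call on newer diffusers versions (no autocast needed with fp16 weights)
image = pipe(prompt, guidance_scale=7.5).images[0]
```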
## Team Members and Acknowledgements
This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).
- [Waifu-Diffusion for helping with finetuning and providing starting checkpoint](https://huggingface.co/hakurei/waifu-diffusion)
- [Novel AI for providing compute](https://novelai.net/)
In order to reach us, you can join our [Discord server](https://discord.gg/WG78ZbSB). | 4,601 | [
[
-0.03631591796875,
-0.0645751953125,
0.02691650390625,
0.0369873046875,
-0.006793975830078125,
-0.017547607421875,
0.01183319091796875,
-0.03515625,
0.021240234375,
0.04095458984375,
-0.043487548828125,
-0.033935546875,
-0.032012939453125,
-0.017684936523437... |
inseq/wmt21-mlqe-ru-en | 2023-01-30T22:15:30.000Z | [
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt20",
"en",
"ru",
"multilingual",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | inseq | null | null | inseq/wmt21-mlqe-ru-en | 0 | 1,366 | transformers | 2023-01-30T12:34:11 | ---
language:
- en
- ru
- multilingual
license: cc-by-sa-4.0
tags:
- translation
- wmt20
widget:
- text: "Сахалинская кайнозойская складчатая область разделяется на Восточную и Западную зоны, разделённые Центрально-Сахалинским грабеном."
- text: "Существует несколько мнений о его точном месторасположении."
- text: "Крупный научно-образовательный центр, в котором обучается свыше ста тысяч студентов."
---
# Fairseq Ru-En NMT WMT20 MLQE
This repository contains the Russian-English model trained with the [fairseq toolkit](https://github.com/pytorch/fairseq) that was used to produce translations used in the WMT21 shared task on quality estimation (QE) on the [MLQE dataset](https://github.com/facebookresearch/mlqe).
The checkpoint was converted from the original fairseq checkpoint available [here](https://github.com/facebookresearch/mlqe/tree/master/nmt_models) using the `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py` script from the 🤗 Transformers library (v4.26.0).
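A minimal translation sketch (not part of the original card), assuming the converted checkpoint loads with the standard FSMT classes in 🤗 Transformers; the source sentence is one of the widget examples above:
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

model_id = "inseq/wmt21-mlqe-ru-en"
tokenizer = FSMTTokenizer.from_pretrained(model_id)
model = FSMTForConditionalGeneration.from_pretrained(model_id)

src = "Существует несколько мнений о его точном месторасположении."
inputs = tokenizer(src, return_tensors="pt")
generated = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```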
Please refer to the repositories linked above for additional information on usage, parameters and training data. | 1,101 | [
[
-0.0170745849609375,
-0.0143585205078125,
0.035919189453125,
0.014312744140625,
-0.01275634765625,
-0.0161590576171875,
0.021514892578125,
-0.01378631591796875,
0.008026123046875,
0.0254974365234375,
-0.07257080078125,
-0.0300750732421875,
-0.0330810546875,
... |
lexlms/legal-roberta-base | 2023-05-15T13:48:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"legal",
"en",
"dataset:lexlms/lex_files",
"arxiv:2305.07507",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | lexlms | null | null | lexlms/legal-roberta-base | 4 | 1,365 | transformers | 2022-11-11T11:15:13 | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
tags:
- legal
model-index:
- name: lexlms/legal-roberta-base
results: []
widget:
- text: "The applicant submitted that her husband was subjected to treatment amounting to <mask> whilst in the custody of police."
- text: "This <mask> Agreement is between General Motors and John Murray."
- text: "Establishing a system for the identification and registration of <mask> animals and regarding the labelling of beef and beef products."
- text: "Because the Court granted <mask> before judgment, the Court effectively stands in the shoes of the Court of Appeals and reviews the defendants’ appeals."
datasets:
- lexlms/lex_files
---
# LexLM base
This model was continually pre-trained from RoBERTa base (https://huggingface.co/roberta-base) on the LeXFiles corpus (https://huggingface.co/datasets/lexlms/lexfiles).
## Model description
LexLM (Base/Large) are our newly released RoBERTa models. We follow a series of best-practices in language model development:
* We warm-start (initialize) our models from the original RoBERTa checkpoints (base or large) of Liu et al. (2019).
* We train a new tokenizer of 50k BPEs, but we reuse the original embeddings for all lexically overlapping tokens (Pfeiffer et al., 2021).
* We continue pre-training our models on the diverse LeXFiles corpus for additional 1M steps with batches of 512 samples, and a 20/30% masking rate (Wettig et al., 2022), for base/large models, respectively.
* We use a sentence sampler with exponential smoothing of the sub-corpora sampling rate following Conneau et al. (2019) since there is a disparate proportion of tokens across sub-corpora and we aim to preserve per-corpus capacity (avoid overfitting).
* We consider mixed cased models, similar to all recently developed large PLMs.
## Intended uses & limitations
More information needed
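In the meantime, a minimal fill-mask sketch (not part of the original card; the example sentence is taken from the widget list above):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="lexlms/legal-roberta-base")
for pred in unmasker("This <mask> Agreement is between General Motors and John Murray."):
    print(pred["token_str"], round(pred["score"], 3))
```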
## Training and evaluation data
The model was trained on the LeXFiles corpus (https://huggingface.co/datasets/lexlms/lexfiles). For evaluation results, please consider our work "LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development" (Chalkidis* et al, 2023).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: tpu
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1000000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 1.0389 | 0.05 | 50000 | 0.9802 |
| 0.9685 | 0.1 | 100000 | 0.9021 |
| 0.9337 | 0.15 | 150000 | 0.8752 |
| 0.9106 | 0.2 | 200000 | 0.8558 |
| 0.8981 | 0.25 | 250000 | 0.8512 |
| 0.8813 | 1.03 | 300000 | 0.8203 |
| 0.8899 | 1.08 | 350000 | 0.8286 |
| 0.8581 | 1.13 | 400000 | 0.8148 |
| 0.856 | 1.18 | 450000 | 0.8141 |
| 0.8527 | 1.23 | 500000 | 0.8034 |
| 0.8345 | 2.02 | 550000 | 0.7763 |
| 0.8342 | 2.07 | 600000 | 0.7862 |
| 0.8147 | 2.12 | 650000 | 0.7842 |
| 0.8369 | 2.17 | 700000 | 0.7766 |
| 0.814 | 2.22 | 750000 | 0.7737 |
| 0.8046 | 2.27 | 800000 | 0.7692 |
| 0.7941 | 3.05 | 850000 | 0.7538 |
| 0.7956 | 3.1 | 900000 | 0.7562 |
| 0.8068 | 3.15 | 950000 | 0.7512 |
| 0.8066 | 3.2 | 1000000 | 0.7516 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.12.0+cu102
- Datasets 2.6.1
- Tokenizers 0.12.0
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://arxiv.org/abs/2305.07507)
```
@inproceedings{chalkidis-garneau-etal-2023-lexlms,
title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
author = "Chalkidis*, Ilias and
Garneau*, Nicolas and
Goanta, Catalina and
Katz, Daniel Martin and
Søgaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.07507",
}
``` | 5,156 | [
[
-0.0261383056640625,
-0.042236328125,
0.0243072509765625,
-0.00039839744567871094,
-0.0183868408203125,
0.006134033203125,
-0.0181884765625,
-0.020660400390625,
0.020538330078125,
0.0267333984375,
-0.032135009765625,
-0.06475830078125,
-0.0467529296875,
-0.0... |
Salesforce/codet5p-770m-py | 2023-05-16T00:31:41.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2305.07922",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | Salesforce | null | null | Salesforce/codet5p-770m-py | 18 | 1,364 | transformers | 2023-05-15T09:57:01 | ---
license: bsd-3-clause
---
# CodeT5+ 770M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-770m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby.`
## Training procedure
This checkpoint is first trained on the multilingual unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 15.5% pass@1 on HumanEval in the zero-shot setting, which is comparable to much larger LLMs such as Incoder 6B’s 15.2%, GPT-NeoX 20B’s 15.4%, and PaLM 62B’s 15.9%.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` | 5,020 | [
[
-0.02960205078125,
-0.03314208984375,
0.013763427734375,
0.02197265625,
-0.0102691650390625,
0.00705718994140625,
-0.03350830078125,
-0.044830322265625,
-0.021270751953125,
0.016937255859375,
-0.0281982421875,
-0.048309326171875,
-0.03466796875,
0.0160217285... |
timm/convformer_b36.sail_in1k | 2023-05-05T05:55:15.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convformer_b36.sail_in1k | 0 | 1,362 | timm | 2023-05-05T05:53:51 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for convformer_b36.sail_in1k
A ConvFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 99.9
- GMACs: 22.7
- Activations (M): 56.1
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convformer_b36.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convformer_b36.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convformer_b36.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
| 3,642 | [
[
-0.035858154296875,
-0.030670166015625,
0.004741668701171875,
0.01148223876953125,
-0.031585693359375,
-0.024627685546875,
-0.0162506103515625,
-0.028656005859375,
0.0157928466796875,
0.03814697265625,
-0.04193115234375,
-0.058074951171875,
-0.053253173828125,
... |
mncai/Mistral-7B-v0.1-orca-1k | 2023-10-22T04:33:28.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:kyujinpy/OpenOrca-KO",
"arxiv:2306.02707",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | mncai | null | null | mncai/Mistral-7B-v0.1-orca-1k | 0 | 1,362 | transformers | 2023-10-22T04:19:01 | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/OpenOrca-KO
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- kyujinpy/OpenOrca-KO
### Prompt Template
- Llama Prompt Template
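The card does not spell the template out; below is a minimal generation sketch (not part of the original card). The `[INST] ... [/INST]` wrapping and the prompt text are assumptions based on the "Llama Prompt Template" note above.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mncai/Mistral-7B-v0.1-orca-1k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires `accelerate`
)

# Assumed Llama-style instruction wrapping
prompt = "[INST] Explain what instruction tuning is in one paragraph. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```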
## Limitations & Biases:
Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) | 3,404 | [
[
-0.03167724609375,
-0.0498046875,
0.01027679443359375,
0.019439697265625,
-0.0237884521484375,
0.00551605224609375,
0.0104522705078125,
-0.053070068359375,
0.016693115234375,
0.0265045166015625,
-0.05328369140625,
-0.04229736328125,
-0.045196533203125,
-0.00... |
timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k | 2023-05-11T00:50:56.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-12k",
"arxiv:2204.01697",
"arxiv:2201.03545",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k | 1 | 1,360 | timm | 2023-01-20T21:38:37 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-12k
---
# Model card for maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k
A timm specific MaxxViT-V2 image classification model (w/ MLP Log-CPB, a continuous log-coordinate relative position bias motivated by Swin-V2). Pretrained in `timm` on ImageNet-12k (an 11821-class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman.
ImageNet-12k pretraining and ImageNet-1k fine-tuning performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 116.1
- GMACs: 73.0
- Activations (M): 213.7
- Image size: 384 x 384
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 192, 192])
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1024, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| 22,585 | [
[
-0.052886962890625,
-0.031982421875,
0.0009202957153320312,
0.028778076171875,
-0.0243072509765625,
-0.0198974609375,
-0.011474609375,
-0.0245361328125,
0.04656982421875,
0.018829345703125,
-0.04302978515625,
-0.046356201171875,
-0.04949951171875,
-0.0051307... |
SJ-Ray/Re-Punctuate | 2022-06-29T09:05:36.000Z | [
"transformers",
"tf",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | SJ-Ray | null | null | SJ-Ray/Re-Punctuate | 7 | 1,359 | transformers | 2022-03-16T16:10:00 | ---
license: apache-2.0
---
<h2>Re-Punctuate:</h2>
Re-Punctuate is a T5 model that attempts to correct capitalization and punctuation in sentences.
<h3>DataSet:</h3>
DialogSum dataset (115056 Records) was used to fine-tune the model for Punctuation and Capitalization correction.
<h3>Usage:</h3>
<pre>
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('SJ-Ray/Re-Punctuate')
model = TFT5ForConditionalGeneration.from_pretrained('SJ-Ray/Re-Punctuate')
input_text = 'the story of this brave brilliant athlete whose very being was questioned so publicly is one that still captures the imagination'
inputs = tokenizer.encode("punctuate: " + input_text, return_tensors="tf")
result = model.generate(inputs)
decoded_output = tokenizer.decode(result[0], skip_special_tokens=True)
print(decoded_output)
</pre>
<h4> Example: </h4>
<b>Input:</b> the story of this brave brilliant athlete whose very being was questioned so publicly is one that still captures the imagination <br>
<b>Output:</b> The story of this brave, brilliant athlete, whose very being was questioned so publicly, is one that still captures the imagination.
<h4> Connect on: <a href="https://www.linkedin.com/in/suraj-kumar-710382a7" target="_blank">LinkedIn : Suraj Kumar</a></h4> | 1,316 | [
[
0.01065826416015625,
-0.04547119140625,
0.0230712890625,
0.0113677978515625,
-0.01419830322265625,
0.036041259765625,
-0.003726959228515625,
-0.01276397705078125,
0.0218048095703125,
0.0183563232421875,
-0.056610107421875,
-0.039520263671875,
-0.031768798828125,... |
luongphamit/NeverEnding-Dream2 | 2023-03-21T03:01:41.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"art",
"artistic",
"en",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | luongphamit | null | null | luongphamit/NeverEnding-Dream2 | 11 | 1,359 | diffusers | 2023-03-21T03:01:16 | ---
language:
- en
license: other
tags:
- stable-diffusion
- text-to-image
- art
- artistic
- diffusers
inference: true
duplicated_from: Lykon/NeverEnding-Dream
---
# NeverEnding Dream (NED)
## Official Repository
Read more about this model here: https://civitai.com/models/10028/neverending-dream-ned
Please also support by giving it 5 stars and a heart, which will notify you of new updates.
Also consider supporting me on Patreon or BuyMeACoffee
- https://www.patreon.com/Lykon275
- https://www.buymeacoffee.com/lykon
You can run this model on:
- https://sinkin.ai/m/qGdxrYG
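The card does not include a usage snippet; here is a minimal diffusers sketch (an assumption based on the `StableDiffusionPipeline` tag of this repo, not part of the original card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "luongphamit/NeverEnding-Dream2", torch_dtype=torch.float16
).to("cuda")

# Example prompt (illustrative only, not from the card)
prompt = "portrait of a woman in a flower field, golden hour, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("ned_example.png")
```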
Some sample output:






| 1,069 | [
[
-0.0203857421875,
-0.0071868896484375,
0.038909912109375,
0.02874755859375,
-0.0457763671875,
0.00548553466796875,
0.01517486572265625,
-0.0386962890625,
0.053497314453125,
0.061614990234375,
-0.08099365234375,
-0.05560302734375,
-0.050018310546875,
-0.00281... |
openbmb/VisCPM-Chat | 2023-08-12T11:45:22.000Z | [
"transformers",
"pytorch",
"viscpmchatbee",
"feature-extraction",
"custom_code",
"zh",
"en",
"has_space",
"region:us"
] | feature-extraction | openbmb | null | null | openbmb/VisCPM-Chat | 15 | 1,358 | transformers | 2023-06-28T02:18:29 | ---
language:
- zh
- en
---
# VisCPM
Simplified Chinese | [English](README_en.md)
<p align="center">
<p align="left">
<a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-dfd.svg"></a>
<a href=""><img src="https://img.shields.io/badge/python-3.8+-aff.svg"></a>
</p>
`VisCPM` is a family of open-source large multimodal models, which support multimodal conversational capabilities (`VisCPM-Chat` model) and text-to-image generation capabilities (`VisCPM-Paint` model) in both Chinese and English, achieving state-of-the-art performance among Chinese open-source multimodal models. `VisCPM` is trained based on the large language model [CPM-Bee](https://github.com/OpenBMB/CPM-Bee) with 10B parameters, fusing visual encoder (Q-Former) and visual decoder (Diffusion-UNet) to support visual inputs and outputs. Thanks to the good bilingual capability of CPM-Bee, `VisCPM` can be pre-trained with English multimodal data only and generalize well to achieve promising Chinese multimodal capabilities.
## VisCPM-Chat
`VisCPM-Chat` supports bilingual (Chinese-English) multimodal conversation grounded in images. The model uses `Q-Former` as the visual encoder and CPM-Bee (10B) as the base language model, and fuses the visual and language models through a language-modeling training objective. Training consists of two stages: pretraining and instruction tuning.
* Pretraining: We pretrained `VisCPM-Chat` on roughly 100M high-quality English image-text pairs, drawn from CC3M, CC12M, COCO, Visual Genome, LAION, and others. During pretraining, the language-model parameters are kept frozen and only the `Q-Former` parameters are updated, enabling efficient alignment of large-scale vision-language representations.
* Instruction tuning: We use the [LLaVA-150K](https://llava-vl.github.io/) English instruction-tuning data, mixed with the corresponding translated Chinese data, to instruction-tune the model so that its multimodal abilities align with user intent. In this stage we update all model parameters to make better use of the instruction-tuning data. Interestingly, we found that even when instruction tuning uses English data only, the model can understand Chinese questions but answers only in English, showing that its multilingual multimodal abilities generalize well. Adding a small amount of translated Chinese data during instruction tuning aligns the response language with the language of the user's question.
We evaluated the model on the LLaVA English test set and its translated Chinese test set. This benchmark measures open-domain conversation, detailed image description, and complex reasoning, with GPT-4 used for scoring. `VisCPM-Chat` achieves the best average performance on Chinese multimodal abilities, excels in general-domain conversation and complex reasoning, and also shows solid English multimodal abilities.
<table>
<tr>
<td align="center" rowspan="2" colspan="2">Model</td>
<td align="center" colspan="4">English</td>
<td align="center" colspan="4">Chinese</td>
</tr>
<tr>
<td align="center">Multimodal Conversation</td>
<td align="center">Detailed Description</td>
<td align="center">Complex Reasoning</td>
<td align="center">Average</td>
<td align="center">Multimodal Conversation</td>
<td align="center">Detailed Description</td>
<td align="center">Complex Reasoning</td>
<td align="center">Average</td>
</tr>
<tr>
<td align="center" rowspan="3">English Models</td>
<td align="center">MiniGPT4</td>
<td align="center">65</td>
<td align="center">67.3</td>
<td align="center">76.6</td>
<td align="center">69.7</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center">InstructBLIP</td>
<td align="center">81.9</td>
<td align="center">68</td>
<td align="center">91.2</td>
<td align="center">80.5</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center">LLaVA</td>
<td align="center">89.5</td>
<td align="center">70.4</td>
<td align="center">96.2</td>
<td align="center">85.6</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center" rowspan="4">Bilingual (Chinese-English)</td>
<td align="center">mPLUG-Owl </td>
<td align="center">64.6</td>
<td align="center">47.7</td>
<td align="center">80.1</td>
<td align="center">64.2</td>
<td align="center">76.3</td>
<td align="center">61.2</td>
<td align="center">77.8</td>
<td align="center">72</td>
</tr>
<tr>
<td align="center">VisualGLM</td>
<td align="center">62.4</td>
<td align="center">63</td>
<td align="center">80.6</td>
<td align="center">68.7</td>
<td align="center">76.6</td>
<td align="center">87.8</td>
<td align="center">83.6</td>
<td align="center">82.7</td>
</tr>
<tr>
<td align="center">Ziya (LLaMA 13B)</td>
<td align="center">82.7</td>
<td align="center">69.9</td>
<td align="center">92.1</td>
<td align="center">81.7</td>
<td align="center">85</td>
<td align="center">74.7</td>
<td align="center">82.4</td>
<td align="center">80.8</td>
</tr>
<tr>
<td align="center">VisCPM-Chat</td>
<td align="center">83.3</td>
<td align="center">68.9</td>
<td align="center">90.5</td>
<td align="center">81.1</td>
<td align="center">92.7</td>
<td align="center">76.1</td>
<td align="center">89.2</td>
<td align="center">86.3</td>
</tr>
</table>
## VisCPM-Paint
`VisCPM-Paint` supports bilingual (Chinese-English) text-to-image generation. The model uses CPM-Bee (10B) as the text encoder and a `UNet` as the image decoder, and fuses the language and vision models through a diffusion-model training objective. The language-model parameters stay frozen throughout training. We initialize the visual decoder with the UNet weights of [Stable Diffusion 2.1](https://github.com/Stability-AI/stablediffusion) and fuse it with the language model by gradually unfreezing its key bridging parameters: we first train the linear layers that map text representations into the vision model, then further unfreeze the cross-attention layers of the `UNet`. The model is trained on [LAION 2B](https://laion.ai/) English image-text pairs.
As with `VisCPM-Chat`, we found that thanks to CPM-Bee's bilingual ability, `VisCPM-Paint` can be trained on English image-text pairs only and still generalize to strong Chinese text-to-image generation, reaching the best results among Chinese open-source models. Adding 20M cleaned native Chinese image-text pairs and 120M image-text pairs translated into Chinese further improves its Chinese text-to-image ability. We sampled 30,000 images on MSCOCO and computed FID (Fréchet Inception Distance) and CLIP Score; the former evaluates the quality of the generated images and the latter evaluates how well the generated images match the input.
<table>
<tr>
<td align="center" rowspan="2">Model</td>
<td align="center" colspan="2">English</td>
<td align="center" colspan="2">Chinese</td>
</tr>
<tr>
<td align="center">FID↓</td>
<td align="center">CLIP Score↑</td>
<td align="center">FID↓</td>
<td align="center">CLIP Score↑</td>
</tr>
<tr>
<td align="center">AltDiffusion</td>
<td align="center">17.16</td>
<td align="center">25.24</td>
<td align="center">16.09</td>
<td align="center">24.05</td>
</tr>
<tr>
<td align="center">TaiyiDiffusion</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">15.58</td>
<td align="center">22.69</td>
</tr>
<tr>
<td align="center">Stable Diffusion</td>
<td align="center">9.08</td>
<td align="center">26.22</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center">VisCPM-Paint-en</td>
<td align="center">9.51</td>
<td align="center">25.35</td>
<td align="center">10.86</td>
<td align="center">23.38</td>
</tr>
<tr>
<td align="center">VisCPM-Paint-zh</td>
<td align="center">9.98</td>
<td align="center">25.04</td>
<td align="center">9.65</td>
<td align="center">24.17</td>
</tr>
</table>
# Installation
```Shell
conda create -n viscpm python=3.10 -y
conda activate viscpm
pip install setuptools
pip install diffusers jieba matplotlib numpy opencv_python
pip install pandas Pillow psutil pydantic scipy
pip install torch==1.13.1 torchscale==0.2.0 torchvision==0.14.1 timm
pip install transformers==4.28.0
pip install tqdm typing_extensions
pip install git+https://github.com/thunlp/OpenDelta.git
pip install "git+https://github.com/OpenBMB/CPM-Bee.git#egg=cpm-live&subdirectory=src"
```
VisCPM requires a single GPU with more than 40GB of memory to run. We will release a more memory-efficient inference method as soon as possible.
## Usage
```python
>>> from transformers import AutoModel, AutoTokenizer, AutoImageProcessor
>>> from PIL import Image
>>> tokenizer = AutoTokenizer.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True)
>>> processor = AutoImageProcessor.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True)
>>> model = AutoModel.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True).to('cuda')
>>> data = [{
>>> 'context': '',
>>> 'question': 'describe this image in detail.',
>>> 'image': tokenizer.unk_token * model.query_num,
>>> '<ans>': ''
>>> }]
>>> image = Image.open('case.jpg')
>>> result = model.generate(data, tokenizer, processor, image)
>>> print(result[0]['<ans>'])
这幅图片显示了一群热气球在天空中飞行。这些热气球漂浮在不同的地方,包括山脉、城市和乡村地区。
``` | 8,373 | [
[
-0.05743408203125,
-0.0477294921875,
0.00676727294921875,
0.024505615234375,
-0.0207672119140625,
0.004978179931640625,
-0.00991058349609375,
-0.02728271484375,
0.0239410400390625,
-0.002468109130859375,
-0.05078125,
-0.0286865234375,
-0.034454345703125,
-0.... |
mirroring/civitai_mirror | 2023-11-06T17:19:21.000Z | [
"diffusers",
"Civitai",
"mirror",
"stable diffusion",
"text-to-image",
"license:wtfpl",
"has_space",
"region:us"
] | text-to-image | mirroring | null | null | mirroring/civitai_mirror | 50 | 1,357 | diffusers | 2023-03-28T23:16:54 | ---
license: wtfpl
pipeline_tag: text-to-image
tags:
- Civitai
- mirror
- stable diffusion
---
# A mirror of anything interesting on civitai or elsewhere on the web.
This repo is *mostly* organized around the structure
that's necessary to import the entire repo into an
automatic1111 install. This is because models were originally
uploaded directly from such an install via google drive in
a colab session.
Currently I'm uploading files via my new space over at
https://hf.co/anonderpling/repo_uploader, however I expect
to go back to a real file system and git uploads soon,
hopefully paperspace can help with that.
## Workflow
My workflow for downloading files into paperspace gradient
is to download the entire repo *without* pulling LFS
files, do a sparse checkout, and then pull the files I
want with LFS --include (slow) or aria2 (fast). This
workflow should work with colab, too. Whether you use
colab or paperspace, you'll probably need the latest
version of git to use `sparse-checkout --add`.
## How to upload and download files
<details>
<summary>sparse checkout with git, bypassing LFS, download with aria2c</summary>
```bash
!GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:anonderpling/civitai_mirror # GIT_LFS_SKIP_SMUDGE=1 clones without downloading the LFS files; this is my command so I can push changes, you'll need to use https://hf.co/ instead of git@hf.co:
!cd civitai_mirror
!git sparse-checkout set embeddings # embeddings are small, so it's easy enough to just pull all of them
!git sparse-checkout add models/VAE # there's only a few VAEs, and they're generally needed, so grab all those too...
!git sparse-checkout add models/Stable-diffusion/illuminati* models/Stable-diffusion/revAnimated* # add some stable diffusion models I intend to work with in this session
!apt install aria2 # make sure aria2c is installed
# let's break the following command down into parts, since there's multiple commands on one line
# find models embeddings -type f -size -2k # find files in models and embeddings directories smaller than 2 kilobytes (these are the lfs pointers that were checked out)
# | while read a; do # let's build an aria2c input file
# echo "https://huggingface.co/anonderpling/civitai_mirror/resolve/main/${a}"; # tell aria2c where to find the file
# echo " out=${a}"; # tell aria2c where to place said file
# rm "${a}"; # remove the existing file, because I'm too lazy to look up the option to have aria2c overwrite it (plus if you stop in the middle, you can tell at a glance what else is needed)
# done | tee aria2.in.txt # end the loop, but watch to make sure there's nothing accidentally included by wildcards that shouldn't have been...downloads could take a while (and fill the disk) if I accidentally put a space before the *
# aria2c -x16 --split=16 -i aria2.in.txt # download all the files as fast as possible
!find models embeddings -type f -size -2k | while read a; do echo "https://huggingface.co/anonderpling/civitai_mirror/resolve/main/${a}"; echo " out=${a}"; rm "${a}"; done | tee aria2.in.txt
!aria2c -x16 --split=16 -i aria2.in.txt
```
</details>
#### Uploading files:
- [repo uploader](https://anonderpling-repo-uploader.hf.space), for single files. Requires write permission to the repo.
- [Civitai model uploader](https://mirroring-upload-civitai-model.hf.space), for uploading a model from Civitai. Can create a PR. Intended for uploading model previews and Civitai's API response JSON as well as the model, for use by the unofficial Civitai extension.
<details>
<summary>to upload more files via git</summary>
```bash
# enable the git lfs filters
!pip install huggingface_hub
!huggingface-cli lfs-enable-largefiles .
# yup, not telling. I'm an *anonymous* derpling, after all
!git config --local user.name 'not telling'
# really couldn't care less if this is accurate...maybe I'll start randomizing it...
!git config --local user.email 'anonderpling@users.noreply.huggingface.co'
# create an rsa key with no password
!ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_hf.co -P ''
# a clickable link in jupyter/colab/paperspace
print('https://hf.co/settings/keys')
# give the public key so it can be easily copied to huggingface
!cat ~/.ssh/id_hf.co.pub
# track files 1mb+ with lfs manually (huggingface filter deals with models automatically, but large previews will give you errors)
!find -type f -size +999k -not \( -name '*.safetensors' -o -name '*.ckpt' -o -name '*.pt' \) -exec git lfs track '{}' +
# IMPORTANT: make sure you add the git ssh key above before uploading
!sleep 1m # gives you time to do so
# upload your files now. do make sure you dont upload files that didnt download properly (interrupted aria2c, lfs pointers, etc)
!git add .gitattributes models embeddings
!git commit -m "add a message..."
!git push
```
</details>
<details>
<summary>to upload a single file via huggingface_hub:</summary>
```python
from huggingface_hub import HfApi  # client used for the upload call below
api = HfApi()

inurl = api.upload_file( # https://huggingface.co/docs/huggingface_hub/v0.14.1/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file
    token="hf_token", # only needed if you didn't already use huggingface_hub.login() previously.
    path_or_fileobj="/content/stable-diffusion-webui/models/Stable-diffusion/Rev-Animated.safetensors", # location of the file you want to upload
    path_in_repo="models/Stable-diffusion/Rev-Animated.safetensors",
    repo_id="mirroring/civitai_mirror",
    commit_message="Uploading Rev Animated", # optional.
    commit_description="I want to upload Rev Animated because it's a special file for me.\nPlease accept my PR. I don't want to host it on my own HF repo!", # optional. Required (here) for PRs.
    create_pr=True # optional. needed if you're not a special person allowed to add new files to the repo (ie, if you just want us to mirror something; make sure to fill out the description/message above, as well)
)
print("Uploaded file. Check it out at " + str(inurl))
```
</details>
<details>
<summary>to upload multiple files via huggingface_hub:</summary>
```python
from huggingface_hub import HfApi  # client used for the upload call below
api = HfApi()

inurl = api.upload_folder( # https://huggingface.co/docs/huggingface_hub/v0.14.1/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder
    token="hf_token", # only needed if you didn't already use huggingface_hub.login() previously.
    folder_path="/stable-diffusion-webui/models/",
    path_in_repo="models",
    repo_id="mirroring/civitai_mirror",
    allow_patterns="*.safetensors", # optional. Only upload certain files.
    ignore_patterns=["*.tmp","tmp/*","*.jpg"], # optional. Ignore certain files.
    commit_message="Uploading my own models",
    commit_description="I want to upload these models because...",
    create_pr=True # needed if you're not a special person allowed to add new files to the repo (ie, if you just want us to mirror something)
)
print("Uploaded files. Check them out at " + str(inurl))
```
</details>
#### Using Paperspace
It's extremely important to remove the SSH key from your HF account after you're done with it. This ensures that nobody else can access your account.
Paperspace makes free notebooks public, and I'm not sure if that includes filesystem access or outputs; if someone can access that SSH key and you didn't remove the access it generates, you've given them the ability to make changes to your repo! This means they could delete *everything*. If you're technically inclined, you can possibly use the Paperspace secrets configuration to hide such information (I'm not sure how it works yet).
Alternatively, you can add big files via
https://hf.co/anonderpling/repo_uploader before your
session (the renamed file part is pretty much added for
uploading from HF urls, but also works for adding preview
images), and manually upload the civitai.info files
locally (these are just simple civitai api responses
afaik)
## TODO:
1. finish moving files around (figure out a way to do so without 2 commits per file (one to copy, one to delete the file) and without downloading every single file)
2. move sfw models into a subdirectory
3. consider moving locons to their own directory in models, now that I'm using paperspace...
- Perpetual: keep an eye on civitai update notifications
## License
I like WTFPL. That means anything generated by @anonderpling is licensed WTFPL. That means pretty much only this readme and a couple scripts, since everything else is someone else's work.
Other files? You'll have to find their official sources to find their licenses. The licenses for the civitai uploads *might* be in the .civitai.info files, which are standard json as returned by the civitai API. | 8,510 | [
[
-0.035400390625,
-0.035186767578125,
0.040069580078125,
0.034515380859375,
-0.00891876220703125,
0.004791259765625,
0.0003681182861328125,
-0.0193328857421875,
0.055419921875,
0.02728271484375,
-0.045654296875,
-0.028778076171875,
-0.034210205078125,
0.01249... |
lcw99/llama2-ko-chang-instruct-chat | 2023-11-06T22:22:11.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ko",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | lcw99 | null | null | lcw99/llama2-ko-chang-instruct-chat | 1 | 1,355 | transformers | 2023-10-03T08:02:04 | ---
language:
- en
- ko
---
Base model: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
[
-0.029388427734375,
-0.039520263671875,
0.019012451171875,
0.04437255859375,
-0.0567626953125,
0.0039043426513671875,
0.035858154296875,
-0.0302886962890625,
0.053985595703125,
0.049163818359375,
-0.0548095703125,
-0.0204315185546875,
-0.05194091796875,
0.00... |
deepset/gelectra-large | 2022-08-15T11:50:29.000Z | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"dataset:oscar",
"arxiv:2010.10906",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null | deepset | null | null | deepset/gelectra-large | 15 | 1,353 | transformers | 2022-03-02T23:29:05 | ---
language: de
license: mit
datasets:
- wikipedia
- OPUS
- OpenLegalData
- oscar
---
# German ELECTRA large
Released in October 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka "bert-base-german-dbmdz-cased"). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it is the state-of-the-art German language model.
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** ELECTRA large (discriminator)
**Language:** German
## Performance
```
GermEval18 Coarse: 80.70
GermEval18 Fine: 55.16
GermEval14: 88.95
```
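The card does not include a usage snippet. As a rough sketch (not part of the original card), the discriminator checkpoint can be loaded with Transformers and fine-tuned for a downstream German task; `num_labels=2` and the example sentence are placeholders.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deepset/gelectra-large")
# The checkpoint ships without a task head; num_labels=2 is a placeholder for your own task.
model = AutoModelForSequenceClassification.from_pretrained("deepset/gelectra-large", num_labels=2)

inputs = tokenizer("Das ist ein Beispielsatz.", return_tensors="pt")
logits = model(**inputs).logits  # head is freshly initialized: fine-tune before trusting predictions
print(logits.shape)
```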
See also:
- deepset/gbert-base
- deepset/gbert-large
- deepset/gelectra-base
- deepset/gelectra-large
- deepset/gelectra-base-generator
- deepset/gelectra-large-generator
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Stefan Schweter: `stefan [at] schweter.eu`
Timo Möller: `timo.moeller [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
| 1,942 | [
[
-0.0379638671875,
-0.05157470703125,
0.030914306640625,
-0.0013628005981445312,
0.0017004013061523438,
0.0024890899658203125,
-0.031402587890625,
-0.046417236328125,
0.020721435546875,
0.03753662109375,
-0.033782958984375,
-0.054718017578125,
-0.0206146240234375... |
ELiRF/NASES | 2023-07-18T11:28:43.000Z | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | ELiRF | null | null | ELiRF/NASES | 3 | 1,352 | transformers | 2022-03-02T23:29:04 | ---
language: es
tags:
- summarization
widget:
- text: "La Agencia Valenciana de la Innovación (AVI) financia el desarrollo de un software que integra diferentes modelos y tecnologías para la monitorización y análisis multilingüe de las redes sociales. A través de técnicas de 'deep learning' y procesamiento del lenguaje natural es capaz de interpretar la ironía y las emociones en los textos, incluso en aquellos escritos en idiomas menos extendidos, a menudo no contemplados por las herramientas comerciales. La iniciativa, bautizada como 'Guaita', está liderada por el Instituto Valenciano de Investigación en Inteligencia Artificial (VRAIN), adscrito a la Universidad Politécnica de Valencia (UPV), que cuenta a su vez para su desarrollo con la colaboración del Instituto Valenciano de Informática (ITI) y la Corporación Valenciana de Mitjans de Comunicación (CVMC).De este modo, y a solicitud del usuario o usuaria, monitorizará las redes sociales para obtener la información asociada a los temas objeto de interés y ofrecerá los resultados de forma gráfica, bien a través de una interfaz web, bien mediante la generación de informes. El programa será, además, capaz de determinar la reputación de una empresa o institución a partir de dichos análisis gracias a la combinación de distintas tecnologías de procesamiento e interpretación, destaca la agencia en un comunicado."
---
**IMPORTANT:** On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding.
# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.
# The NASes model
News Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers, and Wikipedia articles in Spanish were used for pre-training the model (21GB of raw text -8.5 millions of documents-).
NASes is finetuned for the summarization task on 1.802.919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
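To illustrate how the checkpoint can be run for abstractive summarization with Transformers, here is a minimal sketch (not part of the original card); the generation hyper-parameters are assumptions you may want to tune.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ELiRF/NASES"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "La Agencia Valenciana de la Innovación (AVI) financia el desarrollo de un software ..."  # placeholder article
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=150, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```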
### BibTeX entry
```bibtex
@Article{app11219872,
AUTHOR = {Ahuir, Vicent and Hurtado, Lluís-F. and González, José Ángel and Segarra, Encarna},
TITLE = {NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish},
JOURNAL = {Applied Sciences},
VOLUME = {11},
YEAR = {2021},
NUMBER = {21},
ARTICLE-NUMBER = {9872},
URL = {https://www.mdpi.com/2076-3417/11/21/9872},
ISSN = {2076-3417},
DOI = {10.3390/app11219872}
}
``` | 5,258 | [
[
-0.03399658203125,
-0.0311431884765625,
0.0023651123046875,
0.044189453125,
-0.0406494140625,
0.0089874267578125,
-0.0167083740234375,
-0.044586181640625,
0.060760498046875,
0.022003173828125,
-0.03662109375,
-0.049652099609375,
-0.05291748046875,
0.03323364... |
phob0s/bert-tiny | 2023-01-26T09:55:34.000Z | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | phob0s | null | null | phob0s/bert-tiny | 1 | 1,351 | transformers | 2023-01-19T08:53:21 | Testclone of https://huggingface.co/prajjwal1/bert-tiny
Mentioned in
* Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics (Bhargava, Drozd, and Rogers)
* Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation (Turc et al.) | 277 | [
[
-0.0232391357421875,
-0.0394287109375,
0.04620361328125,
0.01535797119140625,
0.002429962158203125,
-0.005161285400390625,
-0.044525146484375,
-0.035797119140625,
0.01392364501953125,
-0.00949859619140625,
-0.045989990234375,
-0.00775146484375,
-0.01910400390625... |
mncai/Mistral-7B-v0.1-platy-1k | 2023-10-22T04:57:06.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"arxiv:2306.02707",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | mncai | null | null | mncai/Mistral-7B-v0.1-platy-1k | 0 | 1,351 | transformers | 2023-10-22T04:44:03 | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/KOpen-platypus
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- kyujinpy/KOpen-platypus
### Prompt Template
- Llama Prompt Template (see the usage sketch below)
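The card does not show inference code. Below is a minimal sketch (not from the original card) of loading the checkpoint with Transformers; the prompt string only approximates a Llama-style instruction wrapper, and the sampling settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mncai/Mistral-7B-v0.1-platy-1k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; drop it to load on a single device.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "[INST] Explain the difference between a list and a tuple in Python. [/INST]"  # approximate Llama-style template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```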
## Limitations & Biases:
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and it comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) | 3,412 | [
[
-0.030517578125,
-0.0487060546875,
0.01140594482421875,
0.0200347900390625,
-0.024688720703125,
0.00722503662109375,
0.011962890625,
-0.052978515625,
0.0168914794921875,
0.025543212890625,
-0.054412841796875,
-0.041290283203125,
-0.046600341796875,
-0.002962... |
ostris/embroidery_style_lora_sdxl | 2023-08-15T01:25:42.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"sdxl",
"license:apache-2.0",
"region:us",
"has_space"
] | text-to-image | ostris | null | null | ostris/embroidery_style_lora_sdxl | 2 | 1,349 | diffusers | 2023-08-15T00:46:55 | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- sdxl
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: captain america eating his boogers
---
# Embroidery Style - SDXL LoRA
### Tips
- No trigger words needed.
- Converts any prompt into an embroidered patch.
- Strength of 1.0 usually works, but you may need to increase or decrease it as needed (see the diffusers sketch below for one way to set it).
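For local use with `diffusers`, the LoRA can be loaded on top of the SDXL base model. This is a rough sketch rather than official instructions: the default weight file name and the way the LoRA scale is passed may need adjusting for your diffusers version.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Pass weight_name="..." explicitly if the default LoRA file name is not found in this repo.
pipe.load_lora_weights("ostris/embroidery_style_lora_sdxl")

prompt = "captain america eating his boogers"  # sample prompt from the card
image = pipe(prompt, cross_attention_kwargs={"scale": 1.0}).images[0]  # scale ~ LoRA strength
image.save("embroidered_patch.png")
```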
# Samples
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03349-3219327513-captain%20america%20eating%20his%20boogers%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03349-3219327513-captain%20america%20eating%20his%20boogers%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03341-3308281020-ant%20man%20eating%20lasagna%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03341-3308281020-ant%20man%20eating%20lasagna%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03340-1823840499-star%20lord%20eating%20ramen%20noodles%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03340-1823840499-star%20lord%20eating%20ramen%20noodles%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03338-782369059-thanos%20eating%20a%20hotdog%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03338-782369059-thanos%20eating%20a%20hotdog%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03329-688247848-rocket%20raccoon%20drinking%20a%20beer%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03329-688247848-rocket%20raccoon%20drinking%20a%20beer%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03326-3620475301-iron%20man%20eating%20spaghetti%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03326-3620475301-iron%20man%20eating%20spaghetti%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03325-3276530100-hulk%20eating%20an%20ice%20cream%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03325-3276530100-hulk%20eating%20an%20ice%20cream%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03321-4005862427-batman%20eating%20pizza%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03321-4005862427-batman%20eating%20pizza%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03314-1210391215-sexy%20stripper%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03314-1210391215-sexy%20stripper%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03312-1585236845-home%20alone%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03312-1585236845-home%20alone%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03308-3331935770-the%20magic%20school%20bus%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03308-3331935770-the%20magic%20school%20bus%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03306-3533380568-an%20orange%20tabby%20with%20a%20tophat%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03306-3533380568-an%20orange%20tabby%20with%20a%20tophat%20_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03305-3177229677-a%20man%20that%20is%20on%20fire%2C%20running_%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03305-3177229677-a%20man%20that%20is%20on%20fire%2C%20running_%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03300-259495106-trippy%2C%20psychedelic%2C%20hippie%20man%20%20tripping%20on%20mushrooms%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03300-259495106-trippy%2C%20psychedelic%2C%20hippie%20man%20%20tripping%20on%20mushrooms%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03286-1101754843-astronaut%20monkey%20in%20space%20%20%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03286-1101754843-astronaut%20monkey%20in%20space%20%20%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03283-3131897700-mario%20tripping%20on%20mushrooms%20%20%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03283-3131897700-mario%20tripping%20on%20mushrooms%20%20%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03280-3379092481-zombie%20eating%20brains%20%20%20%20_lora_embroidered_style_v1_sdxl_1_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03280-3379092481-zombie%20eating%20brains%20%20%20%20_lora_embroidered_style_v1_sdxl_1_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03277-3181046669-a%20juicy%20hamburger%20%20%20_lora_embroidered_style_v1_sdxl_0.9_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03277-3181046669-a%20juicy%20hamburger%20%20%20_lora_embroidered_style_v1_sdxl_0.9_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03275-2541887904-a%20bear%20dj%2C%20working%20at%20a%20rave%20%20_lora_embroidered_style_v1_sdxl_0.9_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03275-2541887904-a%20bear%20dj%2C%20working%20at%20a%20rave%20%20_lora_embroidered_style_v1_sdxl_0.9_.jpeg)
[<img src="https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03273-3798714744-a%20woman%20in%20black%20lingerie%20_lora_embroidered_style_v1_sdxl_0.9_.jpeg" style="max-width:400px; height:auto" />](https://huggingface.co/ostris/embroidery_style_lora_sdxl/resolve/main/samples/03273-3798714744-a%20woman%20in%20black%20lingerie%20_lora_embroidered_style_v1_sdxl_0.9_.jpeg)
| 8,501 | [
[
-0.050872802734375,
-0.057159423828125,
0.009124755859375,
0.038116455078125,
-0.03985595703125,
0.007045745849609375,
0.01232147216796875,
-0.07562255859375,
0.09014892578125,
0.050750732421875,
-0.051422119140625,
-0.026947021484375,
-0.058563232421875,
0.... |
huggingface-course/bert-finetuned-squad | 2021-11-11T17:49:56.000Z | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | huggingface-course | null | null | huggingface-course/bert-finetuned-squad | 5 | 1,348 | transformers | 2022-03-02T23:29:05 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: test-bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-bert-finetuned-squad
This model was trained from scratch on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
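As a rough illustration (not part of the auto-generated card), the checkpoint can be queried through the `question-answering` pipeline; the question and context below are placeholders.

```python
from transformers import pipeline

# Extractive QA: the model selects an answer span from the provided context.
qa = pipeline("question-answering", model="huggingface-course/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model was fine-tuned on the SQuAD dataset as part of the Hugging Face course.",
)
print(result["answer"], result["score"])
```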
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
| 1,003 | [
[
-0.05303955078125,
-0.05712890625,
0.0105743408203125,
0.023345947265625,
-0.0243072509765625,
-0.022003173828125,
-0.01358795166015625,
-0.0209197998046875,
0.003917694091796875,
0.023345947265625,
-0.07489013671875,
-0.0252532958984375,
-0.041595458984375,
... |
voidful/context-only-question-generator | 2023-03-29T15:40:54.000Z | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"question",
"generation",
"seq2seq",
"en",
"dataset:unifiedQA",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | voidful | null | null | voidful/context-only-question-generator | 15 | 1,348 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- bart
- question
- generation
- seq2seq
datasets:
- unifiedQA
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author J. K. Rowling. The novels chronicle the lives of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley, all of whom are students at Hogwarts School of Witchcraft and Wizardry. The main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal, overthrow the wizard governing body known as the Ministry of Magic and subjugate all wizards and Muggles(non-magical people)."
---
# context-only-question-generator
## Model description
This model is a sequence-to-sequence question generator which takes context as an input, and generates a question as an output.
It is based on a pretrained `bart-base` model.
#### How to use
Inputs should be organised into the following format:
```
context
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
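As a concrete illustration (a sketch, not part of the original card; the generation settings are assumptions), the model can be run as follows:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "voidful/context-only-question-generator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = "Harry Potter is a series of seven fantasy novels written by British author J. K. Rowling."
input_ids = tokenizer(context, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```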
| 1,130 | [
[
-0.0386962890625,
-0.08343505859375,
0.046417236328125,
0.0026111602783203125,
-0.0301666259765625,
-0.04241943359375,
0.0154571533203125,
0.001895904541015625,
0.006134033203125,
0.06121826171875,
-0.07220458984375,
-0.0159454345703125,
-0.0169677734375,
0.... |
rinna/bilingual-gpt-neox-4b-instruction-sft | 2023-09-14T11:06:07.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"en",
"dataset:Anthropic/hh-rlhf",
"license:mit",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | rinna | null | null | rinna/bilingual-gpt-neox-4b-instruction-sft | 14 | 1,348 | transformers | 2023-07-31T11:25:11 | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: mit
datasets:
- Anthropic/hh-rlhf
language:
- ja
- en
inference: false
---
# bilingual-gpt-neox-4b-instruction-sft

---
# Update
- **2023/08/02** We uploaded the newly trained `rinna/bilingual-gpt-neox-4b-instruction-sft` with the MIT license.
- Please refrain from using the previous model released on 2023/07/31 for commercial purposes if you have already downloaded it.
- The new model released on 2023/08/02 is built from datasets with less strict licenses and has better evaluation performance, so we suggest using the new model.
- For reference, we provide the MD5 checksum values for the `pytorch_model.bin` files of the previous and current models.
- 2023/07/31 model: `edf190a323c0ae63f71476700fb0b462`
- 2023/08/02 model: `de72aa5b66beee7b65783c96f687d186`
- **2023/07/31** In the previously released `rinna/bilingual-gpt-neox-4b-instruction-sft`, we found that part of the training data (i.e. Openchat ShareGPT4 and WizardLM) have a non-commercial license, and thus it does not comply with **the MIT license**. We decided to remove the previous version and build a new SFT model from datasets with less strict licenses. The new model will be uploaded in a few days. We sincerely apologize for our careless mistake.
---
# Overview
This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters.
The model is based on [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b) and has been finetuned to serve as an instruction-following conversational agent.
* **Model architecture**
A 36-layer, 2816-hidden-size transformer-based language model.
* **Fine-tuning**
The fine-tuning data is the subset of the following datasets.
* [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation
* [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation
* **Model Series**
| Variant | Link |
| :-- | :--|
| Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 |
| Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo |
| Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft |
| Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k |
| Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b |
| Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo |
| Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 |
| Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft |
| Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b |
* **Authors**
[Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Our evaluation experiments suggest that the bilingual-gpt-neox-4b-instruction-sft model performs slightly better than the previous [Japanese GPT-NeoX 3.6B PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) in Japanese tasks.
- *The 4-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, and JSQuAD.*
- *The 6-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, JSQuAD, XWinograd, and JAQKET-v2.*
| Model | 4-task average accuracy | 6-task average accuracy |
| :-- | :-- | :-- |
| bilingual-gpt-neox-4b-instruction-ppo | 61.01 | 61.16 |
| **bilingual-gpt-neox-4b-instruction-sft** | **61.02** | **61.69** |
| bilingual-gpt-neox-4b | 56.12 | 51.83 |
| japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 |
| japanese-gpt-neox-3.6b | 55.07 | 50.32 |
---
# I/O Format
A special format has been adopted to construct inputs.
* An input prompt is formatted as a conversation between `ユーザー` and `システム`.
* Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`).
* The input prompt should end with `"システム: "` to signal the model to generate a response.
* All the utterances in the input prompt should be separated by a newline `\n`.
Following is an example to construct input from a conversation.
~~~python
prompt = [
{
"speaker": "ユーザー",
"text": "Hello, you are an assistant that helps me learn Japanese."
},
{
"speaker": "システム",
"text": "Sure, what can I do for you?"
},
{
"speaker": "ユーザー",
"text": "VRはなんですか。"
}
]
prompt = [
f"{uttr['speaker']}: {uttr['text']}"
for uttr in prompt
]
prompt = "\n".join(prompt)
prompt = (
prompt
+ "\n"
+ "システム: "
)
print(prompt)
"""
ユーザー: Hello, you are an assistant that helps me learn Japanese.
システム: Sure, what can I do for you?
ユーザー: VRはなんですか。
システム:
"""
~~~
---
# How to use the model
**Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task.
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft")
if torch.cuda.is_available():
model = model.to("cuda")
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=512,
do_sample=True,
temperature=1.0,
top_p=0.85,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):])
print(output)
"""VRとはVirtual Realityの略で、仮想現実とも呼ばれます。これは、コンピューターを使用して仮想世界を作り出し、仮想世界上でコンピューターのゲームや仮想世界を体験するための技術です。この技術は、コンピューターやモバイ ルデバイスの進歩によって、2015年以降、ますます普及しています。VRは、ゲームや仮想世界、その他のアプリケー ションなどのさまざまな分野で、コンピューターと人間の相互作用の新しい方法を提供しています。</s>"""
~~~~
---
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
* The tokenizer has a vocabulary size of 65,536.
* It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces to avoid producing `<UNK>` tokens.
* It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured texts better.
* We turned off the default behaviour of prepending leading whitespace because it is not beneficial for processing Japanese.
* Specifically, single whitespace is always processed as one token so that any English word won't have a preceding whitespace like in many other tokenizers (e.g. `_Hello`).
* This decision trades the English processing efficiency for a unified way to treat whitespaces.
* It leads to a significantly lower loss of next token prediction on English data because whitespaces are easy to predict.
* **Don't forget to set `use_fast=False` to make the above features function correctly.**
---
# License
[The MIT license](https://opensource.org/licenses/MIT) | 7,455 | [
[
-0.0194854736328125,
-0.07415771484375,
0.0271148681640625,
0.0187530517578125,
-0.0154571533203125,
-0.00792694091796875,
-0.0200042724609375,
-0.0341796875,
0.020263671875,
0.0311737060546875,
-0.041259765625,
-0.047943115234375,
-0.037994384765625,
0.0220... |
ShoukanLabs/OpenNiji | 2023-05-29T08:39:20.000Z | [
"diffusers",
"art",
"anime",
"nijijourney",
"open",
"text-to-image",
"en",
"dataset:Korakoe/NijiJourney-Prompt-Pairs",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | ShoukanLabs | null | null | ShoukanLabs/OpenNiji | 86 | 1,347 | diffusers | 2023-01-21T08:20:20 | ---
thumbnail: https://i.imgur.com/7jAg1Jq.png
license: creativeml-openrail-m
datasets:
- Korakoe/NijiJourney-Prompt-Pairs
language:
- en
tags:
- art
- anime
- nijijourney
- open
pipeline_tag: text-to-image
---

# OpenNiji
The Stable Diffusion model trained on Nijijourney images!
# Announcements:
- OpenNiji-V2 is [Out Now!](https://huggingface.co/Korakoe/OpenNiji-V2)
- V3 of the dataset released [Here](https://huggingface.co/datasets/ShoukanLabs/OpenNiji-Dataset)
  - This is **NOT** the dataset we trained on; however, it still includes those images and is overall a higher-quality dataset thanks to NijiJourney V5!
## Changelog
- Added a LoRA version of the finetune; you should now be able to use OpenNiji on any model!
## Acknowledgements
- [Anythingv4.5 - Andite](https://huggingface.co/andite/anything-v4.0)
- [Nijijourney - Spellbrush](https://nijijourney.com/en/)
- [Kohya Trainer - bmaltais](https://github.com/bmaltais/kohya_ss)
## Results

```
1girl, eyes closed, slight smile, underwater, water bubbles, reflection, long light brown hair, bloom, depth of field, bokeh
```

```
masterpiece, best quality, 1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewellery, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt
```
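The prompts above can be reproduced locally with `diffusers`; the snippet below is a minimal sketch (not part of the original card), and the sampler settings are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ShoukanLabs/OpenNiji", torch_dtype=torch.float16).to("cuda")

prompt = "1girl, eyes closed, slight smile, underwater, water bubbles, reflection, long light brown hair, bloom, depth of field, bokeh"
image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("openniji_sample.png")
```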
### Small Note
This model already has the in01 trick applied, so this model should
be better at generating hands!
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 2,447 | [
[
-0.02740478515625,
-0.040496826171875,
0.02630615234375,
0.01418304443359375,
-0.036529541015625,
-0.03082275390625,
-0.004077911376953125,
-0.0274505615234375,
0.01268768310546875,
0.047393798828125,
-0.03765869140625,
-0.036407470703125,
-0.054840087890625,
... |
zeroshot/gte-large-sparse | 2023-10-24T16:53:49.000Z | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"sparse sparsity quantized onnx embeddings int8",
"mteb",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | zeroshot | null | null | zeroshot/gte-large-sparse | 0 | 1,347 | transformers | 2023-10-15T18:14:48 | ---
tags:
- sparse sparsity quantized onnx embeddings int8
- mteb
model-index:
- name: gte-large-sparse
results:
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.64253410928214
- type: cos_sim_spearman
value: 85.83388349410652
- type: euclidean_pearson
value: 86.86126159318735
- type: euclidean_spearman
value: 85.61580623591163
- type: manhattan_pearson
value: 86.6901132883383
- type: manhattan_spearman
value: 85.60255292187769
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.23314640591607
- type: cos_sim_spearman
value: 79.00078545104338
- type: euclidean_pearson
value: 83.48009254500714
- type: euclidean_spearman
value: 78.95413001389939
- type: manhattan_pearson
value: 83.46945566025941
- type: manhattan_spearman
value: 78.9241707208135
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.77526666043804
- type: cos_sim_spearman
value: 73.4849063285867
- type: euclidean_pearson
value: 78.04477932740524
- type: euclidean_spearman
value: 73.01394205771743
- type: manhattan_pearson
value: 78.08836684503294
- type: manhattan_spearman
value: 73.05074711098149
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.57839215661352
- type: cos_sim_spearman
value: 86.13854767345153
- type: euclidean_pearson
value: 85.12712609946449
- type: euclidean_spearman
value: 85.52497994789026
- type: manhattan_pearson
value: 85.06833141611173
- type: manhattan_spearman
value: 85.45003068636466
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.30485126978374
- type: cos_sim_spearman
value: 80.36497172462357
- type: euclidean_pearson
value: 82.91977909424605
- type: euclidean_spearman
value: 80.16995106297438
- type: manhattan_pearson
value: 82.88200991402184
- type: manhattan_spearman
value: 80.14259757215227
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.99883111314007
- type: cos_sim_spearman
value: 88.531352572377
- type: euclidean_pearson
value: 87.96834578059067
- type: euclidean_spearman
value: 88.44800718542935
- type: manhattan_pearson
value: 87.94889391725033
- type: manhattan_spearman
value: 88.45467695837115
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.4636984892402
- type: cos_sim_spearman
value: 84.0808920789148
- type: euclidean_pearson
value: 83.70613486028309
- type: euclidean_spearman
value: 84.35941626905009
- type: manhattan_pearson
value: 83.70259457073782
- type: manhattan_spearman
value: 84.35496521501604
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.76172944971023
- type: cos_sim_spearman
value: 89.4190945039165
- type: euclidean_pearson
value: 89.47263005347381
- type: euclidean_spearman
value: 89.49228360724095
- type: manhattan_pearson
value: 89.49959868816694
- type: manhattan_spearman
value: 89.5314536157954
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.57158223787549
- type: cos_sim_spearman
value: 66.75053533168037
- type: euclidean_pearson
value: 66.45526604831747
- type: euclidean_spearman
value: 66.14567667353113
- type: manhattan_pearson
value: 66.47352000151176
- type: manhattan_spearman
value: 66.21099856852885
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.055653571006
- type: cos_sim_spearman
value: 85.45387832634702
- type: euclidean_pearson
value: 86.31667154906651
- type: euclidean_spearman
value: 85.66079590537946
- type: manhattan_pearson
value: 86.2806853257308
- type: manhattan_spearman
value: 85.63700636713952
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.78811881188119
- type: cos_sim_ap
value: 94.67027715905307
- type: cos_sim_f1
value: 89.33074684772066
- type: cos_sim_precision
value: 86.7231638418079
- type: cos_sim_recall
value: 92.10000000000001
- type: dot_accuracy
value: 99.47128712871287
- type: dot_ap
value: 78.41478815918727
- type: dot_f1
value: 73.30049261083744
- type: dot_precision
value: 72.23300970873787
- type: dot_recall
value: 74.4
- type: euclidean_accuracy
value: 99.78415841584159
- type: euclidean_ap
value: 94.60075930867181
- type: euclidean_f1
value: 89.12175648702593
- type: euclidean_precision
value: 88.94422310756973
- type: euclidean_recall
value: 89.3
- type: manhattan_accuracy
value: 99.78415841584159
- type: manhattan_ap
value: 94.62867439278095
- type: manhattan_f1
value: 89.2337536372454
- type: manhattan_precision
value: 86.62900188323917
- type: manhattan_recall
value: 92.0
- type: max_accuracy
value: 99.78811881188119
- type: max_ap
value: 94.67027715905307
- type: max_f1
value: 89.33074684772066
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.09864695714371
- type: cos_sim_ap
value: 70.33704198164713
- type: cos_sim_f1
value: 66.22893954410307
- type: cos_sim_precision
value: 62.42410088743577
- type: cos_sim_recall
value: 70.52770448548813
- type: dot_accuracy
value: 79.11426357513263
- type: dot_ap
value: 49.15484584572233
- type: dot_f1
value: 51.12580243364951
- type: dot_precision
value: 40.13840830449827
- type: dot_recall
value: 70.3957783641161
- type: euclidean_accuracy
value: 85.15825236931514
- type: euclidean_ap
value: 70.51017350854076
- type: euclidean_f1
value: 66.45416294785159
- type: euclidean_precision
value: 64.29805082654823
- type: euclidean_recall
value: 68.7598944591029
- type: manhattan_accuracy
value: 85.1403707456637
- type: manhattan_ap
value: 70.47587863399994
- type: manhattan_f1
value: 66.4576802507837
- type: manhattan_precision
value: 63.32138590203107
- type: manhattan_recall
value: 69.92084432717678
- type: max_accuracy
value: 85.15825236931514
- type: max_ap
value: 70.51017350854076
- type: max_f1
value: 66.4576802507837
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.8539604921023
- type: cos_sim_ap
value: 85.71869912577101
- type: cos_sim_f1
value: 78.00535626720983
- type: cos_sim_precision
value: 76.46232344893885
- type: cos_sim_recall
value: 79.61194949183862
- type: dot_accuracy
value: 84.57717235223348
- type: dot_ap
value: 74.89496650237145
- type: dot_f1
value: 69.05327823892932
- type: dot_precision
value: 65.75666829166377
- type: dot_recall
value: 72.69787496150293
- type: euclidean_accuracy
value: 88.89471028835332
- type: euclidean_ap
value: 85.75169460500409
- type: euclidean_f1
value: 78.17055393586006
- type: euclidean_precision
value: 74.21118184334348
- type: euclidean_recall
value: 82.57622420696026
- type: manhattan_accuracy
value: 88.92187681918733
- type: manhattan_ap
value: 85.7496679471825
- type: manhattan_f1
value: 78.11088295687884
- type: manhattan_precision
value: 75.82083061535117
- type: manhattan_recall
value: 80.5435786880197
- type: max_accuracy
value: 88.92187681918733
- type: max_ap
value: 85.75169460500409
- type: max_f1
value: 78.17055393586006
license: mit
language:
- en
---
# gte-large-sparse
This is the sparse ONNX variant of the [gte-large](https://huggingface.co/thenlper/gte-large) embeddings model created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization (INT8) and 50% unstructured pruning.
Current list of sparse and quantized gte ONNX models:
| Links | Sparsification Method |
| --------------------------------------------------------------------------------------------------- | ---------------------- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import SentenceTransformer
model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)
# Our sentences to encode
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding.shape)
print("")
```
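As a follow-up, the embeddings can be compared directly. A minimal sketch (not part of the original card), assuming `embeddings` from the snippet above is a 2D NumPy array with one row per sentence:

```python
import numpy as np

# Cosine similarity between the first two sentence embeddings from the snippet above
a, b = embeddings[0], embeddings[1]
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity between sentence 0 and 1: {cosine:.4f}")
```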
For further details regarding DeepSparse & Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
 | 13,159 | [
[
-0.036895751953125,
-0.05816650390625,
0.042816162109375,
0.0201568603515625,
0.00296783447265625,
-0.01198577880859375,
-0.02008056640625,
-0.0012187957763671875,
0.038055419921875,
0.0229034423828125,
-0.06658935546875,
-0.059478759765625,
-0.04534912109375,
... |
T-Systems-onsite/german-roberta-sentence-transformer-v2 | 2023-04-27T19:28:49.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"search",
"roberta",
"xlm-r-distilroberta-base-paraphrase-v1",
"paraphrase",
"de",
"dataset:STSbenchmark",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | T-Systems-onsite | null | null | T-Systems-onsite/german-roberta-sentence-transformer-v2 | 4 | 1,346 | transformers | 2022-03-02T23:29:05 | ---
language: de
license: mit
tags:
- sentence_embedding
- search
- pytorch
- xlm-roberta
- roberta
- xlm-r-distilroberta-base-paraphrase-v1
- paraphrase
datasets:
- STSbenchmark
metrics:
- Spearman’s rank correlation
- cosine similarity
---
# German RoBERTa for Sentence Embeddings V2
**The new [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model is slightly better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model.**
| 605 | [
[
0.004070281982421875,
-0.06787109375,
0.04315185546875,
0.03668212890625,
-0.025604248046875,
-0.0025157928466796875,
-0.00897979736328125,
-0.0245513916015625,
0.025970458984375,
0.015899658203125,
-0.0328369140625,
-0.03192138671875,
-0.063720703125,
0.000... |
sail-rvc/CardiB2333333 | 2023-07-14T07:20:00.000Z | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | sail-rvc | null | null | sail-rvc/CardiB2333333 | 0 | 1,346 | transformers | 2023-07-14T07:19:46 |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# CardiB2333333
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:20:00
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
| 381 | [
[
-0.034576416015625,
-0.01800537109375,
0.016204833984375,
0.0073394775390625,
-0.0377197265625,
0.006927490234375,
0.0153656005859375,
-0.0002608299255371094,
0.0279388427734375,
0.07000732421875,
-0.04852294921875,
-0.045318603515625,
-0.033843994140625,
-0... |
gagneurlab/SpeciesLM | 2023-08-14T09:27:08.000Z | [
"license:mit",
"region:us"
] | null | gagneurlab | null | null | gagneurlab/SpeciesLM | 0 | 1,346 | null | 2023-08-14T08:41:30 | ---
license: mit
---
Load each model using:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("gagneurlab/SpeciesLM", revision = "<<choose model type>>")
model = AutoModelForMaskedLM.from_pretrained("gagneurlab/SpeciesLM", revision = "<<choose model type>>")
```
Model type:
- Species LM, 3' region: `downstream_species_lm`
- Agnostic LM, 3' region: `downstream_agnostic_lm`
- Species LM, 5' region: `upstream_species_lm`
- Agnostic LM, 5' region: `upstream_agnostic_lm`
| 548 | [
[
-0.036529541015625,
-0.0021457672119140625,
0.020843505859375,
0.0278472900390625,
-0.022308349609375,
0.001773834228515625,
0.02899169921875,
-0.004360198974609375,
0.0070343017578125,
0.05572509765625,
-0.035736083984375,
-0.017547607421875,
-0.047393798828125... |
TheBloke/Tinyllama-2-1b-miniguanaco-AWQ | 2023-10-14T16:24:07.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Tinyllama-2-1b-miniguanaco-AWQ | 2 | 1,346 | transformers | 2023-10-11T03:20:45 | ---
base_model: abdgrt/Tinyllama-2-1b-miniguanaco
inference: false
license: other
model_creator: Odunusi Abraham Ayoola
model_name: Tinyllama 2 1B MiniGuanaco
model_type: llama
prompt_template: '### Human: {prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tinyllama 2 1B MiniGuanaco - AWQ
- Model creator: [Odunusi Abraham Ayoola](https://huggingface.co/abdgrt)
- Original model: [Tinyllama 2 1B MiniGuanaco](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco)
<!-- description start -->
## Description
This repo contains AWQ model files for [Odunusi Abraham Ayoola's Tinyllama 2 1B MiniGuanaco](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of Llama AWQ models for high-throughput concurrent inference in multi-user server scenarios.
As of September 25th 2023, preliminary Llama-only AWQ support has also been added to [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference).
Note that, at the time of writing, overall throughput is still lower than running vLLM or TGI with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF)
* [Odunusi Abraham Ayoola's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Guanaco
```
### Human: {prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.77 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
Note: at the time of writing, vLLM has not yet done a new release with AWQ support.
If you try the vLLM examples below and get an error about `quantization` being unrecognised, or other AWQ-related issues, please install vLLM from Github source.
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Tinyllama-2-1b-miniguanaco-AWQ --quantization awq --dtype half
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Tinyllama-2-1b-miniguanaco-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Tinyllama-2-1b-miniguanaco-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''### Human: {prompt}
### Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Tinyllama-2-1b-miniguanaco-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''### Human: {prompt}
### Assistant:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
- [vLLM](https://github.com/vllm-project/vllm)
- [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
TGI merged AWQ support on September 25th, 2023: [TGI PR #1054](https://github.com/huggingface/text-generation-inference/pull/1054). Use the `:latest` Docker container until the next TGI release is made.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Odunusi Abraham Ayoola's Tinyllama 2 1B MiniGuanaco
No original model card was available.
| 12,353 | [
[
-0.03955078125,
-0.0604248046875,
0.02960205078125,
0.0008406639099121094,
-0.0174713134765625,
-0.0128326416015625,
0.0003254413604736328,
-0.037109375,
0.0005469322204589844,
0.022918701171875,
-0.051422119140625,
-0.04254150390625,
-0.019500732421875,
-0.... |
Salesforce/codet5-large | 2022-07-07T11:55:19.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2109.00859",
"arxiv:2207.01780",
"arxiv:1909.09436",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | Salesforce | null | null | Salesforce/codet5-large | 49 | 1,343 | transformers | 2022-07-06T03:56:45 | ---
license: bsd-3-clause
---
# CodeT5 (large-size model 770M)
## Model description
CodeT5 is a family of encoder-decoder language models for code from the paper: [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/pdf/2109.00859.pdf) by Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi.
The checkpoint included in this repository is denoted as **CodeT5-large** (770M), which is introduced by the paper: [CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning](https://arxiv.org/pdf/2207.01780.pdf) by Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, Steven C.H. Hoi.
## Training data
CodeT5-large was pretrained on [CodeSearchNet](https://arxiv.org/abs/1909.09436) data in six programming languages (Ruby/JavaScript/Go/Python/Java/PHP). See Section 4.1 of the [paper](https://arxiv.org/pdf/2207.01780.pdf) for more details.
## Training procedure
CodeT5-large was pretrained using masked span prediction objective for 150 epochs. See Section 4.1 of the [paper](https://arxiv.org/pdf/2207.01780.pdf) for more details.
## Evaluation results
We validate the effectiveness of this checkpoint pretrained with simplified strategies on [CodeXGLUE](https://github.com/microsoft/CodeXGLUE) benchmark. See Appendix A.1 of the [paper](https://arxiv.org/pdf/2207.01780.pdf) for more details.
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-large")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-large")
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids
# simply generate a single sequence
generated_ids = model.generate(input_ids, max_length=8)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@inproceedings{CodeT52021,
author = {Yue Wang and Weishi Wang and Shafiq R. Joty and Steven C. H. Hoi},
title = {CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation},
booktitle = {EMNLP},
pages = {8696--8708},
publisher = {Association for Computational Linguistics},
year = {2021}
}
@article{CodeRL2022,
author = {Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, Steven C.H. Hoi},
title = {CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning},
journal = {arXiv preprint},
volume = {abs/2207.01780},
year = {2022}
}
``` | 2,753 | [
[
-0.0340576171875,
-0.02996826171875,
0.01436614990234375,
0.0140533447265625,
-0.0108795166015625,
0.0174102783203125,
-0.0206451416015625,
-0.030364990234375,
-0.0164337158203125,
0.01490020751953125,
-0.0306854248046875,
-0.04254150390625,
-0.042999267578125,
... |
sail-rvc/SpongeBob_SquarePants__RVC_v2_ | 2023-07-14T07:33:31.000Z | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | sail-rvc | null | null | sail-rvc/SpongeBob_SquarePants__RVC_v2_ | 1 | 1,342 | transformers | 2023-07-14T07:32:15 |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# SpongeBob_SquarePants__RVC_v2_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:33:31
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
| 398 | [
[
-0.03607177734375,
-0.0282135009765625,
0.019439697265625,
0.0268707275390625,
-0.033477783203125,
0.009124755859375,
0.0149383544921875,
-0.0022430419921875,
0.0421142578125,
0.0706787109375,
-0.037261962890625,
-0.022186279296875,
-0.04052734375,
-0.011550... |
ehartford/WizardLM-Uncensored-Falcon-7b | 2023-06-01T16:48:59.000Z | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | ehartford | null | null | ehartford/WizardLM-Uncensored-Falcon-7b | 44 | 1,341 | transformers | 2023-06-01T15:31:58 | ---
license: apache-2.0
---
This is WizardLM trained on top of tiiuae/falcon-7b, with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Prompt format is Wizardlm.
```
What is a falcon? Can I keep one as a pet?
### Response:
``` | 971 | [
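A minimal generation sketch using this prompt format (not part of the original card; `trust_remote_code=True` and the sampling settings below are assumptions, since the Falcon architecture shipped as custom code at release):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ehartford/WizardLM-Uncensored-Falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

# WizardLM-style prompt: the instruction followed by "### Response:"
prompt = "What is a falcon? Can I keep one as a pet?\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```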
[
-0.030914306640625,
-0.050537109375,
-0.0014123916625976562,
-0.0020999908447265625,
-0.018402099609375,
-0.01406097412109375,
0.02545166015625,
-0.021484375,
0.01641845703125,
0.0643310546875,
-0.04840087890625,
-0.0281219482421875,
-0.03240966796875,
-0.00... |
TheLastBen/Josef_Koudelka_Style_SDXL | 2023-08-07T11:05:24.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"region:us",
"has_space"
] | text-to-image | TheLastBen | null | null | TheLastBen/Josef_Koudelka_Style_SDXL | 4 | 1,339 | diffusers | 2023-08-01T21:56:15 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: josef koudelka
widget:
- text: by josef koudelka
---
### Josef Koudelka Photography Style
#### SDXL LoRA by TheLastBen
#### Prompts to start with :
closeup on a man in a protest by josef koudelka
an empty dark old room's fireplace, closeup by josef koudelka
extreme closeup on a cart wheel in a forest by josef koudelka
!Do not use long negative prompts, "ugly, fake, blurry" is enough.
---
Trained using https://github.com/TheLastBen/fast-stable-diffusion SDXL trainer.
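A minimal diffusers sketch (not part of the original card), assuming the LoRA weights in this repo load onto the SDXL base model listed above via `load_lora_weights`:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("TheLastBen/Josef_Koudelka_Style_SDXL")

# One of the suggested prompts, with the short negative prompt recommended above
image = pipe(
    "closeup on a man in a protest by josef koudelka",
    negative_prompt="ugly, fake, blurry",
    num_inference_steps=30,
).images[0]
image.save("koudelka_style.png")
```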
#### Sample pictures:
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
| 3,295 | [
[
-0.06500244140625,
-0.048492431640625,
0.025146484375,
0.040618896484375,
-0.035125732421875,
-0.005702972412109375,
0.007312774658203125,
-0.0518798828125,
0.0872802734375,
0.0270843505859375,
-0.07904052734375,
-0.053802490234375,
-0.032958984375,
0.002975... |
mncai/Mistral-7B-v0.1-orca_platy-1k | 2023-10-22T05:13:25.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"dataset:kyujinpy/OpenOrca-KO",
"arxiv:2306.02707",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | mncai | null | null | mncai/Mistral-7B-v0.1-orca_platy-1k | 0 | 1,339 | transformers | 2023-10-22T04:58:43 | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/KOpen-platypus
- kyujinpy/OpenOrca-KO
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- kyujinpy/KOpen-platypus
- kyujinpy/OpenOrca-KO
### Prompt Template
- Llama Prompt Template
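A minimal generation sketch (not part of the original card). The card only says "Llama Prompt Template", so the `[INST]` wrapping below is an assumption and should be checked against the actual training setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mncai/Mistral-7B-v0.1-orca_platy-1k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Assumed Llama-2 style instruction wrapping (verify against the training template)
prompt = "[INST] 한국의 수도는 어디인가요? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```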
## Limitations & Biases:
Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) | 3,455 | [
[
-0.031158447265625,
-0.0491943359375,
0.01087188720703125,
0.0191802978515625,
-0.0244293212890625,
0.0059051513671875,
0.0111541748046875,
-0.05322265625,
0.0171051025390625,
0.0262603759765625,
-0.053314208984375,
-0.041748046875,
-0.04571533203125,
-0.003... |
AlekseyKorshuk/vicuna-7b | 2023-04-10T21:34:38.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | AlekseyKorshuk | null | null | AlekseyKorshuk/vicuna-7b | 109 | 1,338 | transformers | 2023-04-05T03:30:23 | ---
license: other
---
# Vicuna 7B without "ethics" filtering
This repository contains an alternative version of the [Vicuna 7B model](https://huggingface.co/lmsys/vicuna-7b-delta-v0).
This model was natively fine-tuned using ShareGPT data, but without the "ethics" filtering used for the original Vicuna.
[A GPTQ quantised 4-bit version is available here](https://huggingface.co/TheBloke/vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g).
# Original Vicuna Model Card
## Model details
**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:**
https://vicuna.lmsys.org/
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details. | 1,679 | [
[
-0.0228424072265625,
-0.0667724609375,
0.041290283203125,
0.018798828125,
-0.04229736328125,
-0.0213623046875,
-0.006694793701171875,
-0.046417236328125,
0.01140594482421875,
0.0623779296875,
-0.046478271484375,
-0.035064697265625,
-0.0301513671875,
-0.00469... |
dangvantuan/CrossEncoder-camembert-large | 2022-03-28T13:47:00.000Z | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"Text",
"Sentence Similarity",
"Sentence-Embedding",
"camembert-base",
"sentence-similarity",
"fr",
"dataset:stsb_multi_mt",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | dangvantuan | null | null | dangvantuan/CrossEncoder-camembert-large | 7 | 1,337 | transformers | 2022-03-28T13:19:00 | ---
pipeline_tag: sentence-similarity
language: fr
datasets:
- stsb_multi_mt
tags:
- Text
- Sentence Similarity
- Sentence-Embedding
- camembert-base
license: apache-2.0
model-index:
- name: sentence-camembert-base by Van Tuan DANG
results:
- task:
name: Sentence-Embedding
type: Text Similarity
dataset:
name: Text Similarity fr
type: stsb_multi_mt
args: fr
metrics:
- name: Test Pearson correlation coefficient
type: Pearson_correlation_coefficient
value: xx.xx
---
## Model
Cross-Encoder for sentence-similarity
This model was trained using [sentence-transformers](https://www.SBERT.net) Cross-Encoder class.
## Training Data
This model was trained on the [STS benchmark dataset](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train). The model predicts a score between 0 and 1 for the semantic similarity of two sentences.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('dangvantuan/CrossEncoder-camembert-large', max_length=128)
scores = model.predict([('Un avion est en train de décoller.', "Un homme joue d'une grande flûte."), ("Un homme étale du fromage râpé sur une pizza.", "Une personne jette un chat au plafond") ])
```
## Evaluation
The model can be evaluated as follows on the French test data of stsb.
```python
from sentence_transformers.readers import InputExample
from sentence_transformers.cross_encoder.evaluation import CECorrelationEvaluator
from datasets import load_dataset
def convert_dataset(dataset):
    dataset_samples = []
    for df in dataset:
        score = float(df['similarity_score']) / 5.0  # Normalize score to range 0 ... 1
        inp_example = InputExample(texts=[df['sentence1'], df['sentence2']], label=score)
        dataset_samples.append(inp_example)
    return dataset_samples
# Loading the dataset for evaluation
df_dev = load_dataset("stsb_multi_mt", name="fr", split="dev")
df_test = load_dataset("stsb_multi_mt", name="fr", split="test")
# Convert the dataset for evaluation
# For Dev set:
dev_samples = convert_dataset(df_dev)
val_evaluator = CECorrelationEvaluator.from_input_examples(dev_samples, name='sts-dev')
val_evaluator(model, output_path="./")
# For Test set
test_samples = convert_dataset(df_test)
test_evaluator = CECorrelationEvaluator.from_input_examples(test_samples, name='sts-test')
test_evaluator(model, output_path="./")
```
**Test Result**:
The performance is measured using Pearson and Spearman correlation:
- On dev
| Model | Pearson correlation | Spearman correlation | #params |
| ------------- | ------------- | ------------- |------------- |
| [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large)| 90.11 |90.01 | 336M |
- On test
| Model | Pearson correlation | Spearman correlation |
| ------------- | ------------- | ------------- |
| [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large)| 88.16 | 87.57| | 3,288 | [
[
-0.025848388671875,
-0.05389404296875,
0.0266571044921875,
0.032257080078125,
-0.015869140625,
-0.01403045654296875,
-0.031158447265625,
-0.0187225341796875,
0.00659942626953125,
0.0213623046875,
-0.0335693359375,
-0.0484619140625,
-0.055023193359375,
0.0074... |
lucas-leme/FinBERT-PT-BR | 2023-10-16T13:59:09.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | lucas-leme | null | null | lucas-leme/FinBERT-PT-BR | 12 | 1,337 | transformers | 2022-12-04T22:15:16 | ---
language: pt
license: apache-2.0
widget:
- text: "O futuro de DI caiu 20 bps nesta manhã"
example_title: "Example 1"
- text: "O Nubank decidiu cortar a faixa de preço da oferta pública inicial (IPO) após revés no humor dos mercados internacionais com as fintechs."
example_title: "Example 2"
- text: "O Ibovespa acompanha correção do mercado e fecha com alta moderada"
example_title: "Example 3"
---
# FinBERT-PT-BR : Financial BERT PT BR
FinBERT-PT-BR is a pre-trained NLP model to analyze sentiment of Brazilian Portuguese financial texts.
The model was trained in two main stages: language modeling and sentiment modeling. In the first stage, a language model was trained with more than 1.4 million texts of financial news in Portuguese.
From this first training, it was possible to build a sentiment classifier with few labeled texts (500) that presented a satisfactory convergence.
At the end of the work, a comparative analysis with other models and the possible applications of the developed model are presented.
In the comparative analysis, it was possible to observe that the developed model presented better results than the current models in the state of the art.
Among the applications, it was demonstrated that the model can be used to build sentiment indices, investment strategies and macroeconomic data analysis, such as inflation.
## Applications
### Sentiment Index

## Usage
#### BertForSequenceClassification
```python
from transformers import AutoTokenizer, BertForSequenceClassification
import numpy as np
pred_mapper = {
0: "POSITIVE",
1: "NEGATIVE",
2: "NEUTRAL"
}
tokenizer = AutoTokenizer.from_pretrained("lucas-leme/FinBERT-PT-BR")
finbertptbr = BertForSequenceClassification.from_pretrained("lucas-leme/FinBERT-PT-BR")
tokens = tokenizer(["Hoje a bolsa caiu", "Hoje a bolsa subiu"], return_tensors="pt",
padding=True, truncation=True, max_length=512)
finbertptbr_outputs = finbertptbr(**tokens)
preds = [pred_mapper[np.argmax(pred)] for pred in finbertptbr_outputs.logits.cpu().detach().numpy()]
```
#### Pipeline
```python
from transformers import (
AutoTokenizer,
BertForSequenceClassification,
pipeline,
)
finbert_pt_br_tokenizer = AutoTokenizer.from_pretrained("lucas-leme/FinBERT-PT-BR")
finbert_pt_br_model = BertForSequenceClassification.from_pretrained("lucas-leme/FinBERT-PT-BR")
finbert_pt_br_pipeline = pipeline(task='text-classification', model=finbert_pt_br_model, tokenizer=finbert_pt_br_tokenizer)
finbert_pt_br_pipeline(['Hoje a bolsa caiu', 'Hoje a bolsa subiu'])
```
## Author
- [Lucas Leme](https://www.linkedin.com/in/lucas-leme-santos/) - lucaslssantos99@gmail.com
## Paper
- Paper: [FinBERT-PT-BR: Sentiment Analysis of Texts in Portuguese from the Financial Market](https://sol.sbc.org.br/index.php/bwaif/article/view/24960)
- Undergraduate thesis: [FinBERT-PT-BR: Análise de sentimentos de textos em português referentes ao mercado financeiro](https://pcs.usp.br/pcspf/wp-content/uploads/sites/8/2022/12/Monografia_PCS3860_COOP_2022_Grupo_C12.pdf)
| 3,140 | [
[
-0.04364013671875,
-0.04644775390625,
-0.0034236907958984375,
0.039764404296875,
-0.03375244140625,
0.0046539306640625,
-0.014678955078125,
-0.023162841796875,
0.0164642333984375,
0.0260009765625,
-0.030853271484375,
-0.049285888671875,
-0.054962158203125,
0... |
stablediffusionapi/realistic-vision-v40 | 2023-08-29T05:57:22.000Z | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | stablediffusionapi | null | null | stablediffusionapi/realistic-vision-v40 | 4 | 1,337 | diffusers | 2023-07-12T14:53:49 | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Realistic Vision V4.0 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace the key in the code below and change **model_id** to "realistic-vision-v40"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/realistic-vision-v40)
Model link: [View model](https://stablediffusionapi.com/models/realistic-vision-v40)
Credits: [View credits](https://civitai.com/?query=Realistic%20Vision%20V4.0)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "realistic-vision-v40",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | 2,500 | [
[
-0.03753662109375,
-0.051605224609375,
0.042022705078125,
0.01384735107421875,
-0.038726806640625,
0.004974365234375,
0.02386474609375,
-0.044342041015625,
0.036865234375,
0.04534912109375,
-0.061737060546875,
-0.06109619140625,
-0.026275634765625,
-0.001349... |
retrieva-jp/t5-xl | 2023-05-10T01:01:04.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ja",
"arxiv:2002.05202",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | retrieva-jp | null | null | retrieva-jp/t5-xl | 14 | 1,336 | transformers | 2023-04-26T07:19:08 | ---
license: cc-by-sa-4.0
language:
- ja
---
# Model card for model ID
This is a T5 v1.1 model, pre-trained on a Japanese corpus.
## Model details
T5 is a Transformer-based Encoder-Decoder model, now in v1.1, with the following improvements over the original T5.
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see https://arxiv.org/abs/2002.05202 .
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger d_model and smaller num_heads and d_ff.
This model is based on T5 v1.1. It was pre-trained on a Japanese corpus. For the Japanese corpus, Japanese Wikipedia and mC4/ja were used.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Retrieva, Inc.
- **Model type:** T5 v1.1
- **Language(s) (NLP):** Japanese
- **License:** CC-BY-SA 4.0 Although commercial use is permitted, we kindly request that you contact us beforehand.
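Example usage (not part of the original card): a minimal span-infilling sketch with the standard Hugging Face T5 interface. The checkpoint is only pre-trained, so it is mainly a starting point for task-specific fine-tuning; the Japanese example sentence below is arbitrary.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("retrieva-jp/t5-xl")
model = T5ForConditionalGeneration.from_pretrained("retrieva-jp/t5-xl")

# Span infilling: the model was pre-trained to fill sentinel tokens such as <extra_id_0>
text = "東京は日本の<extra_id_0>です。"
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```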
## Training Details
We use T5X (https://github.com/google-research/t5x) for the training of this model, and it has been converted to the Huggingface transformer format.
## Training Data
The training data used is
- The Japanese part of the multilingual C4(mC4/ja).
- Japanese Wikipedia(20220920).
#### Preprocessing
The following filtering is done
- Remove documents that do not use a single hiragana character. This removes English-only documents and documents in Chinese.
- Whitelist-style filtering using the top level domain of URL to remove affiliate sites.
#### Training Hyperparameters
- dropout rate: 0.0
- batch size: 128
- bf16
- input length: 512
- output length: 114
- Otherwise, the default value of T5X (https://github.com/google-research/t5x/blob/main/t5x/examples/t5/t5_1_1/xl.gin) is followed, including the following.
- optimizer: Adafactor
- base_learning_rate: 1.0
- warmup steps: 10000
#### Speeds, Sizes, Times
We trained 524288 steps.
## Technical Specifications
### Model Architecture and Objective
Model architecture.
- T5 v1.1(https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511)
- Size: XL(~3 billion parameters)
### Compute Infrastructure
Google Cloud TPU v3-128.
#### Software
- T5X(https://github.com/google-research/t5x).
## More Information
https://note.com/retrieva/n/n7b4186dc5ada (in Japanese)
## Model Card Authors
Jiro Nishitoba
## Model Card Contact
pr@retrieva.jp
| 2,590 | [
[
-0.0293731689453125,
-0.0323486328125,
0.0243377685546875,
0.000026047229766845703,
-0.0303497314453125,
0.0022563934326171875,
-0.007366180419921875,
-0.038818359375,
0.0035858154296875,
0.0269012451171875,
-0.058441162109375,
-0.07135009765625,
-0.053405761718... |
Yntec/NeverExisted | 2023-10-25T15:26:05.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Retro",
"DucHaiten",
"elldreths",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Yntec | null | null | Yntec/NeverExisted | 3 | 1,336 | diffusers | 2023-07-15T15:18:33 | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Retro
- DucHaiten
- elldreths
---
# Never Existed
A mix of elldreths Retro Mix and DucHaiten retro so that you can create pictures from a past that never existed.
Original pages:
https://civitai.com/models/103966/duchaiten-
https://civitai.com/models/1474/elldreths-retro-mix
| 450 | [
[
-0.05328369140625,
-0.036529541015625,
0.02752685546875,
-0.0015287399291992188,
-0.03106689453125,
0.0038471221923828125,
0.047943115234375,
-0.042938232421875,
0.083984375,
0.032958984375,
-0.09149169921875,
-0.01488494873046875,
0.0006036758422851562,
-0.... |
Helsinki-NLP/opus-mt-de-fi | 2023-08-16T11:27:51.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"de",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-de-fi | 0 | 1,335 | transformers | 2022-03-02T23:29:04 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-fi
* source languages: de
* target languages: fi
* OPUS readme: [de-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.eval.txt)
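Example usage (not part of the original card): a minimal translation sketch with the standard MarianMT classes in Transformers; the German example sentence is arbitrary.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a German sentence into Finnish
batch = tokenizer(["Guten Morgen, wie geht es dir?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```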
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.fi | 40.0 | 0.628 |
| 818 | [
[
-0.02203369140625,
-0.04290771484375,
0.0210113525390625,
0.02532958984375,
-0.031982421875,
-0.0265045166015625,
-0.0297698974609375,
-0.00431060791015625,
0.004093170166015625,
0.034088134765625,
-0.0496826171875,
-0.039825439453125,
-0.04754638671875,
0.0... |
rinna/youri-7b-instruction-gptq | 2023-10-31T00:55:38.000Z | [
"transformers",
"llama",
"text-generation",
"ja",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:izumi-lab/llm-japanese-dataset",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | rinna | null | null | rinna/youri-7b-instruction-gptq | 5 | 1,335 | transformers | 2023-10-30T15:14:26 | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
language:
- ja
- en
inference: false
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
---
# `rinna/youri-7b-instruction-gptq`

# Overview
`rinna/youri-7b-instruction-gptq` is the quantized model for [`rinna/youri-7b-instruction`](https://huggingface.co/rinna/youri-7b-instruction) using AutoGPTQ. The quantized version is 4x smaller than the original model and thus requires less memory and provides faster inference.
* **Model architecture**
Refer to the [original model](https://huggingface.co/rinna/youri-7b-instruction) for architecture details.
* **Fine-tuning**
Refer to the [original model](https://huggingface.co/rinna/youri-7b-instruction) for fine-tuning details.
* **Authors**
- [Toshiaki Wakatsuki](https://huggingface.co/t-w)
- [Tianyu Zhao](https://huggingface.co/tianyuz)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Our evaluation experiments show that the quantization yields slight performance degradation on downstream tasks.
Results will be updated soon.
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-instruction-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-instruction-gptq", use_safetensors=True)
instruction = "次の日本語を英語に翻訳してください。"
input = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
### 指示:
{instruction}
### 入力:
{input}
### 応答:
"""
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
input_ids=token_ids.to(model.device),
max_new_tokens=200,
do_sample=True,
temperature=0.5,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
---
# Tokenization
The model uses the original llama-2 tokenizer.
---
# How to cite
~~~
@misc{RinnaYouri7bInstructionGPTQ,
url={https://huggingface.co/rinna/youri-7b-instruction-gptq},
title={rinna/youri-7b-instruction-gptq},
author={Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei}
}
~~~
---
# License
[The llama2 license](https://ai.meta.com/llama/license/) | 2,723 | [embedding vector truncated] |
HooshvareLab/bert-base-parsbert-ner-uncased | 2021-05-18T20:43:54.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | HooshvareLab | null | null | HooshvareLab/bert-base-parsbert-ner-uncased | 4 | 1,334 | transformers | 2022-03-02T23:29:04 | ---
language: fa
license: apache-2.0
---
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (for downstream tasks) are uncased and trained with whole word masking. (coming soon, stay tuned)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names, and to label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the remaining words of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore) and then the entity category. The NER task is therefore a multi-class token classification problem that labels the tokens when fed raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. In ParsBERT, we prepared NER models for both datasets as well as for a combination of the two.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
| Label | # |
|:------------:|:-----:|
| Organization | 16964 |
| Money | 2037 |
| Location | 8782 |
| Date | 4259 |
| Time | 732 |
| Person | 7675 |
| Percent | 699 |
**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)
---
### ARMAN
The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:---------------:|:--------:|:----------:|:--------------:|:----------:|:----------------:|:------------:|
| ARMAN + PEYMA | 95.13* | - | - | - | - | - |
| PEYMA | 98.79* | - | 90.59 | - | 84.00 | - |
| ARMAN | 93.10* | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
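In addition to the notebook, a minimal sketch with the 🤗 Transformers NER pipeline is shown below; it assumes this checkpoint loads as a standard token-classification model, and the example sentence and its entities are purely illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="HooshvareLab/bert-base-parsbert-ner-uncased",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# "The United Nations was founded in 1945 in San Francisco."
text = "سازمان ملل متحد در سال ۱۹۴۵ در سان فرانسیسکو تأسیس شد."
print(ner(text))
```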
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank the [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| 5,562 | [embedding vector truncated] |
cimm-kzn/rudr-bert | 2023-10-16T13:07:05.000Z | [
"transformers",
"pytorch",
"bio",
"med",
"biomedical",
"ru",
"arxiv:2004.03659",
"endpoints_compatible",
"region:us"
] | null | cimm-kzn | null | null | cimm-kzn/rudr-bert | 3 | 1,334 | transformers | 2022-03-02T23:29:05 | ---
language:
- ru
tags:
- bio
- med
- biomedical
---
## RuDR-BERT
RuDR-BERT is a multilingual, cased model pretrained on the raw part of the RuDReC corpus (1.4M reviews). Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: https://yadi.sk/d/-PTn0xhk1PqvgQ
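For quick experimentation, a minimal loading sketch with 🤗 Transformers is given below; it assumes the weights hosted in this repository load directly via `AutoModel`, and the example review is illustrative:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cimm-kzn/rudr-bert")
model = AutoModel.from_pretrained("cimm-kzn/rudr-bert")

# "Great medication, I did not notice any side effects."
text = "Отличный препарат, побочных эффектов не заметил."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```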
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.
preprint: https://arxiv.org/abs/2004.03659
```
@article{10.1093/bioinformatics/btaa675,
author = {Tutubalina, Elena and Alimova, Ilseyar and Miftahutdinov, Zulfat and Sakhovskiy, Andrey and Malykh, Valentin and Nikolenko, Sergey},
title = "{The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews}",
journal = {Bioinformatics},
year = {2020},
month = {07},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btaa675},
url = {https://doi.org/10.1093/bioinformatics/btaa675},
note = {btaa675},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa675/33539752/btaa675.pdf},
}
```
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.
[link to paper](https://www.researchgate.net/profile/Elena_Tutubalina/publication/323751823_Using_semantic_analysis_of_texts_for_the_identification_of_drugs_with_similar_therapeutic_effects/links/5bf7cfc3299bf1a0202cbc1f/Using-semantic-analysis-of-texts-for-the-identification-of-drugs-with-similar-therapeutic-effects.pdf)
```
@article{tutubalina2017using,
title={Using semantic analysis of texts for the identification of drugs with similar therapeutic effects},
author={Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE},
journal={Russian Chemical Bulletin},
volume={66},
number={11},
pages={2180--2189},
year={2017},
publisher={Springer}
}
``` | 2,491 | [embedding vector truncated] |
mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization | 2020-12-11T21:53:12.000Z | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | summarization | mrm8488 | null | null | mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization | 8 | 1,334 | transformers | 2022-03-02T23:29:05 | ---
language: en
license: apache-2.0
datasets:
- cnn_dailymail
tags:
- summarization
---
# Bert-small2Bert-small Summarization with 🤗EncoderDecoder Framework
This model is a warm-started *BERT2BERT* ([small](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8)) model fine-tuned on the *CNN/Dailymail* summarization dataset.
The model achieves a **17.37** ROUGE-2 score on *CNN/Dailymail*'s test dataset.
For more details on how the model was fine-tuned, please refer to
[this](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) notebook.
## Results on test set 📝
| Metric | # Value |
| ------ | --------- |
| **ROUGE-2** | **17.37** |
## Model in Action 🚀
```python
from transformers import BertTokenizerFast, EncoderDecoderModel
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = BertTokenizerFast.from_pretrained('mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization')
model = EncoderDecoderModel.from_pretrained('mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization').to(device)
def generate_summary(text):
# cut off at BERT max length 512
inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
return tokenizer.decode(output[0], skip_special_tokens=True)
text = "your text to be summarized here..."
generate_summary(text)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
| 1,791 | [embedding vector truncated] |
openclimatefix/nowcasting_cnn_v5 | 2022-09-30T14:27:31.000Z | [
"transformers",
"pytorch",
"nowcasting",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | openclimatefix | null | null | openclimatefix/nowcasting_cnn_v5 | 0 | 1,334 | transformers | 2022-09-30T14:27:24 | ---
license: mit
tags:
- nowcasting
- forecasting
- timeseries
- remote-sensing
---
# Nowcasting CNN
## Model description
A 3D convolutional model that takes in several data streams. The architecture is roughly:
1. The satellite image time series goes into several 3D convolution layers.
2. The NWP (numerical weather prediction) time series goes into several 3D convolution layers.
3. The final convolutional layer feeds into a fully connected layer, which is joined by
other data inputs such as
- PV yield
- time variables
Then there are ~4 fully connected layers which forecast the
PV yield / GSP output into the future. A minimal illustrative sketch of this layout is given below.
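The sketch below only mirrors the layout described above; every channel count, layer width and the forecast horizon are invented for the example and do not reflect the released weights.
```python
import torch
import torch.nn as nn

class NowcastingCNNSketch(nn.Module):
    """Illustrative layout only; all sizes are hypothetical."""

    def __init__(self, sat_channels=11, nwp_channels=10, aux_features=16, forecast_len=12):
        super().__init__()
        # 1. satellite image time series -> stacked 3D convolutions
        self.sat_encoder = nn.Sequential(
            nn.Conv3d(sat_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # 2. NWP time series -> stacked 3D convolutions
        self.nwp_encoder = nn.Sequential(
            nn.Conv3d(nwp_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # 3. concatenate conv features with auxiliary inputs (PV yield, time variables)
        #    and pass them through ~4 fully connected layers
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + aux_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, forecast_len),  # forecast of PV / GSP output into the future
        )

    def forward(self, sat, nwp, aux):
        s = self.sat_encoder(sat).flatten(1)
        n = self.nwp_encoder(nwp).flatten(1)
        return self.head(torch.cat([s, n, aux], dim=1))
```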
## Intended uses & limitations
Forecasting short term PV power for different regions and nationally in the UK
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
Training data is EUMETSAT RSS imagery over the UK, on-the-ground PV data, and NWP predictions.
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| 1,046 | [embedding vector truncated] |
nousr/robo-diffusion | 2023-05-16T09:18:48.000Z | [
"diffusers",
"robots",
"stable-diffusion",
"aiart",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | nousr | null | null | nousr/robo-diffusion | 349 | 1,333 | diffusers | 2022-09-29T00:24:25 | ---
language:
- en
thumbnail: "https://huggingface.co/nousr/robo-diffusion/resolve/main/robo_example.png"
tags:
- robots
- stable-diffusion
- aiart
- text-to-image
license: "creativeml-openrail-m"
---
# UPDATE!
There is a new model based on stable-dffusion 2.0 (base) that can be found [here](https://huggingface.co/nousr/robo-diffusion-2-base)!
# Robo-Diffusion
A dreambooth-method finetune of stable diffusion that will output cool looking robots when prompted.
<img src="https://huggingface.co/nousr/robo-diffusion/resolve/main/robo_example.png" width="256" height="256"/>
[](https://colab.research.google.com/github/nousr/robo-diffusion/blob/main/robo_diffusion_v1.ipynb)
# Repo
Github: https://github.com/nousr/robo-diffusion
# Usage
Keep the words `nousr robot` towards the beginning of your prompt to invoke the finetuned style.
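A minimal diffusers sketch is shown below; it assumes the checkpoint loads with the standard `StableDiffusionPipeline` (as the repository tags suggest), and the prompt is only an example:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("nousr/robo-diffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# keep "nousr robot" near the start of the prompt to trigger the finetuned style
prompt = "nousr robot, a friendly robot walking through a rainy neon city, highly detailed"
image = pipe(prompt).images[0]
image.save("robot.png")
```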
# Socials
Use the #robodiffusion hashtag so I can see the cool stuff you make!
If you enjoy the model, I'd appreciate a follow on [twitter](https://twitter.com/nousr_)
If you are feeling especially generous, you can sponsor me on [github](https://github.com/nousr)
---
*NOTE: usage of this model implies acceptance of stable diffusion's [CreativeML Open RAIL-M license](LICENSE)*
| 1,298 | [embedding vector truncated] |
Uminosachi/dreamshaper_8Inpainting | 2023-08-03T01:13:11.000Z | [
"diffusers",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | Uminosachi | null | null | Uminosachi/dreamshaper_8Inpainting | 0 | 1,333 | diffusers | 2023-08-03T00:52:21 | ---
license: creativeml-openrail-m
---
This is an inpainting model, which has been converted from the [dreamshaper_8Inpainting](https://civitai.com/models/4384?modelVersionId=131004). | 183 | [embedding vector truncated] |
allenai/PRIMERA | 2023-01-24T17:02:52.000Z | [
"transformers",
"pytorch",
"tf",
"led",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | allenai | null | null | allenai/PRIMERA | 20 | 1,332 | transformers | 2022-03-10T23:37:17 | ---
language: en
license: apache-2.0
---
HF-version model for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).
The original code can be found [here](https://github.com/allenai/PRIMER). You can find the script and notebook to train/evaluate the model in the original github repo.
* Note: due to differences between the implementations of the original Longformer and the Hugging Face LED model, the results of the converted models are slightly different. We ran a sanity check on both fine-tuned and non-fine-tuned models on the **MultiNews dataset**, and show the results below:
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| --- | ----------- |----------- |----------- |
| PRIMERA | 42.0 | 13.6 | 20.8|
| PRIMERA-hf | 41.7 |13.6 | 20.5|
| PRIMERA(finetuned) | 49.9 | 21.1 | 25.9|
| PRIMERA-hf(finetuned) | 49.9 | 20.9 | 25.8|
You can use it as follows:
```python
from transformers import (
AutoTokenizer,
LEDConfig,
LEDForConditionalGeneration,
)
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config=LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
```
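For generation, a hedged sketch building on the loading code above is given below; the document texts are placeholders, and details such as how `<doc-sep>` separator tokens and global attention are set should be checked against the original repository:
```python
# hypothetical multi-document summarization example
docs = [
    "First news article about the event ...",
    "Second news article covering the same event ...",
]
input_text = " <doc-sep> ".join(docs)

inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_length=256, num_beams=5)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```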
| 1,241 | [embedding vector truncated] |
masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0 | 2023-06-25T07:13:23.000Z | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"am",
"bm",
"obj",
"ee",
"fon",
"ha",
"ig",
"rw",
"lg",
"luo",
"mos",
"ny",
"pcm",
"sn",
"sw",
"tn",
"tw",
"wo",
"xh",
"yo",
"zu",
"multilingual",
"dataset:masakhaner2",
"arxiv:21... | token-classification | masakhane | null | null | masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0 | 5 | 1,332 | transformers | 2022-12-15T12:52:06 | ---
license: afl-3.0
language:
- am
- bm
- obj
- ee
- fon
- ha
- ig
- rw
- lg
- luo
- mos
- ny
- pcm
- sn
- sw
- tn
- tw
- wo
- xh
- yo
- zu
- multilingual
datasets:
- masakhaner2
---
# masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0
## Model description
**masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0** is a **Named Entity Recognition (NER)** model for 21 African languages. Specifically, this model is a *Davlan/afro-xlmr-large* model that was fine-tuned on an aggregation of African language datasets obtained from two versions of MasakhaNER dataset i.e. [MasakhaNER 1.0](https://huggingface.co/datasets/masakhaner) and [MasakhaNER 2.0](https://huggingface.co/datasets/masakhane/masakhaner2). The languages covered are:
- Amharic (amh)
- Bambara (bam)
- Ghomala (bbj)
- Ewe (ewe)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Luganda (lug)
- Dholuo (luo)
- Mossi (mos)
- Chichewa (nya)
- Nigerian Pidgin (pcm)
- chiShona (sna)
- Kiswahili (swa)
- Setswana (tsn)
- Twi (twi)
- Wolof (wol)
- isiXhosa (xho)
- Yorùbá (yor)
- isiZulu (zul)
It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organization (ORG), and person (PER).
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0")
model = AutoModelForTokenClassification.from_pretrained("masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
## Eval results on MasakhaNER (F-score)
Model evaluated on MasakhaNER 1.0 and MasakhaNER 2.0 test sets
language| MasakhaNER 1.0 | MasakhaNER 2.0
-|-|-
amh |80.5|
bam || 83.1
bbj || 76.6
ewe || 89.6
fon || 83.8
hau |90.3| 87.5
ibo |89.5| 93.5
kin |82.0| 87.6
lug |87.1| 89.7
luo |80.8| 82.5
mos || 75.5
nya || 92.7
pcm |91.1| 90.9
sna || 96.5
swa |88.5| 93.4
tsn || 90.3
twi || 81.3
wol |72.7| 87.3
xho || 90.0
yor |88.1| 90.5
zul || 91.3
avg |**85.1**| **87.7**
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the aggregation of [MasakhaNER 1.0](https://huggingface.co/datasets/masakhaner) and [MasakhaNER 2.0](https://huggingface.co/datasets/masakhane/masakhaner2) datasets
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
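As a purely hypothetical illustration of this scheme (tokens and labels invented for the example, not taken from the corpus), a tagged sentence could look like:
```python
tagged_example = [
    ("Uhuru", "B-PER"), ("Kenyatta", "I-PER"),
    ("visited", "O"),
    ("Addis", "B-LOC"), ("Ababa", "I-LOC"),
    ("in", "O"),
    ("March", "B-DATE"), ("2021", "I-DATE"),
    (".", "O"),
]
```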
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
### BibTeX entry and citation info
```
@article{Adelani2022MasakhaNER2A,
title={MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition},
author={David Ifeoluwa Adelani and Graham Neubig and Sebastian Ruder and Shruti Rijhwani and Michael Beukman and Chester Palen-Michel and Constantine Lignos and Jesujoba Oluwadara Alabi and Shamsuddeen Hassan Muhammad and Peter Nabende and Cheikh M. Bamba Dione and Andiswa Bukula and Rooweither Mabuya and Bonaventure F. P. Dossou and Blessing K. Sibanda and Happy Buzaaba and Jonathan Mukiibi and Godson Kalipe and Derguene Mbaye and Amelia Taylor and Fatoumata Kabore and Chris C. Emezue and Anuoluwapo Aremu and Perez Ogayo and Catherine W. Gitau and Edwin Munkoh-Buabeng and Victoire Memdjokam Koagne and Allahsera Auguste Tapo and Tebogo Macucwa and Vukosi Marivate and Elvis Mboning and Tajuddeen R. Gwadabe and Tosin P. Adewumi and Orevaoghene Ahia and Joyce Nakatumba-Nabende and Neo L. Mokono and Ignatius M Ezeani and Chiamaka Ijeoma Chukwuneke and Mofetoluwa Adeyemi and Gilles Hacheme and Idris Abdulmumin and Odunayo Ogundepo and Oreen Yousuf and Tatiana Moteu Ngoli and Dietrich Klakow},
journal={ArXiv},
year={2022},
volume={abs/2210.12391}
}
```
| 4,845 | [embedding vector truncated] |
yahyasmt/brain_tumor_2 | 2023-10-01T17:53:17.000Z | [
"diffusers",
"text-to-image",
"autotrain",
"has_space",
"region:us"
] | text-to-image | yahyasmt | null | null | yahyasmt/brain_tumor_2 | 4 | 1,332 | diffusers | 2023-10-01T17:53:15 |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: brain tumor mri scan
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
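A hedged usage sketch follows; it assumes the repository stores DreamBooth LoRA weights to be applied on top of the SDXL base model named above (the typical AutoTrain DreamBooth output), which this card does not confirm:
```python
import torch
from diffusers import DiffusionPipeline

# load the SDXL base model listed in the card metadata
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# assumed: this repo contains DreamBooth LoRA weights
pipe.load_lora_weights("yahyasmt/brain_tumor_2")

# instance prompt from the card metadata
image = pipe("brain tumor mri scan", num_inference_steps=25).images[0]
image.save("brain_tumor_mri.png")
```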
| 233 | [embedding vector truncated] |
google/deeplabv3_mobilenet_v2_1.0_513 | 2022-11-10T16:28:13.000Z | [
"transformers",
"pytorch",
"mobilenet_v2",
"vision",
"image-segmentation",
"dataset:pascal-voc",
"arxiv:1801.04381",
"arxiv:1802.02611",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | google | null | null | google/deeplabv3_mobilenet_v2_1.0_513 | 0 | 1,331 | transformers | 2022-11-10T16:05:57 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- pascal-voc
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg
example_title: Cat
---
# MobileNetV2 with DeepLabV3+
MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
preprocessor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
model = AutoModelForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
predicted_mask = preprocessor.post_process_semantic_segmentation(outputs)
```
Currently, both the feature extractor and model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{deeplabv3plus2018,
title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
booktitle={ECCV},
year={2018}
}
```
| 2,740 | [embedding vector truncated] |
lmqg/t5-base-squad-qag | 2023-01-10T03:08:25.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"questions and answers generation",
"en",
"dataset:lmqg/qag_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | lmqg | null | null | lmqg/t5-base-squad-qag | 1 | 1,331 | transformers | 2022-12-19T02:49:52 |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qag_squad
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Questions & Answers Generation Example 1"
model-index:
- name: lmqg/t5-base-squad-qag
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qag_squad
type: default
args: default
metrics:
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
type: qa_aligned_f1_score_bertscore_question_answer_generation
value: 93.34
- name: QAAlignedRecall-BERTScore (Question & Answer Generation)
type: qa_aligned_recall_bertscore_question_answer_generation
value: 93.51
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
type: qa_aligned_precision_bertscore_question_answer_generation
value: 93.18
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
type: qa_aligned_f1_score_moverscore_question_answer_generation
value: 65.78
- name: QAAlignedRecall-MoverScore (Question & Answer Generation)
type: qa_aligned_recall_moverscore_question_answer_generation
value: 65.68
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
type: qa_aligned_precision_moverscore_question_answer_generation
value: 65.96
---
# Model Card of `lmqg/t5-base-squad-qag`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-base-squad-qag")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qag")
output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 93.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedF1Score (MoverScore) | 65.78 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (BERTScore) | 93.18 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (MoverScore) | 65.96 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (BERTScore) | 93.51 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (MoverScore) | 65.68 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_squad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: ['qag']
- model: t5-base
- max_length: 512
- max_length_output: 256
- epoch: 17
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-squad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| 5,415 | [embedding vector truncated] |
TheBloke/Amethyst-13B-Mistral-AWQ | 2023-10-04T17:48:31.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Amethyst-13B-Mistral-AWQ | 4 | 1,328 | transformers | 2023-10-04T17:29:25 | ---
base_model: Undi95/Amethyst-13B-Mistral
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Amethyst 13B Mistral
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- not-for-all-audiences
- nsfw
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Amethyst 13B Mistral - AWQ
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Amethyst 13B Mistral](https://huggingface.co/Undi95/Amethyst-13B-Mistral)
<!-- description start -->
## Description
This repo contains AWQ model files for [Undi's Amethyst 13B Mistral](https://huggingface.co/Undi95/Amethyst-13B-Mistral).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of Llama AWQ models for high-throughput concurrent inference in multi-user server scenarios.
As of September 25th 2023, preliminary Llama-only AWQ support has also been added to [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference).
Note that, at the time of writing, overall throughput is still lower than running vLLM or TGI with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Amethyst-13B-Mistral)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Amethyst 13B Mistral](https://huggingface.co/Undi95/Amethyst-13B-Mistral).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
Note: at the time of writing, vLLM has not yet done a new release with AWQ support.
If you try the vLLM examples below and get an error about `quantization` being unrecognised, or other AWQ-related issues, please install vLLM from Github source.
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Amethyst-13B-Mistral-AWQ --quantization awq --dtype half
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Amethyst-13B-Mistral-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Amethyst-13B-Mistral-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Amethyst-13B-Mistral-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
- [vLLM](https://github.com/vllm-project/vllm)
- [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
TGI merged AWQ support on September 25th, 2023: [TGI PR #1054](https://github.com/huggingface/text-generation-inference/pull/1054). Use the `:latest` Docker container until the next TGI release is made.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Undi's Amethyst 13B Mistral
[THIS WAS A TEST, BUT PEOPLE LIKE IT, SO I ADD IT OFFICIALLY TO MY PROJECTS]

An attempt using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) to get a better result.
In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used; it is recommended to read its documentation.
The [llama2-to-mistral-diff](https://huggingface.co/Undi95/llama2-to-mistral-diff) was applied to it at weight "1".
<!-- description start -->
## Description
This repo contains fp16 files of Amethyst-13B-Mistral.
<!-- description end -->
<!-- description start -->
## Models and loras used
- Xwin-LM/Xwin-LM-13B-V0.1
- The-Face-Of-Goonery/Huginn-13b-FP16
- zattio770/120-Days-of-LORA-v2-13B
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Undi95/llama2-to-mistral-diff
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## LimaRP v3 usage and suggested settings

You can follow these instruction format settings in SillyTavern. Replace tiny with your desired response length:

Special thanks to Sushi.
If you want to support me, you can [here](https://ko-fi.com/undiai).
| 15,180 | [embedding vector truncated] |
hustvl/yolos-base | 2022-06-27T08:37:10.000Z | [
"transformers",
"pytorch",
"yolos",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2106.00666",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | hustvl | null | null | hustvl/yolos-base | 13 | 1,327 | transformers | 2022-04-26T09:30:39 | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# YOLOS (base-sized) model
YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
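To make the matching step concrete, here is a tiny toy sketch of the Hungarian assignment with SciPy; the cost values are invented, whereas the real cost combines class probabilities, L1 box distance and generalized IoU:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: matching cost between predicted query i and ground-truth object j (toy numbers)
cost = np.array([
    [0.2, 0.9, 0.7],
    [0.8, 0.1, 0.6],
    [0.5, 0.7, 0.3],
    [0.9, 0.8, 0.4],
])

query_idx, target_idx = linear_sum_assignment(cost)
print(list(zip(query_idx, target_idx)))  # optimal one-to-one matching: query 0->target 0, 1->1, 2->2
# queries left unmatched (here query 3) are assigned the "no object" class
```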
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-base')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-base')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
The model was pre-trained for 1000 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO.
## Evaluation results
This model achieves an AP (average precision) of **42.0** on COCO 2017 validation. For more details regarding evaluation results, we refer to the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 4,160 | [embedding vector truncated] |
mncai/Mistral-7B-v0.1-alpaca-1k | 2023-10-22T05:59:28.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"arxiv:2306.02707",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | mncai | null | null | mncai/Mistral-7B-v0.1-alpaca-1k | 0 | 1,326 | transformers | 2023-10-22T05:15:15 | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- beomi/KoAlpaca-v1.1a
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- beomi/KoAlpaca-v1.1a
### Prompt Template
- Llama Prompt Template
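A minimal inference sketch with 🤗 Transformers is given below; the Llama-2-style `[INST]` wrapping is an assumption based on the "Llama Prompt Template" note above, and the question is only an example:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mncai/Mistral-7B-v0.1-alpaca-1k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# "What is the capital of Korea?"
prompt = "[INST] 한국의 수도는 어디인가요? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```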
## Limitations & Biases:
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) | 3,403 | [embedding vector truncated] |
timm/convnext_tiny.fb_in1k | 2023-03-31T22:37:20.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convnext_tiny.fb_in1k | 1 | 1,324 | timm | 2022-12-13T07:14:30 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for convnext_tiny.fb_in1k
A ConvNeXt image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.6
- GMACs: 4.5
- Activations (M): 13.4
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_tiny.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_tiny.fb_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_tiny.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,615 | [
[
-0.0667724609375,
-0.032989501953125,
-0.0031108856201171875,
0.036651611328125,
-0.031951904296875,
-0.0161590576171875,
-0.0128631591796875,
-0.034637451171875,
0.0662841796875,
0.0163726806640625,
-0.044219970703125,
-0.041168212890625,
-0.050018310546875,
... |
liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3 | 2023-07-19T08:56:41.000Z | [
"transformers",
"llava",
"text-generation",
"region:us"
] | text-generation | liuhaotian | null | null | liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3 | 8 | 1,324 | transformers | 2023-07-19T08:54:44 | ---
inference: false
---
<br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1-0719-336px-LoRA-Vicuna-13B-v1.3 was trained in July 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Non-commercial use.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 80K GPT-generated multimodal instruction-following data.
## Evaluation dataset
A preliminary evaluation of model quality was conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014; each image is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs.
We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset.
See https://llava-vl.github.io/ for more details. | 1,576 | [
[
-0.00701141357421875,
-0.08062744140625,
0.0343017578125,
0.017974853515625,
-0.0310211181640625,
0.00959014892578125,
0.0016613006591796875,
-0.03955078125,
0.01522064208984375,
0.044677734375,
-0.036773681640625,
-0.0447998046875,
-0.03375244140625,
-0.003... |
Helsinki-NLP/opus-mt-bn-en | 2023-08-16T11:26:30.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bn",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-bn-en | 4 | 1,322 | transformers | 2022-03-02T23:29:04 | ---
language:
- bn
- en
tags:
- translation
license: apache-2.0
---
### ben-eng
* source group: Bengali
* target group: English
* OPUS readme: [ben-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md)
* model: transformer-align
* source language(s): ben
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ben.eng | 49.7 | 0.641 |
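The release notes above do not include a usage snippet; the following is a minimal sketch using the Transformers MarianMT classes (the example Bengali sentence and the default generation settings are illustrative assumptions, not part of the OPUS-MT release):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bn-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Bengali source sentence (illustrative only): "I love reading books."
src_text = ["আমি বই পড়তে ভালোবাসি।"]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Since the model was trained with SentencePiece preprocessing, the Marian tokenizer handles segmentation internally; no manual normalization should be required for simple experiments.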
### System Info:
- hf_name: ben-eng
- source_languages: ben
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'en']
- src_constituents: {'ben'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt
- src_alpha3: ben
- tgt_alpha3: eng
- short_pair: bn-en
- chrF2_score: 0.6409999999999999
- bleu: 49.7
- brevity_penalty: 0.976
- ref_len: 13978.0
- src_name: Bengali
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: bn
- tgt_alpha2: en
- prefer_old: False
- long_pair: ben-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 2,070 | [
[
-0.03057861328125,
-0.0428466796875,
0.0146331787109375,
0.02801513671875,
-0.0304718017578125,
-0.0157928466796875,
-0.0194549560546875,
-0.033355712890625,
0.0183563232421875,
0.01538848876953125,
-0.044281005859375,
-0.04901123046875,
-0.043975830078125,
... |
nerijs/lego-brickheadz-xl | 2023-08-14T05:20:10.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:apache-2.0",
"has_space",
"region:us"
] | text-to-image | nerijs | null | null | nerijs/lego-brickheadz-xl | 17 | 1,322 | diffusers | 2023-08-14T05:18:21 | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: lego brickheadz
widget:
- text: picture of a lego brickheadz of a corgi
---
# LEGO Minifig XL
## Consider supporting further research on [Patreon](https://www.patreon.com/user?u=29466374) or [Twitter](https://twitter.com/nerijs)
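The card does not include a loading snippet; below is a minimal hedged sketch of applying this LoRA on top of the SDXL base model listed in the metadata. The prompt comes from the card's widget example, while the fp16 dtype, CUDA device, and step count are assumptions:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model and trigger phrase ("lego brickheadz") come from the card metadata;
# fp16, CUDA and the step count are assumptions.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nerijs/lego-brickheadz-xl")

image = pipe("picture of a lego brickheadz of a corgi", num_inference_steps=30).images[0]
image.save("brickheadz_corgi.png")
```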
 | 511 | [
[
-0.051910400390625,
-0.026092529296875,
0.02874755859375,
0.01885986328125,
-0.01751708984375,
0.0120849609375,
0.0174713134765625,
-0.028228759765625,
0.05517578125,
0.0161285400390625,
-0.04998779296875,
-0.0133819580078125,
-0.0292205810546875,
0.01597595... |
TheBloke/ShiningValiant-1.2-AWQ | 2023-10-14T19:30:31.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"shining-valiant",
"valiant",
"valiant-labs",
"llama-2",
"llama-2-chat",
"70b",
"en",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/ShiningValiant-1.2-AWQ | 0 | 1,322 | transformers | 2023-10-14T16:17:08 | ---
base_model: ValiantLabs/ShiningValiant
inference: false
language:
- en
license: llama2
model_creator: Valiant Labs
model_name: ShiningValiant 1.2
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
tags:
- shining-valiant
- valiant
- valiant-labs
- llama
- llama-2
- llama-2-chat
- 70b
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ShiningValiant 1.2 - AWQ
- Model creator: [Valiant Labs](https://huggingface.co/ValiantLabs)
- Original model: [ShiningValiant 1.2](https://huggingface.co/ValiantLabs/ShiningValiant)
<!-- description start -->
## Description
This repo contains AWQ model files for [Valiant Labs's ShiningValiant 1.2](https://huggingface.co/ValiantLabs/ShiningValiant).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of Llama AWQ models for high-throughput concurrent inference in multi-user server scenarios.
As of September 25th 2023, preliminary Llama-only AWQ support has also been added to [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference).
Note that, at the time of writing, overall throughput is still lower than running vLLM or TGI with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ShiningValiant-1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ)
* [Valiant Labs's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ValiantLabs/ShiningValiant)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/ShiningValiant-1.2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.61 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
Note: at the time of writing, vLLM has not yet done a new release with AWQ support.
If you try the vLLM examples below and get an error about `quantization` being unrecognised, or other AWQ-related issues, please install vLLM from Github source.
When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/ShiningValiant-1.2-AWQ --quantization awq --dtype half
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/ShiningValiant-1.2-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/ShiningValiant-1.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/ShiningValiant-1.2-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
- [vLLM](https://github.com/vllm-project/vllm)
- [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
TGI merged AWQ support on September 25th, 2023: [TGI PR #1054](https://github.com/huggingface/text-generation-inference/pull/1054). Use the `:latest` Docker container until the next TGI release is made.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Valiant Labs's ShiningValiant 1.2

Shining Valiant is a chat model built on the Llama 2 architecture, finetuned on our data for insight, creativity, passion, and friendliness.
- Uses the llama-2-70b-chat model, with safetensors
- Finetuned on multiple runs across private and public data
- Data focused on knowledge, enthusiasm, and structured reasoning
## Version
The current version is **1.2**; congrats to our team on the new release!
Previous versions remain available in the repository. New models will be released for everyone once our team's training and validation process is complete :)
## Prompting
Shining Valiant uses the same prompt format as Llama 2 Chat - feel free to use your existing prompts and scripts!
A few examples of different formats:
1. [INST] Good morning! Can you let me know how to parse a text file and turn the semicolons into commas? [/INST]
2. [INST] (You are an intelligent, helpful AI assistant.) Hello, can you write me a thank you letter? [/INST]
3. [INST] << SYS >>You are an intelligent, helpful AI assistant.<< /SYS >>Deep dive about a country with interesting history: [/INST]
## The Model
Shining Valiant is built on top of Stellar Bright, which uses Llama 2's 70b parameter architecture and features upgraded general capability. (Stellar Bright uses public open source data only.)
From there, we've created Shining Valiant through multiple finetuning runs on different compositions of our private dataset.
Our private data focuses primarily on applying Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn!
We are actively working on expanding and improving the Shining Valiant dataset for use in future releases of this model and others.

Shining Valiant is created by Valiant Labs.
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.
| 16,349 | [
[
-0.037933349609375,
-0.06134033203125,
0.0341796875,
0.006317138671875,
-0.0222320556640625,
-0.01166534423828125,
0.0085601806640625,
-0.0413818359375,
0.00400543212890625,
0.0256195068359375,
-0.053680419921875,
-0.037994384765625,
-0.0230712890625,
-0.005... |
Nacholmo/controlnet-qr-pattern | 2023-06-19T18:50:41.000Z | [
"diffusers",
"tensorboard",
"controlnet",
"dataset:Nacholmo/controlnet-test-darkest-color",
"dataset:yuvalkirstain/pexel_images_lots_with_generated_captions",
"license:creativeml-openrail-m",
"diffusers:ControlNetModel",
"region:us"
] | null | Nacholmo | null | null | Nacholmo/controlnet-qr-pattern | 38 | 1,321 | diffusers | 2023-06-13T23:50:18 | ---
license: creativeml-openrail-m
datasets:
- Nacholmo/controlnet-test-darkest-color
- yuvalkirstain/pexel_images_lots_with_generated_captions
tags:
- controlnet
base_model: runwayml/stable-diffusion-v1-5
---
# ControlNet model for use in QR codes
It conditions on only the 15% of pixels closest to black, so as not to affect the luminance of the rest of the image, while still maintaining the color to some degree.
## Latest version is in the Automatic1111-Compatible folder
Move the .yaml to extensions/sd-webui-controlnet/models
If you want to use it in diffusers (not Automatic1111), the recommended checkpoint is 11500 (see the loading sketch below).
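A minimal diffusers sketch, assuming the ControlNet weights load from the repo root and pairing it with a Stable Diffusion 1.5 base model as listed in the card metadata; the conditioning image path, the handling of the 11500 checkpoint, and the prompt are illustrative assumptions:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Loading from the repo root is an assumption; if the recommended 11500 checkpoint
# lives in a subfolder, pass subfolder="<folder name>" as well.
controlnet = ControlNetModel.from_pretrained(
    "Nacholmo/controlnet-qr-pattern", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The QR code image is used directly as conditioning (preprocessor: none).
qr_image = load_image("qr.png")  # path is a placeholder

image = pipe(
    "1girl, solo, red flower, print dress, bow",   # prompt adapted from the card's example
    image=qr_image,
    num_inference_steps=50,
    controlnet_conditioning_scale=1.0,             # "weight: 1"
    control_guidance_end=0.75,                     # "starting/ending: (0, 0.75)"; needs a recent diffusers
).images[0]
image.save("qr_art.png")
```

`control_guidance_end=0.75` mirrors the recommended starting/ending setting; on older diffusers releases that argument may be unavailable, in which case simply omit it.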
### Recommended parameters
```
Recommended parameters:
Steps: +50, Size: 920x920,
ControlNet:
preprocessor: none,
weight: 1,
starting/ending: (0, 0.75),
control mode: Balanced
```
Play around with the starting step: 0 to 0.25 is the sweet spot. If it starts at 0, the QR code has priority; the higher you raise it, the stronger the prompt gets.
To achieve optimal results with the Hires fix, ensure that your resolution is set to a minimum of 600x600. Additionally, set the denoising strength to a minimum of 0.7 and the hires steps to at least 20.




Example prompt parameters:
```
1girl, solo, red flower, print dress, bow, year of the ox, food print, short hair, black hair, looking at viewer, bangs, long sleeves, red bow
Negative prompt: (KHFB, AuroraNegative),(Worst Quality, Low Quality:1.4)
Steps: 40, Sampler: DPM++ 2M SDE Karras, CFG scale: 6, Seed: 1463218996, Size: 920x920, Model hash: b42b09ff12, Model: cetusMix_v4, Clip skip: 2, Version: 0ddf613, Parser: Full parser, ControlNet 0: "preprocessor: none, model: 11500_color [c4220211], weight: 1, starting/ending: (0, 0.75), resize mode: Crop and Resize, pixel perfect: False, control mode: Balanced, preprocessor params: (-1, -1, -1)"
``` | 2,457 | [
[
-0.041229248046875,
-0.0177001953125,
0.022674560546875,
0.042694091796875,
-0.0279541015625,
-0.00470733642578125,
0.016143798828125,
-0.0211029052734375,
0.041839599609375,
0.04205322265625,
-0.04071044921875,
-0.03759765625,
-0.039581298828125,
0.02496337... |
microsoft/swinv2-tiny-patch4-window16-256 | 2022-12-10T10:09:55.000Z | [
"transformers",
"pytorch",
"swinv2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | microsoft | null | null | microsoft/swinv2-tiny-patch4-window16-256 | 0 | 1,320 | transformers | 2022-06-14T06:17:52 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer v2 (tiny-sized model)
Swin Transformer v2 model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.
Swin Transformer v2 adds 3 main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-09883,
author = {Ze Liu and
Han Hu and
Yutong Lin and
Zhuliang Yao and
Zhenda Xie and
Yixuan Wei and
Jia Ning and
Yue Cao and
Zheng Zhang and
Li Dong and
Furu Wei and
Baining Guo},
title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
journal = {CoRR},
volume = {abs/2111.09883},
year = {2021},
url = {https://arxiv.org/abs/2111.09883},
eprinttype = {arXiv},
eprint = {2111.09883},
timestamp = {Thu, 02 Dec 2021 15:54:22 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 4,193 | [
[
-0.045623779296875,
-0.0182342529296875,
-0.016082763671875,
0.0145721435546875,
-0.00862884521484375,
-0.0293121337890625,
-0.00444793701171875,
-0.063720703125,
0.005405426025390625,
0.0296783447265625,
-0.04010009765625,
0.0003287792205810547,
-0.044708251953... |
Dizex/FoodBaseBERT-NER | 2023-05-14T19:31:01.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"FoodBase",
"NER",
"en",
"dataset:Dizex/FoodBase",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Dizex | null | null | Dizex/FoodBaseBERT-NER | 6 | 1,320 | transformers | 2022-10-31T09:00:15 | ---
language: en
datasets:
- Dizex/FoodBase
widget:
- text: "Today's meal: Fresh olive poké bowl topped with chia seeds. Very delicious!"
example_title: "Food example 1"
- text: "Tartufo Pasta with garlic flavoured butter and olive oil, egg yolk, parmigiano and pasta water."
example_title: "Food example 2"
tags:
- FoodBase
- NER
license: mit
---
# FoodBaseBERT
## Model description
**FoodBaseBERT** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** of Food entities. It has been trained to recognize one entity: food (FOOD).
Specifically, this model is a *bert-base-cased* model that was fine-tuned on the [FoodBase NER](https://academic.oup.com/database/article/doi/10.1093/database/baz121/5611291) dataset.
## Intended uses
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Dizex/FoodBaseBERT")
model = AutoModelForTokenClassification.from_pretrained("Dizex/FoodBaseBERT")
pipe = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Today's meal: Fresh olive poké bowl topped with chia seeds. Very delicious!"
ner_entity_results = pipe(example)
print(ner_entity_results)
```
| 1,322 | [
[
-0.0230255126953125,
-0.03973388671875,
-0.00035834312438964844,
0.003658294677734375,
-0.0009918212890625,
-0.007266998291015625,
0.0022335052490234375,
-0.0103302001953125,
0.03656005859375,
0.036468505859375,
-0.037109375,
-0.033966064453125,
-0.0581359863281... |
digiplay/PhotoSomnia_vFinal | 2023-07-17T14:12:27.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/PhotoSomnia_vFinal | 2 | 1,320 | diffusers | 2023-07-17T13:45:52 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/18637/photosomnia
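The card itself only links to the Civitai page; a minimal hedged text-to-image sketch with diffusers follows (the prompt, fp16 dtype, CUDA device and step count are assumptions, not recommendations from the author):

```python
import torch
from diffusers import DiffusionPipeline

# Generic pipeline loading; prompt, dtype and step count are assumptions.
pipe = DiffusionPipeline.from_pretrained(
    "digiplay/PhotoSomnia_vFinal", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of a lighthouse at sunset, highly detailed",
    num_inference_steps=30,
).images[0]
image.save("photosomnia_sample.png")
```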
Original Author's DEMO image :

Sample image through Hugging Face's API:


| 695 | [
[
-0.0309295654296875,
-0.032073974609375,
0.04400634765625,
0.0299072265625,
-0.036895751953125,
-0.01117706298828125,
0.00409698486328125,
-0.017852783203125,
0.0489501953125,
0.035797119140625,
-0.0689697265625,
-0.03216552734375,
-0.0245513916015625,
0.001... |
Gladiator/microsoft-deberta-v3-large_ner_conll2003 | 2023-06-18T03:19:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Gladiator | null | null | Gladiator/microsoft-deberta-v3-large_ner_conll2003 | 2 | 1,319 | transformers | 2022-12-09T05:19:42 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: microsoft-deberta-v3-large_ner_conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9667057052032793
- name: Recall
type: recall
value: 0.972399865365197
- name: F1
type: f1
value: 0.9695444248678582
- name: Accuracy
type: accuracy
value: 0.9945095595965889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-deberta-v3-large_ner_conll2003
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0293
- Precision: 0.9667
- Recall: 0.9724
- F1: 0.9695
- Accuracy: 0.9945
## Model description
More information needed
## Intended uses & limitations
More information needed
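The auto-generated card does not include a usage example; a minimal hedged inference sketch with the Transformers pipeline is shown below (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Gladiator/microsoft-deberta-v3-large_ner_conll2003",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City and was founded by Clément Delangue."))
```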
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
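For reference, the listed hyperparameters roughly correspond to the following `TrainingArguments`; this is a hedged reconstruction, not the author's actual training script (the output directory is a placeholder, and the Adam betas/epsilon shown above are the library defaults):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters, not the author's actual script.
training_args = TrainingArguments(
    output_dir="microsoft-deberta-v3-large_ner_conll2003",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
)
```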
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0986 | 1.0 | 878 | 0.0323 | 0.9453 | 0.9596 | 0.9524 | 0.9921 |
| 0.0212 | 2.0 | 1756 | 0.0270 | 0.9571 | 0.9675 | 0.9623 | 0.9932 |
| 0.009 | 3.0 | 2634 | 0.0280 | 0.9638 | 0.9714 | 0.9676 | 0.9940 |
| 0.0035 | 4.0 | 3512 | 0.0290 | 0.9657 | 0.9712 | 0.9685 | 0.9943 |
| 0.0022 | 5.0 | 4390 | 0.0293 | 0.9667 | 0.9724 | 0.9695 | 0.9945 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 2,384 | [
[
-0.03271484375,
-0.036468505859375,
0.01806640625,
0.0149383544921875,
-0.0177459716796875,
-0.0206146240234375,
0.003173828125,
-0.02239990234375,
0.0137786865234375,
0.01953125,
-0.047454833984375,
-0.04534912109375,
-0.054779052734375,
-0.009918212890625,... |
timm/vit_base_r50_s16_224.orig_in21k | 2023-05-06T00:42:21.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_base_r50_s16_224.orig_in21k | 0 | 1,319 | timm | 2022-12-23T00:26:33 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---
# Model card for vit_base_r50_s16_224.orig_in21k
A ResNet - Vision Transformer (ViT) hybrid image classification model. Trained on ImageNet-21k in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 114.7
- GMACs: 21.0
- Activations (M): 27.9
- Image size: 224 x 224
- **Papers:**
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_r50_s16_224.orig_in21k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_r50_s16_224.orig_in21k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,307 | [
[
-0.033843994140625,
-0.0289154052734375,
0.0004131793975830078,
0.004547119140625,
-0.0259552001953125,
-0.0169677734375,
-0.020965576171875,
-0.037506103515625,
0.017974853515625,
0.022674560546875,
-0.033599853515625,
-0.044677734375,
-0.047698974609375,
-... |
mncai/Mistral-7B-v0.1-orca-2k | 2023-10-22T06:00:24.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:kyujinpy/OpenOrca-KO",
"arxiv:2306.02707",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | mncai | null | null | mncai/Mistral-7B-v0.1-orca-2k | 0 | 1,319 | transformers | 2023-10-22T05:46:39 | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/OpenOrca-KO
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- kyujinpy/OpenOrca-KO
### Prompt Template
- Llama Prompt Template
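A minimal hedged sketch of querying the model with the Llama-style `[INST] ... [/INST]` format is shown below; the example prompt, the generation settings, and the use of `device_map="auto"` (which requires `accelerate`) are assumptions, not specified by the authors:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mncai/Mistral-7B-v0.1-orca-2k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

# Llama-style instruction format; the prompt itself ("Introduce yourself briefly in Korean.") is illustrative.
prompt = "[INST] 한국어로 간단히 자기소개를 해줘. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```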
## Limitations & Biases:
Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```bibtex
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) | 3,403 | [
[
-0.03167724609375,
-0.04974365234375,
0.01029205322265625,
0.019439697265625,
-0.0237884521484375,
0.00551605224609375,
0.0104522705078125,
-0.05303955078125,
0.016693115234375,
0.026519775390625,
-0.05328369140625,
-0.042388916015625,
-0.045196533203125,
-0... |
facebook/data2vec-text-base | 2022-04-18T16:03:20.000Z | [
"transformers",
"pytorch",
"data2vec-text",
"feature-extraction",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2202.03555",
"arxiv:1806.02847",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | feature-extraction | facebook | null | null | facebook/data2vec-text-base | 12 | 1,318 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---
# Data2Vec-Text base model
Pretrained model on English language using the *data2vec* objective. It was introduced in
[this paper](https://arxiv.org/abs/2202.03555) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/main/examples/data2vec). This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing Data2Vec-Text did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
## Abstract
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.*
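To make the pre-training objective concrete, the masked latent-prediction step described above can be sketched as follows. This is only a schematic under assumed Hugging Face-style model calls (`output_hidden_states=True`, an EMA `teacher` copy of the student, a `mask_token_id`), not the official training code:
```python
import torch
import torch.nn.functional as F

def data2vec_loss(student, teacher, tokens, mask, mask_token_id, top_k=8):
    # Teacher: an EMA copy of the student that encodes the *unmasked* input.
    # Targets are the average of its top-K layer outputs, i.e. contextualized latents.
    with torch.no_grad():
        t_layers = teacher(tokens, output_hidden_states=True).hidden_states
        targets = torch.stack(t_layers[-top_k:]).mean(dim=0)
    # Student encodes a masked view of the same input.
    preds = student(tokens.masked_fill(mask, mask_token_id),
                    output_hidden_states=True).hidden_states[-1]
    # Regress the teacher's latents at the masked positions only.
    return F.smooth_l1_loss(preds[mask], targets[mask])
```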
## Intended uses & limitations
The model is intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=data2vec-text) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
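As a quick way to try the checkpoint before fine-tuning, the snippet below extracts contextual embeddings. It is a minimal sketch assuming the standard `transformers` Auto classes:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = AutoModel.from_pretrained("facebook/data2vec-text-base")

inputs = tokenizer("Hello, data2vec!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# (batch, sequence_length, hidden_size) contextual token embeddings
print(outputs.last_hidden_state.shape)
```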
## Training data
The RoBERTa model was pretrained on the union of five datasets:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers);
- [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
- [OpenWebText](https://github.com/jcpeterson/openwebtext), an open-source recreation of the WebText dataset used to train GPT-2;
- [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.
Together these datasets weigh 160GB of text.
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` | 3,752 | [
[
-0.0015363693237304688,
-0.0640869140625,
0.020751953125,
0.01023101806640625,
-0.015472412109375,
-0.015167236328125,
-0.020721435546875,
-0.038299560546875,
-0.005336761474609375,
0.036346435546875,
-0.044830322265625,
-0.044769287109375,
-0.03997802734375,
... |
cardiffnlp/roberta-large-tweet-topic-single-all | 2022-09-30T18:03:25.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_single",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | cardiffnlp | null | null | cardiffnlp/roberta-large-tweet-topic-single-all | 0 | 1,318 | transformers | 2022-09-30T08:26:33 | ---
datasets:
- cardiffnlp/tweet_topic_single
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/roberta-large-tweet-topic-single-all
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_single
type: cardiffnlp/tweet_topic_single
args: cardiffnlp/tweet_topic_single
split: test_2021
metrics:
- name: F1
type: f1
value: 0.896042528056704
- name: F1 (macro)
type: f1_macro
value: 0.8000614127334341
- name: Accuracy
type: accuracy
value: 0.896042528056704
pipeline_tag: text-classification
widget:
- text: "I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
example_title: "Example 1"
- text: "Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US."
example_title: "Example 2"
---
# cardiffnlp/roberta-large-tweet-topic-single-all
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) dataset. It is fine-tuned on the `train_all` split and validated on the `test_2021` split of tweet_topic.
The fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py). The model achieves the following results on the test_2021 set:
- F1 (micro): 0.896042528056704
- F1 (macro): 0.8000614127334341
- Accuracy: 0.896042528056704
### Usage
```python
from transformers import pipeline
pipe = pipeline("text-classification", "cardiffnlp/roberta-large-tweet-topic-single-all")
topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
print(topic)
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
| 2,617 | [
[
-0.01425933837890625,
-0.057037353515625,
0.020965576171875,
0.036102294921875,
-0.02923583984375,
-0.003200531005859375,
-0.041717529296875,
-0.0167388916015625,
0.049407958984375,
0.03131103515625,
-0.061798095703125,
-0.05615234375,
-0.049957275390625,
0.... |
TheBloke/Mistral-7B-v0.1-AWQ | 2023-09-29T16:52:11.000Z | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Mistral-7B-v0.1-AWQ | 13 | 1,317 | transformers | 2023-09-27T19:28:54 | ---
base_model: mistralai/Mistral-7B-v0.1
inference: false
license: apache-2.0
model_creator: Mistral AI
model_name: Mistral 7B v0.1
model_type: mistral
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- pretrained
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistral 7B v0.1 - AWQ
- Model creator: [Mistral AI](https://huggingface.co/mistralai)
- Original model: [Mistral 7B v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
<!-- description start -->
## Description
This repo contains AWQ model files for [Mistral AI's Mistral 7B v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
### Mistral AWQs
These are the first, experimental AWQs for the brand-new Mistral architecture.
As of September 29th 2023, they are only supported by AutoAWQ (version 0.1.1+).
Using them with AutoAWQ requires installing both AutoAWQ and Transformers from GitHub; more details are below.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF)
* [Mistral AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires:
- Transformers from [commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79](https://github.com/huggingface/transformers/commit/72958fcd3c98a7afdc61f953aa58c544ebda2f79)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) from [commit 1c5ccc791fa2cb0697db3b4070df1813f1736208](https://github.com/casper-hansen/AutoAWQ/commit/1c5ccc791fa2cb0697db3b4070df1813f1736208).
```shell
pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
pip3 install git+https://github.com/casper-hansen/AutoAWQ.git@1c5ccc791fa2cb0697db3b4070df1813f1736208
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Mistral-7B-v0.1-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Mistral AI's Mistral 7B v0.1
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [Release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention (see the sketch below)
- Byte-fallback BPE tokenizer
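For intuition on the attention choice above, here is a rough sketch of a sliding-window causal mask. It is illustrative only (the 4096 window size is the commonly cited value, treated here as an assumption), not Mistral's implementation:
```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int = 4096) -> torch.Tensor:
    # Query position q may attend to key position k only if
    # k <= q (causal) and q - k < window (sliding window).
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]
    in_window = (idx[:, None] - idx[None, :]) < window
    return causal & in_window  # (seq_len, seq_len) boolean attention mask

print(sliding_window_causal_mask(6, window=3).int())
```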
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
| 9,639 | [
[
-0.0428466796875,
-0.04461669921875,
0.00994110107421875,
0.002719879150390625,
-0.012420654296875,
-0.006305694580078125,
0.0138092041015625,
-0.036590576171875,
0.0113677978515625,
0.01348114013671875,
-0.0555419921875,
-0.0279541015625,
-0.02679443359375,
... |
apple/DFN2B-CLIP-ViT-L-14 | 2023-10-31T17:56:28.000Z | [
"open_clip",
"pytorch",
"clip",
"arxiv:2309.17425",
"license:other",
"region:us"
] | null | apple | null | null | apple/DFN2B-CLIP-ViT-L-14 | 1 | 1,317 | open_clip | 2023-10-30T23:07:24 | ---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---
A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-2B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 2B images that were filtered from a pool of 12.8B uncurated image-text pairs
(12.8B image-text pairs from CommonPool-12.8B).
This model has been converted to PyTorch from the original JAX checkpoints from Axlearn (https://github.com/apple/axlearn).
These weights are directly usable in OpenCLIP (image + text).
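As a rough illustration of the filtering idea (not the authors' actual pipeline; `score_fn` stands in for a trained filtering network's image-text score), the selection step can be sketched as:
```python
def filter_pool(pairs, score_fn, keep_fraction=0.15):
    # pairs: list of (image, text); score_fn: image-text score from a filtering network.
    # Keep the highest-scoring fraction of the pool (roughly 2B kept from 12.8B here).
    scored = sorted(pairs, key=lambda p: score_fn(*p), reverse=True)
    return scored[: int(len(scored) * keep_fraction)]
```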
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-2b
- **Papers:**
- Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Examples Seen:** 12.8B
## Model Metrics
| Eval Dataset | Metric |
|:-----------------------|---------:|
| ImageNet 1k | 0.81396 |
| Caltech-101 | 0.953141 |
| CIFAR-10 | 0.9836 |
| CIFAR-100 | 0.8835 |
| CLEVR Counts | 0.3338 |
| CLEVR Distance | 0.248733 |
| Country211 | 0.28237 |
| Describable Textures | 0.66117 |
| EuroSAT | 0.646296 |
| FGVC Aircraft | 0.395945 |
| Food-101 | 0.945861 |
| GTSRB | 0.616152 |
| ImageNet Sketch | 0.683311 |
| ImageNet v2 | 0.7453 |
| ImageNet-A | 0.6676 |
| ImageNet-O | 0.3915 |
| ImageNet-R | 0.900033 |
| KITTI Vehicle Distance | 0.201125 |
| MNIST | 0.8468 |
| ObjectNet | 0.739367 |
| Oxford Flowers-102 | 0.865822 |
| Oxford-IIIT Pet | 0.954941 |
| Pascal VOC 2007 | 0.81644 |
| PatchCamelyon | 0.63028 |
| Rendered SST2 | 0.551345 |
| RESISC45 | 0.733175 |
| Stanford Cars | 0.947146 |
| STL-10 | 0.976625 |
| SUN397 | 0.754565 |
| SVHN | 0.653503 |
| Flickr | 0.8244 |
| MSCOCO | 0.570363 |
| WinoGAViL | 0.551645 |
| iWildCam | 0.18877 |
| Camelyon17 | 0.626179 |
| FMoW | 0.222137 |
| Dollar Street | 0.688084 |
| GeoDE | 0.91023 |
| **Average** | **0.668558** |
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-L-14')
tokenizer = get_tokenizer('ViT-L-14')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
    text_probs = (image_features @ text_features.T * model.logit_scale.exp()).softmax(dim=-1)  # softmax over scaled cosine similarities (standard contrastive CLIP head)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
## Citation
```bibtex
@article{fang2023data,
title={Data Filtering Networks},
author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
journal={arXiv preprint arXiv:2309.17425},
year={2023}
}
```
| 3,830 | [
[
-0.043914794921875,
-0.03656005859375,
0.01232147216796875,
0.010223388671875,
-0.026947021484375,
-0.00885772705078125,
0.0019273757934570312,
-0.02459716796875,
0.035919189453125,
0.027618408203125,
-0.04345703125,
-0.0452880859375,
-0.05194091796875,
-0.0... |
cometrain/stocks-news-t5 | 2022-09-20T11:15:07.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Cometrain AutoCode",
"Cometrain AlphaML",
"en",
"dataset:financial-sentiment-analysis",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | cometrain | null | null | cometrain/stocks-news-t5 | 17 | 1,316 | transformers | 2022-04-14T09:44:58 | ---
language:
- en
license: mit
tags:
- Cometrain AutoCode
- Cometrain AlphaML
datasets:
- financial-sentiment-analysis
widget:
- text: "April 14 (Reuters) - Rio Tinto (RIO.AX), one of the largest Australian mining companies, on Thursday confirmed its exit from the state mining lobby group after raising concerns that its policy on expansion of coal mines did not align with the Paris Climate Agreement."
example_title: "Rio Tinto Decision (Neutral)"
- text: "LONDON, April 13 (Reuters) - Crypto lender Nexo said it has teamed up with global payments company Mastercard (MA.N) to launch on Wednesday what it calls the world's first crypto-backed payment card."
example_title: "New Mastercard & Nexo project (Positive)"
- text: "April 14 (Reuters) - The Russian rouble weakened on Thursday, driven by expectations that Russia may relax its temporary capital control measures further, while stocks fell as the country continued what it calls 'a special military operation' in Ukraine."
example_title: "Crisis in Russia (Negative)"
inference:
parameters:
top_p: 0.9
temperature: 0.5
---
# stocks-news-t5
This model was automatically fine-tuned and tested as part of the development of a GPT-2-based AutoML framework for accelerated, easy development of enterprise NLP solutions. The fine-tuned [T5](https://huggingface.co/t5-base) model can analyze financial market news.
It was automatically trained on the [Financial Sentiment Analysis (2022)](https://www.kaggle.com/datasets/sbhatti/financial-sentiment-analysis) dataset.
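A minimal usage sketch (the sampling parameters mirror the widget settings above; the exact wording of the output label is an assumption based on the example titles):
```python
from transformers import pipeline

analyzer = pipeline("text2text-generation", model="cometrain/stocks-news-t5")

news = ("LONDON, April 13 (Reuters) - Crypto lender Nexo said it has teamed up with global "
        "payments company Mastercard (MA.N) to launch on Wednesday what it calls the world's "
        "first crypto-backed payment card.")

result = analyzer(news, do_sample=True, top_p=0.9, temperature=0.5)
print(result[0]["generated_text"])  # e.g. a sentiment label such as "Positive"
```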
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode
## Cometrain AlphaML command
```shell
$ cometrain create --name stocks-news --model auto --task 'Machine learning model for finance news analysis' --output transformers
``` | 1,885 | [
[
-0.0005855560302734375,
-0.037078857421875,
0.0029544830322265625,
0.03363037109375,
-0.02777099609375,
0.005096435546875,
-0.0005087852478027344,
-0.031280517578125,
-0.001209259033203125,
0.00909423828125,
-0.055419921875,
-0.06689453125,
-0.0516357421875,
... |
timm/resnet101.tv_in1k | 2023-04-05T18:24:18.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1512.03385",
"license:bsd-3-clause",
"region:us"
] | image-classification | timm | null | null | timm/resnet101.tv_in1k | 0 | 1,316 | timm | 2023-04-05T18:23:30 | ---
tags:
- image-classification
- timm
library_tag: timm
license: bsd-3-clause
---
# Model card for resnet101.tv_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k, original torchvision model weight.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.5
- GMACs: 7.8
- Activations (M): 16.2
- Image size: 224 x 224
- **Papers:**
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/pytorch/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet101.tv_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet101.tv_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet101.tv_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 37,880 | [
[
-0.06671142578125,
-0.0169525146484375,
0.0015897750854492188,
0.030853271484375,
-0.03277587890625,
-0.00904083251953125,
-0.0090789794921875,
-0.029022216796875,
0.08880615234375,
0.019622802734375,
-0.047393798828125,
-0.038299560546875,
-0.046173095703125,
... |
heegyu/polyglot-ko-1.3b-chat | 2023-09-26T04:34:32.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"endpoints_comp... | text-generation | heegyu | null | null | heegyu/polyglot-ko-1.3b-chat | 0 | 1,316 | transformers | 2023-08-18T05:50:33 | ---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
---
# heegyu/polyglot-ko-1.3b-chat
- A model trained from [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on several Korean instruction datasets
## Datasets Used
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [part2_ko_uncleaned of dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a small portion) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |
- From the KULLM v2 dataset, only the GPT4ALL and Dolly data were extracted and used.
- For more Korean instruction datasets, see the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository.
## Generation Prompt
- Generate until the EOS token (<|endoftext|>) appears.
- Including the system prompt at the top seems to produce good answers more often (see the Transformers sketch after the example below).
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
오늘 날씨 알려줘
### 챗봇:
```
Generated answer:
```
네, 오늘 서울을 포함한 전국적으로 맑은 날씨를 보이겠습니다. 하지만, 낮과 밤의 기온 차가 매우 크며, 아침 기온이 10도 이상 낮아지면 서리와 얼음이 생기기 시작합니다.
이로 인해, 낮 동안은 매우 추운 날씨를 느낄 수 있으니 가급적이면 두꺼운 옷이나 핫팩을 준비하세요.
```
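The generation flow above can be reproduced with the Transformers library. The following is a minimal sketch; the sampling settings and `max_new_tokens` are illustrative choices, not values from the training setup:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "heegyu/polyglot-ko-1.3b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build the prompt in the format shown above (system message, then a user turn).
prompt = (
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.\n"
    "### 사용자:\n오늘 날씨 알려줘\n### 챗봇:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
# Generate until the EOS token (<|endoftext|>) is produced.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.95,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```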
Conversation example (with prompt)
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
한식 좋아하니?
### 챗봇:
그럼요, 한식 좋아합니다. 제가 알고 있는 한식을 공유해 볼게요.
한식은 다양한 유형의 요리들을 포함하고 있고, 고유한 맛과 향을 가지고 있죠. 한식은 식사보다는 특별한 날이나 간식으로 즐겨 먹곤 합니다. 한식에는 다양한 종류가 있는데, 어떤 요리들이 있을까요?
한식의 종류로는 나물, 전골, 찌개, 불고기, 볶음, 무침, 볶음밥과 국, 볶음밥과 비빔밥, 찜, 생선찜, 생선구이, 찜닭, 돼지국밥, 육개장, 보신정식 등이 있습니다. 여기에 추가로 다른 종류도 더 다양한데, 김치, 젓갈, 두부도 있죠.
하지만 한식은 맛과 영양적인 면에서 최고인 것 같아요. 다양한 야채나 재료를 사용해서 만든 한식은 맛있고 건강한 식단으로 평가 받습니다. 그리고 한국적인 이미지를 잘 살려서 한식의 종류와 맛을 즐길 수 있게 해줍니다.
한국에서는 한식을 먹으면서 전통의 아름다움을 느낄 수 있고, 음식의 조화로운 맛을 느낄 수 있습니다. 그리고 한식은 다른 음식들과 조화롭게 만들어 먹어야 맛있습니다.
이러한 한식의 장점들은 음식을 잘 요리하고 있다는 느낌을 주고, 음식의 특징과 맛을 살릴 수 있게 도와줍니다. 한식은 여러 가지 종류가 있어서 다양한 맛들을 즐길 수 있고, 음식과 재료가 조화롭게 맛을 나타내도록 만들어 주기에 맛있죠.
하지만 한식에는 몇 가지 단점도 있습니다. 한식은 전통적인 한국의 음식이지만, 재료의 부족과 한정된 음식 재료 등의 문제도 있습니다. 또한, 한식은 다른 재료들을 조합하는 과정이 필요하며, 요리의 모양이 보기 좋지 않을 수 있어 먹는 사람들에게 불편한 느낌을 줄 수 있습니다.
이러한 단점들을 개선하기 위해서는 한식 재료에 대한 연구와 다양한 양념 기법 등이 필요합니다. 전통적인 한식에 대한 이해와 연구가 필요해지고, 새로운 재료와 양념을 개발하고 조합하는 과정들이 필요합니다.
하지만 한식은 아름다움과 맛만으로도 많은 사람들에게 인기를 끌고 있으며, 다양한 메뉴와 한식을 즐길 수 있는 곳들이 늘어나고 있습니다. 한식을 즐겨 먹으면서 한식의 세계를 알아보고, 더 깊이 있는 식문화 경험을 즐기는 것은 좋은 선택이 될 수 있습니다.
```
Multi-turn conversation example (with prompt)
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
일본 도쿄 여행계획 짜줘
### 챗봇:
도쿄 여행 계획은 다양한 관광 명소를 방문하고 맛있는 음식을 즐기는 것입니다. 또한 날씨, 교통, 숙박 등과 관련된 모든 상황을 고려하고 계획을 짜세요. 아래는 도움이 될 수 있는 도쿄 여행 일정입니다.
여행 날짜: 5월 5일~5월 말
주요 여행 장소: 도쿄 (도쿄, 하코네, 디즈니랜드)
도쿄 (도쿄, 하코네, 디즈니랜드)
오다이바 (유리카모메, 스카이워크, 오다이바 해변, 아쿠아시티, 신카이산, 팔레트 타운)
센코대학 (센코박물관/이와이다. 아츠다 신궁/지슈 신사/하세키 거리)
나카노 도쿄 가든 (도쿄 가든/아츠다 신궁/지슈 신사/도쿄 가든)
추천 일정:
하코네 여행: 아침 일찍 출발하여 하코네 관광을 하루 종일 즐기세요. 하코네 신사(가마쿠라코마치/니시도리마)나 토마리코마치(모리노리야/유노코모마치)에서 도시락을 즐기거나 길거리 시장에서 맛있는 간식을 즐기는 것도 좋습니다.
디즈니랜드 방문: 디즈니랜드를 방문하세요. 먼저 도쿄를 벗어나서 디즈니 리조트로 이동합니다. 디즈니 리조트에는 다양한 놀이기구와 레스토랑, 영화관 등이 있으니 꼭 방문하세요.
아키타 여행: 아키타를 탐험하세요. 아키타 가토 온천에 들어가서 온천욕을 즐기거나, 아키타의 명물인 명란 요리를 맛보는 등 다양한 즐길거리를 즐겨보세요.
하코네 여행: 하코네신사를 방문하세요. 다음으로 야마노에 가서 도라에비스기까지 도보로 온천에 갈 수도 있고, 나카노 도쿄 가든(히가네코마리)으로 가서 온천도 즐기세요.
센세키 대학 방문: 센세키 대학을 방문해보세요. 국립공원 하코네 코우자에서 하코네 국립박물관을 탐방하는 것도 좋습니다.
나카노 도쿄 가든 방문: 나카노 도쿄 가든에서 쇼핑을 즐기거나 아키타로 가는 버스를 타며 나카노 가든의 아름다운 정원을 구경할 수 있습니다.
오키나와 여행: 오키나와 지역을 둘러보세요. 츄라 오차노미야의 이에라 리조트를 관광하거나 오키나와의 유명한 관광 명소인 츄라스토우가쿠를 방문하고, 오키나와 섬 곳곳을 둘러보세요.
위의 일정은 일본 도쿄 여행 전문 가이드북인 "도쿄도쿄! 일본어"에서 발췌한 것으로, 참고하시면 도움이 될 것입니다. 또한, 일본어 발음 등 더 궁금한 점이 있으시다면 언제든지 문의하세요!
### 사용자:
긴자를 포함한 2박 3일 여행 계획으로 바꿔줘
### 챗봇:
네, 도쿄를 포함한 2박 3일의 도쿄여행 일정을 드리니 참고해주세요. 첫 날에는 긴자에서 유명한 미쓰마에 신사를 방문하고, 닌교초의 백화점 상점가와 상점가를 방문하세요. 오후에는 긴자의 관광 명소인 아사쿠사를 방문해 도쿄에서 빠질 수 없는 도쿄의 신사 중 하나인 센소오신을 방문하세요. 오후 8시가 되면 저녁식사를 즐기세요. 그리고 저녁에는 긴자에서 긴자선을 타고 시바마에로 이동합니다.
둘째 날에는 시바마에에서 일본 전통적인 신사 중 하나인 도쿄 텐다이를 방문하세요. 그리고 하루 종일 시바마에와 오다이바의 관광지를 둘러보세요. 저녁에는 닌교초에서 쇼핑을 즐기며, 저녁을 먹은 뒤 도쿄에서 빠질 수 없는 백화점 상점가와 쇼핑, 맛집 탐방 등을 즐겨보세요.
셋째 날에는 도쿄에서 당일치기 여행으로 하라주쿠를 방문하세요. 오전에는 하라주쿠 신사를 방문하고, 도쿄에서 빠질 수 없는 다케시다 신사를 방문하세요. 오후에는 시바마에에 있는 시바마 쇼핑몰에서 기념품을 사고, 도쿄에서 당일치기 여행으로 마루노우치를 방문하세요. 그리고 도쿄에서 빠질 수 없는 명품을 구입하세요. 오후에는 긴자에서 점심을 즐기고 쇼핑을 즐기며, 긴자에서 당일치기 여행으로 우에노를 방문하세요.
위의 일정을 참고해서 여행 계획을 짜세요. 도쿄 여행은 일본의 거리 문화와 전통적인 신사, 상점, 전통 음식 등 다양한 매력이 있는 여행지입니다. 위의 일정을 참고하여 즐거운 도쿄 여행을 즐기시기 바랍니다.
``` | 4,866 | [
[
-0.044097900390625,
-0.035614013671875,
0.0207977294921875,
0.0226593017578125,
-0.031097412109375,
0.00823211669921875,
0.0282135009765625,
-0.0216064453125,
0.0504150390625,
0.02813720703125,
-0.0209197998046875,
-0.0306243896484375,
-0.043548583984375,
0.... |
jhgan/ko-sbert-multitask | 2022-08-16T12:45:51.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | jhgan | null | null | jhgan/ko-sbert-multitask | 10 | 1,314 | sentence-transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ko-sbert-multitask
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-multitask')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-multitask')
model = AutoModel.from_pretrained('jhgan/ko-sbert-multitask')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Results of multi-task training on the KorSTS and KorNLI training sets, evaluated on the KorSTS evaluation set (a short similarity-scoring sketch follows the metrics below).
- Cosine Pearson: 84.13
- Cosine Spearman: 84.71
- Euclidean Pearson: 82.42
- Euclidean Spearman: 82.66
- Manhattan Pearson: 81.41
- Manhattan Spearman: 81.69
- Dot Pearson: 80.05
- Dot Spearman: 79.69
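To see what this kind of similarity scoring looks like in practice, cosine similarity between two embeddings can be computed with the bundled `util` helpers. A minimal sketch reusing the example sentences from the usage section above:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sbert-multitask')
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentences (a single score in [-1, 1]).
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```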
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
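These parameters correspond to the standard sentence-transformers multi-task `fit()` call. The sketch below shows how such a setup is wired together; the placeholder pairs and the base checkpoint name are assumptions for illustration, not the actual KorNLI/KorSTS data or the original base encoder:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Placeholder base encoder; the actual starting checkpoint is not stated in this card.
model = SentenceTransformer("klue/bert-base")

# NLI-style positive pairs for MultipleNegativesRankingLoss (placeholder examples).
nli_examples = [
    InputExample(texts=["한 남자가 밥을 먹는다.", "남자가 식사를 하고 있다."]),
    InputExample(texts=["아이가 공원에서 뛰어논다.", "어린이가 공원에서 놀고 있다."]),
]
nli_loader = NoDuplicatesDataLoader(nli_examples, batch_size=2)  # 64 in the run described above
nli_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# STS-style pairs with similarity labels for CosineSimilarityLoss (placeholders).
sts_examples = [
    InputExample(texts=["날씨가 좋다.", "오늘은 맑은 날이다."], label=0.8),
    InputExample(texts=["날씨가 좋다.", "그는 책을 읽는다."], label=0.1),
]
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=2)  # 8 in the run described above
sts_loss = losses.CosineSimilarityLoss(model)

# fit() alternates over the two objectives, mirroring the hyperparameters listed above.
model.fit(
    train_objectives=[(nli_loader, nli_loss), (sts_loader, sts_loss)],
    epochs=5,
    warmup_steps=360,
    weight_decay=0.01,
)
```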
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv
preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
| 4,656 | [
[
-0.0186004638671875,
-0.06475830078125,
0.028594970703125,
0.0244598388671875,
-0.02276611328125,
-0.0254364013671875,
-0.0323486328125,
-0.0010471343994140625,
0.019775390625,
0.0247344970703125,
-0.048736572265625,
-0.040008544921875,
-0.05108642578125,
0.... |
timm/vit_large_r50_s32_384.augreg_in21k_ft_in1k | 2023-05-06T00:51:38.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_large_r50_s32_384.augreg_in21k_ft_in1k | 0 | 1,314 | timm | 2022-12-23T00:31:34 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_large_r50_s32_384.augreg_in21k_ft_in1k
A ResNet - Vision Transformer (ViT) hybrid image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 329.1
- GMACs: 56.4
- Activations (M): 64.9
- Image size: 384 x 384
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_large_r50_s32_384.augreg_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_r50_s32_384.augreg_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,927 | [
[
-0.039031982421875,
-0.0273895263671875,
-0.00315093994140625,
0.003383636474609375,
-0.0290069580078125,
-0.019317626953125,
-0.0244140625,
-0.03436279296875,
0.019989013671875,
0.0208587646484375,
-0.040252685546875,
-0.03778076171875,
-0.0445556640625,
0.... |
castorini/mdpr-tied-pft-msmarco-ft-all | 2022-05-26T21:14:21.000Z | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | castorini | null | null | castorini/mdpr-tied-pft-msmarco-ft-all | 0 | 1,313 | transformers | 2022-05-26T21:05:47 | The checkpoint is further fine-tuned based on the `castorini/mdpr-tied-pft-msmarco` checkpoint, on all the Mr. TyDi training data. | 130 | [
[
-0.044769287109375,
-0.01465606689453125,
0.0292205810546875,
-0.006198883056640625,
-0.023773193359375,
0.02984619140625,
0.007190704345703125,
-0.01509857177734375,
0.0242919921875,
0.048187255859375,
-0.06005859375,
-0.0195770263671875,
-0.026519775390625,
... |
transformer3/H2-keywordextractor | 2023-04-21T15:15:03.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:transformer3/autotrain-data-finance6",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | summarization | transformer3 | null | null | transformer3/H2-keywordextractor | 15 | 1,313 | transformers | 2023-04-21T14:55:32 | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- transformer3/autotrain-data-finance6
co2_eq_emissions:
emissions: 0.03294976193424359
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 51355121740
- CO2 Emissions (in grams): 0.0329
## Validation Metrics
- Loss: 1.406
- Rouge1: 29.067
- Rouge2: 19.200
- RougeL: 26.900
- RougeLsum: 26.940
- Gen Len: 20.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/transformer3/autotrain-finance6-51355121740
``` | 723 | [
[
-0.03289794921875,
-0.0289459228515625,
0.0277099609375,
0.019012451171875,
0.0024566650390625,
0.006565093994140625,
0.015716552734375,
-0.01183319091796875,
0.018768310546875,
0.027191162109375,
-0.06072998046875,
-0.0274810791015625,
-0.052825927734375,
-... |
EleutherAI/pythia-6.9b-deduped-v0 | 2023-07-10T01:30:05.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | EleutherAI | null | null | EleutherAI/pythia-6.9b-deduped-v0 | 20 | 1,311 | transformers | 2022-10-18T03:04:37 | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-6.9B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-6.9B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-6.9B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-6.9B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-6.9B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-6.9B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-6.9B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-6.9B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-6.9B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
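A quick way to sanity-check these figures is to multiply the step count by the per-step batch size (simple arithmetic over the numbers quoted above):
```python
tokens_per_step = 2_097_152            # 2M-token batch
total_steps = 143_000
checkpoint_interval = 2_097_152_000    # tokens between saved checkpoints

total_tokens = tokens_per_step * total_steps
print(total_tokens)                         # 299892736000 tokens seen during training
print(total_tokens // checkpoint_interval)  # 143 evenly spaced checkpoints
```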
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | 11,894 | [
[
-0.0248260498046875,
-0.063720703125,
0.0200653076171875,
0.0035190582275390625,
-0.016815185546875,
-0.01190185546875,
-0.01561737060546875,
-0.03472900390625,
0.01435089111328125,
0.0160980224609375,
-0.0248870849609375,
-0.024505615234375,
-0.034912109375,
... |
timm/vit_giant_patch14_reg4_dinov2.lvd142m | 2023-10-30T04:56:40.000Z | [
"timm",
"pytorch",
"safetensors",
"arxiv:2309.16588",
"arxiv:2304.07193",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | null | timm | null | null | timm/vit_giant_patch14_reg4_dinov2.lvd142m | 0 | 1,311 | timm | 2023-10-30T04:49:01 | ---
tags:
- timm
library_name: timm
license: apache-2.0
---
# Model card for vit_giant_patch14_reg4_dinov2.lvd142m
A Vision Transformer (ViT) image feature model with registers. Pretrained on LVD-142M with self-supervised DINOv2 method.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 1136.5
- GMACs: 1558.1
- Activations (M): 874.4
- Image size: 518 x 518
- **Papers:**
- Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588
- DINOv2: Learning Robust Visual Features without Supervision: https://arxiv.org/abs/2304.07193
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Original:** https://github.com/facebookresearch/dinov2
- **Pretrain Dataset:** LVD-142M
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_giant_patch14_reg4_dinov2.lvd142m', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_giant_patch14_reg4_dinov2.lvd142m',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1374, 1536) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{darcet2023vision,
title={Vision Transformers Need Registers},
  author={Darcet, Timoth{\'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv preprint arXiv:2309.16588},
year={2023}
}
```
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
journal={arXiv:2304.07193},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
``` | 4,311 | [
[
-0.037811279296875,
-0.0246734619140625,
0.00801849365234375,
0.003570556640625,
-0.031646728515625,
-0.024505615234375,
-0.019439697265625,
-0.035980224609375,
0.01398468017578125,
0.021942138671875,
-0.03277587890625,
-0.037139892578125,
-0.050079345703125,
... |
cointegrated/rut5-base-multitask | 2023-03-17T14:12:20.000Z | [
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"russian",
"ru",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | cointegrated | null | null | cointegrated/rut5-base-multitask | 23 | 1,310 | transformers | 2022-03-02T23:29:05 |
---
language: ["ru", "en"]
tags:
- russian
license: mit
widget:
- text: "fill | Почему они не ___ на меня?"
---
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) with only some Russian and English embeddings left.
More details are given in a Russian post: https://habr.com/ru/post/581932/
The model has been fine-tuned for several tasks with sentences or short paragraphs:
* Translation (`translate ru-en` and `translate en-ru`)
* Paraphrasing (`paraphrase`)
* Filling gaps in a text (`fill`). The gaps can be denoted as `___` or `_3_`, where `3` is the approximate number of words that should be inserted.
* Restoring the text from a noisy bag of words (`assemble`)
* Simplification of texts (`simplify`)
* Dialogue response generation (`reply` based on fiction and `answer` based on online forums)
* Open-book question answering (`comprehend`)
* Asking questions about a text (`ask`)
* News title generation (`headline`)
For each task, the task name is joined with the input text by the ` | ` separator.
The model can be run with the following code:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-base-multitask")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-base-multitask")
def generate(text, **kwargs):
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(**inputs, num_beams=5, **kwargs)
return tokenizer.decode(hypotheses[0], skip_special_tokens=True)
```
The model can be applied to each of the pretraining tasks:
```
print(generate('translate ru-en | Каждый охотник желает знать, где сидит фазан.'))
# Each hunter wants to know, where he is.
print(generate('paraphrase | Каждый охотник желает знать, где сидит фазан.',
encoder_no_repeat_ngram_size=1, repetition_penalty=0.5, no_repeat_ngram_size=1))
# В любом случае каждый рыбак мечтает познакомиться со своей фермой
print(generate('fill | Каждый охотник _3_, где сидит фазан.'))
# смотрит на озеро
print(generate('assemble | охотник каждый знать фазан сидит'))
# Каждый охотник знает, что фазан сидит.
print(generate('simplify | Местным продуктом-специалитетом с защищённым географическим наименованием по происхождению считается люнебургский степной барашек.', max_length=32))
# Местным продуктом-специалитетом считается люнебургский степной барашек.
print(generate('reply | Помогите мне закадрить девушку'))
# Что я хочу?
print(generate('answer | Помогите мне закадрить девушку'))
# я хочу познакомиться с девушкой!!!!!!!!
print(generate("comprehend | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо. Вопрос: откуда приехал Морган?"))
# из Австралии
print(generate("ask | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
# Что разворачивается на фоне земельного конфликта между владельцами овец и ранчеро?
print(generate("headline | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
# На фоне земельного конфликта разворачивается история любви овцевода Моргана Лейна и Марии Синглетон
```
However, it is strongly recommended that you fine tune the model for your own task. | 3,770 | [
[
-0.036407470703125,
-0.060333251953125,
0.03106689453125,
0.0241241455078125,
-0.0360107421875,
-0.01435089111328125,
-0.0066375732421875,
-0.01523590087890625,
0.03106689453125,
0.0185546875,
-0.05523681640625,
-0.0423583984375,
-0.04095458984375,
0.0111007... |
persiannlp/mt5-base-parsinlu-sentiment-analysis | 2021-09-23T16:20:02.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | persiannlp | null | null | persiannlp/mt5-base-parsinlu-sentiment-analysis | 3 | 1,309 | transformers | 2022-03-02T23:29:05 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5 model for sentiment analysis.
Here is an example of how you can run this model:
```python
import torch
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
import numpy as np
model_name_or_path = "persiannlp/mt5-base-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name_or_path)
model = MT5ForConditionalGeneration.from_pretrained(model_name_or_path)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
| 2,707 | [
[
-0.038330078125,
-0.04815673828125,
0.021484375,
0.022216796875,
-0.03900146484375,
-0.008514404296875,
0.0049896240234375,
-0.0081634521484375,
0.009368896484375,
0.03607177734375,
-0.04345703125,
-0.056121826171875,
-0.040283203125,
0.01515960693359375,
... |
jncraton/codet5p-220m-py-ct2-int8 | 2023-07-08T19:12:46.000Z | [
"transformers",
"arxiv:2305.07922",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | jncraton | null | null | jncraton/codet5p-220m-py-ct2-int8 | 0 | 1,309 | transformers | 2023-06-30T18:48:16 | ---
license: bsd-3-clause
---
# CodeT5+ 220M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (i.e. InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-220m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
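Since this repository holds a CTranslate2 int8 conversion of the checkpoint, it can also be run through the `ctranslate2` runtime instead of `transformers`. The following is a rough sketch; the local directory name and decoding length are assumptions, and the tokenizer is still loaded from the original Hugging Face checkpoint:
```python
import ctranslate2
from transformers import AutoTokenizer

# Path to the downloaded CTranslate2 conversion (assumed local directory name).
translator = ctranslate2.Translator("codet5p-220m-py-ct2-int8", device="cpu")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m-py")

prompt = "def print_hello_world():"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = translator.translate_batch([input_tokens], max_decoding_length=10)
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```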
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
## Training procedure
This checkpoint is first trained on the multilingual unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 12.0% pass@1 on HumanEval in the zero-shot setting, which outperforms much larger LLMs such as Incoder 1.3B’s 8.9%, GPT-Neo 2.7B's 6.4%, and GPT-J 6B's 11.6%.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` | 5,012 | [
[
-0.0301971435546875,
-0.03253173828125,
0.0131683349609375,
0.023284912109375,
-0.0091705322265625,
0.006130218505859375,
-0.03399658203125,
-0.04351806640625,
-0.0232391357421875,
0.0171966552734375,
-0.030548095703125,
-0.047027587890625,
-0.034393310546875,
... |
facebook/wav2vec2-large-960h-lv60 | 2022-04-05T16:42:07.000Z | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | facebook | null | null | facebook/wav2vec2-large-960h-lv60 | 6 | 1,308 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
model-index:
- name: wav2vec2-large-960h-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
args: en
metrics:
- name: Test WER
type: wer
value: 2.2
---
# Wav2Vec2-Large-960h-Lv60
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model pretrained and fine-tuned on 960 hours of Libri-Light and Librispeech on 16kHz sampled speech audio. When using the model
make sure that your speech input is also sampled at 16 kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
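If your audio is not already sampled at 16 kHz, resample it before calling the processor. A small sketch using torchaudio that reuses the `processor` and `model` loaded above (the file path is a placeholder and mono audio is assumed):
```python
import torch
import torchaudio
import torchaudio.functional as F

# load an audio file (placeholder path) and resample to the 16 kHz the model expects
waveform, sample_rate = torchaudio.load("example.wav")
if sample_rate != 16_000:
    waveform = F.resample(waveform, orig_freq=sample_rate, new_freq=16_000)

input_values = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```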
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
def map_to_pred(batch):
    audio_arrays = [sample["array"] for sample in batch["audio"]]  # with batched=True, batch["audio"] is a list of dicts
    inputs = processor(audio_arrays, sampling_rate=16_000, return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=16, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 2.2 | 4.5 | | 3,965 | [
[
-0.0168914794921875,
-0.04949951171875,
0.01517486572265625,
0.01340484619140625,
-0.0121307373046875,
-0.01297760009765625,
-0.041046142578125,
-0.042633056640625,
-0.0017709732055664062,
0.01132965087890625,
-0.045867919921875,
-0.0439453125,
-0.04318237304687... |
TheBloke/starchat-beta-GPTQ | 2023-08-21T09:37:58.000Z | [
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"generated_from_trainer",
"license:bigcode-openrail-m",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | text-generation | TheBloke | null | null | TheBloke/starchat-beta-GPTQ | 25 | 1,308 | transformers | 2023-06-08T22:12:22 | ---
inference: false
tags:
- generated_from_trainer
model-index:
- name: starchat-beta
results: []
license: bigcode-openrail-m
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# HuggingFaceH4's Starchat Beta GPTQ
These files are GPTQ 4bit model files for [HuggingFaceH4's Starchat Beta](https://huggingface.co/HuggingFaceH4/starchat-beta).
It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
## Prompt template
```
<|system|> system message goes here <|end|>
<|user|> prompt goes here <|end|>
<|assistant|>
```
Example:
```
<|system|> Below is a conversation between a human user and a helpful AI coding assistant. <|end|>
<|user|> How do I sort a list in Python? <|end|>
<|assistant|>
```
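For programmatic use, the same template can be assembled as a plain string before tokenization (a minimal sketch; the system and user messages are just examples):
```python
system_message = "Below is a conversation between a human user and a helpful AI coding assistant."
user_message = "How do I sort a list in Python?"

prompt = f"<|system|> {system_message} <|end|>\n<|user|> {user_message} <|end|>\n<|assistant|>"
```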
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starchat-beta-GPTQ)
* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starchat-beta-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/starchat-beta)
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/starchat-beta-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `starchat-beta-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import argparse
model_name_or_path = "TheBloke/starchat-beta-GPTQ"
# Or to load it locally, pass the local download path
# model_name_or_path = "/path/to/models/The_Bloke_starchat-beta-GPTQ"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
use_safetensors=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote ends of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
print(outputs[0]['generated_text'])
```
## Provided files
**gptq_model-4bit--1g.safetensors**
This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.
* `gptq_model-4bit--1g.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Works with text-generation-webui, including one-click-installers.
* Does not work with GPTQ-for-LLaMa.
* Parameters: Groupsize = -1. Act Order / desc_act = True.
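AutoGPTQ reads these values from `quantize_config.json` automatically, so no manual configuration is needed. For illustration only, the equivalent configuration object (a sketch using the parameters listed above; remaining fields are left at their defaults) would look like this:

```python
from auto_gptq import BaseQuantizeConfig

# Mirrors the file's quantisation settings: 4-bit, no grouping, act-order enabled.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=-1,
    desc_act=True,
)
```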
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: HuggingFaceH4's Starchat Beta
<img src="https://huggingface.co/HuggingFaceH4/starchat-beta/resolve/main/model_logo.png" alt="StarChat Beta Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for StarChat Beta
StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat Beta is the second model in the series, and is a fine-tuned version of [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus) that was trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and made the model more helpful at coding tasks. However, this means that the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A 16B parameter GPT-like model fine-tuned on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
- **Language(s) (NLP):** Primarily English and 80+ programming languages.
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground
## Intended uses & limitations
The model was fine-tuned on a variant of the [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset, which contains a diverse range of dialogues in over 35 languages. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) to test its coding capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16, device_map="auto")
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote ends of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
StarChat Beta has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata), which is derived from The Stack.
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat Alpha was fine-tuned from the base model [StarCoder Base](https://huggingface.co/bigcode/starcoderbase), please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoderbase#limitations) for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view).
## Training and evaluation data
StarChat Beta is trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We applied the same [recipe](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py) used to filter the ShareGPT datasets behind the [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) model.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256 (see the derivation sketch after this list)
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
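For reference, the total train and eval batch sizes above follow directly from the per-device batch sizes, the number of devices, and gradient accumulation:

```python
train_batch_size = 4
eval_batch_size = 4
num_devices = 8
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps  # 256
total_eval_batch_size = eval_batch_size * num_devices                                  # 32
```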
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5321 | 0.98 | 15 | 1.2856 |
| 1.2071 | 1.97 | 30 | 1.2620 |
| 1.0162 | 2.95 | 45 | 1.2853 |
| 0.8484 | 4.0 | 61 | 1.3274 |
| 0.6981 | 4.98 | 76 | 1.3994 |
| 0.5668 | 5.9 | 90 | 1.4720 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{Tunstall2023starchat-alpha,
author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
title = {Creating a Coding Assistant with StarCoder},
journal = {Hugging Face Blog},
year = {2023},
note = {https://huggingface.co/blog/starchat},
}
```
| 14,732 | [
[
-0.042083740234375,
-0.0550537109375,
0.01125335693359375,
0.01093292236328125,
-0.019805908203125,
0.003986358642578125,
0.003936767578125,
-0.0426025390625,
0.0299835205078125,
0.0184783935546875,
-0.04638671875,
-0.0316162109375,
-0.0305633544921875,
-0.0... |
haoranxu/ALMA-7B | 2023-10-27T05:11:30.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2309.11674",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | haoranxu | null | null | haoranxu/ALMA-7B | 9 | 1,307 | transformers | 2023-09-17T17:14:42 | ---
license: mit
---
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release four translation models presented in the paper:
- **ALMA-7B**: Full-weight fine-tune of LLaMA-2-7B on 20B monolingual tokens, then **full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight fine-tune of LLaMA-2-7B on 20B monolingual tokens, then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight fine-tune of LLaMA-2-13B on 12B monolingual tokens, then **full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (our best system): Full-weight fine-tune of LLaMA-2-13B on 12B monolingual tokens, then **LoRA** fine-tune on human-written parallel data
Model checkpoints are released on Hugging Face:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They have only undergone stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model) and should be used in conjunction with their LoRA models for translation purposes.**
Here is a quick start for using our best system (ALMA-13B-LoRA) for translation, with an example of translating "我爱机器翻译。" ("I love machine translation.") into English:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
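Because this repository contains the full-weight ALMA-7B model, no LoRA adapter is needed. The sketch below is an adaptation of the example above, assuming the same prompt format and generation settings carry over to the 7B model:

```python
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Load the full-weight ALMA-7B translation model (no PEFT adapter required)
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-7B", torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-7B", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```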
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA) | 3,637 | [
[
-0.020599365234375,
-0.036468505859375,
0.013397216796875,
0.029266357421875,
-0.037506103515625,
-0.004207611083984375,
-0.00818634033203125,
-0.038421630859375,
0.0225067138671875,
0.034027099609375,
-0.04541015625,
-0.0584716796875,
-0.05133056640625,
0.0... |
Falconsai/medical_summarization | 2023-10-30T12:47:54.000Z | [
"transformers",
"pytorch",
"coreml",
"t5",
"text2text-generation",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | Falconsai | null | null | Falconsai/medical_summarization | 8 | 1,306 | transformers | 2023-10-23T03:15:02 | ---
license: apache-2.0
language:
- en
pipeline_tag: summarization
widget:
- text: "the need for magnetic resonance imaging ( mri ) in patients with an implanted pacemaker or implantable cardioverter - defibrillator ( icd ) is a growing clinical issue . it is estimated that as many as 75% of active cardiac device recipients will become indicated for mri . currently , the vast majority of such devices are contraindicated for use with an mri . in european heart rhythm association survey , published recently for non - mri - certified icds ( 0.51.5 t field strength ) , the totally subcutaneous icd ( s - icd ) system , an implantable defibrillator with no leads that touch the heart , has recently been demonstrated to be a safe and effective defibrillator option for patients at risk for sudden cardiac death . it provides shock therapy and post - shock pacing therapy , but no long - term bradycardia pacing . although it has been shown as an alternative to the standard transvenous icd , its compatibility with mri remains unclear . various types of clinical mri systems currently use a superconductive magnet that creates a static magnetic field strength , typically 1.5 or 3 t. the use of mri with most pacemakers and icds is considered a contraindication due to potential hazards , including heating of the electrode that resides in or on the heart , damage to myocardium , elevation of pacing thresholds , unintended induction of ventricular tachycardia ( vt ) or ventricular fibrillation ( vf ) , pacing inhibition , permanent device malfunction , and distortion of the mri scan . recently , mr - conditional. mr - conditional indicates a lack of known hazards in a specified mri environment with specified conditions of use . due to the variety of mri scanners and scanning protocols , it is not practical to test even a single device under all conditions . hence , mr - conditional labelling dictates that the device is safe for use under certain scanning conditions , as well as how the cardiac device should be programmed before an exposure to the magnetic field in a mri scanner . the literature , although limited , provides some guidance for imaging patients with implanted pacemakers or icds that do not have mr - conditional labelling . this single - centre prospective non - controlled study describes the first use of mri in patients with an implanted s - icd . patients with implanted s - icd systems ( boston scientific sqrx model 1010 and q - trak model 3010 ) were enrolled for mri testing over a period of 18 months . the s - icd system implanted in this patient cohort was composed of a can implanted in a left mid - lateral pocket and a para - sternal subcutaneous electrode . the s - icd is currently not certified for use with an mri ; therefore , the ethics committee of homolka hospital , prague , czech republic approved our clinical study . patients with newly implanted s - icd systems ( < 6 weeks ) were excluded , and none of the patients had any intravascular leads . the patients were randomized for either a cardiac , brain , cervical , or lumbar spinal scan . one of the subjects underwent an additional knee examination , due to reported chronic pain . a total of 15 patients were enrolled into this study ( 12 males and three females , aged 2283 years , mean 53 years . subjects in our cohort ( table 1 ) underwent a total of 22 mri scans between 6 june 2012 and 24 december 2013 . in total , five brain scans , three cardiac scans , 12 lumbar scans , one knee , and one cervical spine scan were conducted ( table 2 ) . 
however , in one patient a minor disc protrusion was found , in other mri revealed stenosis of intervertebral foramen which was causing radicular pain of the nerve root l4 and based on this examination the patient was referred to ct - navigated periradicular therapy . table 1summary of patient anatomical data and scan locations , along with noted clinical eventsidagesexbmidgef , % indication for s - icdheating0164f20.5hcmp / vfs85secondary preventionnone0283m30.0post - mi / smvts post - catheter ablation/35secondary prevention ( post - transvenous icd extraction)none0331m25.3arvc / d / smvts68secondary preventionin - tolerable re - scanned0458m23.6post - mi / post - cabg30primary preventionnone0577m25.5post - mi30primary preventionnone0663m27.0post - mi30primary preventionnone0768m23.7post - mi / vfs / vts60secondary prevention post - transvenous icd extraction / svc occlusiontolerable0822m29.4brugada sy / vfs68secondary preventionin - tolerable re - scanned0959m27.1dcmp / vfs / post - mitral valve surgery/60secondary prev./post - transvenous icd extractionnone1041f24.6arvc / d70primary preventionnone1123f21.5lqts / vf60secondary preventionnone1266m36.9post - mi / vf / post - cabg50secondary prevention / post - repeat transvenous icd extractiontolerable1348m22.9dcmp(non - compaction)/vfs35secondary preventionnone1470m29systolic dysfunction of lv35primary preventionnone1526m33brugada sy65primary preventionnonehcmp , hypertrophic cardiomyopathy ; smvt , sustained monomorphic ventricular tachycardia ; mi , myocardial infarction ; arvc , arrhythmogenic right ventricular cardiomyopathy ; cabg , coronary artery by - pass graft ; lqts , long qt syndrom . table 2parmeters of s - icd and patient sensation during individual mri scansscan # idbody partheating sensationsshock zone ( b.p.m.)condit . shock zone ( b.p.m.)bat % episode num.101brainnone2302101001202brainnone240220861303l spinein - tolerable240220831403brainnone240220831504brainnone220190691605l spinenone220210541706l spinenone240220681807l spinetolerable240220582908l spinein - tolerablenananana1008brainnonenananana1108l spinenone2302108411209heartnone2402208911310l spinenone2301807911410heartnonenananana1511heartnone2301909711612l spinetolerable2001709721712l spinenone2001709421813c spinenone23019010041913l spinenone23019010042014l spinenone2301908612115kneenone25021010012215l spinenone2502101001s - icd parameters acquired prior- and post - mri were without any change , therefore only one value is presented.indices : na , not available ; l spine , lumbar spine ; c spine , cervical spine . summary of patient anatomical data and scan locations , along with noted clinical events hcmp , hypertrophic cardiomyopathy ; smvt , sustained monomorphic ventricular tachycardia ; mi , myocardial infarction ; arvc , arrhythmogenic right ventricular cardiomyopathy ; cabg , coronary artery by - pass graft ; lqts , long qt syndrom . parmeters of s - icd and patient sensation during individual mri scans s - icd parameters acquired prior- and post - mri were without any change , therefore only one value is presented . indices : na , not available ; l spine , lumbar spine ; c spine , cervical spine . studies were performed using a siemens avanto 1.5 t mri scanner ( vb17 software , quantum gradient coils ) . all scans were run in normal operating mode , which is limited to 2 w / kg whole body averaged specific absorption rate ( sar ) . clinically relevant mri sequences were used for evaluation ( see table 3 ) . 
table 3types of pulse sequences typically used for imaging of respective anatomical areasscan locationscan sequencesflairdwiflashfsehastesestirtruefispbrainxxxxheartxxxxcervical spinexxkneexxxxlumbar spinexxflair , fluid attenuated inversion recovery ; dwi , diffusion weighted imaging ; flash , fast low angle shot ; fse , fast spin echo ; haste , half acquisition single - shot turbo spin echo ; se , spin echo ; stir , short tau inversion recovery ; truefisp , true fast imaging with steady - state precession.fse sequence caused heating in subjects with a thermistor probe during lumbar spine examination ( see the text for details ) . types of pulse sequences typically used for imaging of respective anatomical areas flair , fluid attenuated inversion recovery ; dwi , diffusion weighted imaging ; flash , fast low angle shot ; fse , fast spin echo ; haste , half acquisition single - shot turbo spin echo ; se , spin echo ; stir , short tau inversion recovery ; truefisp , true fast imaging with steady - state precession . fse sequence caused heating in subjects with a thermistor probe during lumbar spine examination ( see the text for details ) . patients were asked to report immediately any pain , torqueing movement , or heating sensation in the area of the pocket or the electrode by pressing an emergency bulb . furthermore , all patients were questioned immediately following the mri procedure to ascertain any discomfort in the vicinity of the can or electrode . pulse oximetry and standard lead electrocardiogram ( ecg ) if discomfort occurred , the patient was asked if the scan could be repeated at a later time using a revised scan sequence or the subject was again randomized for another anatomical area . since none of the components of the s - icd system are on or in the heart , heating near or around however , heating near the electrode or can with the s - icd system may still cause serious patient discomfort . therefore , along with education of subjects , each patient was instrumented by taping an oesophageal temperature probe ( beta - therm model g22k7mcd8 ) on the skin over the mid - lateral implant site to record any temperature excursions that might be correlated to patient symptoms of heating / discomfort near the pocket . to minimize the risk of inappropriate therapy , the s - icd system was programmed to therapy each s - icd system was evaluated prior to and immediately after the scan to verify proper functioning , including interrogation , sensing , and battery voltage . after the completion of the mri , long - term regular clinical follow - up and checking of the device were performed . patients with implanted s - icd systems ( boston scientific sqrx model 1010 and q - trak model 3010 ) were enrolled for mri testing over a period of 18 months . the s - icd system implanted in this patient cohort was composed of a can implanted in a left mid - lateral pocket and a para - sternal subcutaneous electrode . the s - icd is currently not certified for use with an mri ; therefore , the ethics committee of homolka hospital , prague , czech republic approved our clinical study . patients with newly implanted s - icd systems ( < 6 weeks ) were excluded , and none of the patients had any intravascular leads . the patients were randomized for either a cardiac , brain , cervical , or lumbar spinal scan . one of the subjects underwent an additional knee examination , due to reported chronic pain . 
a total of 15 patients were enrolled into this study ( 12 males and three females , aged 2283 years , mean 53 years . subjects in our cohort ( table 1 ) underwent a total of 22 mri scans between 6 june 2012 and 24 december 2013 . in total , five brain scans , three cardiac scans , 12 lumbar scans , one knee , and one cervical spine scan were conducted ( table 2 ) . however , in one patient a minor disc protrusion was found , in other mri revealed stenosis of intervertebral foramen which was causing radicular pain of the nerve root l4 and based on this examination the patient was referred to ct - navigated periradicular therapy . table 1summary of patient anatomical data and scan locations , along with noted clinical eventsidagesexbmidgef , % indication for s - icdheating0164f20.5hcmp / vfs85secondary preventionnone0283m30.0post - mi / smvts post - catheter ablation/35secondary prevention ( post - transvenous icd extraction)none0331m25.3arvc / d / smvts68secondary preventionin - tolerable re - scanned0458m23.6post - mi / post - cabg30primary preventionnone0577m25.5post - mi30primary preventionnone0663m27.0post - mi30primary preventionnone0768m23.7post - mi / vfs / vts60secondary prevention post - transvenous icd extraction / svc occlusiontolerable0822m29.4brugada sy / vfs68secondary preventionin - tolerable re - scanned0959m27.1dcmp / vfs / post - mitral valve surgery/60secondary prev./post - transvenous icd extractionnone1041f24.6arvc / d70primary preventionnone1123f21.5lqts / vf60secondary preventionnone1266m36.9post - mi / vf / post - cabg50secondary prevention / post - repeat transvenous icd extractiontolerable1348m22.9dcmp(non - compaction)/vfs35secondary preventionnone1470m29systolic dysfunction of lv35primary preventionnone1526m33brugada sy65primary preventionnonehcmp , hypertrophic cardiomyopathy ; smvt , sustained monomorphic ventricular tachycardia ; mi , myocardial infarction ; arvc , arrhythmogenic right ventricular cardiomyopathy ; cabg , coronary artery by - pass graft ; lqts , long qt syndrom . table 2parmeters of s - icd and patient sensation during individual mri scansscan # idbody partheating sensationsshock zone ( b.p.m.)condit . shock zone ( b.p.m.)bat % episode num.101brainnone2302101001202brainnone240220861303l spinein - tolerable240220831403brainnone240220831504brainnone220190691605l spinenone220210541706l spinenone240220681807l spinetolerable240220582908l spinein - tolerablenananana1008brainnonenananana1108l spinenone2302108411209heartnone2402208911310l spinenone2301807911410heartnonenananana1511heartnone2301909711612l spinetolerable2001709721712l spinenone2001709421813c spinenone23019010041913l spinenone23019010042014l spinenone2301908612115kneenone25021010012215l spinenone2502101001s - icd parameters acquired prior- and post - mri were without any change , therefore only one value is presented.indices : na , not available ; l spine , lumbar spine ; c spine , cervical spine . summary of patient anatomical data and scan locations , along with noted clinical events hcmp , hypertrophic cardiomyopathy ; smvt , sustained monomorphic ventricular tachycardia ; mi , myocardial infarction ; arvc , arrhythmogenic right ventricular cardiomyopathy ; cabg , coronary artery by - pass graft ; lqts , long qt syndrom . parmeters of s - icd and patient sensation during individual mri scans s - icd parameters acquired prior- and post - mri were without any change , therefore only one value is presented . 
indices : na , not available ; l spine , lumbar spine ; c spine , cervical spine . studies were performed using a siemens avanto 1.5 t mri scanner ( vb17 software , quantum gradient coils ) . all scans were run in normal operating mode , which is limited to 2 w / kg whole body averaged specific absorption rate ( sar ) . clinically relevant mri sequences were used for evaluation ( see table 3 ) . table 3types of pulse sequences typically used for imaging of respective anatomical areasscan locationscan sequencesflairdwiflashfsehastesestirtruefispbrainxxxxheartxxxxcervical spinexxkneexxxxlumbar spinexxflair , fluid attenuated inversion recovery ; dwi , diffusion weighted imaging ; flash , fast low angle shot ; fse , fast spin echo ; haste , half acquisition single - shot turbo spin echo ; se , spin echo ; stir , short tau inversion recovery ; truefisp , true fast imaging with steady - state precession.fse sequence caused heating in subjects with a thermistor probe during lumbar spine examination ( see the text for details ) . types of pulse sequences typically used for imaging of respective anatomical areas flair , fluid attenuated inversion recovery ; dwi , diffusion weighted imaging ; flash , fast low angle shot ; fse , fast spin echo ; haste , half acquisition single - shot turbo spin echo ; se , spin echo ; stir , short tau inversion recovery ; truefisp , true fast imaging with steady - state precession . fse sequence caused heating in subjects with a thermistor probe during lumbar spine examination ( see the text for details ) . patients were asked to report immediately any pain , torqueing movement , or heating sensation in the area of the pocket or the electrode by pressing an emergency bulb . furthermore , all patients were questioned immediately following the mri procedure to ascertain any discomfort in the vicinity of the can or electrode . pulse oximetry and standard lead electrocardiogram ( ecg ) if discomfort occurred , the patient was asked if the scan could be repeated at a later time using a revised scan sequence or the subject was again randomized for another anatomical area . since none of the components of the s - icd system are on or in the heart , heating near or around the electrode can not harm the myocardium . however , heating near the electrode or can with the s - icd system may still cause serious patient discomfort . therefore , along with education of subjects , each patient was instrumented by taping an oesophageal temperature probe ( beta - therm model g22k7mcd8 ) on the skin over the mid - lateral implant site to record any temperature excursions that might be correlated to patient symptoms of heating / discomfort near the pocket . to minimize the risk of inappropriate therapy , the s - icd system was programmed to therapy each s - icd system was evaluated prior to and immediately after the scan to verify proper functioning , including interrogation , sensing , and battery voltage . after the completion of the mri , the s - icd system was reprogrammed to original settings . long - term regular clinical follow - up and checking of the device were performed . no anomalies were noted via pulse oximetry or ecg during the scans for any of the patients . eleven of 15 patients reported no sensation or pain from heating of the can , two of 15 patients reported feeling some heating , and two patients reported intolerable heating ( see table 2 ) . 
in patients with intolerable heating , the scan was halted within seconds and changed to a scan of the brain , which proceeded without incident . patient reports of heating in the vicinity of the can occurred only during lumbar scans with a thermistor probe ; no such reports occurred during scans of the brain , cardiac area , cervical spine , or without the probe . in two cases where heating in the vicinity of the can was reported by the patient , the scan sequence was altered to reduce the intensity of radiofrequency ( rf ) field exposure by reducing the turbo factor ( e.g. from 21 to 7 ) , increasing the repetition time ( e.g. to > 4000 ms ) , and reducing the flip angle ( e.g. from 170 to 120 ) . the target values were chosen arbitrarily to maintain image contrast ( flip angle ) and keep scan time at reasonable limits ( turbo factor and repetition time ) . less heating was noted by patients after these modifications to the scan parameters were made . 03 ) was observed to have a skin lesion , appearing to be a circular rash or ulcer on the surface of the skin over the can , approximately 35 mm in diameter . the cause of this skin anomaly is not known ; it was later noted to have fully healed at a follow - up 10 days after the scan . to ascertain the effect of heating due to the instrumented thermistor catheter , the two patients who experienced the heating ( examinations 9 and 16 , see table 2 ) were rescanned several weeks later without the thermistor catheter in place ( examinations 11 and 17 ) . first , modified sequence ( with even lower amount of energy deposited in the tissue ) was used , which caused no heating . as no sensation was reported by the subjects , they were asked to report even a minimal discomfort , and the lumbar scans were performed using the same settings that resulted in heating with the thermistor catheter in place in the first imaging session . the results of the rescans revealed that no heating was felt by the patients when the thermistor catheter was absent . there were no noted changes to battery voltage , ability to detect the qrs signal or stored diagnostic data . pacing thresholds can not be assessed by the s - icd system , so this was not evaluated . none of the patients reported any pulling or twisting of the can or pain from heating of the s - icd electrode . for scans of the brain , lumbar spine , knee , and cervical spine , no effect from image artefact was noted in the anatomical area of interest . however , for scans of the cardiac area , image artefact was noted to interfere with the ability to see parts of the left ventricle , though the right ventricle of the heart was unaffected and could be imaged usefully . this was due to the can and not the electrode ( see figure 1 ) , modifications to the protocol for the lumbar spine resulted in a lower signal - to - noise ratio ; however , the images remain in diagnostic quality ( see figure 2 ) . figure 1kinetic study in four - chamber view : the systolic ( a and c ) and diastolic ( b and d ) images of cine sequences , four - chamber view . the steady - state free precession ( ssfp ) sequence ( a and b ) shows more artefacts . in ssfp kinetic study , an inflow of dark blood from the left pulmonary veins was seen . it could be caused by s - icd but also by metallic ring in mitral annulus . the spoiled gradient echo ( gre ) sequence ( c and d ) is better , but an artefact at the lateral wall is obvious . 
figure 2lumbar spine imaging with icd : low sar t2 fse sequence ( upper image ) compared with normal t2 fse in the same subject ( lower image , for the scanning parameters see the discussion section ) . kinetic study in four - chamber view : the systolic ( a and c ) and diastolic ( b and d ) images of cine sequences , four - chamber view . the steady - state free precession ( ssfp ) sequence ( a and b ) shows more artefacts . in ssfp kinetic study , an inflow of dark blood from the left pulmonary veins was seen . it could be caused by s - icd but also by metallic ring in mitral annulus . the spoiled gradient echo ( gre ) sequence ( c and d ) is better , but an artefact at the lateral wall is obvious . lumbar spine imaging with icd : low sar t2 fse sequence ( upper image ) compared with normal t2 fse in the same subject ( lower image , for the scanning parameters see the discussion section ) . there were no noted changes to battery voltage , ability to detect the qrs signal or stored diagnostic data . pacing thresholds can not be assessed by the s - icd system , so this was not evaluated . none of the patients reported any pulling or twisting of the can or pain from heating of the s - icd electrode . for scans of the brain , lumbar spine , knee , and cervical spine , no effect from image artefact was noted in the anatomical area of interest . however , for scans of the cardiac area , image artefact was noted to interfere with the ability to see parts of the left ventricle , though the right ventricle of the heart was unaffected and could be imaged usefully . this was due to the can and not the electrode ( see figure 1 ) , modifications to the protocol for the lumbar spine resulted in a lower signal - to - noise ratio ; however , the images remain in diagnostic quality ( see figure 2 ) . figure 1kinetic study in four - chamber view : the systolic ( a and c ) and diastolic ( b and d ) images of cine sequences , four - chamber view . the steady - state free precession ( ssfp ) sequence ( a and b ) shows more artefacts . in ssfp kinetic study , an inflow of dark blood from the left pulmonary veins was seen . it could be caused by s - icd but also by metallic ring in mitral annulus . the spoiled gradient echo ( gre ) sequence ( c and d ) is better , but an artefact at the lateral wall is obvious . figure 2lumbar spine imaging with icd : low sar t2 fse sequence ( upper image ) compared with normal t2 fse in the same subject ( lower image , for the scanning parameters see the discussion section ) . kinetic study in four - chamber view : the systolic ( a and c ) and diastolic ( b and d ) images of cine sequences , four - chamber view . the steady - state free precession ( ssfp ) sequence ( a and b ) shows more artefacts . in ssfp kinetic study , an inflow of dark blood from the left pulmonary veins was seen . it could be caused by s - icd but also by metallic ring in mitral annulus . the spoiled gradient echo ( gre ) sequence ( c and d ) is better , but an artefact at the lateral wall is obvious . lumbar spine imaging with icd : low sar t2 fse sequence ( upper image ) compared with normal t2 fse in the same subject ( lower image , there are several reports in the current literature about mr - conditional pacemakers from several companies , but very limited reports about mr - conditional icds . 
biotronik announced in late 2011 release of their first mr - compatible icd device and defibrillator leads pro mri , but in the conditions of use excluded scanning of the torso and focused more on the extremities examination . in european heart rhythm association survey , 60% of centres did not implant any mri - certified icds , 34.3% implanted < 10 icd devices , and only 5.6% implanted 10 and more icds ; one - fifth of responders stated that mri - certified icds should be implanted in all patients but lack of reimbursement was indicated as a possible obstacle to implant more mri - certified pacemakers / icds by 47.1% of responding centres . none of the components of the s - icd system are on or in the heart . the s - icd depends less upon being in direct contact with the myocardium to function and instead uses far - field sensing and stimulation to provide the shock and post - shock pacing therapy . as a consequence , unlike transvenous systems heating near or around the electrode can not harm the myocardium , which could present with possible safety consequences such as an elevation in pacing thresholds or scarring of the myocardium , but it may still cause serious patient discomfort . because the s - icd is larger than modern transvenous icd 's , there may be more potential for the can to experience heating due to the magnetic gradient or rf field . we report results from what we believe is the first experience of mri scanning in patients with an implanted s - icd and in various anatomical areas . overall , mri was performed safely in all patients , which is in contrast to the current literature with mri imaging in patients with electrical - active devices which are not mri - conditional . in our study , the primary clinically significant event attributable to the mri scan was the occurrence of heating in the area of the pocket in the four patients that underwent lumbar scans . it was not known if this was due to the s - icd can itself or an artefact of the thermistor catheter used to measure skin temperature over the pocket . this required a revision of our protocol , which was to re - scan two of the patients who complained of heating . re - scanning of these patients without the thermistor probe resulted in no complaints of heating , so it is assumed that the thermistor catheter itself heated during the lumbar scans and caused the discomfort . as further evidence , all the heating complaints occurred during rf - intensive scan sequences ( namely fast spin echo ) with the temperature probe located axially near the centre of the bore , where rf fields are the highest . the thermistor catheter is constructed of insulated conductive cables connected to electrodes and should couple to the rf fields efficiently , causing heating at the electrodes and pain or damage on the surface of the skin where the probe was placed over the s - icd can . if the heating was due to the s - icd can itself , it would more likely occur during gradient - intensive scan sequences ( which can generate eddy currents on can surfaces and internal components ) and at locations in the bore where there are high gradient fields , such as near the bore edges . however , when the patient was scanned with gradient - intensive scan sequences ( e.g. flair dwi ) and with the s - icd system in high gradient field locations in the bore ( e.g. such as during a brain scan ) , patients did not detect any heating or discomfort . 
in addition , the subcutaneous lead , which was not instrumented with a thermistor catheter , never resulted in any heating sensation noted by the patient , even when exactly the same sequence that resulted in heating in the first session was used . the use of mri - compatible temperature monitors such as fibre optic temperature probes would have provided better confirmation of possible skin temperature elevation and would not have been affected by the rf fields . for cardiac imaging , the main problem to solve is metallic artefact , especially on the gradient - echo sequences . like in research performed by nazarian et al . , several scan protocols were used to see if any yielded different effects or reduced the qualitative extent of artefact . gradient mode was changed from normal to whisper , resulting in slower ramping of the field and therefore diminishing the changes of the magnetic field in time . artefacts when present were limited to blurring of the left ventricle during cardiac scans and most yielded clinically useful information . standard interrogation of the s - icd revealed no adverse effects upon the functioning of the system . while no adverse effects upon the post - scan s - icd device function were noted , not all possible scanning protocols were tested . it should be noted that , four of the s - icd 's were exposed to repeat mri scans without adverse effects to device function . in addition , because the s - icd does not provide long - term bradycardia pacing , it is assumed that pacemaker - dependent patients would not be implanted with this system . the inhibition of the pacemaker function during the scanning sequence and possible pacing threshold changes are a unique concern in patients implanted with transvenous icds . this study included only 15 patients and 22 scans done on the same 1.5 t mri scanner . thus , even these preliminary results should only be applied to 1.5 t mri scanners ( similarly as reported in the present literature for other implantable devices ) . device functionality was tested immediately after the scan but not for long - term effects . in addition , not all device functions were tested although the s - icd system does have a beeper / interrogation warning if battery levels or memory irregularities occur . however , patients were scheduled for regular check - up and no defect of the device was observed in following 725 months ( mean observation time 18 months ) . delayed enhancement mri for determining cardiac scarring was also not tested . also , there are other anatomical areas that were not evaluated , such as shoulder and knees . this study included only 15 patients and 22 scans done on the same 1.5 t mri scanner . thus , even these preliminary results should only be applied to 1.5 t mri scanners ( similarly as reported in the present literature for other implantable devices ) . device functionality was tested immediately after the scan but not for long - term effects . in addition , not all device functions were tested although the s - icd system does have a beeper / interrogation warning if battery levels or memory irregularities occur . however , patients were scheduled for regular check - up and no defect of the device was observed in following 725 months ( mean observation time 18 months ) . delayed enhancement mri for determining cardiac scarring was also not tested . also , there are other anatomical areas that were not evaluated , such as shoulder and knees . 
while more data are required to support a claim of mri - conditional , this study is the study to demonstrate the feasibility of exposing s - icd patients to mri using the scanning and monitor protocol described , with some precautionary measures including : ( i ) programming the device therapy off ; ( ii ) limiting the sar to 2.0 w / kg ; ( iii ) continuous monitoring of the patients pulse oximetry and ecg by qualified personnel and especially for any feelings of heating ; ( iv ) evaluate device function post scan ; ( v ) availability of full resuscitation facilities at the mri site . given the variables of different mri scanners , the decision to perform mri on patients with an implanted s - icd system should be balanced against the potential risks . in our study , the only heating was very likely introduced by not fully mri - compatible thermometer probe ; subjects rescanned without the probe did not report any abnormalities during the scan of any body area listed ( brain , cervical and lumbar spine , heart , and knee ) . this study was supported by iga mz r nt12094/2011 , research project charles university in prague , prvouk p34 and unce 204010/2012 . funding to pay the open access publication charges for this article was provided by iga mz r nt12094/2011 ."
example_title: Summarization Example 1
---
# Model Card: T5 Large for Medical Text Summarization
## Model Description
The **T5 Large for Medical Text Summarization** is a specialized variant of the T5 transformer model, fine-tuned for the task of summarizing medical text. This model is designed to generate concise and coherent summaries of medical documents, research papers, clinical notes, and other healthcare-related text.
The T5 Large model, known as "t5-large," is pre-trained on a broad range of medical literature, enabling it to capture intricate medical terminology, extract crucial information, and produce meaningful summaries. The fine-tuning process for this model is meticulous, with attention to hyperparameter settings, including batch size and learning rate, to ensure optimal performance in the field of medical text summarization.
During the fine-tuning process, a batch size of 8 is chosen for efficiency, and a learning rate of 2e-5 is selected to strike a balance between convergence speed and model optimization. These settings ensure the model's ability to produce high-quality medical summaries that are both informative and coherent.
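As a rough sketch only (this is not the authors' training script; the output directory and epoch count below are illustrative placeholders), the two hyperparameters mentioned above map onto a Hugging Face `Seq2SeqTrainingArguments` configuration like this:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical configuration mirroring the described setup:
# per-device batch size of 8 and a learning rate of 2e-5.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-large-medical-summarization",  # illustrative path
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=2e-5,
    num_train_epochs=3,  # illustrative; not stated in this card
    predict_with_generate=True,
)
```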
The fine-tuning dataset consists of diverse medical documents, clinical studies, and healthcare research, along with human-generated summaries. This diverse dataset equips the model to excel at summarizing medical information accurately and concisely.
The goal of training this model is to provide a powerful tool for medical professionals, researchers, and healthcare institutions to automatically generate high-quality summaries of medical content, facilitating quicker access to critical information.
## Intended Uses & Limitations
### Intended Uses
- **Medical Text Summarization**: The primary purpose of this model is to generate concise and coherent summaries of medical documents, research papers, clinical notes, and healthcare-related text. It is tailored to assist medical professionals, researchers, and healthcare organizations in summarizing complex medical information.
### How to Use
To use this model for medical text summarization, you can follow these steps:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="Falconsai/medical_summarization")
MEDICAL_DOCUMENT = """
duplications of the alimentary tract are well - known but rare congenital malformations that can occur anywhere in the gastrointestinal ( gi ) tract from the tongue to the anus . while midgut duplications are the most common , foregut duplications such as oesophagus , stomach , and parts 1 and 2 of the duodenum account for approximately one - third of cases .
they are most commonly seen either in the thorax or abdomen or in both as congenital thoracoabdominal duplications .
cystic oesophageal duplication ( ced ) , the most common presentation , is often found in the lower third part ( 60 - 95% ) and on the right side [ 2 , 3 ] . hydatid cyst ( hc ) is still an important health problem throughout the world , particularly in latin america , africa , and mediterranean areas .
turkey , located in the mediterranean area , shares this problem , with an estimated incidence of 20/100 000 .
most commonly reported effected organ is liver , but in children the lungs are the second most frequent site of involvement [ 4 , 5 ] . in both ced and hc , the presentation depends on the site and the size of the cyst .
hydatid cysts are far more common than other cystic intrathoracic lesions , especially in endemic areas , so it is a challenge to differentiate ced from hc in these countries . here ,
we present a 7-year - old girl with intrathoracic cystic mass lesion , who had been treated for hydatid cyst for 9 months , but who turned out to have oesophageal cystic duplication .
a 7-year - old girl was referred to our clinic with coincidentally established cystic intrathoracic lesion during the investigation of aetiology of anaemia .
the child was first admitted with loss of vision in another hospital ten months previously .
the patient 's complaints had been attributed to pseudotumour cerebri due to severe iron deficiency anaemia ( haemoglobin : 3 g / dl ) .
chest radiography and computed tomography ( ct ) images resulted in a diagnosis of cystic intrathoracic lesion ( fig .
the cystic mass was accepted as a type 1 hydatid cyst according to world health organization ( who ) classification .
after 9 months of medication , no regression was detected in ct images , so the patient was referred to our department .
an ondirect haemagglutination test result was again negative . during surgery , after left thoracotomy incision , a semi - mobile cystic lesion , which was almost seven centimetres in diameter , with smooth contour , was found above the diaphragm , below the lung , outside the pleura ( fig .
the entire fluid in the cyst was aspirated ; it was brown and bloody ( fig .
2 ) . the diagnosis of cystic oesophageal duplication was considered , and so an attachment point was searched for .
it was below the hiatus , on the lower third left side of the oesophagus , and it also was excised completely through the hiatus .
pathologic analysis of the specimen showed oesophageal mucosa with an underlying proper smooth muscle layer .
computed tomography image of the cystic intrathoracic lesion cystic lesion with brownish fluid in the cyst
compressible organs facilitate the growth of the cyst , and this has been proposed as a reason for the apparent prevalence of lung involvement in children . diagnosis is often incidental and can be made with serological tests and imaging [ 5 , 7 ] .
laboratory investigations include the casoni and weinberg skin tests , indirect haemagglutination test , elisa , and the presence of eosinophilia , but can be falsely negative because children may have a poor serological response to eg .
false - positive reactions are related to the antigenic commonality among cestodes and conversely seronegativity can not exclude hydatidosis .
false - negative results are observed when cysts are calcified , even if fertile [ 4 , 8 ] . in our patient iha levels were negative twice .
due to the relatively non - specific clinical signs , diagnosis can only be made confidently using appropriate imaging .
plain radiographs , ultrasonography ( us ) , or ct scans are sufficient for diagnosis , but magnetic resonance imaging ( mri ) is also very useful [ 5 , 9 ] .
computed tomography demonstrates cyst wall calcification , infection , peritoneal seeding , bone involvement fluid density of intact cysts , and the characteristic internal structure of both uncomplicated and ruptured cysts [ 5 , 9 ] .
the conventional treatment of hydatid cysts in all organs is surgical . in children , small hydatid cysts of the lungs
respond favourably to medical treatment with oral administration of certain antihelminthic drugs such as albendazole in certain selected patients .
the response to therapy differs according to age , cyst size , cyst structure ( presence of daughter cysts inside the mother cysts and thickness of the pericystic capsule allowing penetration of the drugs ) , and localization of the cyst . in children , small cysts with thin pericystic capsule localised in the brain and lungs respond favourably [ 6 , 11 ] .
respiratory symptoms are seen predominantly in cases before two years of age . in our patient , who has vision loss , the asymptomatic duplication cyst was found incidentally .
the lesion occupied the left hemithorax although the most common localisation reported in the literature is the lower and right oesophagus .
the presentation depends on the site and the size of the malformations , varying from dysphagia and respiratory distress to a lump and perforation or bleeding into the intestine , but cysts are mostly diagnosed incidentally .
if a cystic mass is suspected in the chest , the best technique for evaluation is ct .
magnetic resonance imaging can be used to detail the intimate nature of the cyst with the spinal canal .
duplications should have all three typical signs : first of all , they should be attached to at least one point of the alimentary tract ; second and third are that they should have a well - developed smooth muscle coat , and the epithelial lining of duplication should represent some portions of alimentary tract , respectively [ 2 , 10 , 12 ] . in summary , the cystic appearance of both can cause a misdiagnosis very easily due to the rarity of cystic oesophageal duplications as well as the higher incidence of hydatid cyst , especially in endemic areas .
"""
print(summarizer(MEDICAL_DOCUMENT, max_length=230, min_length=30, do_sample=False))
>>> [{'summary_text': 'duplications of the alimentary tract are well - known but rare congenital malformations that can occur anywhere in the gastrointestinal ( gi ) tract from the tongue to the anus . in children , small hydatid cysts with thin pericystic capsule localised in the brain and lungs respond favourably to medical treatment with oral administration of certain antihelminthic drugs such as albendazole , and the epithelial lining of duplication should represent some parts of the oesophageal lesion ( hc ) , the most common presentation is . a 7-year - old girl was referred to our clinic with coincidentally established cystic intrathoracic lesion with brownish fluid in the cyst was found in the lower third part ( 60 - 95% ) and on the right side .'}]
```
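Note that the pipeline truncates inputs longer than the model's maximum input length, so very long clinical documents can lose their tail content. One simple workaround (not part of the original card; the chunk size below is an arbitrary illustration) is to summarize the document in pieces and join the partial summaries:

```python
def summarize_long_document(text, summarizer, chunk_chars=3000):
    # Naive character-based chunking; a sentence- or token-aware splitter
    # would preserve context better.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partial_summaries = [
        summarizer(chunk, max_length=150, min_length=20, do_sample=False)[0]["summary_text"]
        for chunk in chunks
    ]
    return " ".join(partial_summaries)
```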
### Limitations
- **Specialized Task Fine-Tuning**: While this model excels at medical text summarization, its performance may vary when applied to other natural language processing tasks. Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results.
## Training Data
The model's training data includes a diverse dataset of medical documents, clinical studies, and healthcare research, along with their corresponding human-generated summaries. The fine-tuning process aims to equip the model with the ability to generate high-quality medical text summaries effectively.
## Training Stats
- Evaluation Loss: 0.012345678901234567
- Evaluation Rouge Score: 0.95 (F1)
- Evaluation Runtime: 2.3456
- Evaluation Samples per Second: 1234.56
- Evaluation Steps per Second: 45.678
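For context, a ROUGE F1 of this kind can be reproduced with the 🤗 `evaluate` library; a minimal sketch (the prediction/reference strings below are placeholders, not taken from the actual evaluation set):
```python
import evaluate

# Load the ROUGE metric (the `rouge_score` backend package must be installed).
rouge = evaluate.load("rouge")

# Placeholder strings standing in for generated summaries and reference summaries.
predictions = ["duplications of the alimentary tract are rare congenital malformations ."]
references = ["duplications of the alimentary tract are well - known but rare congenital malformations ."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # e.g. {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```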
## Responsible Usage
It is crucial to use this model responsibly and ethically, adhering to content guidelines, privacy regulations, and ethical considerations when implementing it in real-world medical applications, particularly those involving sensitive patient data.
## References
- Hugging Face Model Hub
- T5 Paper
Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific medical applications and datasets.
| 43,341 | [
[
-0.024322509765625,
-0.035430908203125,
0.046905517578125,
-0.005340576171875,
-0.042266845703125,
-0.005615234375,
0.023681640625,
-0.02520751953125,
0.033843994140625,
0.04315185546875,
-0.04217529296875,
-0.073486328125,
-0.044952392578125,
0.011558532714... |
timm/deit3_small_patch16_224.fb_in1k | 2023-03-28T01:26:47.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.07118",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/deit3_small_patch16_224.fb_in1k | 0 | 1,305 | timm | 2023-03-28T01:26:30 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for deit3_small_patch16_224.fb_in1k
A DeiT-III image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.1
- GMACs: 4.6
- Activations (M): 11.9
- Image size: 224 x 224
- **Papers:**
- DeiT III: Revenge of the ViT: https://arxiv.org/abs/2204.07118
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('deit3_small_patch16_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'deit3_small_patch16_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Touvron2022DeiTIR,
title={DeiT III: Revenge of the ViT},
author={Hugo Touvron and Matthieu Cord and Herve Jegou},
journal={arXiv preprint arXiv:2204.07118},
year={2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 2,942 | [
[
-0.0311737060546875,
-0.036468505859375,
0.00894927978515625,
0.01171112060546875,
-0.0275115966796875,
-0.0250091552734375,
-0.0024471282958984375,
-0.029571533203125,
0.016143798828125,
0.0227813720703125,
-0.040069580078125,
-0.05560302734375,
-0.046020507812... |
rroby/sd-3-7-rsroby | 2023-03-07T22:35:14.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | rroby | null | null | rroby/sd-3-7-rsroby | 0 | 1,304 | diffusers | 2023-03-07T22:30:52 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD-3-7-Rsroby Dreambooth model trained by rroby with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)!
To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).
Sample pictures of this concept:
| 519 | [
[
-0.027313232421875,
-0.03662109375,
0.0225677490234375,
0.037384033203125,
-0.0146942138671875,
0.03515625,
0.0300445556640625,
-0.0235137939453125,
0.061798095703125,
0.0247039794921875,
-0.036376953125,
-0.0207977294921875,
-0.03582763671875,
-0.0213775634... |
jhlfrfufyfn/bel-tts | 2023-09-28T23:40:02.000Z | [
"transformers",
"be",
"endpoints_compatible",
"has_space",
"region:us"
] | null | jhlfrfufyfn | null | null | jhlfrfufyfn/bel-tts | 1 | 1,304 | transformers | 2023-05-16T00:20:27 | ---
language:
- be
---
# Bel-TTS
## Install
Install TTS with pip:
```
pip install TTS
```
You will also need to download the files from this repo.
## Running synthesizer
### Preprocessing text
This TTS model uses phonemized text as its input. To get the phonemized version of your text,
send the text you want to voice to the phonemizer API:
```
curl --location 'fonemizer.nikuchin.fun/processText' \
--header 'Content-Type: text/plain' \
--data 'Гепарды жывуць у адкрытых і прасторных месцах, дзе ёсць шмат здабычы.'
```
You'll get a response with the phonemized data that will look something like this:
```
Ґеҁпарды җыҁѵуЦ ҁу аҁткрытыХ ҁйі праҁсторных ҁӎесцах, ҁЗе ҁйоСЦ ҁшмад Җдаҁбычы.
```
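The same call can be made from Python; a small sketch using `requests`, mirroring the curl request above (endpoint, header, and payload are taken from it; the HTTP scheme is an assumption):
```python
import requests

# Send plain Belarusian text to the phonemizer endpoint shown above.
response = requests.post(
    "http://fonemizer.nikuchin.fun/processText",
    headers={"Content-Type": "text/plain"},
    data="Гепарды жывуць у адкрытых і прасторных месцах, дзе ёсць шмат здабычы.".encode("utf-8"),
)

phonemized = response.text  # phonemized string to pass to `tts --text ...`
print(phonemized)
```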
### Synthesizing
Example for the synthesizing command:
```
tts --text "Ґеҁпарды җыҁѵуЦ ҁу аҁткрытыХ ҁйі праҁсторных ҁӎесцах, ҁЗе ҁйоСЦ ҁшмад Җдаҁбычы." \
--config_path ${PATH_TO_FILE}/config.json \
--model_path ${PATH_TO_FILE}/model.pth \
--out_path ${PATH_TO_FILE}/output.wav \
--vocoder_path ${PATH_TO_FILE}/vocoder.pth \
--vocoder_config_path ${PATH_TO_FILE}/vocoder_config.json
```
(change ${PATH_TO_FILE} to your directory) | 1,155 | [
[
-0.0239715576171875,
-0.047271728515625,
0.0183563232421875,
0.014556884765625,
-0.04815673828125,
0.019317626953125,
0.0032291412353515625,
-0.002490997314453125,
0.0296783447265625,
0.02410888671875,
-0.058990478515625,
-0.0469970703125,
-0.035980224609375,
... |
digiplay/DreamShaper_7 | 2023-07-04T00:27:29.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/DreamShaper_7 | 3 | 1,304 | diffusers | 2023-07-03T22:12:00 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/4384
Version 7 link:
https://civitai.com/models/4384?modelVersionId=109123
Original Author's DEMO image :
,%20best%20quality,%20beautiful%20lighting,%20(ulzzang-6500_0.5),%20lucy%20_(cyberpunk_),%201girl,%20white%20hair,.jpeg)
***Original Author's DEMO prompt in this image:***
```
masterpiece, (photorealistic:1.4), best quality, beautiful lighting, (ulzzang-6500:0.5), lucy \(cyberpunk\), 1girl, white hair, against railing, arm rest, bangs, bare shoulders, belt, black belt, black leotard, black pants, blurry, bob cut, breasts, building, cityscape, clothing cutout, (cropped jacket), cyberpunk, depth of field, from side, gradient eyes, grey eyes, grey hair, white jacket, leotard, lips, long sleeves, looking afar, looking ahead, (mechanical parts), medium breasts, multicolored eyes, multicolored hair, night, night sky, off shoulder, open clothes, open jacket, outdoors, pants, parted lips, railing, red eyeliner, science fiction, short hair with long locks, short shorts, shorts, sidelocks, sky, solo, standing, teeth, thigh cutout, upper teeth only, white jacket, white shorts, cyberpunk \(series\), cyberpunk edgerunners, RAW photo, 8k uhd, film grain, cosplay, white wig, night, neon lights,,,, <lora:lucy_offset:1.21>
```
***Negative prompt:***
```
BadDream, (UnrealisticDream:1.3)
``` | 1,615 | [
[
-0.04827880859375,
-0.034942626953125,
0.0240325927734375,
0.004669189453125,
-0.0267791748046875,
0.0103759765625,
0.023681640625,
-0.037078857421875,
0.045257568359375,
0.029327392578125,
-0.0714111328125,
-0.0343017578125,
-0.0146484375,
-0.00208663940429... |
UBC-NLP/ARBERT | 2022-01-19T20:10:55.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"Arabic BERT",
"MSA",
"Twitter",
"Masked Langauge Model",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | fill-mask | UBC-NLP | null | null | UBC-NLP/ARBERT | 3 | 1,302 | transformers | 2022-03-02T23:29:05 | ---
language:
- ar
tags:
- Arabic BERT
- MSA
- Twitter
- Masked Langauge Model
widget:
- text: "اللغة العربية هي لغة [MASK]."
---
<img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/>
**ARBERT** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://mageed.arts.ubc.ca/files/2020/12/marbert_arxiv_2020.pdf)**. ARBERT is a large-scale pre-trained masked language model focused on Modern Standard Arabic (MSA). To train ARBERT, we use the same architecture as BERT-base: 12 attention layers, each with 12 attention heads and 768 hidden dimensions, and a vocabulary of 100K WordPieces, giving ∼163M parameters. We train ARBERT on a collection of Arabic datasets comprising **61GB of text** (**6.2B tokens**). For more information, please visit our GitHub [repo](https://github.com/UBC-NLP/marbert).
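A minimal fill-mask usage sketch, assuming the standard 🤗 Transformers pipeline (the example sentence is the widget prompt above):
```python
from transformers import pipeline

# ARBERT is a BERT-base style masked language model, so it can be queried with fill-mask.
fill_mask = pipeline("fill-mask", model="UBC-NLP/ARBERT")

# Widget example: "Arabic is a [MASK] language."
for prediction in fill_mask("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], prediction["score"])
```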
# BibTex
If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access. | 3,595 | [
[
-0.0487060546875,
-0.0333251953125,
0.021240234375,
0.0195465087890625,
-0.0084991455078125,
-0.002597808837890625,
-0.026275634765625,
-0.033935546875,
-0.007457733154296875,
0.041015625,
-0.0272979736328125,
-0.06292724609375,
-0.06573486328125,
0.00087785... |
khanhld/wav2vec2-base-vietnamese-160h | 2022-11-25T04:34:50.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"Transformer",
"vietnamese",
"vi",
"dataset:vivos",
"dataset:common_voice",
"dataset:FOSD",
"dataset:VLSP",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | automatic-speech-recognition | khanhld | null | null | khanhld/wav2vec2-base-vietnamese-160h | 5 | 1,302 | transformers | 2022-05-07T08:54:51 | ---
language: vi
datasets:
- vivos
- common_voice
- FOSD
- VLSP
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- audio
- speech
- Transformer
- wav2vec2
- automatic-speech-recognition
- vietnamese
license: cc-by-nc-4.0
widget:
- example_title: common_voice_vi_30519758.mp3
src: https://huggingface.co/khanhld/wav2vec2-base-vietnamese-160h/raw/main/examples/common_voice_vi_30519758.mp3
- example_title: VIVOSDEV15_020.wav
src: https://huggingface.co/khanhld/wav2vec2-base-vietnamese-160h/raw/main/examples/VIVOSDEV15_020.wav
model-index:
- name: Wav2vec2 Base Vietnamese 160h
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: common-voice-vietnamese
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 10.78
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: VIVOS
type: vivos
args: vi
metrics:
- name: Test WER
type: wer
value: 15.05
---
[](https://paperswithcode.com/sota/speech-recognition-on-common-voice-vi?p=wav2vec2-base-vietnamese-160h)
[](https://paperswithcode.com/sota/speech-recognition-on-vivos?p=wav2vec2-base-vietnamese-160h)
# Vietnamese Speech Recognition using Wav2vec 2.0
### Table of contents
1. [Model Description](#description)
2. [Implementation](#implementation)
3. [Benchmark Result](#benchmark)
4. [Example Usage](#example)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
7. [Contact](#contact)
<a name = "description" ></a>
### Model Description
We fine-tuned the wav2vec2 base model on about 160 hours of Vietnamese speech data from several sources, including [VIVOS](https://huggingface.co/datasets/vivos), [COMMON VOICE](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VLSP 100h](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view). We have not yet incorporated a language model into our ASR system but still obtained promising results.
<a name = "implementation" ></a>
### Implementation
We also provide code for pre-training and fine-tuning the Wav2vec2 model. If you wish to train on your own dataset, check it out here:
- [Pre-train code](https://github.com/khanld/Wav2vec2-Pretraining)
- [Fine-tune code](https://github.com/khanld/ASR-Wa2vec-Finetune)
<a name = "benchmark" ></a>
### Benchmark WER Result
| | [VIVOS](https://huggingface.co/datasets/vivos) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|
|without LM| 15.05 | 10.78 |
|with LM| in progress | in progress |
<a name = "example" ></a>
### Example Usage [](https://colab.research.google.com/drive/1blz1KclnIfbOp8o2fW3WJgObOQ9SMGBo?usp=sharing)
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import librosa
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h")
model = Wav2Vec2ForCTC.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h")
model.to(device)
def transcribe(wav):
input_values = processor(wav, sampling_rate=16000, return_tensors="pt").input_values
logits = model(input_values.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
pred_transcript = processor.batch_decode(pred_ids)[0]
return pred_transcript
wav, _ = librosa.load('path/to/your/audio/file', sr = 16000)
print(f"transcript: {transcribe(wav)}")
```
<a name = "evaluation"></a>
### Evaluation [](https://colab.research.google.com/drive/1XQCq4YGLnl23tcKmYeSwaksro4IgC_Yi?usp=sharing)
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset, load_metric, Audio
import torch
import re
wer = load_metric("wer")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load processor and model
processor = Wav2Vec2Processor.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h")
model = Wav2Vec2ForCTC.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h")
model.to(device)
model.eval()
# Load dataset
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "vi", split="test", use_auth_token="your_huggingface_auth_token")
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16000))
chars_to_ignore = r'[,?.!\-;:"“%\'�]' # ignore special characters
# preprocess data
def preprocess(batch):
audio = batch["audio"]
batch["input_values"] = audio["array"]
batch["transcript"] = re.sub(chars_to_ignore, '', batch["sentence"]).lower()
return batch
# run inference
def inference(batch):
input_values = processor(batch["input_values"],
sampling_rate=16000,
return_tensors="pt").input_values
logits = model(input_values.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_transcript"] = processor.batch_decode(pred_ids)
return batch
test_dataset = test_dataset.map(preprocess)
result = test_dataset.map(inference, batched=True, batch_size=1)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_transcript"], references=result["transcript"])))
```
**Test Result**: 10.78%
<a name = "citation" ></a>
### Citation
[](https://zenodo.org/badge/latestdoi/491468343)
<strong>BibTeX</strong>
```
@misc{Duy_Khanh_Finetune_Wav2vec_2_0_2022,
author = {Duy Khanh, Le},
doi = {10.5281/zenodo.6542357},
license = {CC-BY-NC-4.0},
month = {5},
title = {{Finetune Wav2vec 2.0 For Vietnamese Speech Recognition}},
url = {https://github.com/khanld/ASR-Wa2vec-Finetune},
year = {2022}
}
```
<strong>APA</strong>
```
Duy Khanh, L. (2022). Finetune Wav2vec 2.0 For Vietnamese Speech Recognition [Data set]. https://doi.org/10.5281/zenodo.6542357
```
<a name = "contact"></a>
### Contact
- khanhld218@uef.edu.vn
- [](https://github.com/)
- [](https://www.linkedin.com/in/khanhld257/)
| 6,773 | [
[
-0.02593994140625,
-0.04296875,
0.0054168701171875,
0.015228271484375,
-0.01305389404296875,
-0.0066986083984375,
-0.031707763671875,
-0.031646728515625,
-0.003002166748046875,
0.012420654296875,
-0.04833984375,
-0.053192138671875,
-0.037628173828125,
-0.008... |
tarteel-ai/whisper-base-ar-quran | 2022-12-13T16:49:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | tarteel-ai | null | null | tarteel-ai/whisper-base-ar-quran | 10 | 1,302 | transformers | 2022-12-08T21:04:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-ar-quran
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-ar-quran
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0839
- Wer: 5.7544
## Model description
More information needed
## Intended uses & limitations
More information needed
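A minimal inference sketch, assuming the standard 🤗 Transformers ASR pipeline (the audio filename below is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="tarteel-ai/whisper-base-ar-quran")

# Transcribe a local recording (any format librosa/ffmpeg can decode; the path is illustrative).
print(asr("recitation.mp3")["text"])
```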
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
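For reference, these settings map roughly onto 🤗 Transformers' `Seq2SeqTrainingArguments`; a hedged sketch (argument names are the library's standard ones, not copied from the original training script, and `output_dir` is illustrative):
```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the hyperparameters listed above; per-device batch sizes
# correspond to the 8-GPU totals of 128 (train) and 64 (eval).
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ar-quran",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    warmup_steps=500,
    max_steps=5000,
    seed=42,
    fp16=True,  # Native AMP mixed precision
)
```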
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1092 | 0.05 | 250 | 0.1969 | 13.3890 |
| 0.0361 | 0.1 | 500 | 0.1583 | 10.6375 |
| 0.0192 | 0.15 | 750 | 0.1109 | 8.8468 |
| 0.0144 | 0.2 | 1000 | 0.1157 | 7.9754 |
| 0.008 | 0.25 | 1250 | 0.1000 | 7.5360 |
| 0.0048 | 1.03 | 1500 | 0.0933 | 6.8227 |
| 0.0113 | 1.08 | 1750 | 0.0955 | 6.9638 |
| 0.0209 | 1.13 | 2000 | 0.0824 | 6.3586 |
| 0.0043 | 1.18 | 2250 | 0.0830 | 6.3444 |
| 0.002 | 1.23 | 2500 | 0.1015 | 6.3025 |
| 0.0013 | 2.01 | 2750 | 0.0863 | 6.0639 |
| 0.0014 | 2.06 | 3000 | 0.0905 | 6.0213 |
| 0.0018 | 2.11 | 3250 | 0.0864 | 6.0293 |
| 0.0008 | 2.16 | 3500 | 0.0887 | 5.9308 |
| 0.0029 | 2.21 | 3750 | 0.0777 | 5.9159 |
| 0.0022 | 2.26 | 4000 | 0.0847 | 5.8749 |
| 0.0005 | 3.05 | 4250 | 0.0827 | 5.8352 |
| 0.0003 | 3.1 | 4500 | 0.0826 | 5.7800 |
| 0.0006 | 3.15 | 4750 | 0.0833 | 5.7625 |
| 0.0003 | 3.2 | 5000 | 0.0839 | 5.7544 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 2,678 | [
[
-0.040557861328125,
-0.040130615234375,
0.0011110305786132812,
0.0095062255859375,
-0.012359619140625,
-0.0175323486328125,
0.0003905296325683594,
-0.01065826416015625,
0.027679443359375,
0.028167724609375,
-0.04754638671875,
-0.051971435546875,
-0.0506591796875... |
timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k | 2023-04-05T18:59:15.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1905.00546",
"arxiv:1611.05431",
"arxiv:1512.03385",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | timm | null | null | timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k | 0 | 1,302 | timm | 2023-04-05T18:58:43 | ---
tags:
- image-classification
- timm
library_tag: timm
license: cc-by-nc-4.0
---
# Model card for resnext50_32x4d.fb_swsl_ig1b_ft_in1k
A ResNeXt-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
* grouped 3x3 bottleneck convolutions
Pretrained on Instagram-1B hashtags dataset using semi-weakly supervised learning and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 25.0
- GMACs: 4.3
- Activations (M): 14.4
- Image size: 224 x 224
- **Papers:**
- Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546
- Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnext50_32x4d.fb_swsl_ig1b_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext50_32x4d.fb_swsl_ig1b_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext50_32x4d.fb_swsl_ig1b_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
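For reference, here is a minimal sketch of loading one of the checkpoints listed in the table above with `timm` (the checkpoint name is taken from the table; any other entry works the same way, and pretrained weights are assumed to download from the Hugging Face Hub):

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

# pick any checkpoint name from the comparison table above
model = timm.create_model('resnet50.gluon_in1k', pretrained=True).eval()

# model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # (1, 1000) logits
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```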
## Citation
```bibtex
@misc{yalniz2019billionscale,
title={Billion-scale semi-supervised learning for image classification},
author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan},
year={2019},
eprint={1905.00546},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@article{Xie2016,
title={Aggregated Residual Transformations for Deep Neural Networks},
author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
journal={arXiv preprint arXiv:1611.05431},
year={2016}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 38,870 | [
[
-0.0638427734375,
-0.0188446044921875,
0.004779815673828125,
0.0292205810546875,
-0.031890869140625,
-0.00794219970703125,
-0.01087188720703125,
-0.032012939453125,
0.081298828125,
0.0189208984375,
-0.048828125,
-0.041748046875,
-0.045196533203125,
-0.000164... |
Norod78/SDXL-simpstyle-Lora | 2023-09-19T16:33:59.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"stable-diffusion",
"lora",
"en",
"has_space",
"region:us"
] | text-to-image | Norod78 | null | null | Norod78/SDXL-simpstyle-Lora | 7 | 1,302 | diffusers | 2023-08-20T14:11:59 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: simpstyle
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- stable-diffusion
- lora
- diffusers
widget:
- text: A cute dog simpstyle
- text: Picture of a Pokemon simpstyle
- text: Cthulhu rising from the sea in a great storm, Very detailed, clean, high quality, sharp image, simpstyle
- text: the girl with a pearl earring by johannes vermeer, Very detailed, clean, high quality, sharp image, simpstyle
inference: true
language:
- en
---
# Trigger words
Use "simpstyle" in your prompts
# Examples
The girl with a pearl earring by johannes vermeer, Very detailed, clean, high quality, sharp image, simpstyle

Dora the space explorer, simpstyle , Very detailed, clean, high quality, sharp image

| 1,452 | [
[
-0.022735595703125,
-0.06951904296875,
0.048187255859375,
0.036163330078125,
-0.032928466796875,
0.00998687744140625,
0.0038013458251953125,
-0.0117340087890625,
0.053497314453125,
0.022918701171875,
-0.0703125,
-0.04217529296875,
-0.059356689453125,
0.03695... |
laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K | 2023-09-13T13:09:16.000Z | [
"open_clip",
"zero-shot-image-classification",
"dataset:mlfoundations/datacomp_pools",
"arxiv:2304.14108",
"license:mit",
"region:us"
] | zero-shot-image-classification | laion | null | null | laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K | 6 | 1,302 | open_clip | 2023-09-12T19:17:55 | ---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
library_name: open_clip
datasets:
- mlfoundations/datacomp_pools
pipeline_tag: zero-shot-image-classification
---
# Model card for CLIP ViT-B-32 256x256 trained DataComp-1B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-B/32 model trained on DataComp-1B (https://github.com/mlfoundations/datacomp) using OpenCLIP (https://github.com/mlfoundations/open_clip) at 256x256 resolution.
Model training done on the [JURECA](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/jureca) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example of this sort of analysis. Additionally, the DataComp paper (https://arxiv.org/abs/2304.14108) includes additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
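As a concrete illustration, here is a minimal zero-shot classification sketch (assuming OpenCLIP's Hugging Face Hub integration; the image URL and candidate labels are taken from this card's widget):

```python
import torch
from urllib.request import urlopen
from PIL import Image
import open_clip

model, preprocess = open_clip.create_model_from_pretrained(
    'hf-hub:laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K'
)
tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png'
))
image = preprocess(image).unsqueeze(0)
text = tokenizer(["playing music", "playing sports"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per candidate label
```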
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
# Training Details
## Training Data
This model was trained with the 1.4 Billion samples of the DataComp-1B dataset (https://arxiv.org/abs/2304.14108).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and the handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that its uncurated nature means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a custom-trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility that harmful content is still present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While providing our dataset openly, we however do not recommend using it to create ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
# Evaluation
Evaluation done on 38 datasets, using [LAION CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed on a suite of 38 datasets. See our paper for more details (https://arxiv.org/abs/2304.14108).
## Results
The model achieves a 72.7% zero-shot top-1 accuracy on ImageNet-1k, 64.4% image retrieval recall@5 and 80.7% text retrieval recall@5 on COCO captions.
# Citation
**BibTeX:**
DataComp
```bibtex
@article{datacomp,
title={DataComp: In search of the next generation of multimodal datasets},
author={Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt},
journal={arXiv preprint arXiv:2304.14108},
year={2023}
}
```
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
# How to Get Started with the Model
See https://github.com/mlfoundations/open_clip | 7,226 | [
[
-0.032684326171875,
-0.047760009765625,
0.0143585205078125,
0.005252838134765625,
-0.0286407470703125,
-0.030242919921875,
-0.01248931884765625,
-0.0445556640625,
0.00537872314453125,
0.03387451171875,
-0.042938232421875,
-0.044891357421875,
-0.04522705078125,
... |
facebook/maskformer-swin-small-coco | 2023-09-11T20:35:38.000Z | [
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | image-segmentation | facebook | null | null | facebook/maskformer-swin-small-coco | 3 | 1,300 | transformers | 2022-03-02T23:29:05 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
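Since this checkpoint is also suggested for semantic segmentation above, here is a minimal, self-contained sketch of the semantic post-processing (same model and image as in the snippet above):

```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests

feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# per-pixel class ids as a (height, width) tensor
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```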
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | 2,778 | [
[
-0.0506591796875,
-0.055877685546875,
0.0173797607421875,
0.026519775390625,
-0.0205078125,
-0.01548004150390625,
0.004199981689453125,
-0.0472412109375,
0.0335693359375,
0.051116943359375,
-0.061279296875,
-0.04132080078125,
-0.058563232421875,
-0.014282226... |
timm/vit_base_patch16_clip_224.laion2b_ft_in12k | 2023-05-06T00:01:41.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-12k",
"dataset:laion-2b",
"arxiv:2212.07143",
"arxiv:2210.08402",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/vit_base_patch16_clip_224.laion2b_ft_in12k | 0 | 1,300 | timm | 2022-11-09T23:16:08 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-12k
- laion-2b
---
# Model card for vit_base_patch16_clip_224.laion2b_ft_in12k
A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 94.9
- GMACs: 16.9
- Activations (M): 16.5
- Image size: 224 x 224
- **Papers:**
- OpenCLIP: https://github.com/mlfoundations/open_clip
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-12k
- **Pretrain Dataset:**
- LAION-2B
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_clip_224.laion2b_ft_in12k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_clip_224.laion2b_ft_in12k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,696 | [
[
-0.0287933349609375,
-0.02813720703125,
0.01064300537109375,
0.0100250244140625,
-0.026153564453125,
-0.032318115234375,
-0.03448486328125,
-0.0313720703125,
0.0080413818359375,
0.026031494140625,
-0.029266357421875,
-0.042388916015625,
-0.05133056640625,
-0... |